Fields: title, uuid, pmc_id, search_term, text
Covid-19: The challenges facing endocrinology
ff314167-4510-480d-8807-2be12734d138
7195032
Physiology[mh]
The Covid-19 pandemic has hit the planet like a tidal wave, imperiling the lives of thousands and threatening health systems with collapse. In this issue of the Annals of Endocrinology, Alexandre et al. remind us that the gateway for Sars-CoV-2 is angiotensin-converting enzyme 2 (ACE2), a physiological regulator of the renin-angiotensin system (RAS). Angiotensin II stimulates the secretion of aldosterone via the AT1 receptor of the adrenal gland (zona glomerulosa) and has vasoconstrictive, pro-fibrosing and pro-inflammatory activity of its own. By converting angiotensin II [1–8] to angiotensin [1–7], which has properties opposite to those of angiotensin II, ACE2 acts as a negative regulator of the RAS. The legitimate question raised by the authors is whether the prescription of converting enzyme inhibitors [1–10] (ACEi) and angiotensin receptor type 1 blockers (ARAII), very widely used in hypertension treatment, may increase the risk of developing severe acute respiratory syndrome in Covid-19-infected patients. The authors convincingly explain why, on the basis of the available evidence, scientific societies do not recommend discontinuing ACEi and ARAII hypertension treatment in Covid-19-positive patients. Therapeutic prospects are discussed, and the first trials of the soluble form of ACE2 as a virus trap are underway. Covid-19 challenges the endocrinologist in many ways. Three in particular are worth raising here:
• The relative protection of children, presumed to be healthy carriers, and the lower incidence of death in women (one third of deaths) suggest that the hormonal environment and genetic factors influence the chances of surviving Covid-19. We know, for example, that the TLR7 gene on the X chromosome encodes a receptor that influences the antiviral response. Paradoxically, although the stronger immune response in women contributes to their greater susceptibility to autoimmune disease, this trait becomes an advantage against viral infection. In men, who account for more than two thirds of Covid-19 deaths, risk is higher after 50 years of age and higher still after 70. But age is not the only factor; a context of metabolic syndrome with overweight, diabetes and hypertension seems to be strongly associated with this risk. It is important to remember that the decline in testosterone secretion in men can be explained by four factors: age, obesity, associated comorbidities and smoking. Curiously, the ACE2 protein is expressed in many tissues, including the testis. These avenues could be explored to better identify the influence of sex hormones on the ability to resist viral disease.
• A cytokine storm is reported during the respiratory distress phase in 20% of Covid-19-positive patients, with multi-organ failure and hypotension refractory to standard treatment. How does the adrenal cortex react in this critical situation? Is there an analogy with what has been well described in septic shock? We already know that corticosteroids are not useful in treating the pulmonary lesions associated with severe respiratory distress, and are indeed deleterious in that they delay elimination of the virus. Is there a form of glucocorticoid resistance?
• The Covid-19 epidemic has been an opportunity to adopt teleconsultation in response to the urgent need for continuity of care. This will inevitably prompt reflection on a lasting shift to telemedicine in the management of chronic diseases, which would disrupt our practice and its teaching, as well as the entire economics of health care.
In this issue of the Annals, a current update is also presented on the relationship of diabetes, and of obesity, with the risk of contracting Covid-19 or of developing a severe form. This didactic article by Laura Orioli et al., documented by a rapidly evolving literature, will be very useful to the informed reader and to the general practitioner, and will enable them to follow future recommendations for treating diabetic patients with Covid-19. Endocrinologists and diabetologists, like many other specialists, must prepare for this imminent challenge. The reader will also find answers to the questions raised by ageing, a focus of attention for the whole medical community, in the management of thyroid disease. This consensus statement, coordinated by Philippe Caron, is remarkable, practical and exhaustive, and is based on the extensive clinical experience of its authors.
The authors declare that they have no competing interest.
Optimising paediatric afferent component early warning systems: a hermeneutic systematic literature review and model development
0f3d3621-7b30-46da-ba89-5d421d0f74b2
6886951
Pediatrics[mh]
Failure to recognise and act on signs of clinical deterioration in the hospitalised child is an acknowledged safety concern. Track and trigger tools (TTT) are a common response to this problem. A TTT consists of sequential recording and monitoring of physiological, clinical and observational data. When a certain score or trigger is reached then a clinical action should occur including, but not limited to, altered frequency of observation, senior review or more appropriate treatment or management. Tools may be paper based or electronic and monitoring can be automated or undertaken manually by staff. Despite the growing use of TTTs there is limited evidence of their effectiveness as a single intervention in reducing mortality or arrest rates in hospitalised children. Results from the largest international cluster randomised controlled trial of a TTT (the Bedside Paediatric Early Warning System (BedsidePEWS)) did not support TTT use to reduce mortality, and highlighted the multifactorial mechanisms involved in detecting and initiating action in response to deterioration. These findings lend further weight to a developing consensus about the need to look beyond TTTs to the impact of wider system factors on detecting and responding to deterioration in the inpatient paediatric population. This paper reports on a theoretically informed systematic hermeneutic literature review to identify the core components and mechanisms of action of successful afferent component early warning systems (EWS) in paediatric hospitals and is one of three linked reviews undertaken as part of a wider UK study commissioned to develop and evaluate an evidence-based paediatric warning system. It addressed the following question: What sociomaterial and contextual factors are associated with successful or unsuccessful Paediatric Early Warning Systems (with or without TTTs)?

Design
We performed a hermeneutic systematic review of the relevant literature. A hermeneutic systematic review is an iterative process, integrating analysis and interpretation of evidence with literature searching, and is designed to develop a better understanding of the field. The popularity of the method is growing in health services research, where it has value in generating insights from heterogeneous literatures that cannot be synthesised through standard review methodology and would otherwise produce inconclusive findings (see ref ). The purpose of the review was not exhaustive aggregation of evidence, but to develop an understanding of the social, material and contextual factors associated with successful or unsuccessful paediatric early warning systems (PEWS).

Theoretical framework
Data extraction and interpretation was informed by translational mobilisation theory (TMT) and normalisation process theory (NPT). TMT is a practice theory which explains how goal-oriented collaborative activity is mobilised in unpredictable environments and how the relevant mechanisms of action are conditioned by the local context. It is well suited for understanding EWS which require the organisation of action in evolving conditions, in a variety of clinical environments, with different teams, skill mixes, resources, structures and technologies. NPT shares the same domain assumptions as TMT and is concerned with ‘how and why things become, or do not become, routine and normal components of everyday work’, directing attention to the preconditions necessary for successful implementation of interventions.
The theoretical framework informed our data extraction strategy, interpretation of the evidence and the development of a propositional model of an optimal paediatric early warning system.

Box 1: Mechanisms of translational mobilisation and their application to rescue trajectories

Object formation — how people draw on the interpretative resources available to them within a strategic action field to create the objects of their practice. Enrolment into an escalation trajectory requires multiple examples of object formation, beginning with construction of an individual as at risk of deterioration and a regime of vital signs monitoring instigated, through recognition that the patient’s physiological status is a cause for concern, to the identification of the patient as requiring a specific intervention. How this is achieved is highly dependent on the features of the local strategic action field.

Translation — the processes that enable practice objects to be shared and different understandings accommodated. It points to the actions necessary in order for a patient that is an object of concern for nursing staff to be translated into a clinical priority for the doctor and, if necessary, to be translated into the focus of intervention by the emergency response team.

Articulation refers to the secondary work processes that align the actions, knowledge and resources necessary for the mobilisation of projects of collective action. It is the work that makes the work, work. Responding to deterioration is time critical and articulation work is necessary to ensure the availability of resources and materials to support clinical management. This is not a mundane observation; catastrophic failures in patient safety are often attributed to the lack of functioning equipment, and the absence of monitoring equipment has been identified as a factor undermining the implementation of early warning track and trigger tools. Attending to articulation in rescue trajectories also underlines the temporal ordering of action and the mechanisms required to achieve this, directing improvement efforts towards the organisation’s escalation policy, for example.

Reflexive monitoring refers to the processes through which people collectively or individually appraise and review activity. In a distributed field of action, reflexive monitoring is the means through which members accomplish situational awareness of an overall project. The importance of situation awareness in rescue trajectories is well recognised, but achieving this is challenging. Reflexive monitoring is conditioned by the wider institutional context, which will include a multiplicity of informal and formal mechanisms designed for this purpose: nursing and medical handovers, the ward round, safety briefings. The form, frequency and effectiveness of these processes in supporting detecting and acting on deterioration would need to be taken into account in any improvement initiative.

Sensemaking refers to the processes through which agents create order in conditions of complexity. It draws attention to how the material and discursive processes by which members organise their work, account for their actions and construct the objects of their practice also give meaning and substance to the institutional components of strategic action fields that shape activity and condition future activity.
Focus of the review
The literature in this field identifies four integrated components which work together to provide a safety system for at-risk patients: (1) the afferent component, which detects deterioration and triggers timely and appropriate action; (2) the efferent component, which consists of the people and resources providing a response; (3) a process improvement component, which includes system auditing and monitoring; and (4) an administrative component, focusing on the organisational leadership and education required to implement and sustain the system. Our focus was limited to the afferent components of the system.

Stages of the review

Stage 1: scoping the literature
Literature was identified through a recent scoping review, team members’ knowledge of the field, hand searches and snowball sampling techniques. The purpose was to (1) inform our review question and eligibility criteria and (2) identify emerging themes and issues. While we drew on several reviews of the literature, we always consulted original papers. Data were extracted using data extraction template 1 and analysed to produce a provisional conceptual model of the core components of paediatric early warning systems. Additional themes of relevance were identified: family involvement, situational awareness (SA), structured handover, observations and monitoring, and the impact of electronic systems and new technologies.

Stage 2: searching for the evidence
We undertook systematic searches of the paediatric and adult EWS literature (the goals and mechanisms of collective action in detection and rescue trajectories are the same). For the adult literature we used the same search strategies but added a qualitative filter to limit the scope to studies most likely to yield the level of sociomaterial and contextual detail of value to the review. Literature informing additional areas of interest was located through a combination of systematic and hand searches. Systematic searches (searches 2 and 3) were undertaken in areas where we anticipated locating evidence of the effectiveness of specific interventions to strengthen EWS. Theory-driven searches reflected the conceptual requirements of the model development.

Systematic searches
A systematic search was initially conducted across a range of databases from 1995 to September 2016 to identify relevant studies in the PEWS literature. This search was updated to cover literature from September 2016 to May 2018. An additional three systematic searches were conducted from 1995 to September 2016 to identify supplementary papers to aid in developing understanding of the PEWS literature:
Adult EWS.
Interventions to improve SA.
Structured communication tools for handover and handoff.
Detailed information on the search methodology can be found in . Grey literature was excluded in order to keep the review manageable.

Theory-driven searches
Additional theory-driven searches were conducted in the following areas:
Family involvement.
Observations and monitoring.
The impact of electronic systems.
These were a combination of exploratory, computerised, snowball and hand searches. As the analysis progressed, we continued to review new literature on EWS as this was published.

Screening
After removing duplicates, 5284 references were identified for screening. A modified Preferred Reporting Items for Systematic Reviews and Meta-Analyses flow diagram is provided.
Papers were screened by title to assess eligibility and then by full text to assess relevance for data extraction. The PEWS and adult EWS searches were screened by two researchers; searches 2 and 3 were screened by the lead reviewer.

Stage 3: data extraction and appraisal
Data extraction template 2 was applied to all papers included in the review. As is typical of reviews of this kind, evidential fragments and partial lines of inquiry formed the unit of analysis rather than whole papers. These fragments were quality assessed according to the contribution they made to the developing analysis, rather than assessing the paper as a whole through the use of formal appraisal tools. Data extraction and quality appraisal were undertaken concurrently and double checked by a second reviewer.

Stage 4: developing a propositional model
A propositional model was developed specifying the core ingredients of a paediatric early warning system. It comprises logical inferences derived from the theoretical framework and evidence synthesis, informed by clinical experts on the team. Iterations of the model were developed in collaboration with clinical colleagues. A series of face-to-face meetings were conducted to review structure, wording and applicability to clinical practice.

Patient and public involvement
This review was conducted as part of a larger mixed methods study (ISRCTN 94228292), which used a formal, facilitated parental advisory group. The group comprised parents of children who had experienced an unexpected adverse event in a paediatric unit and provided input which helped shape the broader research questions and wider contextual factors to consider, specifically within the family involvement element of the system. The results of the review will be disseminated to parents through this group.
Included studies
Eighty-two papers were included in the review.
Forty-six papers focused on TTT implementation and use in paediatric and adult contexts (24 from the paediatric search and the remaining 22 from the adult-focused search); the remaining 36 papers contributed supplementary data on factors related to the wider warning system. See for a detailed breakdown of this process. No studies were located that adopted a whole systems approach to detecting and responding to deterioration.

Analysis
In TMT the primary unit of analysis is the ‘project’, which defines the social and material actors (people, materials, technologies) and their relationships involved in achieving a particular goal. The goals of the afferent paediatric warning system are: first, that the child is identified as at risk and a vital signs monitoring regime instigated; second, that evidence of deterioration is identified through monitoring and categorised as such; and third, that timely and appropriate action is initiated in response to deterioration. Our analysis of the literature suggests that three subsystems within the afferent component of EWS support these processes: the detection of signs of deterioration; the planning needed to ensure teams are ready to act when deterioration is detected; and the initiation of timely action. While we have focused on the afferent component, it is important to remember that all elements of the overall safety system (efferent component, process improvement and administrative arm) need to work in concert in order to maintain an optimal paediatric early warning system. In the next section, we report on the literature in relation to each subsystem.

Detection
The goal of the detection subsystem is to recognise early signs of deterioration, so that the child becomes the focus of further clinical attention. This requires, first, that the child is identified as at risk and a vital signs monitoring regime instigated and, second, that the child is identified as showing signs of deterioration. Despite widespread use, the evidence on TTT effectiveness in predicting adverse outcomes in hospitalised children is weak. Many TTTs have only been validated retrospectively, and positive predictive values were generally low. Studies reporting significant decreases in cardiac arrest calls or mortality had methodological concerns. The literature does suggest that TTTs have value in supporting process mechanisms in the detection subsystem. Vital signs monitoring is undertaken on all hospital inpatients and, like other high-volume routine activity, is often delegated to junior staff who may not have sufficient skills to interpret results. TTTs have value in mitigating these risks: by specifying physiological thresholds that indicate deterioration, they take knowledge to the bedside and act as prompts to action, which can lead to a more systematic and frequent approach to monitoring and improved detection of deterioration. TTTs’ effectiveness in fulfilling these functions depends on certain preconditions. The review highlighted that TTT use was affected by the availability of appropriate and functioning equipment, (in)adequate staffing, night-time pressures and an appropriately skilled workforce. On this latter point, while several papers report on education packages to improve the detection of deterioration, the evidence is not robust enough to recommend specific programmes. There were also times when nursing staff prioritised a patient’s sleep over waking them to take vital signs. TTTs are also used differently depending on the experience of the user.
For juniors, they provide a methodology and structure for monitoring clinical instability and identifying deterioration, whereas more experienced staff reportedly use TTTs as confirmatory technologies. The importance of professional intuition in detecting deterioration is extensively reported across the literature, and several authors recommend the inclusion of ‘staff concern’ in tool criteria. This is important; TTTs may be of less value in patients with chronic conditions because of altered normal physiology or where subtle changes are difficult to detect. It is also the case that TTTs are implemented in contexts governed by competing organisational logics which impact on their value and use. For example, Mohammed Iddrisu et al show that TTTs have limited value immediately after surgery because acceptable vital sign parameters are different in the immediate postoperative period. There is growing interest in the literature in strategies that facilitate patient and relative involvement in the early detection of deterioration. Healthcare professionals depend on families to explain their child’s normal physiological baseline and identify subtle changes in their child’s condition, but this information is not always systematically obtained. Some authors propose family involvement in interdisciplinary rounds, but this requires parents to have detailed information about the signs and symptoms they should be attending to, and as yet there is little evidence on effective strategies for how they might be involved in the detection of deterioration. While much of the literature reports on intermittent manual vital signs monitoring and paper-based recording systems, across the developed world there is a growing use of electronic technologies, which have important implications for the wider detection subsystem. We considered a number of evaluations of new technologies which indicated that electronic vital signs recording is associated with a number of positive outcomes, particularly timeliness and accuracy, when compared with paper-based systems. They can provide prompts or alerts for monitoring, which facilitates better recognition of deterioration and is associated with a reduction in mortality. These studies tend to evaluate new technologies in isolation, however, and do not engage with the literature highlighting alarm fatigue, which is known to reduce effectiveness over time, or concerns about overburdening staff with alerts. Moreover, the successful implementation of new technologies is conditioned by the local context. For instance, where manual input into an electronic device is required, access to computers is an essential precondition. When computers were not available, staff ‘batched’ the collection of vital signs before data entry, thereby delaying the timely detection of deterioration. In another study, where the electronic system was found to be cumbersome and separated the collection and entry of data from the review of vital signs, verbal reports were favoured to ensure timely communication of information. See for a summary of the evidence reported.

Planning
Detecting and responding to deterioration involves the coordination of action in conditions of uncertainty and competing priorities. The goal of the ‘Planning’ subsystem is to ensure the clinical team are ready to act in the event of evidence of deterioration; this is reflected in the growing interest in the literature on structures to facilitate team SA, group decisions and planning.
TTTs have been found to support SA. Their use enabled clinicians to have a ‘bird’s-eye’ view over all admitted patients on a ward as well as encouraging staff to consider projected acuity levels of the ward. A number of studies also report on ‘huddles’ in facilitating SA. A huddle is a multidisciplinary event scheduled at predetermined times where members discuss specific risk factors around deterioration and develop mitigation plans. One study combined the introduction of huddles with a ‘watchstander’, a role fulfilled by a charge nurse or senior resident, whose primary function is to know patients at high risk for deterioration. These initiatives were associated with a near 50% reduction in transfers from acute to intensive care determined to be unrecognised situation awareness events. A further strategy identified by Goldenhar et al describes the use of the ‘watcher’ category to designate a patient as at risk where staff have a ‘gut feeling’ deterioration is likely. A recent study used the category of ‘watcher’ to create a bundle of expectations to standardise communication and contingency planning. Once a patient was labelled ‘a watcher’, a series of five specific tasks, such as documenting physician awareness of watcher status and that the family had been notified of the change in the patient’s status, needed to be completed within 2 hours. Handovers are integral to clinical communication and contribute to SA. The extensive literature on handover indicates that information sharing can be of variable quality, and there is growing evidence that structured approaches improve this. Published interventions range from a checklist system to a cognitive aid developed through consensus, and most are variations of the Situation-Background-Assessment-Recommendation (SBAR) tool. While effective handover depends on communicative forms that extend beyond the information transfer that is typically the focus of structured handover tools, in the context of EWS a lack of standardisation allows greater margin for individualistic practices and difficulties accessing complementary knowledge and establishing shared understandings. There is also a literature on the use of common information spaces—such as whiteboards—in facilitating SA in the healthcare team. These should be in a visible location and colour coded to correspond with the TTT score, where relevant. Electronic systems automate this information and allow information to be reviewed remotely. However, they disconnect vital signs data from the patient, and hence from other indicators of clinical status, and access to data is contingent upon the availability of computers. The literature indicates that SA can be facilitated in different ways in different contexts, and it is the relationship between system elements that is important. In their study on SA in delivery suites, Mackintosh et al discuss the three main supports for SA—whiteboard, handover and coordinator role—and illustrate how these interacted in organisations with strong SA compared with those with reduced levels. Crucially, this ‘interplay’ between the different activities was highly context dependent; ‘the same supports used differently generate different outcomes’ (p 52). See for a summary of the planning evidence.
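To make the time-bound ‘watcher’ bundle described above more concrete, the following is a minimal, hypothetical sketch in Python of how such a bundle might be represented and checked against its 2-hour window. Only the first two task names correspond to tasks mentioned in the literature reviewed here; the remaining tasks, the function names and the behaviour when the window lapses are illustrative assumptions, not part of any published bundle.

```python
from datetime import datetime, timedelta

# Hypothetical 'watcher' bundle: only the first two task names come from the
# bundle described above; the rest are illustrative placeholders.
WATCHER_TASKS = [
    "document physician awareness of watcher status",
    "notify family of change in patient status",
    "record a contingency plan",
    "increase observation frequency",
    "review at next huddle",
]
COMPLETION_WINDOW = timedelta(hours=2)  # tasks expected to be complete within 2 hours

def outstanding_watcher_tasks(labelled_at: datetime, completed: set, now: datetime):
    """Return the bundle tasks still open and whether the 2-hour window has lapsed."""
    open_tasks = [task for task in WATCHER_TASKS if task not in completed]
    overdue = bool(open_tasks) and (now - labelled_at) > COMPLETION_WINDOW
    return open_tasks, overdue

if __name__ == "__main__":
    # Example: one hour after labelling, two tasks done, three still open, not yet overdue.
    start = datetime(2019, 1, 1, 10, 0)
    done = {WATCHER_TASKS[0], WATCHER_TASKS[1]}
    print(outstanding_watcher_tasks(start, done, start + timedelta(hours=1)))
```

In practice, any such representation would need to reflect the local escalation policy and the administrative arm that audits it.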
Action
The goal of the ‘Action’ subsystem is to initiate appropriate action in response to evidence of deterioration. The literature suggests that mobilising action across professional boundaries and hierarchies is challenging, with differences in language between doctors and nurses and power dynamics contributory factors. TTTs are in part a response to the challenges of communication in mobilising action in response to deterioration. By transforming a series of discrete observations into a summative indicator of deterioration—such as a score or a trigger—TTTs ‘translate’ and package the patient’s status into a form that can be readily communicated, enabling individual-level clinical data to be synthesised, made sense of and shared. One study, however, found that TTTs were regarded as a nursing tool and were therefore not valued by clinicians. Consequently, nurses encountered difficulties in summoning a response. Several studies also report on the use of SBAR in this context. Like TTTs, SBAR translates information into a form that provides structure, consistency and predictability when presenting patient information. SBAR has been shown to help establish common language and expectations, minimising differences in training, experience and hierarchy and facilitating nurse–clinician communication. While several papers advocate combining SBAR with TTTs, none specifically evaluated SBAR use. Mackintosh et al highlight that audit data suggest resistance to SBAR, with others cautioning that overextending SBAR use carries the risk of SBAR fatigue and attenuation of its effects. Structured communication tools like TTTs and SBAR do not solve all the challenges of acting in response to evidence of deterioration. Barriers to action were widely reported in the literature where these tools were in place. These include a general disinclination to seek help, concerns about appearing inadequate in front of colleagues and failure of staff to invest in the escalation or calling criteria. A number of papers also reported negative attitudes to rapid response team (RRT) or medical emergency team (MET) use in the efferent component of safety systems. METs and RRTs operate outside the immediate medical team and create different issues in paediatric warning systems than when the escalation response is managed by the treating team. These include a reluctance to activate because of the perceived busyness of paediatric intensive care unit or medical staff, because previous expectations about an appropriate response were not met, or a sense that the situation was under control (particularly when the physiological instability is in the area of expertise of the treating team). No literature reported on successful interventions to facilitate RRT use, but several papers propose strategies to support escalation where there was no designated response team in place in the efferent component. These include informal peer support, where inexperienced staff team up with more experienced staff; clear structures to support action; and a supportive culture that does not penalise individual decision-making, including the use of a ‘no false alarms’ policy so that staff are not deterred from escalating care. Senior leadership is consistently identified as important; a lack of support from superiors means that staff are less likely to escalate and more likely to adhere to hierarchies within the current system. There is some evidence to suggest that any escalation policy should be linked to an administrative arm that reinforces the system, measures outcomes and works to ensure an effective system.
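As a schematic illustration of how a track and trigger tool ‘translates’ discrete observations into a summative score, and of how SBAR then structures the resulting communication, the sketch below may help. The vital-sign bands, trigger thresholds, message wording and function names are invented for illustration and are not drawn from any validated PEWS or local escalation policy.

```python
# Hypothetical illustration only: thresholds, score bands and SBAR wording are
# invented for this sketch and do not reproduce any validated track and trigger tool.

def band_score(value, bands):
    """Score one observation against (low, high, points) bands; 0 if no band matches."""
    for low, high, points in bands:
        if low <= value <= high:
            return points
    return 0

# Each vital sign maps to scoring bands: (inclusive low, inclusive high, points awarded).
BANDS = {
    "respiratory_rate":   [(0, 10, 2), (11, 15, 1), (30, 39, 1), (40, 200, 2)],
    "heart_rate":         [(0, 60, 2), (61, 80, 1), (140, 159, 1), (160, 300, 2)],
    "oxygen_saturation":  [(0, 89, 2), (90, 93, 1)],
}

TRIGGERS = [(0, "routine monitoring"), (2, "increase observation frequency"),
            (4, "request senior review"), (6, "call emergency response team")]

def summative_score(observations):
    """Translate a set of discrete observations into a single summative score."""
    return sum(band_score(value, BANDS[name]) for name, value in observations.items())

def escalation_action(score):
    """Return the highest trigger action whose threshold the score reaches."""
    action = TRIGGERS[0][1]
    for threshold, label in TRIGGERS:
        if score >= threshold:
            action = label
    return action

def sbar_message(patient, observations, score):
    """Package the score into a structured SBAR-style handover message."""
    return {
        "Situation": f"{patient}: early warning score {score}",
        "Background": "admitted child on routine vital signs monitoring",
        "Assessment": f"observations {observations} suggest possible deterioration",
        "Recommendation": escalation_action(score),
    }

obs = {"respiratory_rate": 34, "heart_rate": 150, "oxygen_saturation": 91}
score = summative_score(obs)
print(sbar_message("Bed 4", obs, score))  # score 3 -> "increase observation frequency"
```

The point of the sketch is the translation step itself: a heterogeneous set of observations becomes a single number with an attached recommendation, which is the packaging role the literature attributes to TTTs and SBAR.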
There is a small literature on family involvement in the Action subsystem. Several studies report on Condition-Help, a programme developed in the USA to support families to directly activate an RRT if they have concerns about their child’s condition. Families are also becoming increasingly recognised as playing a key role in the activation of RRTs in Australia. Research has evaluated the appropriateness of calls that were made by patients or relatives but has not considered why calls were not made. Involving family members in escalation demands vigilance, requiring them to take a proactive and interactive role with staff, with potentially some degree of confrontation, particularly if challenging the appropriateness of decisions taken. Families need both cognitive and emotional resources to raise concerns that involve negotiating hierarchies and boundaries. The literature points to a degree of professional resistance to family involvement in activation, with reports of physician concern that their role would be undermined, that resources would be stretched by an increase in calls and that it might divert attention away from those in need, although these fears are not supported by the evidence. See for a summary of the evidence relating to the action component of the model.

Synthesis and model development
The literature in this field is heterogeneous and stronger on the sociomaterial barriers to successful afferent component paediatric early warning systems than on solutions. While a number of different single interventions have been proposed and some have been evaluated, there is limited evidence to recommend their use beyond the specific clinical contexts described in the papers. This reflects the weight and quality of the evidence, the extent to which paediatric systems are conditioned by the local clinical context, and the need to attend to the relationship between system components and to interventions which work in concert, not in isolation. There is also a growing realisation in the quality improvement field that an intervention that has been successful in one context does not necessarily produce the same results elsewhere, which cautions against a ‘one size fits all’ approach. While it is not possible to make empirical recommendations for practice, a hermeneutic review methodology enabled the generation of theoretical inferences about the core components of an optimal paediatric early warning system. These model components are logical inferences derived from an overall synthesis of the evidence, informed by our theoretical framework and clinical expertise. They are presented as a propositional model conceptualised as three subsystems: detection, planning and action (see ).
The goals of the afferent paediatric warning system are: first, that the child is identified as at risk and a vital signs monitoring regime instigated; second, that evidence of deterioration is identified through monitoring and categorised as such; and third, that timely and appropriate action is initiated in response to deterioration. Our analysis of the literature suggests that three subsystems within the afferent component of EWS support these processes: the detection of signs deterioration; the planning needed to ensure teams are ready to act when deterioration is detected; and the initiation of timely action . While we have focused on the afferent component, it is important to remember that all elements of the overall safety system (efferent component, process improvement and administrative arm) need to be working in concert in order to maintain an optimal paediatric early warning system. In the next section, we report on the literature in relation to each subsystem. The goal of the detection subsystem is to recognise early signs of deterioration, so the child becomes the focus of further clinical attention. This requires, first, that the child is identified as at risk and a vital signs monitoring regime instigated and, second, that the child is identified as showing signs of deterioration. Despite widespread use, the evidence on TTT effectiveness in predicting adverse outcomes in hospitalised children is weak. Many TTTs have only been validated retrospectively and postpredictive values were generally low. Studies reporting significant decreases in cardiac arrest calls or mortality had methodological concerns. The literature does suggest that TTTs have value in supporting process mechanisms in the detection subsystem. Vital signs monitoring is undertaken on all hospital inpatients and, like other high-volume routine activity, is often delegated to junior staff who may not have sufficient skills to interpret results. TTTs have value in mitigating these risks: by specifying physiological thresholds that indicate deterioration they take knowledge to the bedside and act as prompts to action which can lead to a more systematic and frequent approach to monitoring and improved detection of deterioration. TTT’s effectiveness in fulfilling these functions depends on certain preconditions. The review highlighted that TTT use was impacted by the availability of appropriate and functioning equipment, (in)adequate staffing and night-time pressures and an appropriately skilled workforce. On this latter point, while several papers report on education packages to improve the detection of deterioration, the evidence is not robust enough to recommend specific programmes. There were also times whereby nursing staff prioritised sleep over waking a patient to take vital signs. TTTs are also used differently depending on the experience of the user. For juniors, they provide a methodology and structure for monitoring clinical instability and identifying deterioration, whereas more experienced staff reportedly use TTTs as confirmatory technologies. The importance of professional intuition in detecting deterioration is extensively reported across the literature and several authors recommend the inclusion of ‘staff concern’ in tool criteria. This is important; TTTs may be of less value in patients with chronic conditions because of altered normal physiology or where subtle changes are difficult to detect. 
It is also the case that TTTs are implemented in contexts governed by competing organisational logics, which impact on their value and use. For example, Mohammed Iddrisu et al show TTTs have limited value immediately after surgery because acceptable vital sign parameters are different in the immediate postoperative period. There is growing interest in the literature in strategies that facilitate patient and relative involvement in the early detection of deterioration. Healthcare professionals depend on families to explain their child's normal physiological baseline and identify subtle changes in their child's condition, but this information is not always systematically obtained. Some authors propose family involvement in interdisciplinary rounds, but this requires parents to have detailed information about the signs and symptoms they should be attending to, and as yet there is little evidence on effective strategies for how they might be involved in the detection of deterioration. While much of the literature reports on intermittent manual vital signs monitoring and paper-based recording systems, across the developed world there is a growing use of electronic technologies, which have important implications for the wider detection subsystem. We considered a number of evaluations of new technologies which indicated that electronic vital signs recording is associated with a number of positive outcomes, particularly timeliness and accuracy, when compared with paper-based systems. They can provide prompts or alerts for monitoring, which facilitates better recognition of deterioration and is associated with a reduction in mortality. These studies tend to evaluate new technologies in isolation, however, and do not engage with the literature highlighting alarm fatigue, which is known to erode effectiveness over time, or with concerns about overburdening staff with alerts. Moreover, the successful implementation of new technologies is conditioned by the local context. For instance, where manual input into an electronic device is required, access to computers is an essential precondition. When computers were not available, staff 'batched' the collection of vital signs before data entry, thereby delaying the timely detection of deterioration. In another study where the electronic system was found to be cumbersome and separated the collection and entry of data from the review of vital signs, verbal reports were favoured to ensure timely communication of information. See for a summary of the evidence reported. Detecting and responding to deterioration involves the coordination of action in conditions of uncertainty and competing priorities. The goal of the 'Planning' subsystem is to ensure the clinical team is ready to act in the event of evidence of deterioration; this is reflected in the growing interest in the literature on structures to facilitate team SA, group decisions and planning. TTTs have been found to support SA. Their use enabled clinicians to have a 'bird's-eye' view over all admitted patients on a ward as well as encouraging staff to consider projected acuity levels of the ward. A number of studies also report on 'huddles' in facilitating SA. A huddle is a multidisciplinary event scheduled at predetermined times where members discuss specific risk factors around deterioration and develop mitigation plans.
One study combined the introduction of huddles with a ‘watchstander’, a role fulfilled by a charge nurse or senior resident, whose primary function is to know patients at high risk for deterioration. These initiatives were associated with a near 50% reduction in transfers from acute to intensive care determined to be unrecognised situation awareness events. A further strategy identified by Goldenhar et al describes the use of the ‘watcher’ category to designate a patient as at risk where staff have a ‘gut feeling’ deterioration is likely. A recent study used the category of ‘watcher’ to create a bundle of expectations to standardise communication and contingency planning. Once a patient was labelled ‘a watcher’ a series of five specific tasks, such as documentation of physician awareness of watcher status and that the family had been notified of the change in the patient’s status, needed to be completed within 2 hours. Handovers are integral to clinical communication and contribute to SA. The extensive literature on handover indicates that information sharing can be of variable quality and there is growing evidence that structured approaches improve this. Ranging from a checklist system to a cognitive aid developed through consensus, most of the published interventions are variations of the Situation-Background-Assessment-Recommendation (SBAR) tool. While effective handover depends on communicative forms that extend beyond the information transfer that is typically the focus of structured handover tools, in the context of EWS a lack of standardisation allows greater margin for individualistic practices and difficulties accessing complementary knowledge and establishing shared understandings. There is also a literature on the use of common information spaces—such as whiteboards—in facilitating SA in the healthcare team. These should be in a visible location and colour coded to correspond with the TTT score, where relevant. Electronic systems automate this information and allow information to be reviewed remotely. However, they disconnect vital signs data from the patient and hence other indicators of clinical status and access to data is contingent upon the availability of computers. The literature indicates that SA can be facilitated in different ways in different contexts and it is the relationship between system elements that is important. In their study on SA in delivery suites, Mackintosh et al discuss the three main supports for SA—whiteboard, handover and coordinator role—and illustrate how these interacted in organisations with strong SA compared with those with reduced levels. Crucially, this ‘interplay’ between the different activities was highly context dependent; ‘the same supports used differently generate different outcomes’ (p 52). See for a summary of the planning evidence. The goal of the ‘Action’ subsystem is to initiate appropriate action in response to evidence of deterioration. The literature suggests that mobilising action across professional boundaries/hierarchies is challenging, with differences in language between doctors and nurses and power dynamics contributory factors. TTTs are in part a response to the challenges of communication in mobilising action in response to deterioration. 
By transforming a series of discrete observations into a summative indicator of deterioration—such as a score or a trigger—TTTs 'translate' and package the patient's status into a form that can be readily communicated, enabling individual-level clinical data to be synthesised, made sense of and shared. One study, however, found that TTTs were regarded as a nursing tool and were therefore not valued by clinicians. Consequently, nurses encountered difficulties in summoning a response. Several studies also report on the use of SBAR in this context. Like TTTs, SBAR translates information into a form that provides structure, consistency and predictability when presenting patient information. SBAR has been shown to help establish common language and expectations, minimising differences in training, experience and hierarchy and facilitating nurse–clinician communication. While several papers advocate combining SBAR with TTTs, none specifically evaluated SBAR use. Mackintosh et al highlight that audit data suggest resistance to SBAR, with others cautioning that overextending SBAR use carries the risk of SBAR fatigue and attenuation of its effects. Structured communication tools like TTTs and SBAR do not solve all the challenges of acting in response to evidence of deterioration. Barriers to action were widely reported in the literature where these tools were in place. These include: a general disinclination to seek help, concerns about appearing inadequate in front of colleagues and failure of staff to invest in the escalation or calling criteria. A number of papers also reported negative attitudes to rapid response team (RRT) or medical emergency team (MET) use in the efferent component of safety systems. METs and RRTs operate outside the immediate medical team and create different issues in paediatric warning systems than when the escalation response is managed by the treating team. These include a reluctance to activate because of the perceived busyness of paediatric intensive care unit or medical staff, because previous expectations about an appropriate response were not met, or a sense that the situation was under control (particularly when the physiological instability is in the area of expertise of the treating team). No literature reported on successful interventions to facilitate RRT use, but several propose strategies to support escalation where there was no designated response team in place in the efferent component. These include informal peer support, where inexperienced staff team up with more experienced staff; clear structures to support action; and a supportive culture that does not penalise individual decision-making, including the use of a 'no false alarms' policy so staff are not deterred from escalating care. Senior leadership is consistently identified as important; a lack of support from superiors means that staff are less likely to escalate and more likely to adhere to hierarchies within the current system. There is some evidence to suggest that any escalation policy should be linked to an administrative arm that reinforces the system, measures outcomes and works to ensure an effective system. There is a small literature on family involvement in the Action subsystem. Several studies report on Condition-Help, a programme developed in the USA to support families to directly activate an RRT if they have concerns about their child's condition. Families are also becoming increasingly recognised as playing a key role in the activation of RRTs in Australia.
Research has evaluated the appropriateness of calls that were made by patients or relatives but has not considered why calls were not made. Involving family members in escalation demands vigilance, requiring them to take a proactive and interactive role with staff with potentially some degree of confrontation, particularly if challenging the appropriateness of decisions taken. Families need both cognitive and emotional resources to raise concerns that involve negotiating hierarchies and boundaries. The literature points to a degree of professional resistance to family involvement in activation, with reports of physician concern that their role would be undermined, that resources would be stretched with an increase in calls and that it might divert attention away from those in need, although these fears are not supported by the evidence. See for a summary of the evidence relating to the action component of the model. Synthesis and model development The literature in this field is heterogeneous and stronger on the sociomaterial barriers to successful afferent component paediatric early warning systems than it is on solutions. While a number of different single interventions have been proposed and some have been evaluated, there is limited evidence to recommend their use beyond the specific clinical contexts described in the papers. This reflects both the weight and quality of the evidence, the extent to which paediatric systems are conditioned by the local clinical context and also the need to attend to the relationship between system components and interventions, which work in concert, not in isolation. There is also a growing realisation in the quality improvement field that an intervention that has been successful in one context does not necessarily produce the same results elsewhere, which cautions against a 'one size fits all' approach. While it is not possible to make empirical recommendations for practice, a hermeneutic review methodology enabled the generation of theoretical inferences about the core components of an optimal paediatric early warning system. These model components are logical inferences derived from an overall synthesis of the evidence, informed by our theoretical framework and clinical expertise. These are presented as a propositional model conceptualised as three subsystems: detection, planning and action (see ). This paper reports on one of three linked reviews undertaken as part of a wider UK study commissioned to develop and evaluate an evidence-based national paediatric early warning system. Drawing on TMT and NPT, we have synthesised and analysed the findings from the review to develop a propositional model to specify the core components of optimal afferent component paediatric early warning systems. While there is a growing consensus on the need to think beyond TTTs to consider the whole system, no frameworks exist to support such an approach. Clinical teams wishing to improve rescue trajectories should take a whole systems perspective focused on the constellation of factors necessary to support detection, planning and action, and consider how these relationships can be managed in their local setting. TTTs have value in paediatric early warning systems but they are not the sole solution and depend on certain preconditions for their use. An emerging literature highlights the importance of planning and indicates that combinations of interventions may facilitate situation awareness.
Professional judgement is also important in detecting and acting on deterioration and the evidence points to the importance of a wider organisational culture that is supportive of this. Innovative approaches, sensitive to the cognitive and emotional resources this requires, are needed to support family involvement in all aspects of paediatric early warning systems. System effectiveness requires attention to the sociomaterial relationships in the local context, senior support and leadership, and continuous monitoring and evaluation. New technologies, such as moving from paper-based to electronic TTTs, have important implications for all three subsystems and critical consideration should be given to their wider impacts and the preconditions for their integration into practice. Limitations of the review The literature in this field is heterogeneous and better at identifying system weaknesses than effective improvement interventions. It was only by deploying social theories and a hermeneutic review methodology that it proved possible to develop a propositional model of the core components of an afferent component paediatric early warning system. This model is derived from logical inferences drawing on the overall evidence synthesis, social theories and clinical expertise, rather than strong empirical evidence of single intervention effectiveness. Consequently, there is a growing consensus on the need to take a whole systems approach to improve the detection and response to deterioration in the inpatient paediatric population. Failure to recognise and act on signs of deterioration is an acknowledged safety concern and TTTs are a common response to this problem. There is, however, a growing recognition of the importance of wider system factors on the effectiveness of responses to deterioration. We have reviewed a wide literature and analysed this using social theories to develop a propositional model of an optimal afferent component paediatric early warning system that can be used as a framework for paediatric units to evaluate their current practices and identify areas for improvement. TTT use should be driven by the extent to which teams think that they will help improve the effectiveness of their system as a whole.
Identification of plasma protein biomarkers for endometriosis and the development of statistical models for disease diagnosis
Endometriosis is a chronic and progressive inflammatory disease characterized by the presence of estrogen-dependent endometrial-like tissue (or lesions) outside the uterus. Its symptoms include persistent pelvic pain and infertility. Endometriosis occurs in ∼11% of women and girls of reproductive age and has been observed in 35% of women using assisted reproductive technology procedures . Disease severity does not always correlate with symptom severity, leading to diagnostic challenges and limited treatment options. In addition to the acute and chronic symptoms associated with the condition, endometriosis has been linked to long-term negative health consequences, including a higher risk of cardiovascular disease, ovarian cancer, and autoimmune diseases . Based on the location and depth of lesions, the main types of endometriosis are superficial (peritoneal or other sites), deep endometriosis (DE), and ovarian (endometrioma). Disease stage is most commonly classified based on the revised American Society for Reproductive Medicine (rASRM) system, which considers the location, extent, and depth of lesions, as well as adhesions, all visualized at surgery . Despite its recognition for over a century, the exact cause of endometriosis remains elusive, resulting in delays in diagnosis and treatment. With the time for patient diagnosis averaging 7 years from symptom onset, there are negative impacts on physical, mental, and social well-being . Endometriosis also imposes a substantial economic burden due to productivity losses and healthcare costs . Diagnosis involves medical history, physical examination, imaging, laparoscopy, and histopathology. The dependability of current diagnostic tools varies, owing to factors such as the location and severity of lesions, as well as the experience of the healthcare provider . The gold standard for diagnosing endometriosis is via laparoscopy, but the procedure is invasive and costly, and carries risks including adverse events such as nerve damage, damage to pelvic organs or major blood vessels, and formation of post-surgical adhesions . Imaging techniques such as transvaginal ultrasound and magnetic resonance imaging can identify ovarian endometriosis and DE, nonetheless, the non-invasive identification of endometriosis, particularly in superficial cases, continues to pose a challenge . Non-invasive diagnostic biomarkers would significantly improve early detection and management of endometriosis. Several potential blood biomarkers have been proposed, however, studies to date have been limited by cohort size or lacked validation studies . This study aimed to identify and validate plasma protein biomarkers specific to endometriosis using a proteomics-based approach, which involved discovery, analytical, and clinical validation phases. The study hypothesized that people with endometriosis would have significantly different plasma concentrations of select proteins compared to the general population, or to those with similar pelvic symptoms but no endometriosis, and that such plasma biomarkers could be used for early diagnostic screening. Ethical approval Recruitment and collection protocols were approved by the appropriate ethical review boards (Bellberry Human Research Ethics Committee (ref: 2016-05-383); Royal Women’s Hospital Human Research Ethics Committee (Project No. 10-43, No. 11-24, and No. 16-43)), and all participants provided informed written consent. 
Clinical and reference samples Discovery phase The discovery phase included samples from 22 endometriosis cases, 15 symptomatic controls, and 19 general population controls obtained from the Wesley Medical Research Institute Biobank (Brisbane, Australia). Samples were pooled in each of the clinical groups and differentially expressed proteins were compared between groups. All endometriosis and symptomatic control samples had their status confirmed by laparoscopy. Analytical validation phase Pooled reference plasma (from three donors, obtained from the Australian Red Cross Lifeblood) was used to design targeted assays to measure each biomarker peptide identified from the discovery phase and test the robustness of that measurement, providing analytical validation of the reproducibility of the biomarkers identified. Clinical validation phase To clinically validate the candidate protein biomarkers identified in the discovery phase each protein was measured in individual clinical samples from a separate cohort. Samples comprised those of endometriosis cases (n = 464 diagnosed via laparoscopy and confirmed with histopathology) and symptomatic controls without endometriosis (n = 132, confirmed with laparoscopy), obtained from the Royal Women’s Hospital (RWH) (Melbourne, Australia). In addition, general population control samples (n = 153) were obtained from healthy volunteers in the Perth metropolitan area. All RWH participants attended the Endometriosis and Pelvic Pain Clinic with pelvic, menstrual, and/or intercourse pain and underwent laparoscopy for treatment of endometriosis or suspected endometriosis with histopathology to confirm the presence or absence of endometriosis. Endometriosis severity was classified by the rASRM score including stage I/minimal (1–5), stage II/mild (6–15), stage III/moderate (16–40), and stage IV/severe (>40) . Exclusion criteria included menopause, positive pregnancy test or unknown pregnancy status, and malignancy. Comprehensive demographic and clinical information including age, BMI, age at menarche, gravidity, live births, problems conceiving, type of pelvic pain (menstrual/pelvic/intercourse), menstrual cycle length, smoking status, exogenous hormone medication use, family history of endometriosis, and ethnicity was available. All general population controls answered a comprehensive survey to exclude possible symptomatic endometriosis and other gynecological pathologies. Sample collection In all cases, whole blood was collected in EDTA-treated vacutainers (Becton Dickinson, USA) and plasma was prepared by centrifugation at 1500 g for 10 min at 4°C. Plasma samples were stored at −80°C until biomarker analysis. For endometriosis cases and symptomatic controls (Wesley and RWH participants), plasma was collected on the day of admission for surgery. The median time for plasma processing was within one day of collection for all cohorts, with samples stored at 4°C between collection and centrifugation. The Wesley samples were collected between 2010 and 2017, the RWH samples between 2012 and 2022, and the general population controls between 2021 and 2022. Participant characteristics The clinical and demographic characteristics of participants in the discovery and clinical validation cohort are presented in . 
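The rASRM cut-points quoted above can be written as a simple classification rule; the helper below is our own illustration of that mapping (the function name and vectorised form are not from the study).

```r
# Restates the rASRM score-to-stage mapping given above:
# stage I (minimal): 1-5, II (mild): 6-15, III (moderate): 16-40, IV (severe): >40.
rasrm_stage <- function(score) {
  cut(score,
      breaks = c(0, 5, 15, 40, Inf),
      labels = c("I (minimal)", "II (mild)", "III (moderate)", "IV (severe)"),
      right = TRUE)
}

rasrm_stage(c(3, 12, 28, 55))
# -> I (minimal), II (mild), III (moderate), IV (severe)
```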
Associations between clinical variables and outcome (endometriosis vs symptomatic controls or general population), were tested using the chi-square test of independence for categorical clinical variables and the Point-biserial correlation test for continuous clinical variables. Proteomics workflow Discovery phase This study analyzed plasma protein biomarkers using a proteomics workflow as previously described . In brief, quantitative biomarker discovery (iTRAQ labeling) was performed in quadruplicate experiments on pooled samples across the three groups: endometriosis cases, symptomatic controls, and general population. Each experiment involved immunodepletion of the pooled plasma sample to remove the 14 most abundant proteins. The immunodepleted fraction was then diafiltrated before reduction, alkylation, and enzymatic digestion with trypsin. The resulting peptide solutions were labeled with iTRAQ reagents (Sciex, USA) before mixing 1:1:1 for the three groups of pooled plasma. Desalted samples were then fractionated on a high-pH HPLC system with the resulting 12 fractions injected onto an LCMS system with analysis by a QE-HF Orbitrap (Thermo Fisher Scientific, USA) mass spectrometer. Proteins observed to be differentially expressed (proteins required to have a P -value ≤0.05 with a relative ratio change of >10%) between endometriosis and symptomatic or general population groups were designated as candidate biomarkers if significant across the experiments. To this list, 12 putative biomarkers previously reported in the literature as having an association with endometriosis were added (see and ). Analytical validation phase For analytical validation, targeted mass spectrometry assays using multiple reaction monitoring (MRM) were defined for each candidate protein biomarker as described in . Each assay measured changes in relative peptide abundances of individual plasma samples against an 18 O-labeled reference plasma to calculate peak area ratios for each of the candidate biomarkers. These ratios were normalized to the median value for each peptide. In brief, the analytical targeted assay was designed utilizing the following method. Each plasma sample was immunodepleted (removal of top 14 abundant proteins) before diafiltration, reduction, alkylation, and digestion of the plasma proteins. The reverse phase desalted sample was then injected along with a fixed amount of the internal standard 18 O labeled reference plasma digest onto a microflow (5 µl/min) HPLC system and analyzed on a Sciex 6500 Triple Quad mass spectrometer (Sciex, USA). Assays were assessed for robustness with analytical validation considered successful if the MRM signal for each peptide was individually verified to be unique and where the signal to noise (S/N) was >3. Clinical validation phase In clinical validation, a new cohort comprising individual samples (n = 464 endometriosis cases, n = 132 symptomatic controls, and n = 153 general population controls) was measured using the analytically validated targeted MRM mass spectrometry assay. Samples were randomized across plates before analysis to minimize batch effects and ensure consistency. Analysis of the mass spectrometry data was carried out in Skyline software (University of Washington, USA) with both unlabeled and 18 O labeled peptide peaks, integrated with peak areas exported to enable calculation of the relative peak ratios. Statistical and data analyses The peptide data presented reflect the relative concentration of a protein biomarker between samples. 
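As a rough illustration of the quantification logic described above, the sketch below forms peak-area ratios against the 18O-labelled reference, normalises each peptide to its median ratio, and applies a discovery-style filter (P ≤ 0.05 and a greater than 10% ratio change). The column names and the use of a Wilcoxon (Mann–Whitney) test here are our own simplifications; the study's iTRAQ discovery statistics and MRM processing were carried out with their own pipelines (Skyline export and downstream analysis).

```r
# Minimal sketch of relative quantification and candidate filtering; the
# column names (sample_id, peptide, group, area_light, area_heavy) are
# hypothetical, and the Wilcoxon test stands in for the study's own statistics.
library(dplyr)

normalise_ratios <- function(peak_data) {
  peak_data %>%
    mutate(ratio = area_light / area_heavy) %>%               # sample vs 18O reference
    group_by(peptide) %>%
    mutate(norm_ratio = ratio / median(ratio, na.rm = TRUE)) %>%  # median-normalise per peptide
    ungroup()
}

flag_candidates <- function(norm_data) {
  # norm_data carries a 'group' column with values "endometriosis" / "control"
  norm_data %>%
    group_by(peptide) %>%
    summarise(
      fold  = mean(norm_ratio[group == "endometriosis"], na.rm = TRUE) /
              mean(norm_ratio[group == "control"], na.rm = TRUE),
      p_val = wilcox.test(norm_ratio[group == "endometriosis"],
                          norm_ratio[group == "control"])$p.value,
      .groups = "drop"
    ) %>%
    mutate(candidate = p_val <= 0.05 & abs(fold - 1) > 0.10)
}
```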
To maximize the likelihood of identifying biomarkers for the disease, changes in protein concentration were initially assessed at the extremes of the disease spectrum, for example, symptomatic controls versus severe endometriosis or general population controls versus endometriosis. To improve the normality of the data, a natural logarithmic transformation was applied to all measurements. Candidate biomarkers were confirmed in bivariate analysis by two-way comparisons of medians using a Mann–Whitney U-test. To evaluate the diagnostic relationship between clinical characteristics, biomarker concentration, and clinical groups, elastic-net logistic regression modeling was employed (R Statistical Software, v4.2.2; ). Clinical variables for inclusion in the models were restricted to age and BMI due to practical usability and accessibility. Repeated or nested cross-validation was performed (glmnet package v4.1-6; caret v6.0-93; nestedcv.glmnet package v0.7.4). During the nested cross-validation approach, variables were filtered using a Wilcoxon U-test with a significance threshold of 0.2. A series of multivariate logistic regression models containing both clinical factors and biomarker concentrations were developed to distinguish: (i) endometriosis cases from general population controls and (ii) endometriosis cases (stages II–IV) from symptomatic controls. To further evaluate the complex interactions and non-linear relationships between predictors, a random forest classifier was employed using the predictors identified during elastic-net logistic regression modeling. This third model was constructed by comparing stage IV endometriosis and symptomatic controls. The performance of Model 3 was then tested across the stages of endometriosis (stages I–IV, i.e. minimal to severe) to assess its effectiveness in diagnosing endometriosis at different disease levels. The randomForest package v4.6-14 was used with 5-fold cross-validation and hyper-parameter tuning (mtry = 2, 3, 4; ntree = 100). Only participants with complete data were included in each model. To assess the discriminative performance of each model, the area under the receiver operating characteristic curve (AUC) was assessed. DeLong's test was used to compare the AUC between biomarker models with and without clinical variables. The optimal predicted probability threshold was determined at the maximum Youden Index. Diagnostic performance metrics were computed at this optimal threshold, including sensitivity (Sn), specificity (Sp), positive predictive value (PPV) and negative predictive value (NPV). A power analysis was conducted to assess the study's power for subgroup analysis in different stages of endometriosis. The power analysis was performed using the pwr package version 1.3-0 in R. The parameters for the power analysis included the sample size for each subgroup (stage I: n = 241, stage II: n = 65, stage III: n = 58, and stage IV: n = 89; only participants with complete data were included in this analysis), an effect size of 0.5 (Cohen's d), and a significance level of 0.05. The target statistical power was set at 0.8. The interaction pathways of the proteins identified in the diagnostic models were examined to provide insights into the biological processes and molecular functions associated with these proteins (STRING database, v12.0; ). Only interactions above a score of 0.4 (medium confidence) were included in the predicted network.
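A simplified sketch of the modelling and evaluation workflow described above is given below. It uses plain cv.glmnet in place of the full nested cross-validation, evaluates on the training data purely for illustration, and assumes hypothetical objects X (a numeric matrix of log-transformed biomarker ratios plus age and BMI) and y (a factor with levels "control" and "endometriosis"); it is not the study's exact pipeline.

```r
# Illustrative modelling sketch only; object names X and y are assumed.
library(glmnet)   # elastic-net logistic regression
library(caret)    # cross-validated random forest tuning
library(pROC)     # ROC/AUC, DeLong's test, Youden-index operating point

# 1. Elastic-net logistic regression (alpha fixed at 0.5 here for simplicity;
#    lambda is tuned by cross-validation inside cv.glmnet).
enet_fit  <- cv.glmnet(X, y, family = "binomial", alpha = 0.5)
enet_prob <- as.numeric(predict(enet_fit, newx = X, s = "lambda.min", type = "response"))

# 2. Random forest with 5-fold cross-validation over the mtry grid quoted
#    above (2, 3, 4) and 100 trees.
rf_fit <- train(
  x = as.data.frame(X), y = y, method = "rf",
  trControl = trainControl(method = "cv", number = 5, classProbs = TRUE),
  tuneGrid  = expand.grid(mtry = c(2, 3, 4)),
  ntree     = 100
)
rf_prob <- predict(rf_fit, newdata = as.data.frame(X), type = "prob")[, "endometriosis"]

# 3. Discrimination and operating point (resubstitution estimates, for
#    illustration only): AUC, DeLong comparison, and metrics at the maximum
#    Youden index.
roc_enet <- roc(y, enet_prob)
roc_rf   <- roc(y, rf_prob)
auc(roc_enet); auc(roc_rf)
roc.test(roc_enet, roc_rf, method = "delong")
coords(roc_rf, x = "best", best.method = "youden",
       ret = c("threshold", "sensitivity", "specificity", "ppv", "npv"))
```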
Participant demographics and clinical characteristics presents the demographics and clinical characteristics of the participants (n = 805) in both the discovery and clinical validation cohorts. Age was the only characteristic available for the discovery cohort and no significant difference was observed across the clinical groups. In the clinical validation cohort, BMI, gravidity, and live births were significantly different between endometriosis patients and symptomatic controls. Additionally, age, smoking status, family history of endometriosis, pain characteristics, cycle length, cycle stage, exogenous hormone medication use, and ethnicity were significantly different between the endometriosis patients and the general population.
The significant differences in the cycle stage between endometriosis patients and general population women may be largely explained by the higher proportion of endometriosis patients in the 'unknown or on hormones' group. A common management option for symptomatic endometriosis is hormone therapy, which aims to induce amenorrhea; hormone effects will therefore be visible on histology, which does not permit grouping into menstrual/proliferative/secretory phases. It should also be noted that for the general population controls, the menstrual phase was calculated from self-reported data, whereas this was not the case for endometriosis participants (where menstrual dating was carried out using histological assessment by a pathologist). Biomarker identification The proteomics discovery experiment identified 48 candidate plasma protein biomarkers that were differentially expressed between endometriosis cases and both symptomatic controls and general population controls ( and ). Targeted mass spectrometry assays were then built against all candidates, and well-defined assays were successfully developed for 39 of these, plus 12 putative biomarkers taken from the literature. Analytical validation was successful if analytically acceptable levels of reproducibility and signal to noise were achieved. For the clinical validation phase, 51 protein biomarkers were analyzed. During two-way comparisons using a Mann–Whitney U-test, significant (P ≤ 0.05) differences were observed for 41 of the 51 candidate proteins across one or both clinical group comparisons. Ten protein biomarkers were found to be independently associated with endometriosis after adjusting for age and BMI. These biomarkers were assessed for any correlation with the other available clinical information (e.g. menstrual cycle length), and no significant strong or moderate correlations were observed (maximum correlation coefficient of 0.26). Model development and validation Regression models were developed to discriminate between endometriosis cases and the general population (Model 1) or symptomatic controls (Model 2), as shown in . A random forest model (Model 3) was subsequently developed using the same biomarkers as Model 2 and constructed by comparing severe endometriosis and symptomatic controls, before being applied to all stages of endometriosis. No proteins provided utility in both Model 1 and Models 2/3. For each model, the predicted probabilities for an endometriosis diagnosis were significantly higher (P < 0.0001) in the endometriosis group compared to the general population and symptomatic control groups. and the receiver operating characteristic (ROC) curves in compare the outcomes predicted by the models against the observed diagnosis of endometriosis, along with the performance metrics (AUC, Sn, Sp, PPV, NPV) for each model. Three of the 10 protein biomarkers demonstrated excellent utility in distinguishing between the two clinical groups in Model 1 (AUC = 0.993, 95% CI 0.988–0.998) compared to age and BMI alone (P < 0.001). In Model 2, age and BMI were significant independent associates of endometriosis (stages II–IV) (AUC = 0.649, 95% CI 0.589–0.709). After adjusting for age and BMI, the remaining seven biomarkers provided significant incremental value to Model 2 (AUC = 0.729, 95% CI 0.676–0.783, P < 0.01). The same seven biomarkers demonstrated significant diagnostic accuracy in Model 3, with an AUC of 0.997 (95% CI 0.994–1.000) for discriminating stage IV endometriosis from symptomatic controls.
Critically for clinical usage, Model 3 also showed strong diagnostic performance when applied to all stages of endometriosis (AUC for stage I: 0.852 (95% CI 0.811–0.893); stage II: 0.903 (95% CI 0.853–0.953); stage III: 0.908 (95% CI 0.852–0.965); stage IV: 0.997 (95% CI 0.994–1.000), respectively) . Power analysis indicates that the study is well-powered for subgroup analysis in stage I, II, and IV endometriosis groups, with power levels of 100%, 80.8%, and 91.3%, respectively, however, the power for the stage III endometriosis subgroup was below the desired threshold at 76.1%. Functional enrichment in the network of protein biomarkers for endometriosis A network analysis of the 10 protein biomarkers associated with endometriosis revealed that most can be broadly categorized into three groups: coagulation cascade, complement system, and protein–lipid complex. Specific associations include: Coagulation factor XII, Complement component C9, and Vitamin K-dependent protein S with the complement and coagulation cascades ( P < 0.01); and Afamin and Serum paraoxonase/arylesterase 1 with protein–lipid complex ( P < 0.001).
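The subgroup power figures reported above can be reproduced to a close approximation with the pwr package, assuming the stage subgroup size was treated as the per-group n in a two-sample comparison with Cohen's d = 0.5 and alpha = 0.05; that set-up is our inference from the Methods rather than a detail stated by the authors.

```r
# Approximate reproduction of the reported subgroup power calculation.
library(pwr)

subgroup_n <- c(stage_I = 241, stage_II = 65, stage_III = 58, stage_IV = 89)
sapply(subgroup_n, function(n) {
  pwr.t.test(n = n, d = 0.5, sig.level = 0.05, type = "two.sample")$power
})
# Yields roughly 1.00, 0.81, 0.76 and 0.91, consistent with the 100%, 80.8%,
# 76.1% and 91.3% quoted above.
```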
This study sought to develop a diagnostic blood test for endometriosis. A proteomics discovery workflow was used to identify and validate a novel panel of plasma protein biomarkers for the disease. Utilization of a large, clinically well-defined, independent cohort (n = 749) led to the development of three multivariate models, which demonstrated good to excellent performance for distinguishing endometriosis from both the general population and symptomatic controls. The models contained a panel of 10 protein biomarkers, which added significant value to clinical factors.
This research contributes to the development of non-invasive diagnostic tools for endometriosis, which will have significant implications in reducing diagnostic delay and providing screening tools for surveillance of disease recurrence. The primary objective of developing a diagnostic test for endometriosis was to distinguish symptomatic controls from endometriosis patients. To facilitate this, general population controls were included to allow investigation of the extremes of disease, thereby identifying potential protein biomarkers for the disease. A model distinguishing healthy women from those with endometriosis also has both biological and clinical relevance. The biology enables an understanding of disease pathophysiology, while clinically there is relevance in the context of fertility where a 3-fold increased incidence of endometriosis in women undergoing fertility treatments is observed . Infertility is a common consequence of endometriosis and a test to rule in or rule out endometriosis could help guide clinical decisions for assisted reproductive treatments. Model 1 (endometriosis vs the general population) demonstrates the biological association of these protein biomarkers with the disease state. Model 2 (stage II–IV endometriosis vs symptomatic controls) extends the utility of the test into a real-world scenario: differentiating the presence of endometriosis (lesions) from symptomatic pelvic pain in the absence of lesions. The inferior performance of Model 2 may reflect common symptom attributions between groups, and the marginal differences between patients with stage II endometriosis as compared to symptomatic patients, where no endometriosis is observed. Individuals with stage I endometriosis were specifically excluded to improve this, and further work is required to examine this. Nonetheless, Model 3, which applied alternative statistical modeling to allow for the complex interactions and non-linear relationships between predictors in Model 2 (and was built by comparing the extremes of disease, namely stage IV endometriosis vs symptomatic controls), demonstrates strong performance for discriminating disease across all stages of endometriosis, suggesting a clear association of the biomarkers with disease state. Laparoscopy is the gold standard for diagnosing endometriosis, but it is invasive and costly, carries risks, and is not readily accessible to all patients. Of known plasma biomarkers, CA-125 is sometimes used as a single biomarker for endometriosis. However, CA-125 has limited Sn and Sp, and elevated CA-125 levels can occur in multiple conditions such as ovarian cancer, pelvic inflammatory disease, and menstruation. A recent multicenter study showed that CA-125 differentiated endometriosis from symptomatic controls with Sn 61% at a pre-defined Sp of 60% , with better performance for stage III/IV endometriosis compared to stage I/II (AUC 0.795 vs 0.583, respectively). A 2016 systematic review and meta-analysis reported a pooled Sn of 52% and Sp of 93% for CA-125, with significantly higher Sn for stage III/IV compared to stage I/II endometriosis (Sn 63% vs 25%, respectively) . CA-125 can be effective for diagnosing stage IV endometriosis cases, such as those with dense pelvic adhesions or ovarian endometriomas but is less reliable for other forms of endometriosis, and its use may lead to potential false positives due to the presence of other conditions. Consequently, it is not widely recommended as a diagnostic or screening tool by major guidelines such as the ESHRE . 
The diagnostic models distinguishing endometriosis patients from symptomatic controls are particularly relevant for clinical practice. In comparison to known biomarkers, the multivariate biomarker models developed in the present study to distinguish endometriosis from symptomatic controls have sensitivities of 73% and 98% and specificities of 67% and 96%, for Models 2 and 3, respectively. Importantly for improving patient outcomes by enabling earlier and more accurate diagnosis, the results indicate that Model 3 has potential utility across earlier stages of the disease, with AUCs of ≥0.85 for stage I–III endometriosis. These results compare favorably to the performance of known biomarkers in terms of AUC. In the present study, model cut-offs used to assess performance metrics such as Sn and Sp were set at the maximum Youden index, but further optimization should be considered before use in a clinical setting. By providing a non-invasive diagnostic method to differentiate endometriosis from other causes of pelvic pain, such tools can help clinicians make more informed decisions about which patients should undergo invasive procedures like laparoscopy, and facilitate more targeted and effective treatment plans, enhancing overall patient care. The discrepancies observed between bivariate and multivariate results for four proteins (Complement component C9, Inter-alpha-trypsin inhibitor light chain, Selenoprotein P, and Proteoglycan 4) can be attributed to two key factors: unmeasured confounders and the suppressor effect. Bivariate analysis assesses the association between two variables without considering other confounding factors. A suppressor variable, unlike a typical confounder, does not directly affect the outcome but interacts with the predictor, altering the strength or direction of the association. Inclusion of a suppressor variable in a multivariate model can reveal the true relationship between the predictor and the outcome . Biologically, each of the 10 protein biomarkers identified in the diagnostic models for endometriosis plays a role relevant to disease pathophysiology, including in the coagulation and complement cascades, lipid metabolism, oxidative defense, immune regulation, and tissue homeostasis and morphogenesis. Of the 10 proteins in the panel, only three (Selenoprotein P, Neuropilin-1, and Serum paraoxonase/arylesterase 1) have previously been directly linked with endometriosis, as discussed below. In the complement cascade, Complement component C9 is required for target cell lysis during complement activation . In Model 2, Complement component C9 has a positive association with endometriosis. Complement dysregulation has been implicated in the pathophysiology of endometriosis . For the coagulation cascade, Coagulation factor XII is crucial for fibrin clot formation . Similarly, Vitamin K-dependent protein S has a role in regulating coagulation . In Model 1, Vitamin K-dependent protein S showed a negative association with endometriosis, whereas Coagulation factor XII had a negative association in Model 2. A previous small study failed to find a statistically significant difference in blood Vitamin K-dependent protein S levels between endometriosis patients and controls . Hemoglobin subunit beta, a component of hemoglobin, indirectly supports coagulation through its contribution to overall blood function . In Model 1, Hemoglobin subunit beta exhibited a positive association with endometriosis. 
Afamin, Selenoprotein P, and Serum paraoxonase/arylesterase 1 play roles in lipid metabolism and oxidative defense . While earlier work found no significant difference in mean serum Afamin concentrations between endometriosis and control groups using ELISA , Afamin correlated negatively with endometriosis in Model 2. A positive correlation was observed for Selenoprotein P in Model 2, and the bivariate results for Selenoprotein P are consistent with previous findings from a small study (n = 8), where downregulated gene expression was reported in tissue samples from patients with endometriosis . Additionally, a positive correlation exists between Serum paraoxonase/arylesterase 1 and endometriosis in Model 1. Interestingly, another study reported reduced Serum paraoxonase/arylesterase 1 activity (not concentration) in women with endometriosis compared to controls . This apparent discrepancy may arise from assessing different aspects of Serum paraoxonase/arylesterase 1 within distinct biological contexts or using varied measurement methods. Neuropilin-1, Inter-alpha-trypsin inhibitor light chain, and Proteoglycan 4 each contribute significantly to immune regulation . Neuropilin-1 also plays important roles in vascular development , while Inter-alpha-trypsin inhibitor light chain contributes to extracellular matrix organization and Proteoglycan 4 acts as a boundary lubricant . The negative association observed for Neuropilin-1 in Model 2 contrasts with previous findings of elevated serum Neuropilin-1 in endometriosis patients when compared to healthy controls by ELISA . Both Inter-alpha-trypsin inhibitor light chain and Proteoglycan 4 also showed negative associations with endometriosis in Model 2. Further investigations are warranted to unravel the precise mechanisms underlying the associations of these protein biomarkers with endometriosis and their potential therapeutic implications. The results presented here, which contrast with the literature, highlight the challenges of biomarker analysis in small cohorts and across different laboratories. Differences between published studies and the results in this manuscript may also be due to pre-analytical and analytical factors. Varying gene expression profiles across different tissues affected by endometriosis, along with post-transcriptional regulation and translation efficiency, may contribute to the divergent results. The strengths of this study are its robust sample size, independent cohorts, a well-defined clinical validation cohort, and high-performing models using simple components. The use of clinical variables in the models was deliberately limited to age and BMI because this information can be easily and precisely determined. In concert, the exclusion from the models of other clinical information such as menstrual cycle stage, exogenous hormone use, or family history of endometriosis avoids potentially imprecise variables, whilst also ensuring the test can be widely used. The robust sample size enhances statistical power, leading to more reliable conclusions. The utilization of independent cohorts strengthens the findings by validating the biomarkers across different populations, thereby minimizing bias and increasing generalizability. This study also benefits from a diagnosis method grounded in laparoscopy and histopathology, ensuring a more reliable assessment of disease absence or presence. 
The use of both elastic-net logistic regression and random forest algorithms in the analysis underscores the robustness and versatility of the models, providing a comprehensive evaluation of the diagnostic potential of the identified biomarkers. The random forest approach confirms the importance of the biomarkers, but may be dataset specific, and further validation is warranted. The study also has potential limitations. The participants were mostly of European ethnicity, and the study was not powered to detect differences across ethnic groups. The use of minimal clinical variables in the models may reduce diagnostic performance; however, information on more complex clinical variables may not always be available or consistently measured. It is also possible that some general population controls might have endometriosis, which could potentially skew results. Given the nature of the condition, the prevalence of asymptomatic endometriosis at the general population level is difficult to ascertain, but has been reported to be as high as 11% . While the study was not specifically designed to stratify patients based on the stage of endometriosis, it is well-powered for subgroup analysis of stage I, II, and IV endometriosis. The experimental design used matched samples within each cohort; however, the delay between sample collection and processing and differences in sample storage could affect the biomarker concentrations observed in this study. Further analysis is required to enable generalizability of the findings to other populations or settings, including stratification of patients by type or stage of endometriosis. This study represents an advancement toward precise non-invasive endometriosis diagnosis and personalized care, achieved through the integration of proteomics and clinical expertise. A panel of novel plasma protein biomarkers was identified that enabled the development of diagnostic models demonstrating strong discriminatory capabilities. The reported functions of these protein biomarkers offer potential insights into endometriosis pathogenesis. Further validation of these biomarkers will fortify the robustness and reliability of this diagnostic tool and enable its integration into clinical practice, benefiting individuals affected by endometriosis and paving the way for improved patient care.
Decentralising paediatric hearing services through district healthcare screening in Western Cape province, South Africa
bbe9c928-da92-42ca-8d98-ae2cbebefc08
8252164
Pediatrics[mh]
Hearing loss is the second most prevalent developmental disability, affecting approximately 15.5 million children under the age of 5 years globally. Approximately 95% of children with developmental disabilities reside in low- and middle-income countries (LMICs). Sub-Saharan Africa has one of the highest prevalence rates of hearing loss, with an estimated 10.3 million children under the age of 10 years who suffer from permanent disabling hearing loss. Undetected and untreated hearing loss has a major negative impact on a child’s speech, language, cognitive, educational and socio-emotional development. Hearing healthcare services in LMICs are not prioritised by health systems overwhelmed by life-threatening diseases. Identification of hearing loss in children is often impeded in LMICs because of the absence of well-managed hearing screening programmes, the impact of poverty and malnutrition on hearing and the lack of public and professional awareness of hearing loss and its devastating effects in children. In addition, poor hearing health infrastructure and resources (personnel and equipment) and geographical barriers such as distance lead to limited accessibility of hearing healthcare services. , Children born into a lower socioeconomic status have considerably less access to non-emergency health resources. , , Furthermore, the risk of poor follow-up rates for hearing assessments and timely intervention is higher in families who need to travel greater distances. , Compared with high-income countries, LMICs carry a disproportionate share of the hearing loss burden and have a limited number of well-trained hearing healthcare professionals. The numbers of audiologists and Ear–Nose–Throat (ENT) specialists are reported to be lowest in African countries, with an average estimate of one audiologist for every 0.8 million people and one ENT specialist for every 1.2 million people in sub-Saharan Africa. Over the 10-year period between 2005 and 2015, there was no substantial improvement in these numbers. In LMICs such as South Africa, healthcare facilities are typically tiered into three main levels of care: primary, such as point-of-entry clinics; secondary, which includes district and regional hospitals; and tertiary, which encompasses specialised services. As a result of the limited number of primary-level hearing screening sites in these settings, children are often referred directly to a centralised tertiary-level hospital for initial hearing screening, when available. Referrals for primary care services such as hearing screening at central tertiary-level hospitals add to growing waiting lists for specialised care such as diagnostic hearing assessments and hearing aid fittings. Direct referrals to a central tertiary hospital often imply that parents and caregivers must travel further to access hearing healthcare infrastructure, which may in turn lead to poor follow-up rates, late diagnoses and late access to hearing technology. Childhood hearing loss impedes speech, language and academic development, and early auditory stimulation is crucial to minimise the adverse effects of hearing loss in children. Access to sustainable hearing healthcare services in LMICs is an important public health priority. Innovative service delivery models, with an emphasis on decentralisation, are required to develop sustainable services in these settings. 
Decentralisation is the transfer of responsibility for planning, management and financing from central to peripheral levels of government and has been a key health sector reform in a wide range of LMICs over the past decade. Despite decentralisation being implemented as a strategy across many health systems, its impact on health equity is still unclear. However, it has been suggested that in order to minimise such inequity, government, health sectors and communities must address socio-economic and financial barriers and implement complementary mechanisms alongside decentralisation. The growing burden of hearing loss in LMICs is disproportionate to the lack of hearing healthcare services available and current efforts to reach underserved communities are inadequate. If hearing healthcare services are not available at primary-level healthcare clinics, many communities in LMICs do not have access to these services at all and tertiary-level services are being overburdened with screening services that should be conducted at a lower level of care. Therefore, approaches that incorporate the delivery of community-based hearing care in order to decentralise hearing healthcare services are a priority. , This study aimed to compare a centralised tertiary model of hearing healthcare to a decentralised model through district hearing screening for children in the Western Cape province, South Africa. The effects of a decentralised model of hearing healthcare were measured in terms of attendance rates for initial hearing screening, patient travelling distance, number of referrals to a tertiary-level hospital and hearing outcomes. Study design A pragmatic quasi-experimental study design was implemented, with a 7-month control group receiving standard hearing service provision at a tertiary hospital (from June 2018 to December 2018), compared with a 7-month intervention group where hearing screening was offered at a district hospital (from June 2019 to December 2019). Setting The Cape Town metropole has a population of 4 067 774 and is situated in the Southern Peninsula of the Western Cape province, South Africa. The metropole incorporates eight health subdistricts with eight district-level hospitals, of which only three have audiology services. Victoria Hospital is a district hospital with 159 beds in the South Peninsula health district of the metropolitan region and currently has no audiology services. No audiological services are available at any of the primary healthcare clinics or maternity and obstetric units (MOU) in this area, which results in referrals for initial hearing screening of older children based on risk factors or concerns for hearing loss. All patients aged 0–13 years who are from the district hospital catchment area and who need audiology services are referred directly to Red Cross War Memorial Children’s Hospital, which is a central tertiary-level hospital in Cape Town. The Western Cape has three tertiary academic hospitals. Red Cross War Memorial Children’s Hospital is one of two dedicated paediatric tertiary-level academic hospitals in sub-Saharan Africa and serves as a central referral hospital for paediatric patients across the entire Western Cape who require specialised healthcare services. The Department of Audiology at this tertiary facility assesses and provides hearing rehabilitation for approximately 300 children per month. Referrals are received from district hospitals, primary level clinics and MOUs. 
Both the district and tertiary hospitals in this study are situated in an LMIC and serve mostly children from the public healthcare sector who do not have access to private medical insurance. Study population and sampling strategy Consecutive sampling was used to select participants for both the tertiary and district groups. Tertiary group sampling All patients who were referred to the tertiary hospital via email for initial hearing screening from the district hospital catchment area during the control period (June 2018 to December 2018), and who attended their hearing screening appointment at the tertiary hospital, were included in the tertiary group, regardless of the reason for referral. These patients were retrospectively selected from the audiology departmental electronic database at the tertiary hospital to form the tertiary group of 315 paediatric patients. District group sampling All consecutive referrals for initial hearing screening from facilities that fell within the district hospital catchment area were sent via email to the tertiary hospital during the intervention period (from June 2019 to December 2019). These referrals were selected for the decentralised hearing screening project at the district hospital. Only referrals who met the specified inclusion criteria for the district hearing screening project were included in the district group. The primary method of hearing screening for the district group utilised otoacoustic emissions (OAEs), which assess cochlear function; therefore, referrals for initial screening of high-risk patients who presented with risk factors for retro-cochlear pathology or auditory neuropathy spectrum disorder (e.g. prematurity < 34 weeks gestation, low birthweight, hyperbilirubinaemia and congenital syndromes associated with hearing loss) were excluded and booked at the tertiary hospital. Patients with known middle ear pathology such as otitis media or otorrhoea were also excluded from the district group, as they were likely to fail screening because of middle ear abnormality and would have been better served at the tertiary hospital with a diagnostic hearing assessment. As a result of limited time and space available at the district hospital, only 10–15 paediatric patients were booked per afternoon twice per month for the 7-month intervention period, which equated to a sample size of 190 referred patients. Parents of referred children were contacted telephonically by the tertiary hospital’s audiology clerk to arrange an appointment for a hearing screening at the district hospital during the intervention period (from June 2019 to December 2019). Children who attended their initial hearing screening appointment at the district hospital were included and formed the district group of 158 patients. The hearing screening at the district hospital was conducted by two audiologists from the tertiary hospital. Most of the hearing screening appointments coincided with routine follow-up paediatrician visits at the district hospital. Data collection An electronic patient database from the Department of Audiology at the tertiary hospital was used to retrospectively review data of the patients from the district hospital catchment area who were referred to the tertiary hospital for initial hearing screening during the control period (from June 2018 to December 2018). 
Data included demographic information, reason for referral, initial hearing screening results and number of children from the district hospital catchment area who were referred directly to the tertiary hospital. Only initial OAE hearing screening results were included for the tertiary group, as diagnostic testing was carried out on the same day at the tertiary hospital if a patient referred OAE screening unilaterally or bilaterally, instead of scheduling a rescreen 2 weeks later at the tertiary hospital. Diagnostic assessment results were also included for those children who referred initial OAE screening unilaterally or bilaterally in the tertiary group. The same electronic patient database was used to review the number of children from the district hospital catchment area who were referred to the tertiary hospital for initial hearing screening during the 7-month intervention period at the district hospital (from June 2019 to December 2019). A hearing screening data sheet for the 7-month intervention period at the district hospital (from June to December 2019) was used to record patient data in terms of demographics, geographical area of residence, reason for referral, OAE screening results and need for further diagnostic testing. Patients in the district group who referred the initial screening unilaterally or bilaterally underwent tympanometry to check their middle ear status and were referred to the paediatrician at the district hospital on the same day as the initial hearing screening in order to treat any middle ear pathology. These patients were rescreened at the district hospital after 2 weeks, and if another unilateral or bilateral refer result was obtained on the rescreen, they were referred for diagnostic hearing assessment at the tertiary hospital. Equipment The Maico Eroscan® OAE test system was used for initial hearing screening during both the control and intervention periods. The system incorporates a screening function with a four-frequency (2000 hertz [Hz] – 5000 Hz) low-to-high distortion-product OAE testing protocol and conducts a fast, automatic test showing a pass or refer result. The signal-to-noise ratio is set at 6 decibels [dB], and a pass result is obtained if three frequencies pass. The reliability and validity of OAEs for use in a screening setting are well-established. , Data analysis Data were entered into Microsoft Excel 2016 (Microsoft Corp, Washington) and descriptive analysis was performed. Data were imported into the Statistical Package for the Social Sciences (SPSS) (version 26.0. New York, IBM Corp.) for inferential analysis. Pearson’s Chi-square test was utilised for categorical data, whereas Student’s t -test was utilised for parametrical numerical data. A p -value of ≤ 0.05 was considered significant. Ethical considerations The study was approved by the University of Pretoria Research Ethics Committee of the Faculty of Humanities (HUM024/0419), the University of Cape Town Human Research Ethics Committee (365/2019), Red Cross War Memorial Children’s Hospital Ethics Committee (RCC203) and the Western Cape Health Research sub-directorate (WC_201906_023). The tertiary hospital in this study has an Outreach Policy Agreement with all Western Cape Health Facilities, which was used in conjunction with a letter requesting institutional permission from the district hospital to conduct an outreach OAE-screening service there twice per month for 7 months. A letter of informed consent was issued to the caregivers of participants prior to data collection. 
Informed assent was obtained from children over the age of 7 years.
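To make the pass/refer logic of the OAE screening protocol described under Equipment concrete, a small sketch is given below. It assumes that "three frequencies pass" means at least three of the four test frequencies reach the 6 dB signal-to-noise criterion, and the specific frequencies shown are illustrative assumptions; the actual decision is made automatically by the screening device.

```python
# Sketch of a four-frequency DPOAE pass/refer rule: a frequency "passes" when
# its signal-to-noise ratio (SNR) is at least 6 dB, and the ear passes overall
# when at least three of the four frequencies pass. Frequencies and inputs are
# illustrative assumptions; the Eroscan device applies its own logic internally.
TEST_FREQUENCIES_HZ = (2000, 3000, 4000, 5000)  # assumed four-frequency protocol
SNR_CRITERION_DB = 6.0
MIN_PASSING_FREQUENCIES = 3

def oae_screen_result(snr_by_frequency_db):
    """Return 'pass' or 'refer' for one ear given the SNR (dB) at each test frequency."""
    passing = sum(
        1 for f in TEST_FREQUENCIES_HZ
        if snr_by_frequency_db.get(f, float("-inf")) >= SNR_CRITERION_DB
    )
    return "pass" if passing >= MIN_PASSING_FREQUENCIES else "refer"

# Example: one frequency below criterion still yields an overall pass.
print(oae_screen_result({2000: 8.2, 3000: 5.1, 4000: 9.7, 5000: 7.4}))  # -> pass
```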
Demographics The mean age of patients at the time of initial hearing screening was 48.4 months (39.0 standard deviation [s.d.]; range: 1–156) and 52.3 months (35.1 s.d.; range: 1–144) in the tertiary and district groups, respectively. The tertiary and district groups were similar in terms of age, gender and language distribution . Attendance rates An attendance rate of 83.2% (158/190) was found during the 7-month intervention period for patients attending the district hearing screening project, which was significantly higher than the attendance rate of 70.2% (315/449) for patients from the district hospital catchment area who were seen for initial hearing screening at the tertiary hospital during the control period ( p < 0.001). Travel distance The mean travel distance for patients in the district group commuting from home to the district hospital was 12.6 km (7.7 s.d.; range: 1.2–36.8). This distance was significantly shorter than the travel distance of 19.1 km (9.1 s.d.; range: 5.1–37.6), which patients would have had to travel from home to the tertiary hospital ( p < 0.001). Number of initial hearing screening referrals to the tertiary hospital A total of 1729 patients were referred from facilities across the Western Cape to the tertiary hospital during the control period (from June 2018 to December 2018), of which 449 (26.0%) referrals were for initial hearing screening from the district hospital catchment area. Throughout the intervention period (from June 2019 to December 2019), during which the district screening project was being conducted, the tertiary hospital received a total of 1601 referrals from facilities across the Western Cape province, with a significant decrease to 114 (7.1%) referrals for initial hearing screening from the district hospital catchment area ( p < 0.001). Reasons for referral The reasons for referral for initial hearing screening are depicted . During the control period ( n = 315), 115 referrals (36.5%) were received for reasons that were excluded from the intervention period analysis. When excluding these 115 referrals, the most common reasons for referral in the tertiary group were speech delay (35.0%) and behavioural or school-related concerns (28.5%) ( n = 200). In the district group, speech delay (33.5%) and meningitis (33.5%) were the most common reasons for referral ( n = 158). Hearing screening outcomes for the control and intervention period Outcomes of the initial OAE hearing screenings for the tertiary group and diagnostic assessment results for patients who referred initial OAE screening unilaterally or bilaterally from June 2018 to December 2018 are presented . For the tertiary group, most patients ( n = 248/315, 78.7%) passed the initial OAE screening bilaterally. The number of patients who required diagnostic assessment in the tertiary group was 67 (21.3%). Of the 67 patients who required diagnostic assessment, 54 (80.6%) attended their appointments. Half of the patients ( n = 27/54, 50%) were diagnosed with mild conductive hearing loss. 
Outcomes of the initial OAE screenings from the intervention period at the district hospital and the diagnostic assessment results for patients referred to the tertiary hospital after a unilateral or bilateral refer result on rescreening at the district hospital, are also presented . For the district group, most patients ( n = 127/158, 80.4%) passed OAE screening bilaterally, whilst less than 10% referred OAE screening in both ears. The follow-up attendance rate for rescreening at the district hospital 2 weeks after the initial screening was 80.8% ( n = 21/26). The total number of patients in the district group that needed referral to the tertiary hospital for specialised diagnostic assessment was 15 ( n = 15/158, 9.5%), of which 11 ( n = 11/15, 73.3%) attended the diagnostic hearing assessment appointment. Of these 11 patients, nearly half ( n = 5/11, 45.5%) presented with mild conductive hearing loss. 
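As a worked illustration of the attendance-rate comparison reported above (158 of 190 district referrals attended vs 315 of 449 tertiary referrals), the sketch below applies a Pearson chi-square test to the corresponding 2 × 2 table. It is an illustrative re-calculation in Python rather than the study's SPSS output.

```python
# Chi-square comparison of initial-screening attendance between pathways,
# using the counts reported above (district: 158/190; tertiary: 315/449).
from scipy.stats import chi2_contingency

attended = {"district": 158, "tertiary": 315}
referred = {"district": 190, "tertiary": 449}
table = [
    [attended["district"], referred["district"] - attended["district"]],
    [attended["tertiary"], referred["tertiary"] - attended["tertiary"]],
]
chi2, p, dof, _ = chi2_contingency(table)
for group in ("district", "tertiary"):
    print(f"{group}: {attended[group] / referred[group]:.1%} attended")
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```

Run on these counts, the test is consistent with the significant difference in attendance reported above.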
This study explored the effect of decentralising hearing healthcare services from a tertiary-level hospital to a district-level hospital in the Western Cape province, South Africa. Decentralised hearing screening resulted in increased attendance rates for initial hearing screening, shorter travelling distances for patients and decreased referral rates to a tertiary-level hospital. Attendance rates were significantly higher for initial hearing screening at the district hospital when compared with initial screening at the tertiary hospital. Non-attendance can result in underutilisation of healthcare provider time and can lead to longer appointment waiting times for patients. Furthermore, especially in severely resource-constrained settings typical of LMICs, non-attendance delays the identification and diagnosis of, and timeous intervention for, healthcare conditions. The Health Professions Council of South Africa Early Hearing Detection and Intervention Guidelines suggest that a follow-up return rate of 70% or higher for hearing screening is considered ideal, but that the feasibility of attaining a high follow-up rate is influenced by various factors such as access to healthcare facilities and personal constraints such as poverty. The follow-up attendance rate for rescreening at the district hospital two weeks after the initial screening was high (80.8%). This could be attributed to the fact that the second screening was also conducted at a community level and coincided with a paediatrician visit to follow up on middle ear pathology for the majority of patients who referred OAE screening bilaterally. A high follow-up attendance rate (89.4%) for hearing screening was also found in a recent South African community-based study when the rescreening was conducted at a community level as opposed to a public healthcare institution. Patients who needed referral to the tertiary hospital for specialised diagnostic assessment had an attendance rate of 73.3%, which is in line with a previous South African community-based hearing screening study that found an attendance rate for diagnostic assessments of 75.8%. Patient travelling distance was significantly shorter to the district hospital than to the tertiary hospital. Access to services is one of the leading barriers to hearing healthcare in underserved communities. 
The costs involved in attending healthcare appointments, both in terms of time taken off from work and travel costs for patients with limited resources, remain a further challenge in accessing healthcare in LMICs. Therefore, primary healthcare is an important strategy employed in South Africa, in order to provide more accessible patient-centred services closer to home. Community-delivered hearing healthcare models have been identified as an important strategy to increase the accessibility and affordability of hearing healthcare in underserved communities. , The inaccessibility of hearing healthcare services at a primary or district level, which places severe strain on tertiary-level specialised services, may be alleviated by decentralising services. The results of this study corroborate this. The number of direct referrals for initial hearing screening from the district hospital catchment area to the tertiary hospital significantly decreased after implementation of the decentralised hearing screening project at the district hospital. The decreased number of referrals to the tertiary hospital for initial hearing screening supports decreased waiting times and improved capacity to provide specialised diagnostic hearing assessments and intervention to patients requiring tertiary-level care. More than 80% of children who attended the initial hearing screening during the intervention period at the district hospital passed initial OAEs bilaterally. This high pass rate is a positive outcome for the premise of decentralising hearing screening services to a more appropriate level of care. The majority of patients (78.7%) in the tertiary group also passed initial OAE screening, which supports the premise that hearing outcomes are similar for initial hearing screening regardless of the level of care where hearing screening is conducted. Telehealth applications are available for hearing assessment of older children; however, utilising OAEs in a screening setting is advantageous in terms of the time taken to conduct testing and the minimal training required. The referral rate for diagnostic hearing assessment at the tertiary hospital for the children who attended hearing screening during the intervention period at the district hospital was 9.5%. This percentage is higher than the 5.4% referral rate reported in a South African community-based hearing and vision screening study, which utilised smartphone-based pure tone audiometry screening. A possible reason for the higher referral rate is the method of screening. Otoacoustic emissions screening is sensitive to middle ear pathology and is more likely to fail in the presence of abnormal middle ear function. Referral for diagnostic testing in the tertiary group (21.3%) was more than twice as high as in the district group (9.5%). The higher number of diagnostic assessments in the tertiary group was because no opportunity for rescreening after two weeks was provided: all patients who referred initial screening unilaterally or bilaterally, or for whom OAE screening results could not be elicited, underwent diagnostic assessment on the same day in order to minimise follow-up appointments at the tertiary hospital. Providing hearing screening at a district level increased access to medical treatment for all children who presented with middle ear pathology, as evidenced by abnormal tympanometry results on the day of initial OAE screening. 
These children were assessed and treated by the paediatrician on the same day, instead of waiting for months to get an ENT appointment at the tertiary hospital. Thus, middle ear pathology was treated timeously and effectively at a more appropriate level of care, decreasing the added burden to long tertiary waiting lists. Early identification of middle ear pathology is a primary-level healthcare service, and it would be more appropriate to refer children even closer to home to their nearest community healthcare centres for treatment. This would in turn minimise the burden on district-level staff and address the problem of preventable hearing loss in children at grassroots level. A limitation of this study was that tertiary-level audiologists conducted the hearing screening at the district hospital during the intervention period. Future studies should assess the training needs of community healthcare workers and nurses to conduct hearing screening at district hospital facilities. The premise of task-shifting through community-based hearing screening programmes has been proposed as a way to improve access to hearing healthcare. , Community healthcare workers and nurses can be trained to screen for hearing loss using mobile health technology via home-based visits to reach vulnerable communities in LMICs, thereby improving access to hearing healthcare services and reducing the demands on the limited number of hearing healthcare professionals in South Africa. In addition, no sample size calculation was conducted, and group size was pragmatically determined by the number of patients over the specified time periods. Decentralised hearing screening programmes conducted at the appropriate level of care can increase access to hearing healthcare, reduce patient travelling distances and associated costs and reduce the burden on tertiary-level hospitals. Accessible hearing screening yields higher attendance rates, leading to more effective and timeous treatment of the adverse effects of childhood hearing loss.
An overview of the challenges and key initiatives in hepatology practice in the UK in 2022: a cautionary tale, but reasons for optimism – British Association for the Study of the Liver (BASL) Annual Meeting 2022 Conference Report
030c5191-4f5e-42bb-8503-07745172b443
11046520
Internal Medicine[mh]
Morbidity and mortality from chronic liver disease have significantly increased in the UK over the past 50 years, in stark contrast to other common conditions, such as heart disease. There is a need to develop liver services to meet this increasing demand while also developing early detection strategies aligned with public health policies to prevent the development of significant liver disease. This report highlights the major themes in work presented at the 2022 British Association for the Study of the Liver (BASL) Annual Meeting and summarises the challenges facing the specialty. We discuss innovative work relating to sustainable hepatology, telemedicine, hepatology training and the growing role of allied health professionals (AHPs) in the care of hepatology patients as the specialty continues to develop. These challenges and innovations are also widely encountered in other primary and secondary care specialties across the UK. Access to specialist hepatology services A major theme across the conference was regional variability in liver services across the UK. The Trainee Collaborative for Research and Audit in Hepatology UK (ToRcH-UK) presented a subgroup analysis from their recently completed UK audit of decompensated cirrhosis admissions, comparing patients presenting with hepatic encephalopathy (HE) with those presenting with ascites and variceal haemorrhage. Patients presenting with HE to non-specialist centres were less likely to survive their admission compared with other patients with decompensated cirrhosis. This was not the case at specialist centres. There were also significant differences in care provision for patients with HE at non-specialist centres compared with specialist centres, where they were more likely to be looked after by a gastroenterologist/hepatologist on a dedicated specialist ward. Variation between different geographical areas and between specialist and non-specialist centres was also demonstrated in the likelihood of patients with end-stage liver disease being referred for transplant assessment. Additionally, the UK-PBC audit highlighted variability between specialist and non-specialist centres in prescribing second-line drugs for patients with inadequate response to first-line medication. It was noted that there was no significant variation in the prescribing practices in England and Wales compared with Scotland. These data highlight the need to standardise care delivered across the UK while improving access to specialist services to deliver better outcomes for patients with liver disease. The Royal College of Physicians Improving Quality in Liver Services (IQILS) accreditation process represents a possible strategy to reduce variability between hospitals and regions by establishing requisite standards for hepatology services and supporting hospitals to achieve these. It is encouraging that many hospitals have already signed up to IQILS and have become accredited. NHS Trusts will need to commit time and funding to service development to meet these standards and improve outcomes for their patients. The association between deprivation and liver disease Liver disease is strongly associated with deprivation, which might explain some of the regional variations in hospital admissions relating to liver disease. In Leeds, a strong association was demonstrated between areas of deprivation, high alcohol use, obesity and severe liver disease.
Interestingly, however, healthcare utilisation by those in the most deprived cohorts was lower than that by those from less deprived areas. This suggests that current strategies to prevent liver disease are targeting the wrong cohorts of patients. Developing services and increasing workforce within areas of high deprivation could improve healthcare delivery and maximise the impact of early detection pathways. Sustainability Liver disease is associated with significant economic ramifications and is set to become the largest cause of working-years lost in Europe. Similarly, it has enormous implications for the environmental cost of hospital admissions, and there is an urgent need for UK healthcare to become more environmentally sustainable. Within our own specialty, it is clear that an inpatient admission of a patient with decompensated liver disease is associated with significant carbon emissions. In particular, admissions with specific clinical deterioration that require emergency interventions, such as variceal bleeding or intensive care admission, have substantial ecological and resource implications. Not only does this highlight the importance of preventing liver disease, but it also emphasises the need to utilise strategies to reduce both the associated carbon footprint and the financial cost to the NHS. Telemedicine is increasingly being utilised to reduce patient travel time and is associated with increased patient satisfaction. Risk-stratifying patients using non-invasive modalities to avoid carbon-generating procedures, such as endoscopy, combined with regular medication review should reduce carbon emissions. Another novel approach is the use of implanted long-term ascitic drains (LTAD) rather than ambulatory paracentesis services. The REDUCe study demonstrated feasibility with preliminary evidence of LTAD effectiveness, safety, acceptability and reduced health resource utilisation. This is now the basis of a National Institute for Health and Care Research (NIHR) trial aiming to establish definitive evidence to support this development. These progressive approaches will improve not only the carbon footprint of the NHS, but possibly also patient care and satisfaction. Shape of training The changes to gastroenterology higher specialist training are a welcome step in hepatology training. Although the changes will see a shortened training scheme, significant efforts have gone into curriculum design to ensure that trainees are well equipped to work across all aspects of hepatology, including transplant hepatology, at the time of Certificate of Completion of Training (CCT). A survey of UK trainees found that over half of gastroenterology trainees intended to specialise in hepatology. However, it also demonstrated that trainees were more likely to prefer a consultant job in a specialist centre rather than in a non-specialist centre. How do we ensure that all patients with liver disease have the same access to services? From a convenience, cost and environmental perspective, it is impractical to expect patients to travel large distances to see a specialist. The potential solutions to this problem were discussed in the ‘Variations in UK Liver disease’ panel session. Among the solutions discussed was the potential for job plans split across specialist and non-specialist centres.
However, given that nearly 50% of advertised UK consultant gastroenterologist/hepatologist posts were unfilled in 2020, split-site job plans, particularly over significant geographical areas, might not be attractive to trainees applying for consultant posts. Developing formal regional networks of specialist centres and non-specialist centres has the potential to improve access to specialist services. Accessible multidisciplinary meetings across the network will increase dialogue between centres and likely improve patient care. Combining this approach with ‘levelling up’ of non-specialist services through IQILS and innovative job plans will hopefully attract future consultants to work in areas of need. Additionally, it was noted that future efforts to develop a sustainable hepatology workforce should focus on the current body of consultants within district general hospitals (DGHs) who deliver the majority of care to patients with decompensated liver disease in the UK. These consultants have ongoing educational needs, which should be recognised, in addition to access to collective research opportunities and collaborative engagement with referral centres as required. Embracing technology to improve patient care Multiple presentations highlighted the potential role for telemedicine in the care of hepatology patients. Examples included: • electronic mental health screening questionnaires, which allow prompt communication and resolution of concerns within outpatient clinics for patients with primary sclerosing cholangitis • the emerging role for app technology in early detection of complications associated with cirrhosis, with one group demonstrating promising results for the early identification of overt hepatic encephalopathy • the use of accelerometers and virtual follow-up calls to monitor exercise engagement within the ExaLT trial • the use of big data to improve/inform our detection programmes. For example, a group in Somerset demonstrated changes in alanine aminotransferase and platelet count that, over time, could predict advanced chronic liver disease with high sensitivity and specificity. Embracing such technology should not only improve patient care, but also reduce costs to the NHS and the associated carbon footprint. Allied health professionals BASL 2022 had the largest attendance of AHPs on record. The main programme featured many elements of their involvement in liver disease care, demonstrating the integral role that they have within the multidisciplinary team. The prevalence of physical frailty is high among those with liver disease and is associated with poor outcomes. Regular assessment of physical frailty utilising quick and easy-to-use measures, such as the Liver Frailty Index and Duke Activity Status Index, was highlighted as a way of identifying those requiring access to AHP care. Although preventative medicine has been central to NHS initiatives for a long time, it is evident that this needs to be moved to the forefront of liver care. Lifestyle modification, including engaging patients in physical activity, has the potential to reduce the development and progression of liver disease as well as liver-related mortality. Investment in AHPs to engage and support patients with liver disease in these lifestyle modifications, as well as to treat those at risk of becoming physically frail, will not only likely improve patient care, but also reduce the NHS carbon footprint.
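As an aside on the non-invasive risk stratification mentioned in the sustainability discussion above, such approaches typically rely on blood-test-based fibrosis indices rather than endoscopy or biopsy. The presentations summarised here do not specify which index was used, so the sketch below uses FIB-4, one widely used example, purely for illustration; the input values are hypothetical and the cut-offs shown are commonly quoted but vary between guidelines.

```python
# Illustrative calculation of the FIB-4 index, a widely used non-invasive
# fibrosis score, shown only as an example of blood-test-based risk
# stratification. Input values are hypothetical; thresholds are indicative.
from math import sqrt

def fib4(age_years: float, ast_iu_l: float, alt_iu_l: float, platelets_10e9_l: float) -> float:
    """FIB-4 = (age x AST) / (platelets x sqrt(ALT))."""
    return (age_years * ast_iu_l) / (platelets_10e9_l * sqrt(alt_iu_l))

def interpret(score: float) -> str:
    # Commonly quoted cut-offs (approximate): <1.30 lower risk, >2.67 higher risk
    if score < 1.30:
        return "lower risk of advanced fibrosis"
    if score > 2.67:
        return "higher risk of advanced fibrosis; consider specialist assessment"
    return "indeterminate; consider further non-invasive testing (e.g. elastography)"

score = fib4(age_years=58, ast_iu_l=64, alt_iu_l=50, platelets_10e9_l=110)
print(f"FIB-4 = {score:.2f}: {interpret(score)}")
```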
Research There was significant focus on the inequity in research and research delivery in hepatology across the UK. Research in hepatology is not reflective of the diseases most likely to cause cirrhosis, with relatively few studies investigating alcohol-related liver disease compared with autoimmune diseases. Additionally, most research is conducted in a small number of specialist centres. A paradigm shift is required to deliver clinically relevant studies that improve outcomes for all of our patients. Developing research networks that focus on inclusivity, such as the Trainee Collaborative for Hepatology Research and Audit UK, is vital to achieve this goal. The recent NIHR funding call focused on proposals that aimed to develop relationships with less research active institutions. We are hopeful that this will start to address this inequity.
Despite the undoubted challenges facing the hepatology community, there is a clear focus on reducing healthcare inequities and improving outcomes for our patients. Emphasis has been placed on finding solutions to these inequities and providing quality, cost-effective, multidisciplinary and environmentally friendly care for our patients. Similarly, it is recognised that the far-reaching effects of liver disease will necessitate closer working with colleagues across all specialisms to deliver comprehensive, holistic, patient-centred liver care to those who are often marginalised and less likely to encounter healthcare services in a timely manner. PNB has received educational honoraria from Takeda. ODT has received educational honoraria from Gilead Sciences. He is also a former BASL trainee representative.
A qualitative study exploring strategies to improve the inter-professional management of diabetes and periodontitis
5c7d14df-699f-4d54-9639-3a542ae6d8ba
7059110
Health Communication[mh]
Introduction Diabetes has been recognised as a risk factor for periodontitis (advanced gum disease) since the early 1990s, with the risk of periodontitis being increased 2–3 times in individuals with poorly controlled diabetes compared to individuals without . Periodontitis is a distressing chronic inflammatory disease of the gums and other supporting tissues of the teeth (including the alveolar jaw bone) which results in progressive tissue damage and ultimately tooth loss if untreated . Notwithstanding the effects of periodontitis on quality of life , it also impacts on a number of systemic conditions, including diabetes and cardiovascular disease . Severe periodontitis has been reported to be the sixth most prevalent disease globally and UK prevalence data have shown that 8% of the adult population have advanced periodontitis . The pathogenic mechanisms linking periodontitis and diabetes are incompletely understood but the level of glycaemic control is key in determining risk . Similar to the other complications of diabetes, the risk for periodontitis increases with poorer glycaemic control . Evidence has emerged to support a bidirectional relationship between diabetes and periodontitis; that is, diabetes increases risk for periodontitis, and periodontitis increases risk of diabetes complications and renders glycaemic control more difficult. Furthermore, there is compelling evidence that there is potential to improve glycaemic control through the treatment of periodontitis . Meta-analyses and Cochrane reviews have confirmed reductions in HbA1c of 3−4 mmol/mol (0.3–0.4%) following effective periodontal therapy up to 3–4 months after treatment . Over the last decade, guidance documents have been published by various professional and scientific organisations to improve inter-professional working in the context of diabetes and periodontitis, examples of which are summarised in . More recently (2018), the European Federation of Periodontology (EFP) and the International Diabetes Federation (IDF) held a joint workshop on diabetes and periodontitis. They published identical papers in both a dental journal ( Journal of Clinical Periodontology ) and a medical journal ( Diabetes Research & Clinical Practice ) aiming to improve inter-professional awareness of the links between the diseases . The publications included a suite of guidelines for dental and medical professionals, patients (whether being seen in the context of the medical practice or the dental practice), pharmacists, policymakers, and universities and research centres. The guidelines are all freely available to download from the EFP website . Key recommendations in these publications and others include that: - medical and dental healthcare professionals should inform patients of the bidirectional relationship between periodontitis and diabetes; - medical professionals should recommend that patients with diabetes visit a dental professional for assessment, and consider collaborating with the dental team; - dental professionals should consider liaising with the patient’s physician regarding their patient’s diabetes control (in the case of patients with known diabetes) or suspected diabetes (in the case of patients who do not currently have a diagnosis of diabetes). 
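The HbA1c figures above are quoted in both IFCC units (mmol/mol) and NGSP/DCCT percentages. For readers less familiar with one of the two conventions, the short sketch below converts between them using the standard NGSP–IFCC master equation; it is included purely for illustration and is not part of the study itself.

```python
# Conversion between IFCC (mmol/mol) and NGSP/DCCT (%) HbA1c units using the
# standard NGSP-IFCC master equation: NGSP(%) = 0.09148 x IFCC + 2.152.
# For illustration only; round as appropriate for clinical reporting.

def ifcc_to_ngsp(ifcc_mmol_mol: float) -> float:
    return 0.09148 * ifcc_mmol_mol + 2.152

def ngsp_to_ifcc(ngsp_percent: float) -> float:
    return (ngsp_percent - 2.152) / 0.09148

# An absolute level: 53 mmol/mol is approximately 7.0 %
print(f"53 mmol/mol = {ifcc_to_ngsp(53):.1f} %")

# A change in HbA1c (the intercept cancels): the 3-4 mmol/mol reductions cited above
for delta in (3, 4):
    print(f"a reduction of {delta} mmol/mol = {0.09148 * delta:.2f} percentage points")
```

Applied to the treatment effect cited above, a reduction of 3–4 mmol/mol corresponds to roughly 0.27–0.37 percentage points, consistent with the 0.3–0.4% quoted.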
Mixed methods research exploring current practice and views of dental clinicians relating to guidance in the context of diabetes and periodontitis has shown that there is good uptake of informing patients about the bidirectional relationship, but contacting the patient’s doctor is not reported as happening to any great extent . A similar study exploring medical clinicians’ current practice and motivation relating to the guidance has shown that the evidence base and published guidance is not widely known and best practice recommendations are not being followed . Notwithstanding, this study found that the evidence for the bidirectional relationship between diabetes and periodontitis was valued by medical clinicians, and informing patients was considered legitimate by the medical team, particularly to the role of nurses . As difficulties with collaborative working between dental and medical clinicians have been reported previously in the literature , and dissemination of guidelines alone is insufficient in promoting a change in clinical practice , this study aimed to explore potential ways to enable improved inter-professional working as outlined in extant diabetes and periodontitis guidance documents. Methods 2.1 Setting and study sample Qualitative research design with six iterative workshops each lasting between 30−60 min conducted with staff in two medical and two dental primary care practices in the North of England, and two workshops were held with people with diabetes at Newcastle University . Conducting the separate workshops for people with diabetes, the medical staff and dental practice staff was intended to create a comfortable (familiar) and uninhibited space for discussion, as suggested in multidisciplinary healthcare and social research . Recruitment of the medical and dental practice staff was facilitated by the North East and North Cumbria (NENC) Clinical Research Network (CRN), who distributed a study summary (inviting potential participants to email an expression of interest to the researcher) to research-active dental and medical practices in their region. Workshops took place at lunchtime in the practices, and participants were remunerated in accordance with the Department of Health and Social Care AcoRD guidance . The people with diabetes were recruited from a patient and public involvement (PPI) group at Newcastle University’s School of Dental Sciences and the workshops were held in a seminar room at the university. Travel expenses were refunded and the participants were given a gift card for their participation. All participants were provided with written and verbal information about the study prior to signing consent forms. The recruitment period ran from September 2017 until January 2018. A favourable ethical opinion was obtained from North West-Greater Manchester West National Health Service (NHS) Research Ethics Committee (16/NW/0030). 2.2 Workshop delivery At the beginning of the workshops, a summary of key components of the association between diabetes and periodontitis was given, together with results of the previous workshop to provide context for discussion. The workshops followed a topic guide, however the participants were encouraged to talk freely and the discussion was participant-led. 
The discussion concluded with a recap of the main discussion findings (delivered by the workshop facilitator, SMB), ensuring an accurate account of the discussion, which also allowed the participants an opportunity to reflect on their contributions and refine their comments should they wish to do so. 2.3 Analysis Consent was obtained from the participants to audio-record the workshops and reflective notes were made by the researcher (SMB). The audio recordings were transcribed, anonymised and subsequently checked for accuracy against the recording. Thematic analysis was used to identify common attributes within the data . Notable discussion points and specific comments of interest were identified from the transcripts and supporting reflective notes, and codes or key words were applied by the researcher (SMB), and subsequently discussed with the research team. Following completion of the sixth workshop, the transcripts were revisited and a process of re-reading (whilst listening to the audio recordings) enabled application of the constant comparison method to revise the codes . Emergent patterns and resultant themes were formulated via an inductive approach to the data analysis . Quotes which illustrated concepts relating to a particular theme were considered in detail and unpacked to explore meaning and develop better understanding. Analytical discussion during meetings of the research team provided the opportunity to further explore and clarify the emergent themes.
Results 3.1 Participant characteristics Participant characteristics (n = 43 in total) are shown in . Two medical practices were recruited, one with a below-average percentage of patients with diabetes (4%) and one with an above-average percentage (16%). Participants were medical practice staff members with a range of job roles including general practitioner (GP), nurse, practice manager and administrator. Two dental practices were recruited, one located in an urban area whilst the other was in a semi-rural area. Participants were dental practice staff members from a range of job roles including dentist (GDP), dental hygienist/therapist (DHT) and dental nurse. Two-thirds of the participants in the workshops that recruited people with diabetes were female, two-thirds were retired and participant ages ranged from 22 to 75 years. 3.2 Themes Major themes and illustrative quotes are cross-referenced in the text to . The discussions were prompted by the topic guide and focused on two broad areas: accommodation of evidence and guidelines; and interaction, both experienced and planned. 3.3 Accommodation of evidence and guidelines During the workshops, participants engaged with the evidence and recommendations, and discussion focused on accommodating new knowledge into the context of their existing views and experience. Medical and dental professionals reported a lack of awareness of various aspects of the published guidance for management of patients with periodontitis and diabetes (Quote 1 (Q1), ). The medical teams had no knowledge of the guidelines or published evidence in the scientific literature but they were surprised to learn, and enthused by, the evidence regarding the effect of periodontitis treatment in improving glycaemic control. Whilst dental professionals were aware of the evidence, some of them were unfamiliar with the units used to measure glycaemic control (e.g.
mmol/mol or % HbA1c values) and others were unsure of the reliability of patient self-report regarding diabetes control, both of which could reduce effective communication (Q2 and Q3, ). Both the medical and dental staff members expressed doubt in relation to each other’s knowledge on the topic, stating that they felt the lack of knowledge was mutual across both professions (medical and dental) (Q4 and Q5, ). Patient participants (diagnosed with diabetes) reported that they had never been informed about the links between diabetes and periodontitis by either their GP or GDP, and they felt the probability of receiving that information in the future was low due to the rushed nature of appointments. Furthermore, there was a suggestion that an oral health educational intervention delivered around the time of diabetes diagnosis, alongside information regarding the complications of diabetes, would ensure that all newly diagnosed patients were informed (Q6, ). 3.4 Experienced and planned interaction A key factor that led the medical practice participants to doubt the knowledge of dental professionals on the subject area was the absence of referrals from dental professionals in this context, as reported by the medical practice staff (Q7, ). Whilst it was clear that inter-professional communication did exist in other contexts between medicine and dentistry, albeit rarely, there were numerous accounts of negative experiences from both the medical and dental professionals. For example, some medical professionals reported that they were only contacted by dental professionals in relation to queries regarding a dose adjustment of anti-coagulant medication prior to certain dental procedures (such as tooth extractions), which the medical professionals considered inappropriate. They felt that dental professionals should seek relevant advice from a dental regulatory or advisory source (Q8, ). A further concern among medical professionals arose when patients with toothache attended the medical practice on the advice of the dental practice (who may not have been able to schedule a timely appointment for the patient), suggesting that the GP would be able to prescribe antibiotics (Q9, ). Issuing antibiotics to patients with toothache was looked upon poorly by these medical practice staff as it was effectively asking the GP to work outside of their scope of practice. Dental professionals also noted poor inter-professional communication and stated that whilst they considered their enquiries to be legitimate, they felt they were often ignored or dismissed by their medical practice colleagues (Q10 and Q11, ). Patients with diabetes wanted their healthcare professionals to take time and explain the relationship between their diabetes and periodontitis; however, they reported being rushed ‘in and out’ of consultations, particularly dental appointments. One patient had encountered an aggressive response from their GP in relation to a dental matter and therefore expressed doubt that communication could ever be improved (Q12, ). Medical professionals outlined that although they would not recommend direct communication via letters or phone calls due to operational time constraints, they would welcome an indirect referral via the patient (Q13, ). Furthermore, they reported that signposting a patient to the GP was common practice and used by a whole range of people, including hairdressers (Q14, ).
Whilst it was noted that not all patients would act on signposting, dental professionals concurred that in the context of diabetes and periodontitis, signposting an individual with suspected diabetes to their GP for investigation was perceived and experienced to be acceptable (Q15 and Q16, ).
Discussion Inter-professional communication and collaborative working in the context of diabetes and periodontitis have been recommended in various best practice guidance publications over the last decade to improve patient care and diabetes outcomes. This study suggests that there may be currently little-to-no interaction between dental and medical clinicians in the context of diabetes and periodontitis, and there appears to be little appetite for improved (direct) communication by clinicians. Successful introduction and implementation of clinical guidelines have been shown to vary and whilst knowledge is important, previous studies looking at the uptake of diabetes recommendations have identified that contextual and motivational barriers can affect implementation . Behavioural change is complex and multifactorial.
Whilst the workshops revealed a lack of knowledge regarding various elements of the guidelines, previous negative interactions such as inappropriate enquiries and stymied inter-professional communication were the focus of much discussion. Furthermore, a lack of referrals from dental professionals in the context of diabetes and periodontitis caused medical teams to challenge dental professionals’ ownership of the guidance. For their part, dental participants knew of the evidence, but described a history of non-replies or dismissive responses from GPs. Miscommunication between dental and medical clinicians has previously been reported elsewhere in the literature. A study of German GPs and GDPs showed problematic collaborative working with allegations of poor knowledge, uncertainty of role and previous difficult interactions . Holzinger et al. also reported GPs’ frustration relating to requests from GDPs for advice on anticoagulant therapy and dose adjustment for dental procedures, suggesting that this practice is widespread . Cope et al. found UK GPs to be equally frustrated when faced with prescribing antibiotics to treat dental pain (particularly given concerns regarding antimicrobial resistance) . Miscommunication between physicians and other allied health professionals in the context of diabetes has also been reported, with barriers relating to uncertainty of role and distrust of inter-professional working . Schweizer studied inter-professional collaboration and diabetes care in Switzerland and suggested that perception about collaboration is important and that negative experiences of communication are likely to influence team-working . The findings of the current study are consistent with the literature and suggest that despite the continued publication of international guidance documents advocating the benefits of inter-professional communication, the implementation of these recommendations offers a significant challenge for dental and medical clinicians and additional strategies are needed to change clinical practice. With time constraints being common in healthcare, minimal disruption has been reported to be important in the implementation of clinical behaviours . Active signposting has been shown to be effective in reducing the number of inappropriate GP consultations by triaging patients to a more appropriate healthcare professional , and it is recommended in the UK NHS Year of Care initiative for managing long-term conditions . During the workshops, previous occurrences of indirect referral were described as acceptable by dental professionals in the case of suspected diabetes, and medical professionals suggested that indirect referral from a variety of sources was not only commonplace, but actually preferred over a letter or a telephone call. Furthermore, NHS England published (in August 2019) a commissioning standard for the dental care of people with diabetes that recommends signposting. The document states that considerable NHS savings can be made by informing patients with diabetes (in the medical practice) about the links between periodontitis and diabetes and signposting them to a dentist for periodontal screening . If patients with diabetes and periodontitis, particularly those with poorly controlled diabetes who do not have a dentist, engage with and act on information and signposting from their family physician (or nurse), they stand to benefit from periodontal treatment that has the potential to improve their glycaemic control.
HbA1c reductions of up to 3−4 mmol/mol seen up to 3–4 months after periodontitis treatment could be a significant incentive for improved inter-professional communication. Whilst it is uncertain whether the implementation of the NHS commissioning standard will be more successful than that of previous diabetes and periodontitis guidance, further research is needed to explore whether signposting in the context of diabetes and periodontitis offers an effective substitute for direct referral and an important solution to the problems associated with inter-professional communication. 4.1 Strengths and weaknesses Six workshops enabled exploration of the perspectives of patients and of medical and dental professionals on inter-professional communication in the context of diabetes and periodontitis. Workshops were considered appropriate methodology as they would enable learning, broad discussion and problem solving. However, the availability of interested healthcare professionals and patients made recruitment problematic. This led to the decision to recruit at a practice level and through a PPI group. To facilitate recruitment further, the workshops were held either at lunchtime or as part of an established meeting, and refreshments and remuneration were provided. Future research should aim to conduct an integrated workshop with people with diabetes and healthcare professionals together as it may stimulate alternative discussion on this topic. Conclusion Whilst inter-professional collaboration in the context of diabetes and periodontitis is a key recommendation that features in numerous published best practice guidance documents, it is clearly complex and challenging to implement. We consider that it is important for academics and specialists involved in guideline publication to consider the implementation of their recommendations as part of the process of developing guidance. Indirect referral, whereby the patient is signposted to a healthcare professional, was suggested by medical and dental professionals as a useful alternative to the traditional (and time-consuming) letter or telephone call and has recently been recommended in an NHS commissioning standard outlining dental care for people with diabetes. Further research is necessary to evaluate signposting in this context to establish whether it is an effective, albeit indirect, communication tool. No potential conflict of interest was reported by the authors. This research was funded by a UK National Institute for Health Research Doctoral Research Fellowship (DRF-2014-07-023).
Bone marrow embolism: should it result from traumatic bone lesions? A histopathological human autopsy study
a1a0e28c-2a4a-499e-a225-946c0d9000a1
11297083
Pathology[mh]
Non-thrombotic pulmonary embolism is a less common cause of morbidity and mortality when compared to thrombotic pulmonary embolism. The former refers to embolization of the pulmonary circulation by emboli mainly containing non-thrombotic elements, e.g., bacteria, foreign materials, and marrow elements. Bone marrow embolism (BME) is rare and generally understood to occur after trauma to bones containing red marrow. The embolus is mainly composed of bone marrow elements including marrow adipocytes. Frequently, small-sized pulmonary arteries are the most affected. Separate cases of BME, however, have been observed in the coronary arteries . The first evidence of BME in man and animals was reported by Lubarsch and Lengemann . The suggested predisposing factors were (1) disturbances of the bone marrow, characterized mainly by decreased cohesiveness of cellular elements, and (2) bone contusion even without accompanying fractures . Maximow and others reported BME in animals and suspected that trauma, and even contusion, was the underlying cause. Additional human cases were reported by Sotti and others . Rappaport et al. suggested that BME may occur after fractures of the red marrow-containing bones . A fracture may occur either internally, resulting from violent muscular contractions after convulsions, or externally . Subsequently, most cases of BME have relied on fractures for determining the origin. Even in circumstances where fracture was not apparent, death was assumed to be due to existing, but undetected, fractures . Trauma-related BME, as proposed by Gauss, is caused by torn medullary veins, through which the increased pressure of the marrow forces it into the venous system . Several trauma-inducing factors have been proposed. Grandi et al. analyzed 53 cases; in 31 cases, emboli were attributed to cardiac massage and in one case to an accident. No clear etiology was identified for the remaining 21 cases . Likewise, Buchanan and Mason and others attributed the occurrence of BME to resuscitative measures in cases of natural death. Arai indicated a significant correlation between the size of emboli and the extent of bone destruction, whereas the correlation between the number of emboli and the magnitude of destruction was non-significant . Moreover, Blumenthal and Saayman reported two cases of BME in electrocution; one of the subjects showed a skeletal injury from high-voltage exposure, and the other case, involving domestic current, displayed no evidence of skeletal injury. Iatrogenic BME has been reported in the case of Gleason and Aufderheide , who suggested that the marrow of tuberculous vertebrae accidentally entered the circulation under compression during cystoscopy. BME is noted as a complication of multiple myeloma and sickle cell anemia and as a consequence of costal and sternal fractures in the course of malignant neoplasms and shock . Another significant case reported BME in clinically suspected dengue shock syndrome , raising the controversial question of whether traumatic lesions are a necessary prerequisite for BME. Yet, evidence of BME occurring in non-traumatic cases is limited. Recently, a general agreement has emerged that BME is associated with skeletal injury , reflecting a paucity of evidence on non-traumatic BME. In this study, we present data for pathological causes other than trauma as etiologies for BME. The study is an observational, descriptive, cross-sectional study of autopsy cases.
The cases were examined in the central forensic pathology laboratory in Cairo, Egypt. This study protocol was reviewed by the Research Ethics Committee (REC) for Human and Animal Research at the Faculty of Medicine at Helwan University (serial no. 21–2021). Case selection Cases referred to the central forensic pathology lab in Cairo for a period of 2 years, with identifiable BME in the pulmonary vessels, were selected, regardless of the circumstances or the cause of death. Eleven cases were selected from 400 consecutive autopsy cases based on the presence of BME or fat embolism. Anonymous archived data (no personal information) were used in this study. Since the data were archived, no consent-to-participation or publication forms were signed or collected. Autopsy and histopathological study Routine autopsies included complete dissection for macroscopic evaluation of organs, including the heart, lungs, and brain. The kidneys, liver, and gastrointestinal organs, if relevant, were also examined. BME was detected morphologically on routine H&E staining (fat and immature blood elements, including megakaryocytes, in the pulmonary vessels point to bone marrow embolism). Martius scarlet blue trichrome (MSB) stain was used to highlight fibrin. CD117 immunostains were used to highlight the hematopoietic progenitor cells. Photos were taken with an AxioCam ERc5s camera connected to a Zeiss microscope. Clinical data, if any, autopsy, and pathology diagnoses are grouped in the following table (Table ). Cases (1, 2, and 3) “Traumatic” Cases 1, 2, and 3 showing BME following trauma are illustrated in Fig. . Case 1, a male in his 50s, displayed facial injuries and arm fractures on gross examination. The histopathological examination of the lungs revealed a pulmonary BME with fibrin thrombi in medium-sized vessels, highlighted by MSB stain (Fig. A, B). Patient history suggested that the fractures, by increasing intramedullary pressure, pushed bone marrow into the venous circulation; the marrow reached the lungs, causing serious respiratory insufficiency. Salient findings were severe edema, hemorrhage, and collapsed alveoli, the classic features of shock lung. Histopathological examination of the heart revealed mild coronary stenosis and cardiomegaly. Case 2, a male in his 40s, showed multiple fractures on gross examination. A pulmonary BME was noted during histopathological examination of the lungs.
The embolus was accompanied by features suggestive of shock lung: severe congestion, hemorrhage, and focally collapsed alveoli. Case 3, a female in her 30s, displayed multiple trauma and fractures. Heart examination was unremarkable, showing only congestion. She suffered from blood hyperviscosity syndrome. She exhibited shock lung, as her lungs were insufficiently aerated and the body organs did not receive enough oxygen for normal function. This patient developed a fat embolism, which occurred when embolic fat macroglobules passed into small vessels of the lungs and other organs. Autopsy also showed minimal marrow elements.
Cases (4, 5, 6, 7, and 8) "Non-traumatic"
Cases 4 and 5 presented with acute lung injury as the cause of death. Case 4, a male in his 50s, died during renal surgery due to a complication of anesthesia. He exhibited facial congestion. A sutured lateral abdominal incision was observed. Renal sutures and a ureteric stent were also reported on gross examination. Histopathological examination of the brain revealed severe congestion. The lungs showed severe congestion, extravasation, edema, an early neutrophilic infiltrate, and entrapped megakaryocytes (Fig. ), the microscopic features of shock lung. Case 5 is a female with a history of drug abuse. Histopathological examination of the heart showed a few macrophages and microscopic foci of granulation tissue in the myocardium. The coronaries were patent. However, drug intake can depress the respiratory centers, resulting in hypoxic episodes (Fig. ). Case 6 is a female in her 30s who suffered from hypovolemic shock due to post-cesarean section (CS) hemorrhage and disseminated intravascular coagulation (DIC). Any sustained shock, following severe injury and hemorrhage, stimulates bone marrow function and hematopoietic stem/progenitor cell proliferation and differentiation. These processes could lead to BME (Fig. ). Thrombi are present in a large-sized vessel, as highlighted (Fig. C, D). Cases 7 and 8 are individuals with shock and advanced pulmonary hypertension. Case 7 is a male in his 50s with a history of respiratory distress but no signs of trauma. Histopathological examination of the lung revealed pulmonary fibrosis, grade III pulmonary hypertension, non-invasive Aspergillus fungal balls, and BME (Fig. ). Case 8 is a male with a history of dyspnea but no signs of trauma. He had cardiomegaly (587 g) with right ventricular dilatation (right-sided heart failure) and grade IV pulmonary hypertension. Histopathological examination of the heart tissue revealed advanced atherosclerosis with calcification; however, stenosis was unremarkable. Minimal myocardial fibrosis was noted. A decline in heart-pumping capacity led to volume overload in the left ventricle, which resulted in an over-abundance of blood in the left atrium and backward pressure on the lungs. The result was pulmonary hypertension. This condition, over time, produced right ventricular dilatation and cardiomegaly (Fig. ).
Cases 9 and 10 (cancer)
Case 9 is a 70-year-old man who died of cancer. Autopsy revealed mucinous carcinoma, bronchitis, emphysema, advanced pulmonary hypertension, a perivascular giant cell reaction, and BME in the lung. Gross autopsy and radiology findings did not suggest any fractures. Autopsy of the liver, brain, heart, and kidneys revealed chronic active hepatitis, atherosclerosis, and arteriolosclerosis. Autopsy of the brain and kidneys revealed meningitis and renal pyemic abscesses. This constellation of findings is most likely a consequence of cancer. Chronic hepatitis was part of this constellation.
Case 10 is a 60-year-old male with jaundice; there were no gross findings at autopsy. Autopsy of the heart revealed advanced atherosclerosis, stenosis, and inflammation, along with right ventricular marrow, indicating extramedullary hematopoiesis. Liver autopsy revealed hepatocellular carcinoma complicated by cirrhosis; this finding may explain the jaundice and atherosclerosis.
Case 11 (liposuction)
A female in her 50s died during liposuction. She suffered from shock lung with fat embolism and pulmonary edema (Fig. ). As shown in Fig. , CD117 highlights the immature hematopoietic cells and immature marrow elements in the capillaries.
The cause of death in the previously described cases 1, 2, and 3 was BME and shock lung after a quarrel and consequent trauma. These findings are consistent with the common concept in the literature that BME occurs after multiple trauma and bone fractures. For case 4, the histopathology of the lungs revealed BME. Cardiovascular collapse or a state of shock seems to have a direct correlation with the number of megakaryocytes in the blood. These cells could be a factor in BME. The presence of bone marrow in pulmonary vessels can increase vascular permeability to proteins, thereby increasing pulmonary arterial pressure and causing pulmonary edema. Activation of the coagulation cascade may also occur, which, in turn, increases activation of platelets and their release into the pulmonary circulation. This release worsens pulmonary edema. Finally, shock lung develops, which was the cause of death for this individual. Hematopoietic stem cells (HSC) are located in the stroma of the bone marrow. In the presence of the relevant stimuli, they produce huge, diverse colonies of mature functional blood cells. The maturing cells then travel from the bone marrow to the peripheral blood, where they replace malfunctioning cells and maintain immune function. Furthermore, HSC differentiate into multipotent progenitor cells that become lineage-restricted during proliferation and maturation. However, small numbers of immature progenitor cells pass into the periphery to aid in repair.
HSC and hematopoietic progenitor cells are not found in the peripheral circulation under normal conditions. The myocardial granulation tissue in case 5 is likely a consequence of hypoxia and respiratory center depression caused by drug abuse. Death probably occurred due to an overdose that caused marked respiratory suppression and shock lung with BME. A less likely hypothesis is a relationship between drug abuse and septic inflammation, such as osteomyelitis; in that scenario, BME could be attributed to osteomyelitis, which would reduce bone marrow integrity. Embolism leads to an increase in pulmonary vascular permeability to proteins. This increase elevates pulmonary pressure and finally causes pulmonary edema. The edema is aggravated by an increased tendency of megakaryocytes to deposit in the lungs. Megakaryocyte deposition and the resultant platelet activation can eventually cause acute lung injury and death. For case 6, hypovolemic shock leads to DIC, which is associated with increased maternal morbidity and mortality. DIC produces widespread microvascular thrombosis, which can compromise the blood supply and cause various organs to fail. Finally, exhaustion of coagulation/anticoagulation factors and platelets may lead to profuse, uncontrollable bleeding and, often, death. In fact, during normal pregnancy, the prothrombotic state is more active than fibrinolysis (a hypercoagulable state); this response is a natural protection against blood loss during and after delivery. Two separate factors are postulated to induce DIC: slow capillary flow and secretion of thromboplastin into the blood. Experiments tend to confirm the hypothesis that a thromboplastic substance in the bloodstream is harmless when blood flow is normal; however, slowing of the capillary blood flow may cause the same amount of thromboplastic material to produce DIC and cause death linked to a clotting defect. For case 7, non-invasive aspergillosis causes an allergic reaction and explains the histological finding of eosinophilia. BME may have been caused by cardiorespiratory failure. Furthermore, concentric hypertrophic changes associated with hypertensive heart disease affected the cerebral vessels, causing lacunar infarcts. Coronary atherosclerosis with ischemic heart disease aggravated the cardiac condition. These two conditions contribute to left- and right-sided heart failure, and right-sided failure leads to pulmonary hypertension that may result in cardiorespiratory failure. The patient in case 8 underwent cardiogenic shock, which occurs when the heart cannot efficiently pump blood and oxygen. This shock led to a change in pressure in the fat-containing cavity of the bone marrow that allowed the escape of marrow elements and fat into the circulation. This hypothesis would explain the presence of BME without noticeable trauma. Cardiovascular diseases are associated with a poor prognosis in patients with malignancies. Patients with cancer, as in case 9, are often found to have conditions related to metabolic and vascular pathologies, including abdominal obesity, altered glucose metabolism, lipoprotein abnormalities, and hypertension. The chemical theory explains BME as a process that begins with lipoprotein lipase action on fat globules; C-reactive protein and free fatty acids are then released. These metabolites cause local and systemic inflammatory responses and may lead to direct injury through agglutination and vascular obstruction.
Free fatty acids and other mediators are associated with inflammatory responses in the lungs, such as pneumonitis and vasculitis. This inflammatory response pathway is thought to mimic the acute lung injury (ALI) and adult respiratory distress syndrome (ARDS) pathways. A study of rats with corn oil-induced fat embolism syndrome (FES) identified markers of inflammation and microvascular obstruction, as well as increased permeability and pulmonary hypertension. The authors identified inflammatory cytokines, phospholipase A2, nitric oxide, and inducible nitric oxide synthase as the toxic biochemical mediators underlying the development of this condition. An alternative explanation may be that cancer can weaken the immune system by spreading into the bone marrow. Lung cancer is a solid tumor with low antigenicity and a heterogeneous phenotype that evades host immune defenses. This cancer can lead to osteomyelitis, which affects bone marrow integrity and causes BME. These findings indicate that BME is not exclusively related to fracture or trauma. As for case 10, concomitant ischemic heart disease and neoplasia in the same patient is not a rare occurrence, and 4 to 10% of cases with acute coronary syndrome (ACS) or chronic ischemic heart disease have a history of cancer. Chronic activation of the immune system and an inflammatory state underlie the pathophysiology of both atherosclerosis and neoplasms. This concept would explain the finding of BME; the biochemical theory holds that the clinical presentation of FES is attributable to a proinflammatory state. Bone marrow fat is catabolized by tissue lipases, resulting in increased levels of glycerol and toxic free radicals. These intermediary products lead to end-organ dysfunction. Toxic injury to pneumocytes and pulmonary endothelial cells induces vasogenic edema, cytotoxicity, and hemorrhage. Disrupted pulmonary endothelium triggers the cascade of proinflammatory cytokines and the progression to acute lung injury or acute respiratory distress syndrome. During liposuction and fat grafting, as in case 11, small blood vessels are ruptured and adipocytes are damaged; the resulting lipid microfragments reach the venous circulation and cause lung injury. Liposuction-induced fat embolism syndrome classically occurs 12 to 72 hours after surgery. Three theories have been proposed to describe the pathogenesis and timing of the embolic events of this syndrome. First, the mechanical theory suggests that fat cell disruption in a fractured bone leads to the release of fat droplets. Fat droplets enter the torn veins near the injury and are then transported to the pulmonary vascular bed. Large fat globules form in this region and result in mechanical obstruction when trapped in the lung capillaries. Still, this theory does not provide an explanation for cases showing delayed onset of symptoms (over 72 hours) following liposuction. An alternative, biochemical theory explains non-traumatic and delayed fat embolic events. This theory postulates that when fat globules reach the pulmonary capillaries, pneumocytes produce hydrolytic lipase, which converts fats into glycerol and free radicals. High concentrations of these toxic byproducts trigger alveolar and endothelial cell injury. This injury impairs lung surfactant release owing to type II pneumocyte apoptosis. Finally, vascular permeability increases via the release of vasoactive amines and prostaglandins and the recruitment of neutrophils.
These alterations induce interstitial and alveolar hemorrhage, edema, chemical pneumonitis, and hyaline membrane formation. This multi-step process of fat degradation suggested by the biochemical theory offers an acceptable explanation for the delayed onset of symptoms related to embolism following liposuction. A local inflammatory process is also required before the symptoms appear. Additional evidence for this theory has been reported in cases with a non-traumatic etiology, such as the inflammation of pancreatitis. Serum from acutely ill patients can induce agglutination of chylomicrons, low-density lipoproteins, and liposomes of nutritional fat emulsions. In such patients, the levels of C-reactive protein are elevated, indicating the ability to induce calcium-dependent lipid agglutination. The third and most recent theory, the coagulation theory, is the least supported. It suggests that tissue thromboplastin and marrow elements are released after long bone fractures, triggering the complement system and the extrinsic coagulation cascade. These events lead to intravascular coagulation via fibrin and fibrin degradation products, which combine with leukocytes, fat globules, and platelets to increase pulmonary vascular permeability. Permeability increases through direct action on endothelial cells and indirectly through the release of vasoactive substances. However, this theory fails to explain the etiology of non-traumatic FES. These three theories may coexist and are not necessarily mutually exclusive. They have all been reported after major traumatic events involving long bone fractures and following intramedullary orthopedic procedures, and they likely play contributory roles in the etiology and time course of traumatic versus non-traumatic FES. BME, which is rarely observed in man, is attributed to traumatic and non-traumatic causes; two theories describe the mechanism of BME: chemical and mechanical. The chemical theory explains BME in non-traumatic cases as attributable to a proinflammatory state. In traumatic cases, the mechanical theory explains BME as a consequence of a transient increase in the pressure of the fat-containing marrow cavity in association with torn blood vessels. This condition allows the marrow and adipose fat cells to escape into the circulation. The autopsy findings in the 11 cases discussed above contradict the common concept of BME as exclusively a consequence of traumatic injury. BME is a lesion in which bone marrow elements, including cell debris and yellow bone marrow, reach the systemic circulation and invade the lung parenchyma through the venous sinuses. Non-traumatic cases of BME observed in individuals with cancer, atherosclerosis, DIC, and drug abuse can be explained by the biochemical theory, in which the clinical presentation of FES is attributable to a proinflammatory state. As most legal authorities may view BME as a signal of traumatic death, which in turn may have additional legal consequences, especially in solving cases of homicide and violence, the identification of postmortem BME owing to non-traumatic causes is of major relevance to the forensic pathology discipline. These autopsy cases show evidence that BME may be caused by non-traumatic injuries. The pathophysiology behind non-traumatic BME is still unclear and requires further study to be fully understood.
Cardiovascular collapse or a state of shock seems to correlate directly with the number of megakaryocytes in the blood and is, in turn, related to non-traumatic BME. Overdose intoxication may be related to BME through hypoxia and cardiorespiratory center failure.
The Pathology according to p53 Pathway
What is the basis for pathological diagnosis? In the case of tumors, the site of origin and histological type are essential, corresponding to macroscopic and microscopic information, respectively. Basically, the latest tumor classification defined by the anatomic site is selected first. Afterward, the histological type is selected from the cellular phenotype. However, such diagnostic methods are sometimes insufficient for small specimens with an unknown primary cancer or no clinical information. Drawing explainable conclusions from microscopic findings in such exceptional situations requires a clear diagnostic rationale or algorithm that goes beyond the tacit knowledge of personal experience or intuition. Molecular characteristics, as determined by biomarkers, are emerging as the third fundamental element of tumor classification. Biomarkers are used for disease risk stratification, differential diagnosis, prediction of therapeutic efficacy and toxicity, prognosis prediction, and monitoring. Genetic biomarkers have already been incorporated into pathological diagnosis in some cancers, and tissue-agnostic markers, such as NTRK fusion and/or tumor mutation burden, are becoming widespread. These cross-tumor biomarkers will undoubtedly become essential for tumor diagnosis and therapeutic decision making. In considering universal biomarkers that could be incorporated into the diagnosis of tumor pathology, the author focused on the p53 pathway, one of the major cancer signaling pathways. First, this review briefly summarizes two different diagnostic approaches to biomarkers: cancer genome profiling tests using next-generation sequencing (NGS) and in situ validation, such as immunohistochemistry. Subsequently, the author summarizes the accumulating data on representative p53 pathway genes, TP53, CDKN2A, and MDM2, and discusses how to exploit such biomarkers in several practical diagnostic settings. Recently, genomic analysis using NGS has been applied in clinical oncology to comprehensively detect abnormalities in cancer-related genes. The main objective of cancer genome profiling tests is to detect actionable alterations associated with the molecular treatment of metastatic solid tumors. While it may be clinically sufficient to pick out the actionable ones from a dispersed set of mutations, the comprehensive profiling of cancer includes nonactionable driver events and, in some cases, genetic variants of unknown significance. Comprehensive genomic cancer profiling studies are being conducted by international scientific consortia in at least three phases. The Cancer Genome Atlas (TCGA) Research Network reported genomic information and analysis of more than 30 cancer types, comprising both common and rare types, over the decade beginning in the late 2000s. In addition, the Pan-Cancer Atlas (PCA) Project explored three main topics: cell-of-origin patterns, oncogenic processes, and signaling pathways. Furthermore, the Pan-Cancer Analysis of Whole Genomes (PCAWG) project analyzed the whole genomes of cancers to elucidate the unclear aspects of noncoding DNA sequences. Integration of such findings could facilitate tumor classification, as large-scale data linking pathology and cancer genomics would enhance our understanding of the specific roles of cancer-related genes and the interrelationships among them. Regarding diagnostics, genotype-based molecular classification has been reported to be less discriminative than phenotype-based classification for 12 major cancer types or 33 pan-cancer types.
A cancer harbors approximately four coding point mutations under positive selection. Conversely, the majority of individual cancer-related genes have only low- (<2%) or intermediate- (2−20%) frequency mutation rates in pan-cancer cohorts. Currently, conventional histological analysis remains the mainstay of cancer diagnosis, and because of the scarcity of such data, genomic variation analysis is the next best thing. However, when considering the significance of mutated genes coexisting in cancer, cell biological signaling pathways yield insights into cancer genetics. According to the signaling pathway analysis of the PCA project, the canonical oncogenic pathways are classified into 10 groups: p53, Wnt/β-catenin, receptor tyrosine kinase (RTK)-RAS, Notch, Hippo, transforming growth factor β (TGFβ), Myc, Nrf2, PI-3-kinase/AKT (PI3K), and cell cycle pathways. Among them, abnormalities of the p53 pathway have been detected in more than half of tumors and are mainly caused by the TP53 and CDKN2A genes. These p53 pathway genes are almost always included in NGS analyses and provide clues that facilitate a comprehensive understanding of the co-occurrence and mutual exclusivity of genetic variants and their relationships with cancer type. Immunohistochemistry can also detect abnormalities in oncogenic pathways by targeting biomarkers derived from cancer-related genes. Unlike NGS, clinical immunohistochemistry is basically a singleplex assay for evaluating protein expression. However, its greatest advantage is the opportunity to analyze biomarker expression at the cellular level, revealing spatial information about the tumor and its microenvironment. The immunohistochemistry results, which can be evaluated either qualitatively or quantitatively, can be classified into four staining patterns in the case of the p53 protein. These are called the overexpression, null, ectopic, and reactive immunophenotypes and also allow for the evaluation of other biomarkers. First, the overexpression immunophenotype is diffuse positive expression in tumor cells ( a, b). Interestingly, the signal intensity of this pattern is often stronger than that of a reactive pattern. This diffuse and strong expression pattern is interpreted as an activated oncogenic signal, probably due to a gain-of-function mutation and/or copy number amplification. Second, the null immunophenotype refers to diffuse negative expression and, ideally, no expression in tumor cells ( c, d). This pattern typically indicates loss of function of a tumor suppressor gene, a nonsense mutation, or copy number loss. Third, the ectopic immunophenotype is a positive signal detected in an unusual location ( e, f). This dysregulated signal arises from genetic alterations, including gene rearrangement. These three immunophenotypes easily distinguish neoplastic lesions from non-neoplastic populations. In contrast, the reactive immunophenotype refers to heterogeneity in the proportion of positive cells and in signal intensity ( g, h). This heterogeneous pattern is interpreted as mostly normal or nonspecific because it suggests controlled protein expression. The reactive immunophenotype indicates an intact genotype (i.e., a wild-type pattern) of the corresponding gene but does not exclude clonality of the cell population of interest, except in a few cases or cancer types. The concept of the four immunophenotypes is essential for interpretation of immunohistochemistry results.
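The interpretive logic of the four patterns can be captured in a compact lookup. The following Python sketch is illustrative only; it is not part of the original review, the function and label names are hypothetical, and it merely encodes the typical reading of each immunophenotype as described above.

```python
# Illustrative sketch only: a minimal lookup that encodes the typical reading of
# the four p53 immunophenotypes described above. Names are hypothetical.

P53_IMMUNOPHENOTYPES = {
    "overexpression": ("Diffuse, strong positivity in tumor cells; read as an activated "
                       "oncogenic signal, probably a gain-of-function mutation and/or "
                       "copy number amplification."),
    "null": ("Diffuse absence of expression in tumor cells; typically indicates loss of "
             "tumor suppressor function, a nonsense mutation, or copy number loss."),
    "ectopic": ("Positive signal in an unusual location; a dysregulated signal arising "
                "from genetic alterations, including gene rearrangement."),
    "reactive": ("Heterogeneous positive-cell ratio and signal intensity; usually an "
                 "intact (wild-type) genotype, but clonality is not excluded."),
}

def interpret_p53_pattern(pattern: str) -> str:
    """Return the typical interpretation for a reported p53 staining pattern."""
    key = pattern.strip().lower()
    if key not in P53_IMMUNOPHENOTYPES:
        raise ValueError(f"Unknown pattern '{pattern}'; expected one of {sorted(P53_IMMUNOPHENOTYPES)}")
    return P53_IMMUNOPHENOTYPES[key]

if __name__ == "__main__":
    print(interpret_p53_pattern("null"))
```

Such a lookup is deliberately simplistic; as the following sections show, the same pattern can carry different weight depending on the tumor type and the expected TP53 mutation frequency.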
The accompanying table summarizes the tumor-associated molecules discussed in this review, their representative antibody clones, and the immunophenotypes derived from them. Similar to immunohistochemistry, in situ hybridization (ISH), which targets nucleic acid sequences, is a biomarker test that preserves the spatial information of the cancer. One type of ISH uses fluorophores (i.e., FISH) and can detect structural variants of the genome that are difficult to detect by short-read NGS, including gene amplification, deep deletions, and translocations; nevertheless, it can detect only one known structural variant per assay. Therefore, NGS, immunohistochemistry, and FISH should each be used depending on the situation. The p53 pathway, a counter to cellular stress, consists of the principal transcription factor p53, its target genes, and several critical p53 regulators, including p16 and MDM2 (mouse double minute 2 homolog). This pathway responds to cell stress by activating various context-dependent cellular signaling activities and by interacting with other oncogenic signaling classes, such as the cell cycle pathway. As a member of the DNA damage checkpoint, p53 blocks G1/S cell cycle progression by inducing expression of p21, encoded by CDKN1A. Based on the finding that stress induces cell aging, the p53 pathway can also be interpreted as a series of responses related to senescence. The triad of cellular senescence is cell cycle arrest, apoptosis resistance, and altered transcription. Depending on the cellular context, these phenotypes can inhibit or promote oncogenesis. While the "quiescent cell-like" state is the opposite of a proliferative lesion, evasion of apoptosis is one of the hallmarks of cancer. Stresses that induce senescence include oncogene aberrations, DNA damage, telomere erosion, the narrowly defined p53 pathway, and the retinoblastoma pathway mediated by RB transcriptional corepressor 1 (RB1) at the G1/S checkpoint of the cell cycle, which is involved in the regulation of cellular senescence.
TP53/p53
The TP53 gene, which encodes p53, was initially mapped to chromosome 17p13 in 1986. The alias "guardian of the genome" was coined by David Lane, who identified p53 through his work on the SV40 virus. Germline variants of TP53 are associated with a cancer-predisposing condition known as Li-Fraumeni syndrome. TP53, TP63, and TP73 were identified as members of the TP53/p53 family based on sequence similarity in the DNA-binding domain. The two p53 homologs are associated with cancer, but their roles in the maintenance and development of germline and somatic cells have also been investigated. Two p63 protein isoforms encoded by TP63, the full-length TAp63α and the shortened ΔNp63 (also called p40), are associated with proliferative stratified epithelial cells and are therefore used as immunohistochemical markers, particularly for lung squamous cell carcinoma and as basal cell/myoepithelial markers. Conversely, p73 is essential for ciliogenesis; however, p73-related molecules have not been used in diagnostic pathology as cell differentiation markers. TP53 mutations are crucial to cancer in various respects. Epidemiologically, since half of advanced cancers carry TP53 mutations, it is practical to classify cancers into two populations according to mutation status. In specific cancer types, TP53 mutations may be a prognostic marker, probably due to differences in cellular context.
Genomically, TP53-mutant cancers are associated with copy number alterations due to genomic instability, distinguishing them from somatic mutation-driven cancers. These copy number alterations include whole-genome doubling and chromothripsis. Morphologically, polyploid giant cancer cells and multipolar mitoses are possible clues to TP53 mutation. TP53 mutation frequency is not uniform across tumors but depends on histological type and/or anatomical location. While TP53 mutations are often considered late events, as in the adenoma-carcinoma sequence of conventional colon cancer, they can also occur early in certain tumor subtypes. Such TP53 mutation-driven cancers include high-grade serous carcinoma (HGSC), inflammatory bowel disease-associated colorectal cancer, esophageal cancer, and vulvar cancer. These cancers naturally have a high frequency of TP53 mutations and already harbor the mutations at their non-invasive stage. In addition, extensive molecular genetic and pathological investigations have shown that TP53 mutations are present even in seemingly normal cell populations, including the p53 signature and clonal hematopoiesis. Identifying minimal TP53-mutated clones represents a novel potential strategy for preventing, diagnosing, and treating TP53 mutation-driven cancers. TP53 mutations can be visualized by immunostaining and can be interpreted according to the four patterns described above. Typically, the antibodies used in practice are DO-1 or DO-7, which recognize the N-terminal transactivation domain. Analysis of female adnexal HGSCs, which have a high frequency of TP53 mutations, showed that the p53 overexpression, null, and cytoplasmic (ectopic) immunophenotypes are mainly associated with missense, frameshift, and nuclear localization signal domain mutations, respectively. Conversely, reactive p53 typically reflects wild-type TP53 but is also rarely associated with splice site or truncating mutations. Köbel et al. observed that the p53 immunophenotype of most HGSCs is associated with the predicted TP53 mutation pattern; however, in approximately 9% of cases, there is a discordance between genotype and immunophenotype. While indels and splicing mutations in the DNA-binding domain rarely exhibit overexpression patterns, the majority of nonsense and splicing mutations are associated with null patterns. Nevertheless, immunostaining for p53 is a helpful tool for estimating the presence or absence of TP53 mutations and the type of mutation. Specific tumors are challenging to interpret with the p53 immunophenotype; therefore, other criteria should be used. First, even p53-positive cell rates as low as >10% in gliomas are associated with TP53 mutations. Second, a precursor lesion of acute myeloid leukemia, myelodysplastic syndrome (MDS) with 5q deletion, is significantly associated with TP53 mutation. In MDS, cases with detected TP53 mutations express p53 in 40% of cells, and p53 expression in more than 0.5% of cells carries a poor prognosis. Third, cutaneous malignant melanomas occasionally have increased or aberrant p53 expression. The aberrant expression does not inevitably correlate with TP53 mutations, and the expression of wild-type p53 may favor melanoma. Interestingly, the Spitz tumor, a distinctive nevus cell neoplasm, has been reported to have a TP53-NTRK1 fusion gene, but its p53 immunophenotypes are not yet evident. Apart from p53 immunostaining, there have been attempts to evaluate the copy number of chromosome 17 by FISH, mainly in hematologic neoplasms.
Biallelic inactivation of the tumor suppressor gene is considered significant; however, in chronic lymphocytic leukemia, even a monoallelic TP53 mutation or deletion is clinically important. Deletion of 17p by FISH must be confirmed in at least 20% of cells, whereas a variant allele frequency of at least 10% detected by NGS is deemed remarkable. In contrast, most TP53-mutated solid cancers harbor other TP53 inactivations, such as loss of heterozygosity and decreased wild-type TP53 transcription. Therefore, immunohistochemistry alone is sufficient for solid cancers to evaluate p53 abnormalities, but NGS and FISH can point to specific sequences and structural variants of the cancer genome.
CDKN2A/p16
The human genetic locus at 9p21, CDKN2A, encodes two distinct proteins, p16 (INhibitor of CDK4, INK4A) and p14 (Alternative Reading Frame, ARF). The tumor suppressor p16 is a gatekeeper of the G1 phase; it inhibits the cyclin D-dependent kinases (CDK4/CDK6) that phosphorylate RB1. In contrast, p14 increases p53 activity through MDM2 suppression. Interestingly, p16-RB1 signaling also regulates p53 through p14 expression. This two-faced tumor suppressor gene is thus involved in both the p53 and cell cycle pathways, via p14-MDM2-p53 signaling and p16-cyclin D-CDK4/CDK6-RB1 signaling. The hereditary cancer disease related to this tumor suppressor gene is familial atypical multiple mole melanoma (FAMMM) syndrome. CDKN2A was the second tumor suppressor gene, after TP53, found to be abnormal in cancer, and dysfunction of p16 is one of the most frequent events in cancer. As such, attempts to visualize p16 by immunohistochemistry are already widespread. Based on the molecular mechanism, the abnormal patterns of the p16 immunophenotype can be classified into three categories: p16 loss due to genetic and epigenetic abnormalities, p16 overexpression associated with persistent high-risk human papillomavirus (HPV) infection, and p16 overexpression reflecting cellular senescence due to dysregulation of several oncogenes or TP53. Loss of p16 is associated with deletion of the 9p21 genetic locus, methylation of the promoter region, and loss-of-function mutations. Accordingly, the p16-null immunophenotype can be interpreted as a loss of tumor suppressor function. Instead, since loss of CDKN2A function is usually due to decreased copy number rather than genetic mutation, FISH helps to diagnose mesotheliomas and melanomas, in which the CDKN2A locus is frequently lost. Furthermore, loss of the CDKN2A locus and of p16 expression may be negative prognostic factors in gliomas. Similarly, p16 silencing by promoter hypermethylation is frequently observed in a variety of cancers. Although p16 is one of the classical CpG island methylator phenotype markers for colorectal cancer, such epigenomic abnormalities cannot be detected on histological sections. Because the 9p21 locus contains the CDKN2A, CDKN2B, and MTAP genes, deletion could result in multiple co-occurring genetic abnormalities. CDKN2B encodes a CDK4/CDK6 inhibitor, p15 (INK4B), regulated by TGFβ signaling. However, p15 has not been used as an immunohistological marker in clinical pathology. In contrast, 5′-deoxy-5′-methylthioadenosine phosphorylase (MTAP) has been used as a surrogate marker for p16 loss because it provides indirect evidence of 9p21 deletion.
Therefore, MTAP-deficient cell populations in the pancreas, melanocytes, mesothelium, and central nervous system can be interpreted as CDKN2A-depleted neoplastic clones; however, some instances of reduced MTAP expression may be related to other molecular mechanisms, such as methylation of the promoter region. Another immunophenotype, p16 overexpression, which appears atypical for a tumor suppressor gene, is a surrogate marker for HPV. Cancers associated with HPV, such as types 16 and 18, have molecular characteristics that differ from those of HPV-negative cancers. The HPV oncoprotein E6 induces p53 degradation via the ubiquitin-proteasome system, whereas E7 inactivates or induces degradation of the RB1 protein through complex formation. These two oncogenic signals synergistically disrupt the cell cycle, and p16 overexpression can be explained by a negative feedback mechanism due to the abnormal E7-RB1 interaction. Thus, HPV-associated cancers exhibit few TP53 and CDKN2A mutations. In addition, these cancers exhibit diffuse nuclear and cytoplasmic p16 expression, also called block positivity, except for a minor discordance between p16 and HPV mRNA expression status. Conversely, cancers without HPV at these anatomical sites are typically associated with alterations of the TP53 and CDKN2A genes. In summary, the dualistic tumor classification by HPV is underpinned by differences in the molecular mechanisms leading to p53 pathway dysregulation. Although p16 overexpression is a surrogate marker of HPV, it is also a cell senescence marker. Senescent cells seldom proliferate; instead, they evade cell death and alter their surrounding environment, mainly by secreting various proinflammatory cytokines and chemokines. Such cell characteristics are recognized as the senescence-associated secretory phenotype (SASP), and p16, along with β-galactosidase, is used as a marker to recognize senescence. In neoplastic lesions, overexpression of p16 could be interpreted as oncogene-induced or tumor suppressor loss-induced senescence caused by abnormalities in HRAS, KRAS, BRAF, TP53, PTEN, and RB1. Specifically, p16-expressing lesions without HPV include reactive-like lesions, such as endometrial polyp, atypical polypoid adenomyoma, and bile duct lesions, as well as malignant tumors, such as adenoid cystic carcinoma. To determine whether p16 overexpression is related to HPV or to other forms of cellular senescence, a minimal p53 pathway immunohistochemical panel including p16, p53, and the cell proliferation marker Ki-67 represents an optimized immunohistochemical analysis. Suppressed cell proliferation and an abnormal p53 immunophenotype may serve as clues to senescence.
MDM2
MDM2, encoded by MDM2 at 12q15, is an E3 ubiquitin ligase that negatively regulates the p53 protein via proteasomal degradation. "Double minute" refers to a circular extrachromosomal DNA fragment that lacks telomeres and centromeres and is replicated during cell division. As MDM2 abnormalities in cancer are predominantly due to amplification, the significance of its mutations is limited. Since an association between a single nucleotide polymorphism ("SNP309") in the promoter region of the MDM2 gene and tumorigenesis has been suggested, this genotype can be interpreted as an MDM2-related hereditary cancer syndrome. However, this variant in a noncoding region is still challenging to apply to practical pathology. MDM2 is one of 24 recurrently amplified oncogenes in human cancers and favors circular amplification.
MDM2 amplification averages approximately 4% across cancer types but is very frequent in well-differentiated and dedifferentiated liposarcomas. Amplification of the 12q locus is triggered by genomic events called chromothripsis. The extrachromosomal DNA pattern is concordant with the fact that liposarcomas do not tend to show amplification at the 12q arm level. Notably, the 12q13-15 region contains other cancer-related genes: a G1/S cell cycle regulator called CDK4 and a member of the zinc finger protein family, GLI1. The vast and complex MDM2 and/or CDK4 amplification structural variants, also called tyfonas, are associated with dedifferentiated liposarcomas. MDM2 amplification is mostly used for the pathological diagnosis of sarcomas. Again, MDM2 amplification is frequent in well-differentiated and dedifferentiated liposarcomas and can provide a definitive diagnosis if consistent with the histology. It is also helpful in diagnosing tumors that are otherwise difficult to diagnose, such as low-grade central osteosarcoma, dedifferentiated osteosarcoma, parosteal osteosarcoma, GLI1-associated sarcoma, uterine adenosarcoma, and malignant Leydig cell tumor. Even though MDM2-reactive immunophenotypes have diagnostic significance, copy number evaluation of MDM2 by FISH is superior for practical diagnostic work in adipocytic tumors. In practical pathology, immunohistochemistry for tumor-related proteins is applied in four main situations: determining whether a lesion is neoplastic/dysplastic or not, confirming a specific tumor type, annotating miscellaneous and unclassifiable tumors, and verifying genomic information about the cancer.
To Determine Neoplastic/Dysplastic or Not
To demonstrate that a particular cell population is clonal, some diagnostic strategies use immunostaining to detect at least one abnormal oncogenic signal. If malignancy is suspected, p53 immunostaining is most appropriate, since approximately half of the pan-cancer cohort harbors TP53 abnormalities. Even in such cases, an abnormal p53 pattern, especially in small specimens or microscopic lesions, should not be immediately regarded as indicating malignancy. p53 abnormalities may be found in posttherapeutic responsive lesions, normal-looking clonal expansions, and precancerous lesions with TP53 abnormalities. Simultaneous Ki-67 immunostaining is recommended to confirm the distribution of proliferating cells and avoid overdiagnosis. To help predict the results of p53 immunostaining, TP53 mutation rates stratified by histological type are summarized in the accompanying table. If a tumor is assumed to be in a very-high-frequency mutation group (PG1, >90%), a p53-reactive pattern would be an opportunity to reconsider the diagnosis. Conversely, if a tumor is expected to be in a very-low-frequency TP53-mutation group (PG5, <10%), searching for other histological type-specific markers may prove better. Among the tumors with an intermediate to below-average frequency of TP53 mutations (PG4, 10%–39%) are those that are occasionally difficult to diagnose morphologically, i.e., melanoma, mesothelioma, and adrenocortical carcinoma. In conclusion, the high prevalence of TP53 mutations (50%) in the pan-cancer cohort also suggests that TP53 mutations alone are insignificant in determining the histological type.
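The prevalence-group heuristic just described lends itself to a small decision aid. The Python sketch below is hypothetical and purely illustrative of the reasoning in this subsection; only the bands explicitly stated here (PG1 >90%, PG4 10–39%, PG5 <10%) are modeled, and the function name, messages, and worked example are illustrative assumptions rather than part of the review.

```python
# Illustrative decision aid (assumption-laden sketch, not part of the review):
# encodes only the prevalence bands stated above (PG1 >90%, PG4 10-39%, PG5 <10%)
# and the corresponding reasoning about how to weigh a p53 immunostaining result.

def p53_ihc_guidance(expected_tp53_mutation_rate: float, ihc_pattern: str) -> str:
    """expected_tp53_mutation_rate: fraction (0-1) for the suspected histological type.
    ihc_pattern: 'reactive' (wild-type-like) or an aberrant pattern
    ('overexpression', 'null', 'ectopic')."""
    aberrant = ihc_pattern.strip().lower() != "reactive"
    if expected_tp53_mutation_rate > 0.90:        # PG1: TP53 mutation almost always present
        return ("An aberrant pattern is expected." if aberrant
                else "A reactive pattern is unexpected; reconsider the diagnosis.")
    if expected_tp53_mutation_rate < 0.10:        # PG5: TP53 mutation uncommon
        return "p53 immunostaining is unlikely to help; prefer histological type-specific markers."
    if expected_tp53_mutation_rate < 0.40:        # PG4: intermediate to below-average frequency
        return ("Interpret with caution; an aberrant pattern supports, but does not establish, "
                "the suspected type (e.g., melanoma, mesothelioma, adrenocortical carcinoma).")
    return "Prevalence band not specified in this review; combine with other markers."

# Example: a tumor type with a very high expected TP53 mutation rate
# (e.g., high-grade serous carcinoma) showing a reactive pattern.
print(p53_ihc_guidance(0.95, "reactive"))
```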
To Confirm a Specific Tumor Type
The accompanying table summarizes the association between tumor-related proteins and the histological types that show a relatively high (although in some cases modest) frequency of their aberrant expression. Oncogenic signals other than p53 are useful for definitive diagnosis and subtyping of tumors, requiring additional testing based on anatomic site, cell type, clinical information, and other contextual factors. In parallel, the co-occurrence and mutual exclusivity of cancer genes should be considered for cost-effective additional testing. For example, consistent with the molecular mechanism, MDM2 amplification is significantly mutually exclusive with TP53 mutations, and MDM2-driven tumors rarely require immunohistochemistry for p53. In addition, MDM2 amplification tends to coexist with CDKN2A mutations, possibly explained by p53 pathway-independent functions of ARF. This combination of individual biomarkers would further refine tumor classification.
To Annotate Miscellaneous and Unclassifiable Tumors
Contrary to the subsection title, diagnostic strategies using an immunohistochemical panel rich in cancer-related proteins are not recommended, for the following reasons. As mentioned, the majority of the proteins are not worth investigating considering the mutational frequency of cancer-related genes in the pan-cancer cohort. Even if several alterations are detected by the biomarker tests, a pan-tumor molecular classification using them has not been established so far. The proposed minimal p53 pathway immunohistochemistry panel, outlined in this review, is primarily intended as a diagnostic tool for pathologists in achieving a simplified molecular classification of cancers. In terms of therapeutic strategies, immunohistochemistry can play a clinically remarkable role as a test for tumor-agnostic biomarkers, enabling the rapid identification of actionable driver events in tumors. However, it should be noted that, currently, p53 pathway components are not applicable to these biomarkers.
To Validate Consistency with Genomic Information
In situ validation assays, such as immunostaining and FISH, are effective in confirming the results of cancer genome-profiling tests because they are proficient in detecting known abnormalities. The signals visualized by these tests provide a spatial distribution of tumor cells not available in genomic testing and can be applied to detect microscopic residual or recurrent lesions after treatment. Combining genomic profiles with morphological images will allow us to track tumor changes over time and better understand cancer progression. In particular, p53 is the most frequently defective cancer-related protein; thus, it is useful for tracking clones when TP53 mutations are found. Even in the coming era of biomarker-based clinical oncology, traditional tumor pathology is essential because it provides contextual readouts of cancer.
Such information includes precise cancer cell distribution, cell fraction, and coexisting noncancerous cells; they are interpretable as TNM stage, tumor purity, and tumor microenvironment. In contrast, molecular analysis reveals background about cancer cells from a different perspective: cancer-related gene mutation, genomic structural variants, transcriptional and proteomic network frameworks, and the prerequisite epigenetic landscape. Therefore, it is necessary to establish a pathological diagnostic system that incorporates implemented biomarker tests and to revise the anatomical site-specific classification from a cross-tumor perspective. Such hierarchical classification of anatomic sites, cell types, and biomarkers, as in the concept of the human reference atlas , would require the status of canonical oncogenic signaling, such as the p53 pathway, in addition to tumor agnostic biomarkers. Y.H. received honoraria for lecturing from AstraZeneca. There is no support/funding relevant to this review. Y.H. conceptualized the topics of the review, prepared all figures and tables, and wrote the manuscript.
A Lightweight Drive Implant for Chronic Tetrode Recordings in Juvenile Mice
The brain undergoes large-scale changes during the critical developmental windows of childhood and adolescence , , . Many neurologic and psychiatric diseases, including autism and schizophrenia, first manifest behaviorally and biologically during this period of juvenile and adolescent brain development , , . While much is known regarding the cellular, synaptic, and genetic changes that occur across early development, comparatively little is known regarding how circuit- or network-level processes change throughout this time window. Importantly, circuit-level brain function, which ultimately underlies complex behaviors, memory, and cognition, is a non-predictable, emergent property of cellular and synaptic function , , , . Thus, to fully understand network-level brain function, it is necessary to directly study neural activity at the level of an intact neural circuit. In addition, to identify how brain activity is altered throughout the progression of neuropsychiatric disorders, it is critical to examine network activity in a valid disease model during the specific temporal window when the behavioral phenotypes of the disease manifest and to track the observed changes as they persist into adulthood. One of the most common and powerful scientific model organisms is the mouse, with large numbers of unique genetic strains that model neurodevelopmental disorders with age-dependent onset of the behavioral and/or mnemonic phenotypes , , , , , , , , , , . While it is challenging to correlate precise developmental time points between the brains of humans and mice, morphological and behavioral comparisons indicate that p20-p21 mice represent the human ages of 2-3 years old, and p25-p35 mice represent the human ages of 11-14 years old, with mice likely reaching the developmental equivalent of a human 20 year old adult by p60 , . Thus, to better understand how the juvenile brain develops and to identify how the neural networks of the brain become dysfunctional in diseases like autism or schizophrenia, it would be ideal to directly monitor brain activity in vivo in mice across the ages of 20 days to 60 days old. However, a fundamental challenge in monitoring brain activity across early development in mice is the small size and relative weakness of juvenile mice. The chronic implantation of electrodes, which is necessary for longitudinal studies of brain development, typically requires large, bulky housing to protect the fine electrode wires and interface boards , , and the implants must be firmly attached to the mouse skull, which is thinner and less rigid in young mice due to reduced ossification. Thus, virtually all studies of in vivo rodent physiology have been performed in adult subjects due to their relative size, strength, and skull thickness. To date, most studies exploring in vivo juvenile rodent brain physiology have been performed in wild-type juvenile rats, which necessarily limits the ability to experimentally monitor juvenile brain function in a freely behaving model of a human disorder , , , , , . This manuscript describes novel implant housing, a surgical implantation procedure, and a post-surgery recovery strategy to chronically study the long-term (up to 4 or more weeks) in vivo brain function of juvenile mice across a developmentally critical time window (p20 to p60 and beyond). The implantation procedure allows for the reliable, permanent affixation of the electrodes to the skulls of juvenile mice. 
Furthermore, the micro-drive design is lightweight, as this micro-drive weighs ~4-6 g when fully assembled, and due to the minimal counterbalancing required to offset the weight of the implant, it does not impact the behavioral performance of juvenile mice during typical behavioral paradigms. The present study was approved by the University of Texas Southwestern Medical Center Institutional Animal Care and Use Committee (protocol 2015-100867) and performed in compliance with both institutional and National Institute of Health guidelines. The C57/Bl6 male and female mice used in the present study were implanted at p20 (weight 8.3-11.1 g at the time of implantation). 1. Micro-drive design and construction Digitally designing and printing the micro-drive Download the micro-drive model templates ( https://github.com/Brad-E-Pfeiffer/JuvenileMouseMicroDrive ). Identify the stereotaxic locations of the target brain region(s) in an appropriate stereotaxic atlas. Using three-dimensional computer-aided design (3D CAD) software, load the template micro-drive cannula . If necessary, modify the output cannula output locations on the micro-drive cannula model to target the desired brain region(s). NOTE: Each cannula hole extrusion should be at least 2 mm long to ensure that the tetrode will emerge from the cannula hole aiming straight at the target. The micro-drive cannula template is designed to bilaterally target the anterior cingulate cortex (one tetrode per hemisphere), hippocampal area CA1 (four tetrodes per hemisphere), and hippocampal area CA3 (two tetrodes per hemisphere), with one reference tetrode per hemisphere positioned in the white matter above hippocampal area CA1. If necessary, modify the micro-drive body to accommodate the attachment of the electronic interface board (EIB). Print the micro-drive body, cannula, cone, and lid at high resolution on a 3D printer (ideally with a resolution better than 25 μm), and prepare the printed materials according to the manufacturer's protocols. Use printer resins with high rigidity. Assembling the custom screws and attachments Using 3D CAD software, load the screw attachment models . Print the screw attachments at high resolution on a 3D printer (ideally with a resolution of at least 25 μm), and prepare the printed materials according to the manufacturer's protocols. Use printer resins with high rigidity. Affix the screw attachments to each tetrode-advancing screw (tetrode-advancing screws are custom manufactured at a machine shop prior to micro-drive construction). Affix two screw attachments to each screw, with one above and one below the ridge. Ensure that the bottom of each screw attachment contacts the ridge. Hold the screw attachments together with gel cyanoacrylate. Once affixed, ensure that the screw attachments do not move in the longitudinal axis of the screw but rotate freely with minimal resistance. Assembling the micro-drive body Using fine, sharp scissors, cut large polyimide tubing (outer diameter: 0.2921 mm, inner diameter: 0.1803 mm) into ~6 cm long sections. Pass the large polyimide sections through the output holes on the micro-drive cannula so that each tube extends beyond the bottom of the cannula by a few millimeters. Using a clean 30 G needle, affix the polyimide to the cannula by applying small amounts of liquid cyanoacrylate. Take care not to allow the cyanoacrylate to enter the inside of the polyimide tube. 
NOTE: Dripping liquid cyanoacrylate down into the cannula through the top of the drive body can expedite this process but will require re-clearing of the guide holes later with a fine-tipped drill. Pass the large polyimide tubes from the top of the micro-drive cannula through the appropriate large polyimide holes in the micro-drive body. Slowly push the micro-drive cannula and micro-drive body together until they are adjacent and the cannula/body attachment tabs interlock. Be careful not to kink or damage the polyimide tubes in the process. NOTE: Each polyimide tube should smoothly pass from the bottom of the cannula out through the top of the micro-drive body. Slight bending is normal, but excessive bending of the polyimide tube can warp the tetrode and prevent it from passing straight into the brain. Affix the micro-drive body and the micro-drive cannula together using cyanoacrylate. Using a new, sharp razor blade, sever the large polyimide tube ends that are extruding from the bottom of the cannula output holes. Ensure that the cut is exactly at the base of the cannula, making the tubes and the cannula bottom flush with one another. Using sharp scissors, cut the large polyimide tubing just above the edge of the inner rim of the drive body at a ~45° angle. Loading the assembled custom screws Screw each assembled custom screw into the outer holes of the micro-drive body. Ensure the screw guidepost passes through the large hole in the screw attachments. Advance each screw fully until it will not advance further. Pre-lubricating the screws with mineral oil or axle grease is recommended. Using extremely sharp scissors, cut small polyimide tubing (outer diameter: 0.1397 mm, inner diameter: 0.1016) into ~4 cm long sections. Pass the small polyimide sections through the large polyimide tubing already mounted in the micro-drive. Ensure that excess small polyimide tubing protrudes from the top and bottom of each large polyimide tube. Affix the small polyimide tubes to the screw attachments via cyanoacrylate, taking care not to let any cyanoacrylate enter into either the large or small polyimide tubes. Using a new, sharp razor blade, sever the small polyimide tube ends that are extruding from the bottom of the cannula holes. Ensure that the cut is exactly at the base of the cannula and that the cut is clean, with nothing blocking the polyimide tube hole. Using sharp scissors, cut the top of the small polyimide a few millimeters above the top of the screw attachment at a ~45° angle. Ensure the cut is clean, with nothing blocking the polyimide tube hole. Loading the tetrodes Prepare the tetrodes (~6 cm in length) using previously described methods . Using ceramic- or rubber-tipped forceps, carefully pass a tetrode through one of the small polyimide tubes, leaving ~2 cm protruding from the top of the small polyimide tube. Affix the tetrode to the top of the small polyimide tube via liquid cyanoacrylate, taking care not to affix the small and large polyimide tubes together in the process. Retract the screw until it is near the top of the drive. Grab the tetrode wire protruding from the bottom of the drive, and gently kink it at the point where it emerges from the cannula. Advance the screw fully back into the drive. Using very sharp scissors, cut the tetrode wire just above the kink. Under a microscope, ensure that the cut is clean and that the metal of all four tetrodes is exposed. Retract the screw until the tetrode is just secured within the cannula. Repeat steps 1.5.2-1.5.8 for all the screws. 
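Before connecting the tetrode wires to the EIB in the next step, it can be helpful to write down a channel map so that each EIB port can later be traced back to a specific tetrode and target region. The short Python sketch below builds such a map for the 16-tetrode template described above (per hemisphere: one anterior cingulate tetrode, four CA1 tetrodes, two CA3 tetrodes, and one white-matter reference); the tetrode ordering and the assignment of four consecutive EIB channels per tetrode are illustrative assumptions rather than part of the published design, so the map should be edited to match the actual wiring.

```python
# Bookkeeping sketch for the 16-tetrode template (assumed ordering; adjust to the real wiring).
from itertools import count

TETRODES_PER_HEMISPHERE = {  # as described for the template cannula
    "anterior_cingulate": 1,
    "CA1": 4,
    "CA3": 2,
    "white_matter_reference": 1,
}

def build_channel_map(template=TETRODES_PER_HEMISPHERE, wires_per_tetrode=4):
    """Return a list of dicts mapping each tetrode to a hemisphere, region, and EIB channels."""
    channel = count(0)      # assumption: EIB channels are assigned consecutively from 0
    tetrode_id = count(1)
    channel_map = []
    for hemisphere in ("left", "right"):
        for region, n_tetrodes in template.items():
            for _ in range(n_tetrodes):
                channel_map.append({
                    "tetrode": next(tetrode_id),
                    "hemisphere": hemisphere,
                    "region": region,
                    "eib_channels": [next(channel) for _ in range(wires_per_tetrode)],
                })
    return channel_map

if __name__ == "__main__":
    cmap = build_channel_map()
    assert len(cmap) == 16 and cmap[-1]["eib_channels"][-1] == 63  # 16 tetrodes, 64 channels
    for entry in cmap:
        print(entry)
```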
Attach the EIB to the EIB support platform via small jewelry screws. Connect each electrode of each tetrode to the appropriate port on the EIB. Preparing the micro-drive for surgery Electrically plate the tetrodes to reduce the electrical impedance using previously described methods . After plating, ensure that each tetrode is housed in the cannula such that the tip of the tetrode is flush with the bottom of each cannula hole. Slide the micro-drive cone around the completed micro-drive. Attach the micro-drive lid to the micro-drive cone by sliding the cone attachment pole into the lid port. Orient the cone so that the EIB connectors pass freely through the EIB connection pass-through holes when the lid is closed, and glue the cone in place with cyanoacrylate placed around the base of the cone, taking care not to let any cyanoacrylate enter into any of the cannula output holes. Remove the lid. Carefully backfill each cannula hole with sterile mineral oil to prevent bodily fluids from entering the polyimide holes after surgical implantation. Carefully coat the base of the cannula with sterile petroleum jelly. This will serve as a barrier to prevent chemical agents (e.g., dental cement) from entering the exposed brain during the surgery. Weigh the fully assembled micro-drive, lid, and four bone screws to prepare an equal-weight counterbalance. Optionally, prior to the surgery, extrude the tetrodes at a distance that is appropriate for reaching the target brain regions once the drive is flush with the skull. NOTE: Prior to surgical implantation, sterilize the implant via gas sterilization in ethylene oxide (500-1200 mg/L, 2-4 hrs). All bone screws and surgical instruments should be sterilized via autoclave (121°C, 30 minutes). 2. Surgical implantation Anesthetizing the mouse and mounting it in the stereotaxic apparatus Place the mouse in a small box with sufficient space to move, and anesthetize the mouse with 3%-4% isoflurane. NOTE: Other anesthetic agents can be used, but caution should be used due to the age, size, and weight of the juvenile mouse subject. Once the mouse is unresponsive (no response to tail pinch, a ventilation rate of ~60 breaths per minute), remove it from the box, and quickly mount it on the stereotaxic apparatus. Quickly, place the stereotaxic mask over the mouse’s snout and maintain anesthesia at 1-3% isoflurane. Apply any veterinary-approved pain relief, such as sustained-release buprenorphine (0.05-0.5 mg/kg subcutaneous), or anti-inflammatory agents, such as carprofen (5-10 mg/kg subcutaneous), prior to initial surgical incision. Fully secure the mouse's head in the stereotaxic apparatus using ear bars. Ensure the skull is level and immobile without placing unnecessary pressure on the mouse's ear canals. Due to the limited ossification of juvenile skull bones, it is possible to cause permanent damage during head fixation. Preparing the mouse for surgery and exposing the skull Protect the mouse's eyes by placing a small volume of synthetic tear gel on each eye and covering each eye with an autoclaved patch of foil. NOTE: The synthetic tears will keep the eyes moist, while the foil will prevent any light source from causing long-term damage. Thicker synthetic tear solutions are preferred as they can also serve as a barrier to the inadvertent introduction of other potentially toxic surgical solutions (ethanol, dental acrylic, etc.) into the eyes. Using sterile cotton-tipped swabs, apply hair-removal cream over the surgical area to remove the hair from the scalp. 
Be careful not to get the cream near the eyes. After removing the hair, place a sterile drape over the scalp to secure the surgical area. Using sterile cotton-tipped swabs, clean the scalp via three consecutive washes of povidone-iodine (10%) solution followed by isopropyl alcohol (100%). Using a sterile scalpel or fine scissors, remove the scalp. Using sterile cotton-tipped swabs and sterile solutions of saline solution (0.9% NaCl) and hydrogen peroxide, thoroughly clean the skull. Identify bregma, and using the stereotaxic apparatus, carefully mark the target recording locations on the skull with a permanent marker. Opening the cannula hole and attaching the bone anchors Remove the skull overlying the recording sites. Due to the thinness of the skull at this age, cut the skull with a scalpel blade; this removes the necessity of using a drill, which may damage the underlying dura. Keep the exposed dura moist with the application of sterile saline solution (0.9% NaCl) or sterile mineral oil. Do not remove or puncture the dura at this stage, as it is sufficiently thin in juvenile mice for the tetrodes to pass through in future steps. Carefully drill pilot holes for four bone screws. Place the bone screws in the extreme lateral and rostral or caudal portions of the skull, where the bone is thickest and the bone screws are sufficiently far away from the micro-drive implant. For the bone screws, use sterile, fine jewelry screws (e.g., UNM 120 thread, 1.5 mm head). Tightly wind one bone screw with a thin, highly conductive wire that will serve as a ground and be attached to the EIB in step 2.4.6. Using a scalpel blade or carefully using a drill bit, score the skull near the bone screw hole locations. Scoring is important to provide a sufficiently rough surface for the liquid cyanoacrylate to bind in step 2.3.5. Using a sterile screwdriver and sterile screw clamp, thread each sterile bone screw into place, taking care not to pierce the underlying dura. Using a sterile 30 G needle, place liquid cyanoacrylate around each bone screw. This effectively thickens the skull where the bone screws have been attached. Take care not to allow any cyanoacrylate to enter into the exposed dura above the recording sites. Lowering and attaching the micro-drive Mount the completed micro-drive onto the stereotaxic apparatus to be carefully lowered onto the mouse skull. Ensure that the micro-drive cannula will be at the appropriate coordinates when lowered. Lower the micro-drive slowly, moving only in the dorsal/ventral direction. Lower the micro-drive with the tetrodes already advanced out of the cannula holes (step 1.6.6) in order to visualize their entry into the brain; any medial/lateral or rostral/caudal movement when the tetrodes are touching the mouse can bend the tetrodes and cause them to miss their final destination. Once the micro-drive is fully lowered, ensure that the base of the cannula just makes contact with the skull/dura. The layer of petroleum jelly and/or mineral oil will serve as a barrier to cover the exposed dura. If necessary, add sterile petroleum jelly or sterile bone wax to cover excess exposed dura. While holding the micro-drive in place with the stereotaxic apparatus, coat the skull in dental cement to affix the micro-drive base to the implanted bone screws. NOTE: The dental cement should fully encase all the bone screws and should cover the dental cement anchor ledge on the micro-drive cannula.
While the dental cement sets, carefully shape it to prevent sharp corners or edges that may harm the mouse or damage the micro-drive. Ensure there is sufficient dental cement to hold the micro-drive, but eliminate excessive dental cement that will add unnecessary weight. Carefully thread the ground wire through the micro-drive, and attach it to the appropriate slot on the EIB. Once the dental cement is fully set, carefully detach the micro-drive from the stereotaxic apparatus. Place the lid on the micro-drive. With a sterile cotton-tipped swab and sterile saline, clean the mouse. With a sterile cotton-tipped swab, apply a thin layer of antibiotic ointment to any exposed scalp near the implant site. Remove the foil from the mouse's eyes. Remove the mouse from the stereotaxic apparatus, taking care to support the additional weight of the micro-drive as the mouse is transported to a clean cage. 3. Post-surgery recovery Immediate recovery Prior to the surgery, prepare the counterbalance system by connecting a 0.75 in diameter PVC pipe, as shown in . One arm of the system passes through holes drilled into the lid of the cage, the second arm rests on top of the cage lid, and the third arm extends above and beyond the cage. The topmost arm is capped. Carefully attach the micro-drive to the counterbalance system , and use a counterbalance weight identical to the weight of the micro-drive and bone screws. Run a strong thread or fishing line from a connector attached to the EIB over the three arms of the counterbalance system to the counterbalance weight, which hangs over the topmost arm. Ensure that the counterbalance is strongly connected to the micro-drive EIB and that there is sufficient line to give the mouse complete access to the entirety of the cage. Provide nutrient-rich gel in the cage alongside moistened normal rodent chow to ensure rehydration and recovery. Monitor the mouse until it fully recovers from the surgical anesthesia. Long-term recovery At all times, when not attached to the recording equipment, ensure that the micro-drive is supported by the counterbalance system. Reduce the counterweight over time, but never remove it entirely to avoid unanticipated stress to the mouse or torque to the bone screws. To prevent damage to the implant and counterbalance system, house the mouse without the possibility of direct interaction with other mice for the duration of the experiment. Provide nutrient-rich gel for at least 3 days following surgery, at which point solid food alone will be sufficient. Due to the overhead requirements of the counterbalance system, do not provide food and water in an overhead wire mesh; place the food on the cage floor, and provide water through the side of the cage. To prevent spoilage, replace the food entirely daily. Daily, ensure that the mouse has free access to the entirety of the cage and that the counterbalance is robustly and strongly attached to the micro-drive.
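Because the counterweight begins equal to the combined weight of the micro-drive, lid, and bone screws and is then reduced gradually but never removed, it can be useful to plan the taper as an explicit schedule before the experiment starts. The Python sketch below is a minimal illustration of that idea; the 20% weekly reduction and the 0.5 g floor are hypothetical values chosen only for the example, as the protocol does not prescribe a specific taper rate, and the schedule should be slowed if the animal shows any difficulty bearing the implant.

```python
# Minimal counterweight taper sketch (taper rate and floor are assumptions, not protocol values).
def counterweight_schedule(implant_weight_g, weeks, taper_per_week=0.20, floor_g=0.5):
    """Start at the full implant weight and reduce by a fixed fraction each week,
    never dropping below a nominal floor (the counterweight is never removed entirely)."""
    schedule = []
    weight = implant_weight_g
    for week in range(weeks + 1):
        schedule.append((week, round(max(weight, floor_g), 2)))
        weight *= (1.0 - taper_per_week)
    return schedule

if __name__ == "__main__":
    # The fully assembled drive weighs roughly 4-6 g; 5 g is used here purely as an example.
    for week, grams in counterweight_schedule(implant_weight_g=5.0, weeks=6):
        print(f"week {week}: counterweight of about {grams} g")
```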
The above-described protocol was used to record local field potential signals and single units from multiple brain areas simultaneously in mice, with daily recordings conducted in the same mice from p20 to p60. Reported here are representative electrophysiological recordings from two mice and post-experiment histology demonstrating the final recording locations. Surgical implantation of the micro-drive into p20 mice A micro-drive was constructed and surgically implanted into a p20 mouse, as described above. Immediately following the surgery, the mouse was attached to the counterbalance system and allowed to recover. Once the mouse was fully mobile, the micro-drive was plugged into an in vivo electrophysiology recording system. The cables connecting the micro-drive to the recording equipment were suspended above the mouse. Electrophysiological recordings (32 kHz) were obtained across all the channels for 1 h while the mouse behaved naturally in its home cage. Following the recording, the mouse was unplugged from the recording system, reattached to the counterbalance system, and returned to the vivarium with free access to water and chow. Daily recording of neural activity Electrophysiological recordings were obtained daily for several weeks to enable the chronic monitoring of the same brain region across the critical developmental windows of p20-p60. Sample raw local field potentials (LFP) from across the chronic recordings are shown in , . Isolated single units were simultaneously obtained from multiple tetrodes . Units with similar waveforms were identified across multiple days ( , middle and right), but due to the potential drift of the recording electrode, it was not possible to definitively claim that the same unit was being identified across days.
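Comparisons of unit waveforms across days, as described above, are typically made quantitatively, for example by correlating the mean spike waveform recorded on one day with that recorded on the next. The sketch below illustrates one such comparison; the 0.95 similarity threshold is an arbitrary placeholder and is not a criterion taken from the original recordings, which, as noted, cannot in any case establish definitively that the same unit was tracked.

```python
import numpy as np

def waveform_similarity(mean_waveform_day1, mean_waveform_day2):
    """Pearson correlation between two mean waveforms (n_channels x n_samples),
    computed after flattening the four tetrode channels into a single vector."""
    a = np.asarray(mean_waveform_day1, dtype=float).ravel()
    b = np.asarray(mean_waveform_day2, dtype=float).ravel()
    return float(np.corrcoef(a, b)[0, 1])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    day1 = rng.normal(size=(4, 32))                        # toy mean waveform: 4 channels x 32 samples
    day2 = day1 + rng.normal(scale=0.1, size=day1.shape)   # slightly jittered version of the same shape
    r = waveform_similarity(day1, day2)
    print(f"waveform correlation: {r:.3f}")
    if r > 0.95:  # placeholder threshold, not taken from the study
        print("waveforms are similar; the unit *may* be the same across days")
```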
In a separate mouse implanted at p20 and recorded daily for several weeks, neural activity was examined on a tetrode targeting dorsal area CA1. Large-amplitude ripples and well-isolated single units were identified on each day of the recording . These data indicate that stable, high-quality in vivo electrophysiological recordings could be obtained from the same mouse across early development. Histological confirmation of the recording sites and the developmental impact of chronic implantation Following the final recording day, the mouse was thoroughly anesthetized via isoflurane anesthesia followed by a lethal injection of pentobarbital sodium, and a current was passed through the electrode tips to produce small lesions at the recording sites. Post-experiment histological sectioning of the mouse brain allowed the visualization of the final recording sites . In a separate cohort, three male and three female mice were surgically implanted at p20 as described above. Equal numbers of littermates were left unimplanted and maintained in identical housing conditions. The mice were sacrificed at p62 (6 weeks post-surgery for the implanted cohort). The skulls were carefully cleaned, and external measurements were taken of the bregma-to-lambda distance ( , top left) and external maximal skull width at lambda ( , top right). An incision was made along the midline of the skull, and one-half of the skull was removed to excise the brain for mass measurement ( , bottom right). The height of the skull cavity at bregma was measured from the intact skull half ( , bottom left). No measure was significantly different between the implanted and unimplanted cohorts (Wilcoxon rank-sum test), indicating that long-term implantation, starting at p20, has no gross impact on the natural development of the skull or brain volume.
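Large-amplitude ripples such as those noted above are conventionally detected by band-pass filtering the CA1 LFP in the ripple band and thresholding the smoothed envelope. The manuscript does not specify its detection parameters, so the sketch below uses commonly cited values (a 150-250 Hz band, a 3 SD threshold, and a 20 ms minimum duration) purely as assumptions to illustrate the approach.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def detect_ripples(lfp, fs, band=(150.0, 250.0), threshold_sd=3.0, min_duration_s=0.02):
    """Return (start, stop) sample indices of candidate ripple events in a 1-D LFP trace.

    Band edges, threshold, and minimum duration are assumed, commonly used values.
    """
    nyq = fs / 2.0
    b, a = butter(3, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, lfp)
    envelope = np.abs(hilbert(filtered))
    threshold = envelope.mean() + threshold_sd * envelope.std()

    above = envelope > threshold
    padded = np.r_[False, above, False]
    edges = np.flatnonzero(np.diff(padded.astype(int)))
    starts, stops = edges[::2], edges[1::2]
    min_len = int(min_duration_s * fs)
    return [(int(s), int(e)) for s, e in zip(starts, stops) if (e - s) >= min_len]

if __name__ == "__main__":
    fs = 1250.0                                    # assumed LFP sampling rate after downsampling
    t = np.arange(0.0, 2.0, 1.0 / fs)
    lfp = np.random.default_rng(1).normal(scale=0.05, size=t.size)
    burst = (t > 1.0) & (t < 1.06)                 # inject a synthetic 200 Hz burst as a stand-in ripple
    lfp[burst] += 0.5 * np.sin(2 * np.pi * 200.0 * t[burst])
    print(detect_ripples(lfp, fs))
```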
Modern experiments exploring in vivo neural circuit function in rodents often utilize extracellular electrophysiology via permanently implanted electrodes to monitor the activity of individual neurons (i.e., single units) or local populations ( via local field potentials, LFP), but such methods are rarely applied to juvenile mice due to technical challenges. This manuscript describes a method for obtaining in vivo electrophysiological recordings in mice across the developmentally critical windows of p20 to p60 and beyond. This methodology involves a manufacturing process for the printing and construction of a micro-drive implant, a surgical implantation procedure, and a post-surgery recovery strategy, all of which are specifically tailored for use in juvenile mice. Several considerations were influential in the development of this protocol, including the small size and relative weakness of juvenile mice compared to their adult counterparts, as well as the reduced ossification of the juvenile mouse skull onto which the micro-drive needed to be attached. Two primary methods commonly used to perform in vivo electrophysiology are arrays of electrodes (e.g., tetrodes) and silicon probes. Silicon probes are lightweight, can provide a large number of recording sites per unit weight, and have been previously utilized in juvenile rats . However, silicon probes are relatively expensive per unit. In contrast, the micro-drive described in this manuscript can be constructed using less than $50 USD in raw materials, making it a cost-effective option for in vivo recording. In addition, silicon probes must often be implanted in fixed lines, which prohibits the recording of spatially diverse brain regions. In contrast, the micro-drive design described in this manuscript utilizes independently adjustable tetrodes to accommodate simultaneous recordings in up to 16 different locations with virtually no restriction on the spatial relationship between those locations. This micro-drive design can be easily modified to allow for targeting different locations than those described here by moving the cannula hole extrusions to any desired anterior/posterior and medial/lateral location.
When targeting alternate brain areas, it is important to note that while the tetrodes will often travel straight, it is possible for these thin wires to deflect slightly as they exit the micro-drive cannula. Thus, the smaller or more ventral a brain region is, the more challenging it will be to successfully target the area with tetrodes. The micro-drive implant described in this manuscript is foundationally similar to several prior tetrode-based micro-drive designs , , , , in that the individual tetrodes are affixed to screws, which allow for the fine control of the recording depth of each tetrode. While several features of the current micro-drive design are unique, including the ease of targeting spatially distributed brain areas, the primary novelty of the current manuscript is the description of surgical implantation and post-surgery recovery strategies, which allow for chronic studies of network activity in still-developing juvenile mice. Indeed, the surgery and recovery methodologies described here could be adapted to support other implants in juvenile mice. To maintain a consistent recording across multiple days, the wires or probes must be rigidly affixed to the skull. While the overall structure of the mouse skull undergoes only minor changes after p20, the skull thickens considerably between ages p20 and p45 . Indeed, the skull at p20 is insufficiently rigid to support an attached implant without being damaged. To overcome this biological limitation, this protocol artificially thickens the skull via cyanoacrylate during the implantation surgery. Implantation in mice younger than p20 is likely possible using this strategy, but the mouse skull undergoes considerable size and shape changes until roughly p20 . Thus, implantation for extended periods in mice younger than p20 is not recommended as the cyanoacrylate and fixed bone screws in the still-developing skull may significantly impact the natural growth of the skull and the underlying brain tissue development. Importantly, in this study, no impact on the gross measurements of the skull or brain size was observed following chronic implantation starting at p20 . A critical step in the method described in this manuscript is the post-surgery recovery strategy; according to this strategy, the weight of the implant should be continuously counterbalanced as the mouse matures and undergoes muscular and musculoskeletal system development. Early after implantation, mice are unable to successfully bear the weight of the implant without the counterbalance, leading to malnourishment and dehydration as the mouse cannot adequately reach the food and water sources in its cage. The counterbalance system is easy and inexpensive to construct, trivial to implement, and allows mice of any implantable age to freely explore the entirety of their home cage, thus ensuring adequate nutrition and hydration. As mice age, the amount of counterbalance can be lessened until it can be entirely removed in adult mice; however, the continued use of the counterbalance system is recommended for the duration of the experiment with at least a nominal counterweight attached at all times. While an adult mouse may be able to bear the size and weight of the micro-drive over time, continued natural movement during free behavior with no ameliorating counterweight produces torque and shearing force on the bone screws that anchor the micro-drive onto the skull, making it increasingly likely to become detached, especially during longer chronic experiments. 
Two important limitations are of note for the current study. First, to assess the impact of implantation at p20 on skull and brain development, several cohorts of mice were sacrificed after prolonged implantation . While these analyses revealed no significant impact of implantation on the skull cavity size or brain mass , the current study did not examine the skull size or brain mass at multiple time points throughout the early developmental period of p20-p60. While prior work demonstrates that the development of the brain cavity is completed by p20 , it is possible that implantation at this early window may produce unanticipated changes that are corrected or compensated for by the adult ages that were evaluated here. Second, the experiments that produced the electrophysiological data shown in and were not designed to maximize the cell yield. Thus, while the data presented here demonstrate stable, chronic recordings and well-isolated single units, they should not be taken as representative of the maximal potential yield for this device. Many human neurological and psychiatric disorders manifest during periods of early development or across adolescence, including autism and schizophrenia. However, little is known regarding the circuit-level dysfunction that may underlie these diseases, despite the plethora of mouse models available. The identification of these initial network changes is critical for creating early detection strategies and treatment paradigms. Yet, due to technical challenges, it remains unclear how network function is disrupted across development in mouse models of neuropsychiatric diseases. The micro-drive and recovery strategy described here is designed to support investigations into multi-regional brain network development in the mouse brain and, thus, allow researchers to measure healthy brain development as well as identify alterations to that development in mouse models of disease.
An Evaluation of Dental Caries Status in Children with Oral Clefts: A Cross-Sectional Study
5013850a-04e1-4c2b-9540-ff7d9cb602cc
11854929
Dentistry[mh]
Oral cleft (OC) is a common congenital craniofacial anomaly with a global prevalence of 0.15% per live birth, with an estimated 348,000 babies born with OC yearly . This condition causes anatomical and dental alterations and requires long-term treatment by a multidisciplinary team . Considering the type of OC, the rehabilitation process should start early, in the first year of life, with surgical repair . In this context, the occurrence of caries negatively affects individuals with OC since oral health is a prerequisite for rehabilitation . Dental caries is a multifactorial and progressive disease that involves the interaction between microbiological, dietary, and environmental factors, with a direct impact on dental health and the quality of life of individuals . Caries is a public health problem and, as such, is a challenge to the good oral health of children with OC . The study of caries experience in individuals with oral cleft (OC) is essential for outlining the vulnerable aspects of proper oral health because of its importance for rehabilitation, especially regarding reparative surgeries since caries or any other infection in the oral cavity might impair the success of these procedures . The relationship between the oral cleft and caries prevalence has been investigated . Some authors believe that more severe types of clefts increase caries prevalence . On the other hand, other authors did not find any difference . In general, the prevalence of caries in children and adolescents with OC, in most studies, is presented in a broad and general way without detailing the disease in periods. No study has investigated the caries experience, especially in primary dentition, considering short age intervals throughout early childhood and preschool. By analyzing this aspect, it is possible to understand in more detail the period of susceptibility in which there is an increase in the caries experience. This type of study is justified because knowing the caries experience profile in childhood has been considered a predictor for the occurrence of this pathology in adolescence and throughout life, affecting long-term oral health . According to the understanding of the WHO, the investigation of the caries experience in different populations has as its primary objective to offer an overview of the oral health condition of a population with specific particularities, such as the population of children with oral clefts (OCs). Therefore, it is crucial to determine periods of greater susceptibility and highlight the need for prevention and intervention to control the disease. This detailed analysis of the disease status in children with OC is essential because it allows for establishing clinical practice guidelines with preventive strategies specifically targeted at this population . Thus, this study aimed to investigate the dental caries status in the different stages of the primary dentition of children with OC and determine the first period of susceptibility to increased caries. The null hypothesis is that dental caries status does not increase with age in the deciduous dentition of children with oral clefts. This retrospective cross-sectional study was conducted at the Hospital for Rehabilitation of Craniofacial Anomalies—University of São Paulo (HRAC-USP). According to the principles of the “Declaration of Helsinki”, it was approved by the Human Research Ethics Committee of the HRAC-USP. 
All parents of the children included in the study were given verbal explanations and asked to sign the informed consent form to participate in the study. 2.1. Study Participants The sample size was calculated using the G*Power (version 3.1, Heinrich Heine-Universität, Dusseldorf, Germany) program. Considering data from the literature on the prevalence of caries in children with OC in early childhood (20%—between 0 and 3 years) and at preschool age (71%) , it was determined that, adopting an α error of 0.05 and a β error of 0.10 (power = 0.90), a minimum of 18 children per age group would be required. A total of 300 children (150 boys and 150 girls) participated in this study. The screening process considered as inclusion criteria that the child should be between 7 and 66 months old and have been diagnosed with nonsyndromic cleft lip and alveolus (NSCLA) or nonsyndromic cleft lip and palate (NSCLP). Children with other types of clefts, associated anomalies, or syndromes were excluded. The recruited participants were homogeneously divided into ten groups of 30 children each, matched by sex (15 boys and 15 girls), with an age interval every six months . According to the criteria described above, this recruitment and clinical examination occurred during the routine dates for children’s care at the hospital (HRAC-USP). 2.2. Clinical Examination All children were examined for dental caries lying on a medical bed under natural and artificial light with the aid of a dental mirror and Community Periodontal Index Probe (CPI Probe) for removal of debris from the teeth. The examination was performed by a single calibrated and experienced examiner (LTN). A pilot study was performed before the initiation of data collection to assess intrarater reliability using Cohen’s kappa score. For intrarater reliability, 30 random children enrolled at the HRAC-USP were examined at two different time points. The kappa coefficient was 0.98, which demonstrated solid intraobserver reliability. For caries evaluation, the diagnostic criteria established by the World Health Organization (WHO) were employed . All teeth that crossed the gum tissue, with some visible portions, were considered erupted. The status of the deciduous dentition was recorded using the dmft index (decayed, missing, and filled teeth index). The dmft index was calculated for each subject. The method used for the intraoral exam was systematic per quadrant. The sequence of examination was clockwise from the last maxillary right tooth to the last mandibular right tooth. No dental radiographs were taken. Dental caries experience was assessed by prevalence, mean dmft, and distribution. Prevalence was determined as the percentage of children with past or present caries experience in each age group. Caries experience (present or past) was considered when dmft ≠ 0 and without caries (caries-free) when dmft = 0. The mean number of teeth affected by caries in each group was indicated by the dmft index. Distribution was the observation of the frequency of occurrence of caries in each type of tooth. After the assessment proposed in the study, the parents of all participants were advised on dietary habits and oral hygiene, and those children who had cavities were referred for treatment. 2.3. Statistical Analysis Descriptive statistics (i.e., frequencies, percentages, means ± standard deviations (SDs)) were performed. Categorical variables were expressed as numbers and percentages .
Univariate analyses used the Chi-square test and Fisher’s exact test to assess the differences in the prevalence between age groups, comparing the frequencies of the age groups sequentially. Student’s t-test was applied to analyze the differences in the mean dmft values between sexes and between age groups. A multiple linear regression analysis for the dmft caries score was performed, including the age group, sex, and cleft type, to analyze the significance of each of them as a predictive variable of caries in this evaluated group of children with OC. The statistical analysis was performed using R statistical software (version 4.0.2; R Foundation for Statistical Computing, Vienna, Austria). Statistical significance was set as p < 0.05.
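As a practical illustration of the measures and comparisons described in this section, the analysis can be reproduced from a per-child record of decayed, missing, and filled deciduous teeth. The Python sketch below (using pandas, SciPy, and statsmodels rather than the R software actually used, and with hypothetical column names) computes each child's dmft score and the caries prevalence per age group, compares two consecutive age groups with a chi-square test, compares mean dmft between sexes with a t-test, and defines the linear model of dmft on age group, sex, and cleft type; it is a schematic outline rather than the authors' original analysis code.

```python
import pandas as pd
from scipy.stats import chi2_contingency, ttest_ind
import statsmodels.formula.api as smf

# Hypothetical per-child table with columns: 'decayed', 'missing', 'filled',
# 'age_group' (e.g., '7-12', '13-18', ...), 'sex', and 'cleft_type'.
def summarize_caries(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["dmft"] = df["decayed"] + df["missing"] + df["filled"]
    df["caries_experience"] = df["dmft"] != 0          # dmft != 0 -> past or present caries
    return df

def prevalence_by_group(df: pd.DataFrame) -> pd.Series:
    """Percentage of children with caries experience in each age group."""
    return df.groupby("age_group")["caries_experience"].mean() * 100.0

def compare_groups(df: pd.DataFrame, group_a: str, group_b: str) -> float:
    """Chi-square p-value for caries-free vs caries-experience counts in two age groups."""
    sub = df[df["age_group"].isin([group_a, group_b])]
    table = pd.crosstab(sub["age_group"], sub["caries_experience"])
    _, p, _, _ = chi2_contingency(table)
    return p

def dmft_model(df: pd.DataFrame):
    """OLS of dmft on age group, sex, and cleft type (all treated as categorical)."""
    return smf.ols("dmft ~ C(age_group) + C(sex) + C(cleft_type)", data=df).fit()

if __name__ == "__main__":
    # Toy data, illustrative only; the real table would contain 300 children.
    toy = pd.DataFrame({
        "decayed":    [0, 2, 1, 0, 3, 0],
        "missing":    [0, 0, 1, 0, 0, 0],
        "filled":     [0, 1, 0, 0, 0, 1],
        "age_group":  ["7-12", "19-24", "19-24", "7-12", "61-66", "61-66"],
        "sex":        ["M", "F", "M", "F", "M", "F"],
        "cleft_type": ["NSCLP", "NSCLA", "NSCLP", "NSCLP", "NSCLA", "NSCLP"],
    })
    scored = summarize_caries(toy)
    print(prevalence_by_group(scored))
    print("chi-square p =", compare_groups(scored, "7-12", "19-24"))
    print("sex t-test p =", ttest_ind(scored.loc[scored["sex"] == "M", "dmft"],
                                      scored.loc[scored["sex"] == "F", "dmft"]).pvalue)
```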
The prevalence of caries in the total group was 59.4%, with a mean age of 36 months. The mean decayed, missing, and filled teeth (dmft) index was 3.4 ± 4.51 for the total group. There was no statistically significant difference between the sexes for prevalence (61% for males and 58% for females) ( p = 0.638) or for the mean dmft index (3.54 for males and 3.27 for females) ( p = 0.609). The cleft types of the sample are shown in , where it is possible to observe a predominance of unilateral cleft lip and palate ( n = 176). Comparing caries prevalence and mean dmft between sexes within each age group, no statistically significant differences were found. Thus, data on caries experience were pooled between sexes into a single group to present the results. Caries prevalence increased progressively with age, occurring in 10% of children aged 7–12 months and 86.6% of children aged 61–66 months . The first statistically significant increase in prevalence occurred between the 13–18-month and 19–24-month groups, rising from 6.6% to 40% ( p = 0.005). The mean dmft index also increased with age despite some variations between groups ( and ). The first statistically significant increase likewise occurred from the 13–18-month to the 19–24-month group, with values of 0.1 ± 0.40 and 1.0 ± 2.59, respectively ( p = 0.02). Other statistically significant differences in mean dmft also occurred in other age groups, as shown in . A multiple linear regression analysis showed that, of all factors included in the proposed model, only age was significant ( p < 0.001), revealing that increasing age was significantly associated with increased dental caries . Of the 300 children evaluated, a total of 4813 teeth were examined, of which 1025 were computed in the dmft index. The maxilla was the most affected arch ( n = 592). The distribution of caries for each type of tooth in the total group revealed the highest caries experience in the maxillary central incisors. In the mandibular arch, the second and first molars were the teeth most affected .
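The regression result reported above can be illustrated with a minimal sketch; the analysis in the study was run in R, so this Python version with statsmodels is only an illustration, and the data frame, variable names, and the numeric coding of age group are hypothetical placeholders rather than the study's actual dataset or coding scheme.

```python
# A minimal sketch of the multiple linear regression reported above (dmft score
# modelled on age group, sex, and cleft type). Everything below is hypothetical:
# the study's own analysis was performed in R on its real dataset.
import pandas as pd
import statsmodels.formula.api as smf

age_order = ["7-12", "13-18", "19-24", "25-30", "31-36",
             "37-42", "43-48", "49-54", "55-60", "61-66"]

df = pd.DataFrame({
    "dmft":      [0, 0, 1, 2, 3, 5, 6, 7],
    "age_group": ["7-12", "13-18", "19-24", "25-30", "31-36", "43-48", "55-60", "61-66"],
    "sex":       ["M", "F", "M", "F", "M", "F", "M", "F"],
    "cleft":     ["NSCLA", "NSCLP", "NSCLP", "NSCLA", "NSCLP", "NSCLA", "NSCLP", "NSCLA"],
})

# Encode age group as an ordered index to test the age trend; keep sex and
# cleft type as categorical covariates.
df["age_idx"] = df["age_group"].map({g: i for i, g in enumerate(age_order)})

model = smf.ols("dmft ~ age_idx + C(sex) + C(cleft)", data=df).fit()
print(model.params)    # the age_idx coefficient captures the age effect
print(model.pvalues)
```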
These results for the upper arch point to a possible correlation between the cleft and dental caries, especially in the upper anterior teeth adjacent to the area affected by the cleft. The null hypothesis was rejected: dental caries in the deciduous dentition of children with oral clefts increased with age. To date, the literature on caries in subjects with OC remains controversial. Some authors have reported higher caries experience in subjects with OC than in controls without clefts , whereas other researchers could not observe this difference . A recent literature review published by Grewcock and collaborators identified only a few studies on caries experience, especially in the primary dentition of children with NSOC . Comparison with other studies is limited because some analyzed older children, very broad age ranges, or samples with different types of clefts . Other studies are difficult to compare because of overly wide age ranges, data presented separately by cleft type (thus reducing the sample size), and differences in the methods employed for caries diagnosis . Grewcock et al. indicated that clinical studies investigating dental caries in children with clefts can help in training future dentists in the clinical management of these children, as well as in allocating resources for public policies aimed at oral health prevention for this population . A distinguishing feature of the present study is that a large number of cases were analyzed, standardized to cleft lip and alveolus or cleft lip and palate, and stratified by age group in a sex-matched manner during early childhood. This period, in which hygiene and dietary habits are established, is directly related to the occurrence of caries throughout life. The selection criterion regarding cleft type is essential since, when the cleft involves the alveolar ridge, dental anomalies in the region are more common and parents have greater difficulty with, or fear of, cleaning the area, predisposing these children to caries due to poor oral hygiene . The caries prevalence in the present study increased progressively with age, ranging from 10% at the earliest ages (7–12 months) to the highest prevalence (90%) in the 49–54-month age group. Notably, the occurrence of caries in this group with NSOC may be regarded as early, since 10% of the children in the first age range evaluated (7–12 months) were affected and the diagnostic criterion only registers caries once cavitation can already be observed. A possible hypothesis for these findings is that the first reconstructive surgeries occur in this age group. Cheiloplasty, the surgery that repairs the lip, is ideally performed in the first year of life. With this surgery, scar tissue forms on the upper lip, making mouth hygiene difficult due to restricted lip mobility, which is quite common . Parents generally report fear of manipulating the child’s mouth for dental hygiene. In addition, the early introduction of bottle feeding, due to difficulties with natural breastfeeding, can also have an impact in the initial phase of the primary dentition. The first increase in caries prevalence occurred between 13 and 18 months (6.6%) and 19 and 24 months (40%).
These prevalence findings are higher than those observed in a study conducted in Nigeria, which found no caries lesions in children with clefts aged 0 to 2 years . Considering the other, broader age groups in the preschool phase, our prevalence data are also higher than those observed in studies investigating children with clefts from other ethnic populations. Okoye and collaborators, evaluating Nigerian children aged 3 to 6 years, reported a 25% prevalence . Likewise, Kirchberg and colleagues, who evaluated German children aged 1 to 6 years, reported a caries prevalence of 25% . On the other hand, prevalence rates close to those found in the present study were observed by Briton and colleagues, who studied the 4.5–6-year age group and reported a prevalence of 68.2% . In children with an average age of 57 months, Chopra and colleagues reported a prevalence of 71.9% . As with prevalence, the first increase in mean dmft occurred from the 13–18-month to the 19–24-month group. The variation in mean dmft across age ranges was not uniform; it was interspersed with lower values in some groups, ranging overall from a minimum of 0.10 (at 13–18 months) to a maximum of 7.4 (at 61–66 months). Briton and collaborators (2010), who evaluated age groups at 1-year intervals, reported a mean dmft of 0.49 in the range of 1.5 to 2.5 years, 1.03 between 2.5 and 3.5 years, and 0.94 between 3.5 and 4.5 years . The present study found dmft values of 2.4, 3.28, and 5.05 in these same age ranges, respectively; a comparison of these findings reveals that the dmft values in the present study were, on average, 3 to 5 times higher. Okoye et al. reported a mean dmft of 0.38 in children between 3 and 6 years old . In the present study, the mean dmft for similar age groups was 5.38, 14 times higher than that reported by these authors. At isolated ages, the mean dmft observed in this study (from 6.0 at 49–54 months to 7.4 at 61–66 months) was higher than the results obtained in other studies, in which dmft ranged between 1.3 and 3.8 . Despite the differing results, it is important to highlight that these studies were conducted with children with other types of clefts and represent different socioeconomic contexts, which may affect oral health conditions. Nevertheless, they corroborate a tendency for caries experience to increase with age in early childhood and the preschool period. An evaluation of the caries distribution in the primary dentition revealed that the maxillary central incisors were most affected, followed by the lower molars. These results corroborate the findings of other authors, who also found a higher occurrence of caries in central incisors and lower molars . A possible explanation is that, in addition to the presence of the cleft, the anterior teeth, especially in the cleft area, are susceptible to anomalies of shape, number, position, and structure, which predisposes them to caries . In addition, the scarring resulting from plastic lip rehabilitation surgery can make oral hygiene difficult in this area. The psychological aspects of caregivers who fear performing oral hygiene in the cleft area should also be considered . The occurrence of tooth decay in the anterior region has several impacts on the child: eating and speech are impaired, and psychological and social aspects are also affected, compounding the presence of the cleft itself, which already carries considerable stigma.
In summary, in our study of Brazilian children with NSOC, the first increase in both prevalence and mean dmft occurred between the 13–18-month and 19–24-month age groups. It is essential to highlight that the prevalence rate indicates the number of children affected without considering the number of teeth involved, which is measured by the dmft index. In the present study, both indices increased during the same age transition, meaning that we simultaneously observed a greater number of children with cavities and a greater number of teeth affected by the disease in these children. Several related general and specific factors may underlie these findings. As a general factor, caries has a multifactorial etiology in its development and progression. It involves intrinsic factors related to host characteristics, such as salivary parameters, active contamination by cariogenic pathogens, and the immune response against bacterial infection, as well as extrinsic factors related to risk habits and behavior, such as diet and oral hygiene . Specific factors in children with clefts include local anatomical particularities such as the presence of the cleft itself, the surgical repair, and the scarring resulting from surgery, which reduces upper lip mobility. Regarding this age range specifically, our findings coincide with the postsurgical period of cheiloplasty and palatoplasty, which, in our hospital’s surgical protocol, occur from 12 months onwards. Furthermore, this stage of development is an active phase of deciduous tooth eruption, including the eruption of the first deciduous molars. The anatomy of these teeth and the frequent occurrence of dental anomalies lead to malalignment, which further facilitates the accumulation of dental plaque, increasing the retentive areas and the number of microorganisms in the local microbiota . In this phase, new foods are introduced into the diet, often with a preference for sugary foods. Moreover, from a developmental point of view, children become more resistant to parental help with brushing, yet they still lack the fine motor coordination to perform oral hygiene efficiently, especially for the posterior teeth . There are also biological factors related to the various dental anomalies prevalent in children with clefts, such as supernumerary teeth, hypoplasia, and malocclusion, which represent an additional risk factor for caries. In addition to these anatomical and biological factors, there are social and behavioral aspects, since this population comprises children from low socioeconomic backgrounds with limited access to dental care, hygiene products, and preventive information, in addition to sociocultural variables related to sugar intake and psychological aspects that intensify this risk status . In some instances, this gives rise to permissive parental attitudes regarding dietary and oral hygiene habits, among others . In addition, for parents, the central aspect of rehabilitation is reconstructive plastic surgery, with less attention paid to the importance of oral health, which can lead to poor oral hygiene and increase the risk of caries experience . In this sense, some authors recommend early preventive oral health care for children with clefts . This study had some limitations.
The sample came from various regions of the country, and no data were collected on the family’s socioeconomic level or the parents’ educational level. Furthermore, covariate data, such as information on diet and oral hygiene habits, were not collected. A recommendation for future studies of this type would be to distribute the case series into narrower age intervals and to stratify the sample by cleft type. Collecting information on hygiene and dietary habits would also complement the risk estimation analysis, allowing better visualization of the different needs of each age group and cleft type. In conclusion, our results show that, in the primary dentition of children with nonsyndromic oral clefts, the first period of susceptibility occurs from the 13–18-month age group onwards, and the mean dmft is higher in the older age groups. These findings support the need for an early preventive care protocol, established in the 6–12-month age group, to shift the emphasis toward dental care and to highlight the importance of the teeth and good oral health for the entire rehabilitative process and for health promotion.
Evaluation of musculoskeletal complaints, treatment approaches, and patient perceptions in family medicine clinics in a tertiary center in Jordan: a cross-sectional study
8406c45f-193b-4f0b-a041-2a7ababb6e9a
11748281
Family Medicine[mh]
Orthopedic conditions are among the most common health complaints encountered globally, affecting millions of individuals of various ages and backgrounds . Conditions such as back pain, neck pain, and joint disorders, including osteoarthritis and tendonitis, are frequent presentations in medical clinics . These musculoskeletal (MSK) complaints often lead to discomfort, reduced mobility, and a decreased quality of life . Due to the high prevalence of these conditions, patients frequently seek medical advice to alleviate symptoms such as pain, stiffness, and functional limitations. Consequently, musculoskeletal disorders represent a significant burden on healthcare systems worldwide, highlighting the importance of effective management strategies . In many cases, family medicine or primary care clinics serve as the initial point of contact for patients with MSK complaints . This can present a significant challenge for primary healthcare providers, as they must be equipped to manage a wide variety of musculoskeletal conditions. The diversity of orthopedic presentations, combined with the complexity of differentiating between acute, chronic, and potentially serious conditions, places substantial pressure on family medicine systems . Primary care physicians are tasked with providing accurate diagnoses, implementing appropriate management plans, and determining when to refer patients to specialists. These challenges underscore the critical role of family medicine in the early diagnosis and management of orthopedic conditions. Given the high volume of musculoskeletal complaints seen in family medicine clinics, it is essential to evaluate how these conditions are managed and the effectiveness of the treatment approaches employed . Equally important is the assessment of patient perceptions regarding the quality of care they receive. Patient satisfaction and their perception of the quality of care are vital components of healthcare delivery, as they directly influence patient compliance and health outcomes . Understanding the patient experience, particularly in relation to musculoskeletal disorders, can provide valuable insights into the effectiveness of current healthcare practices and highlight areas for improvement. While global studies highlight the prevalence and impact of MSK disorders, there is limited evidence on how these conditions are managed in primary care, particularly in family medicine clinics. This is especially true in developing regions, such as the Middle East, where healthcare systems face unique challenges, including resource constraints and inconsistent service delivery . Existing research provides valuable insights into MSK management in high-income countries, but it does not account for the socio-economic, cultural, and systemic factors that influence care in resource-limited settings. Despite these challenges, the extent to which MSK care is optimized in primary care, especially in addressing gender-specific needs and patient perceptions, remains unclear. Furthermore, there is a significant gap in understanding how treatment pathways and patient outcomes are shaped by these factors in developing countries, leaving a critical grey area in the evidence. Addressing these unknowns is essential to bridge the gap between global standards of care and region-specific needs. 
This study hypothesizes that variations in the presentation, management, and patient perceptions of musculoskeletal (MSK) complaints exist within family medicine clinics, influenced by factors such as gender, psychosocial considerations, and healthcare delivery practices. By investigating these variations in a tertiary center in Jordan, a developing Middle Eastern country, we aim to address gaps in the literature and provide insights into the complex interplay of these factors. The findings from this study are intended to inform strategies for improving the quality of MSK care and optimizing patient outcomes in resource-constrained healthcare systems. This study was prepared following the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) guideline for cross-sectional studies .
Study design
We employed a cross-sectional descriptive design using a convenience sampling method. The objective was to evaluate the presentation, management, and perceptions of musculoskeletal (MSK) complaints in family medicine clinics. A comprehensive approach was taken to assess demographic variables, health profiles, comorbidities, visit-specific characteristics, healthcare-related factors, psychological dimensions, and occupational influences. This design provided a snapshot of patient care and the interplay of these factors within the study period.
Setting
The study was conducted in the family medicine clinics of a central teaching hospital in Amman, Jordan. The clinics, comprising approximately 10 units, operate throughout the week and serve a diverse patient population from urban and rural areas. Data were collected over six months, from January 1st, 2024, to June 30th, 2024, ensuring seasonal variability was captured. The research team consisted of family medicine residents trained in standardized data collection methods and supervised by 2–3 family medicine consultants. Data were gathered during routine clinical visits, ensuring real-time documentation of patient complaints and management pathways.
Participants
The study’s inclusion criteria targeted adults aged 18 years or older presenting with musculoskeletal (MSK) complaints. These complaints were defined as disorders involving muscles, bones, tendons, ligaments, and connective tissues, typically manifesting as pain, stiffness, or functional impairment. To participate, individuals needed to provide informed consent and agree to interviews and data collection using their authorized medical records. This approach ensured a comprehensive understanding of the participants’ MSK conditions while maintaining ethical standards. Exclusion criteria were implemented to minimize confounding factors and ensure the reliability of the study findings. Patients with non-MSK complaints, cognitive impairments, severe trauma requiring emergency care, recent orthopedic surgeries, or advanced comorbidities such as cancer were excluded. Additionally, pregnant women were not included due to the unique nature of pregnancy-related MSK conditions. Overall, 500 participants were carefully selected, reflecting the diverse spectrum of MSK conditions commonly encountered in primary care settings.
Variables
The study variables were structured to analyze factors influencing outcomes such as patient-reported satisfaction, perceptions of care, and quality of life (QoL). Exposures included demographic factors (age, gender, socioeconomic status), health profiles, and types of musculoskeletal (MSK) complaints.
Predictors and confounders, such as medical comorbidities (e.g., hypertension, diabetes), employment status, psychological variables, and work-related factors, were considered to ensure a thorough and unbiased assessment of relationships between exposures and outcomes.
Data collection and measurement
Data collection followed a structured protocol combining personalized patient interviews and reviews of electronic health records to ensure comprehensive and accurate data capture. Trained family medicine residents conducted the interviews using a standardized questionnaire specifically developed for this study. The questionnaire was designed to capture demographic, clinical, and psychosocial variables, drawing from validated tools and existing literature on musculoskeletal complaints. Content validity was ensured through expert review, and a pilot test with 20 eligible patients refined its clarity and structure. Reliability was assessed with Cronbach’s alpha, achieving a value above 0.7, and test-retest reliability showed a strong intraclass correlation coefficient (ICC) for key variables. These steps confirmed the questionnaire’s validity and reliability for the study. The interviews, conducted during routine clinic visits, gathered self-reported information on demographic, psychological, and occupational factors, while electronic health records provided clinical data on comorbidities and treatment approaches. Variables such as pain intensity and functional status were assessed using validated scales (e.g., a 0–10 pain scale). All data were recorded electronically, consolidated into a single dataset, and cross-verified for accuracy. The process adhered strictly to the predefined study timeline to maintain consistency and reliability.
Study size
A sample size of 500 was determined using a power analysis conducted in G*Power software (version 3.1.9.7), ensuring sufficient statistical power to detect meaningful associations between variables. The calculation was based on a 95% confidence level, a 5% margin of error, and an expected effect size of 0.3, derived from prior studies in musculoskeletal health research .
Bias
Efforts were made to minimize bias throughout the study. To address selection bias, convenience sampling was conducted across various clinic sessions to ensure the inclusion of a representative patient sample. Information bias was mitigated by adhering to a standardized data collection protocol, with trained interviewers ensuring consistency and uniformity in data recording. Additionally, recall bias was minimized by focusing the data collection on recent patient experiences, reducing the likelihood of inaccuracies related to memory.
Ethical considerations
In conducting this study, all ethical considerations were carefully addressed to ensure the protection and privacy of the participants. Approval was obtained from the institutional review board (IRB) of the teaching hospital where the research was conducted, IRB approval No. (10/2024/25/22). Informed consent was obtained from all participants prior to their inclusion in the study, ensuring that they fully understood the purpose of the research. Patient confidentiality was strictly maintained by anonymizing all data, and access to medical records was restricted to authorized personnel directly involved in the study. Furthermore, the study was conducted in accordance with the principles outlined in the Declaration of Helsinki, ensuring that all participants were treated with respect and dignity throughout the research process.
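The internal-consistency check described in the data collection subsection above can be illustrated with a short sketch; the item responses below are invented for illustration and the function only shows how a Cronbach's alpha of the kind reported (above 0.7) would be computed, not how the authors actually processed their pilot data.

```python
# A minimal sketch of the internal-consistency check described in the data
# collection subsection above. Cronbach's alpha is computed from a hypothetical
# matrix of questionnaire responses (rows = pilot participants, columns = scale
# items); the numbers are invented and are not the study's data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array of shape (n_respondents, n_items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)        # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of the summed score
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical pilot data: 6 respondents answering 4 Likert-type items (1-5).
pilot_responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 4, 3, 3],
])
alpha = cronbach_alpha(pilot_responses)
print(f"Cronbach's alpha = {alpha:.2f}")  # values above 0.7 are conventionally acceptable
```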
Statistical analysis
For statistical analysis, descriptive statistics were used to summarize demographic characteristics and clinical outcomes. Categorical variables were analyzed using Chi-square tests, while continuous variables were compared using t-tests or Mann-Whitney U tests, depending on the data distribution. Odds ratios (OR) with 95% confidence intervals (CI) were calculated to explore associations between variables. All analyses were conducted using the Statistical Package for Social Sciences (SPSS version 23) and Microsoft Excel, with a p -value of less than 0.05 considered statistically significant. Data were compiled into a single electronic file for analysis to ensure consistency and accuracy throughout the study. Dependent variables in this study included patient satisfaction, pain scores, functional status, and quality of life metrics. Independent variables comprised demographic factors (e.g., age, gender, socioeconomic status), clinical factors (e.g., MSK complaint type, comorbidities, and treatment pathways), and psychosocial factors (e.g., coping strategies, anxiety levels, and fear of disability). Variables were categorized based on their role in influencing outcomes, allowing for a structured approach to statistical analysis. Descriptive statistics summarized these variables, and inferential tests examined associations between independent variables and outcomes. This framework ensured a comprehensive understanding of the factors impacting musculoskeletal health and patient experiences. The study methodology is illustrated in Fig. .
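The univariate comparisons named in the analysis plan above (chi-square tests, Mann-Whitney U tests, and odds ratios with 95% confidence intervals) can be sketched as follows; the study used SPSS, so this Python version is purely illustrative, and all counts and scores below are hypothetical placeholders rather than study data.

```python
# A minimal sketch of the univariate comparisons described above: a chi-square
# test on a 2x2 gender-by-outcome table, an odds ratio with a 95% CI from the
# usual log-OR normal approximation, and a Mann-Whitney U test for a skewed
# 0-10 score. All figures are invented for illustration only.
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

# Rows: female, male; columns: outcome present, outcome absent (invented counts).
table = np.array([[90, 215],
                  [40, 155]])

chi2, p_chi2, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_chi2:.3f}")

a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")

# Non-normally distributed scores (e.g. 0-10 pain ratings) compared by gender.
pain_female = [7, 8, 6, 9, 7, 8, 7]
pain_male = [5, 6, 7, 6, 5, 7, 6]
u_stat, p_mw = mannwhitneyu(pain_female, pain_male, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_mw:.3f}")
```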
Demographic and medical characteristics
The questionnaire was completed by a total of 500 patients, with a mean age of 46.11 ± 15.32 years. Of these, 305 patients (61.0%) were female. In terms of education, 332 respondents (66.3%) were university graduates. The majority of participants resided in urban areas (376, 75.2%), were married (342, 68.7%), and reported an average economic status (421, 84.4%). Regarding medical history, hypertension (HTN) was present in 126 patients (25.2%), diabetes mellitus (DM) in 109 patients (21.8%), and cardiovascular disease (CVD) in 53 patients (10.6%). Additionally, 147 patients (29.4%) were smokers. Table provides a detailed summary of the demographic and medical characteristics of the participants.
Musculoskeletal pain, treatment pathways, and patient experience in family clinics
The most frequently reported musculoskeletal complaints were low back pain (23.8%), knee pain (23.4%), and foot and ankle pain (16.9%). The median pain score was 7 (IQR 2) on a scale of 10, with females showing significantly higher odds of presenting with hip pain (OR = 1.650, p = 0.008) and a higher prevalence of low back pain compared to males (61.9% vs. 38.1%, OR = 1.12, p = 0.024).
Pain scores also differed significantly between genders, with females reporting higher scores ( p < 0.001). Most patients (98.4%) received analgesics as initial management, with no significant gender difference. Only 18.1% were referred to orthopedic specialists, while 57.3% reported receiving counseling on musculoskeletal health, and just 41.5% underwent diagnostic tests during their visits, highlighting gaps in assessment. Patients expressed high satisfaction with their clinic visits, with a median satisfaction score of 8 (IQR 3) and counseling adequacy rated similarly at 8 (IQR 2). However, disability and functional status showed gender-specific differences: males reported higher disability scores (median 8, IQR 4) compared to females (median 4, IQR 4), while females demonstrated better functional status in daily activities (median 8, IQR 3 vs. males at 6, IQR 2). These findings underscore overall positive patient experiences alongside gender-related variations in pain, disability, and functional capabilities, summarized in Tables and .
Patient perceptions and confidence in family medicine healthcare
Gender differences in healthcare access and perceptions revealed notable trends. Most participants reported easy access to healthcare facilities, with females slightly more likely to report difficulties, though this was not statistically significant ( p = 0.087). Health insurance coverage was predominantly partial for both genders, with no significant differences in type. Males reported greater access to social support (OR = 1.309, p = 0.039), while fear of disability or loss of independence was more common among females (OR = 0.992, p = 0.048), highlighting distinct psychosocial concerns. Males also expressed significantly higher confidence in the healthcare system’s capacity to manage musculoskeletal issues (OR = 1.926, p = 0.035). Despite these variations, satisfaction with healthcare services was high for both genders (median score 8), and females perceived clinics as busier, though this was not statistically significant ( p = 0.082). These findings emphasize nuanced gender differences in social support, confidence in healthcare, and psychosocial concerns, as detailed in Table .
Gender differences in mental health and quality of life related to musculoskeletal conditions
Gender differences in the psychosocial impact of musculoskeletal conditions reveal significant trends. Males report greater concerns about employment and income loss (58.0% vs. 42.0%, OR = 1.525, p = 0.032) and are more likely to use coping strategies (OR = 1.363, p = 0.036), while females experience higher anxiety about disease progression (62.2% vs. 37.8%, OR = 1.22, p = 0.045). Males also report better mental health (median score 9 vs. 7, p = 0.036), sleep quality (median 8 vs. 6, p = 0.044), and overall quality of life (median 8 vs. 6, p = 0.019), highlighting distinct gender disparities. These findings underscore the need for gender-sensitive approaches to address coping strategies, anxiety, and quality of life in managing musculoskeletal conditions, as detailed in Table .
Occupational factors, work-related support, and safety concerns in patients attending a family medicine clinic
Gender-related differences in occupational factors impacting musculoskeletal health indicate that males experience a significantly higher workload, with 54.4% of males reporting high workload compared to 45.6% of females (OR = 1.564, p = 0.020).
Males are also far more likely to report work-related injuries, accounting for 82.8% of those injured (OR = 8.623, p < 0.001). Access to recreational facilities for physical activity is notably higher for males (52.4% vs. 47.6% for females, OR = 1.956, p < 0.001), and males are more frequently advised to adjust their work patterns, with 48.5% of males versus 51.5% of females receiving such recommendations (OR = 1.679, p = 0.020). These findings, summarized in Table , emphasize that males face a greater occupational burden.
The main findings of this study reveal distinct gender differences across multiple dimensions of musculoskeletal health. Females reported a higher prevalence of conditions such as low back and hip pain, greater anxiety about disease progression, and better functional status in daily activities, but experienced higher pain levels and poorer quality of life indicators, including sleep and mental health. In contrast, males demonstrated higher confidence in the healthcare system, greater use of coping strategies, and more concerns about employment and income loss related to their conditions. These gender-specific patterns highlight the diverse experiences of patients with musculoskeletal complaints, emphasizing the importance of tailoring healthcare approaches to address the unique needs and challenges of both genders.
Greater prevalence of pain in females
Our findings indicate that female patients are significantly more likely to report pain than male patients, with a particular prevalence of low back and hip pain. Low back pain was notably more common among females, while hip pain was reported exclusively by female patients, underscoring a clear gender-specific pattern. These results align with previous studies suggesting that biological, hormonal, and psychosocial factors contribute to the higher pain prevalence and sensitivity observed in females. Research consistently supports the greater prevalence of low back pain (LBP) in women compared to men. For instance, Wáng et al. and Brady et al.
found that females exhibit higher rates of low back pain across all age groups, with this gender difference widening post-menopause . Schneider et al. reported a 40% prevalence of back pain among women in Germany, notably higher than in men, attributing this disparity to hormonal changes, psychological influences, and anatomical differences . Furthermore, Chenot et al. observed that women not only have lower functional capacity but also higher rates of chronic LBP and a greater likelihood of experiencing depression related to LBP . Our findings, consistent with these studies, underscore the need for gender-sensitive pain management approaches to address the specific pain patterns and experiences of female patients effectively. Notably, our study observed a higher prevalence of hip pain among female patients compared to males. This finding may be attributed to a higher incidence of hip pathologies, such as hip dysplasia, prevalent within our population . However, it is important to highlight that previous studies have not specifically examined the prevalence of hip pain among patients visiting family or orthopedic clinics without a prior diagnosis of hip pathology. This gap underscores the unique contribution of our study in highlighting hip pain patterns in a general clinical population, pointing to a need for further research on undiagnosed hip pain, especially in primary care settings.
Gender differences in diagnostic and treatment approaches for musculoskeletal pain
Our study found that female patients were more likely than males to receive analgesia, referrals to physical therapy, and diagnostic testing during clinic visits. This trend may reflect the higher prevalence and intensity of musculoskeletal pain reported by females, especially in low back and hip pain, which likely prompts more extensive management and evaluation. Additionally, healthcare providers may view female patients as needing broader diagnostic and therapeutic support due to gender-based differences in pain presentation and reporting patterns. These findings suggest that clinicians may adopt a more comprehensive approach to meet the specific healthcare needs of female patients, integrating diverse diagnostic and treatment measures. Our findings align with Barr et al., who observed that females generally use more pain-relieving medications, though it remains uncertain whether this is due to actual usage differences or potential reporting bias . In contrast, Weimer et al. and Procento et al. found that females are less frequently prescribed strong opioids compared to males , while Richardson reported that women are more likely to visit emergency departments for pain-related complaints, leading to higher rates of diagnostic evaluations . Our results align with these findings, demonstrating that female patients not only receive more referrals to physical therapy and diagnostic evaluations but also utilize a broader range of pain management strategies compared to their male counterparts.
Greater disability, reduced functional status, and employment anxiety in male patients
Our study found that male patients with musculoskeletal (MSK) conditions report higher levels of disability, lower functional status, and greater anxiety about potential loss of employment or income compared to female patients. This suggests that MSK conditions may affect men differently, likely due to societal expectations that emphasize physical capacity and occupational stability.
Physical limitations from MSK conditions can directly impact male patients’ job performance, particularly in physically demanding roles, potentially explaining the increased levels of disability and functional impairment we observed. Additionally, as males often carry primary financial responsibilities, their concerns about job and income stability may be heightened. These findings highlight the importance of integrated MSK management that includes occupational support to address both physical limitations and economic concerns specific to male patients. Our findings align with gender-based patterns reported in prior research on MSK disorders, with notable similarities and contrasts. Consistent with our results, Wolf et al. and Raina et al. observed that males with MSK conditions experience greater disability, reduced functional status, and higher employment anxiety . Conversely, Holland et al. and Stubbs et al. reported that females often experience more pain-related disability and are more likely to leave employment, potentially reflecting a different coping mechanism compared to males . Supporting our observation of functional impairment in male patients, Zink et al. found severe disability in males with specific MSK conditions like ankylosing spondylitis , while Wijnhoven et al. identified greater work-related disability in men with low back pain . Additionally, Bailey et al. noted that men are more frequently referred to mental health services for work-related issues, suggesting added psychological strain in employment settings . Together, these studies underscore the complex employment-related challenges faced by male MSK patients and the need for occupational and psychological support tailored to their specific needs.
Lower social support, higher anxiety, and coping challenges in female patients
Our study found that female patients with musculoskeletal (MSK) conditions reported lower levels of social support, a greater fear of disability and loss of independence, fewer coping mechanisms, and increased anxiety about disease progression compared to their male counterparts. These findings may reflect social and cultural dynamics in which females have limited access to social resources or perceive less support in managing chronic health conditions. This reduced social support, combined with an elevated fear of disability, could contribute to heightened anxiety, as managing chronic pain without sufficient support or effective coping strategies can intensify concerns about the condition’s impact on their future independence. These observations underscore the need for integrated support systems and psychosocial interventions that enhance coping skills and directly address the unique challenges faced by female MSK patients. Previous research supports these findings and offers further insight into the coping challenges and psychosocial dynamics specific to female MSK patients. Grossi et al. observed that females with musculoskeletal pain report lower coping capacity, which correlates with higher levels of emotional distress and greater disability . Similarly, Soares et al. found that lower self-esteem among female patients with MSK pain is associated with increased anxiety and depression, while positive self-esteem aligns with active coping mechanisms . Espwall et al. noted that women with undefined MSK disorders receive less emotional support than those with non-MSK conditions, potentially exacerbating their psychosocial burdens . Further, Cheng et al.
reported that females with chronic MSK pain experience worse physical functioning than males . Together, these studies and our findings highlight the importance of tailored psychosocial support to bolster coping strategies and enhance social support for female MSK patients, aiming to improve both mental health and physical outcomes.
Challenges in mental health and quality of life for female patients with MSK disorders
Our study found that female patients with musculoskeletal (MSK) conditions experience a greater impact on emotional well-being, with lower reported quality of mental health, poorer sleep quality, and a reduced overall quality of life compared to males. This pattern may reflect the compounded effect of higher pain prevalence, reduced social support, and fewer coping mechanisms among female patients, contributing to increased emotional and physical strain. These findings emphasize the need for comprehensive care approaches that address mental health, improve sleep quality, and support overall quality of life in female MSK patients. Previous studies reinforce our findings on the emotional and quality-of-life impacts of MSK conditions in female patients. Björnsdóttir et al. similarly reported that chronic musculoskeletal pain is linked to poor mental health, diminished quality of life, and disrupted sleep in women , mirroring the challenges we observed. Heikkinen et al. also found that common musculoskeletal conditions are associated with worsened mental health, especially in adults over 45 , aligning with our study’s results on mental health impact in female MSK patients. Additionally, Molina et al. observed significant psychological distress and reduced quality of life in adolescents with idiopathic MSK pain, suggesting that these effects span across ages . Together, these studies and our findings underscore the need for targeted mental health and quality-of-life interventions for female MSK patients.
Gender-based disparities in occupational strain and workplace support among MSK patients
Our study reveals notable gender-based differences in occupational strain and workplace support for patients with musculoskeletal (MSK) conditions. Male patients reported significantly higher workloads and more frequent work-related injuries, likely due to greater engagement in physically demanding roles, which may exacerbate MSK-related risks and severity. Conversely, female patients reported less access to recreational activities, received less workplace support, and demonstrated a stronger interest in modifying their work environments or patterns. This disparity suggests that female-dominated roles may offer fewer opportunities for physical respite and support, which could intensify MSK-related strain. Addressing these differences through targeted workplace interventions, such as injury prevention programs for men and increased support and recreation access for women, may help alleviate the occupational burden of MSK conditions for both genders. Previous research on the occupational impact of MSK complaints is sparse. However, Wolf et al. reported similar findings, noting that men with MSK conditions face higher workloads and more work-related injuries, while women have less access to recreational activities and workplace support, and show more interest in changing work environments . These findings underscore the importance of further research into the occupational ramifications of MSK conditions to develop effective, gender-sensitive interventions.
Integrating patient perspectives and multidisciplinary approaches in musculoskeletal pain management Addressing musculoskeletal (MSK) complaints effectively requires integrating patient perspectives and expectations into care strategies, as this approach has been shown to enhance satisfaction and treatment outcomes . Understanding individual experiences enables clinicians to align treatment goals with patient needs, fostering trust and improving adherence to management plans. Additionally, managing MSK conditions through a multidisciplinary team, including physicians, nurses, and physiotherapists, has demonstrated superior outcomes by addressing the complex and multifaceted nature of these conditions . This collaborative approach ensures comprehensive assessments, personalized interventions, and enhanced patient satisfaction. Together, these strategies highlight the importance of patient-centered and team-based care in optimizing MSK management and improving healthcare delivery. Need for policy development and research in Jordan and the broader middle east Our study highlights a critical need for targeted health policy development in Jordan and the broader Middle East to address the gender-specific impacts of musculoskeletal (MSK) conditions. Findings from our research reveal notable differences in pain prevalence, treatment approaches, occupational strain, and psychosocial effects between male and female MSK patients. However, there is a distinct lack of research investigating these issues within Middle Eastern populations. Existing studies on MSK conditions are primarily focused on Western contexts, where health policies, social norms, and workplace practices differ considerably from those in the Middle East, leaving critical gaps in region-specific knowledge. Addressing these gaps requires comprehensive, gender-sensitive research across the Middle East to understand the unique social, cultural, and occupational factors influencing MSK conditions. Such studies could inform the development of policies tailored to the needs of Middle Eastern populations, ultimately guiding healthcare practices, workplace safety standards, and psychosocial support mechanisms for MSK patients. Implementing these policies would promote more equitable healthcare access, safer work environments, and stronger support systems, enhancing quality of life for MSK patients in Jordan and the broader region. Strength and limitations Our study provides valuable insights into musculoskeletal (MSK) complaints and their management in a Jordanian tertiary care setting, with notable strengths and limitations. A key strength is the focus on gender-specific differences, addressing significant gaps in regional literature, while the urban hospital setting allowed us to capture a broad and diverse patient profile reflective of similar healthcare contexts. The use of a structured protocol ensured reliable data collection, and the findings offer a foundation for future research and policy development. However, the cross-sectional design limits longitudinal analysis, and the use of a self-created questionnaire may reduce comparability with validated tools. Additionally, reliance on self-reported data introduces potential subjective bias, and the biological or social mechanisms behind observed gender differences were not explored. Despite these limitations, the study provides essential data to guide clinical practices and inform healthcare strategies in the region. 
Based on our study findings, several clinical implications emerge. Gender differences in pain prevalence and intensity, particularly the higher rates of low back and hip pain among female patients, emphasize the importance of adopting individualized pain management strategies. Tailored evaluations and treatments that consider gender-specific pain patterns are essential for improving patient outcomes and enhancing treatment adherence. The study also highlights the need for integrated psychosocial support services. Female patients reported a greater impact on emotional well-being, lower mental health scores, and increased anxiety about disease progression. To address this, clinics should incorporate routine mental health screenings and psychosocial support into musculoskeletal (MSK) care. Providing resources for coping strategies, emotional support, and stress management can significantly reduce mental health burdens and improve the overall quality of life for these patients. Workplace-focused interventions are crucial for addressing the higher incidence of work-related injuries and physically demanding workloads among male patients. Collaborations with occupational health programs can provide injury prevention guidance, ergonomic recommendations, and workplace safety consultations. Such initiatives are especially beneficial for patients engaged in physically intensive roles. Proactive monitoring and follow-up care are also essential, particularly for female patients who received more diagnostic tests and physical therapy referrals. Establishing structured follow-up systems ensures comprehensive care continuity and enables timely reassessment of pain and functional status. This approach is particularly beneficial for high-risk groups, optimizing long-term outcomes and preventing the progression of MSK conditions. Lastly, the findings reveal the need for strengthening social and mental health support for female patients. Many reported lower social support and higher anxiety related to disease progression and disability. Clinics should prioritize developing support networks and mental health resources tailored to women. Initiatives such as peer support groups, community resources, anxiety management workshops, and individual counseling can help patients manage fears about future disability, foster resilience, and improve emotional well-being. In conclusion, our study reveals significant gender-based differences in musculoskeletal complaints, treatment approaches, and patient perceptions in a Jordanian tertiary care setting. Female patients experienced higher rates of low back and hip pain, more frequent diagnostic testing and physical therapy referrals, as well as greater anxiety about disease progression, lower social support, and reduced quality of life. In contrast, male patients reported higher occupational workloads, more work-related injuries, increased disability, and greater anxiety about income loss due to MSK conditions. These findings highlight the importance of gender-sensitive approaches in MSK management that address both physical and psychosocial needs to improve patient outcomes and quality of life.
Evaluation of Noise in Paediatric Dentistry and Change in Perception of Operators with Use of Ear Protection Devices
441ce149-5e4a-40fb-879a-4a1f4a8a05b8
11813240
Dentistry[mh]
KEY MESSAGES (1) The noise in the paediatric dental office often exceeds recommended limits, harming the auditory system of dental operators. (2) Despite the risks associated with noise exposure, many dental operators do not regularly use ear protection devices (EPDs), likely due to a combination of factors such as discomfort, perceived inconvenience and lack of awareness about the benefits. (3) The use of EPDs can significantly reduce noise exposure. INTRODUCTION Noise is a common issue and a major contributor to occupational ailments, especially those affecting the auditory system. The dental office is particularly noisy, with high-speed handpieces, suctions, scalers and lab equipment such as trimmers and mixers generating high-frequency noise. Paediatric dentists are also inherently subjected to noise from children who cry or scream out of fear or discomfort, especially during procedures such as tooth extractions, despite adequate local anaesthesia. Noise in the clinical setup is not only harmful to the auditory system but also disturbs a person’s level of concentration while working. At the end of the day, the dentist starts to complain of irritability, headache, tinnitus and difficulty in understanding speech. Occupational noise-induced hearing loss (ONIHL) is one of the occupational hazards that results from extended exposure to noise levels at work. ONIHL is not widely known among dentists. They are unsure of how noise at work affects the auditory and non-auditory systems, nor do they comprehend the short- and long-term consequences. Even individuals who are aware of ONIHL lack expertise and prefer not to use ear protection devices (EPDs). EPDs create a physical barrier when inserted correctly into the ear canal. EPDs can also reduce the risk of non-auditory effects of high-frequency noise such as fatigue, nausea, headaches, irritation, tinnitus and even hypertension. The long-term benefits of wearing an EPD include improved work performance and satisfaction. The use of high-fidelity earplugs for noise cancellation is recommended for dentists and dental personnel, as they have acoustic filters that can attenuate noise effectively and allow dentists to maintain effective communication with patients and colleagues. The study assessed the noise level consistently generated in the operatory, as well as the shift in operators’ perception following EPD use. We hope this work will encourage operators to employ EPDs. MATERIALS AND METHODS This work was an observational study carried out among paediatric dentists working in the Department of Paediatric and Preventive Dentistry of three different institutes after obtaining ethical approval from the institution (SVIEC/ON/Dent/RP/Aug/23/3). The sample size was calculated to be 78 paediatric dentists using the formula n = N × x/((N − 1)E² + x), where N is the population size, E is the margin of error, and x is the term derived from the assumed fraction of responses and Z(c/100), the critical value for the confidence level c, at 80% study power. The study included paediatric dentists working in a hospital setup who wished to take part in the study and in whose setting at least three noise-generating units, such as a high-speed turbine handpiece (air rotor or micromotor), high-velocity suction, ultrasonic scaler and model trimmer, were being used. Any dentist with a hearing aid or complaints of hearing loss was excluded from the study. All information related to the study was delivered to the operators.
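As a rough illustration of the sample-size formula quoted above, the following R sketch (not the authors' calculation) assumes x = Z² × p × (1 − p) with an assumed response fraction p; the population size N, margin of error E and confidence level in the example call are hypothetical placeholders rather than the study's actual inputs:
# Finite-population sample-size formula: n = N * x / ((N - 1) * E^2 + x),
# with x = Z^2 * p * (1 - p). All inputs below are hypothetical.
sample_size <- function(N, E, conf = 0.95, p = 0.5) {
  Z <- qnorm(1 - (1 - conf) / 2)   # critical value Z(c/100) for confidence level c
  x <- Z^2 * p * (1 - p)
  ceiling(N * x / ((N - 1) * E^2 + x))
}
sample_size(N = 100, E = 0.05)     # e.g. a hypothetical eligible population of 100 dentists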
Given the small calculated sample size, all 93 operators from the three institutes who met the inclusion and exclusion criteria were included in the study. This work was a crossover study, in which, out of the sample of 93, 47 operators initially worked without EPD and the other 46 worked with EPD. The questionnaires were adapted from a study by Lazarotto-Schettini et al. and validated by four experts, and a pilot study on eight operators (10% of the sample size) was performed. The operators participating in the pilot study were excluded from the main study. Questionnaire 1 was given to the operators to be filled out before the commencement of the study. Sound produced near the operator was evaluated using the “Sound Analyzer” app (Version 2.7, Dominique Rodrigues). This app uses a smartphone as a sound level meter and real-time audio analyser. Parameters such as LAF (sound level with “A” frequency weighting and fast time weighting) and LAeq (equivalent A-weighted continuous sound level) can be recorded using this app. The app shows the average LAF and LAeq at the bottom, and the minimum and maximum LAF can be checked at any time by selecting the statistics button at the top left. These values are used as indicators to describe sound and noise levels. The app was installed on the operator’s mobile phone and kept near the operator while working. The app screen was recorded. The records were collected in decibels (dB) for 8 working hours for 5 days, and the maximum sound level in decibels was noted for each day. An average of the 5 days was considered for each operator. The maximum sound level was then compared with the Occupational Safety and Health Administration (OSHA)- and National Institute for Occupational Safety and Health (NIOSH)-recommended exposure limits. In this work, disposable ear plugs (3M 1110 Ear Plugs Corded; 3M India Ltd., 48-51 Electronics City Hosur Road Bengaluru − 560100) were given to all the operators as the form of EPD. For the first group, in the first 5 days, the operators treated patients without any EPD. In the next 5 days, the operators treated patients with EPD. The operators were shown how to use the EPDs and instructed to first roll the earplug into a small, thin shape with one or both hands and then use the other hand to pull the top of the ear upward and backward to straighten the ear canal. They were then instructed to carefully insert the rolled-up earplug into the ear and hold it in place with a finger until it expanded and filled the ear canal. For the second group, in the first 5 days, the operators used EPD while treating patients. In the next 5 days, they worked without EPD. Questionnaire 2 was then given to all of them, asking about their work experience with the use of EPD. Statistical analysis Using the coin toss method, the operators were randomly assigned to one of the two groups. A spreadsheet created in Microsoft Excel (2016) included the information collected. Using statistical operations in Excel, descriptive statistics were calculated. SPSS version 21 was utilised to calculate inferential statistics. The Chi-square test was employed to analyse noise knowledge and practice (questionnaires), and the significance level was set at P = 0.05. The statistician was blinded to the results.
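As a minimal sketch of how the recorded levels described above could be summarised (an illustration only, not the authors' actual workflow, and the daily values are invented), the maximum level noted on each of the 5 working days for one operator is averaged and compared with the NIOSH (85 dB) and OSHA (90 dB) limits in R:
# Hypothetical daily maximum noise levels (dB) for one operator over 5 working days
daily_max_db <- c(84.2, 88.1, 86.5, 90.3, 87.0)
operator_mean <- mean(daily_max_db)   # 5-day average of the daily maxima
operator_mean > 85                    # above the NIOSH recommended exposure limit?
operator_mean > 90                    # above the OSHA permissible exposure limit?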
RESULTS Evaluation of noise exposure in the dental operatory This study involved 35 male and 58 female paediatric dentists whose ages ranged from 24 to 46 years. The permissible levels of noise exposure as suggested by OSHA and NIOSH are 90 and 85 dB, respectively. In the present study, when recorded without EPD for all 93 operators, the minimum noise level in the operatory on average was 78.3 dB, the maximum was 92.7 dB, and the average was 86.88 dB. This level of noise was above the NIOSH-recommended level for 78 (83.87%) dentists and above the OSHA level for 5 (5.38%) dentists. For the average noise exposure in the operatory for the 93 operators with EPD, the minimum noise level was 80.9 dB, the maximum 92.8 dB and the average 86.83 dB. It was above the NIOSH-recommended level for 80 (86.02%) dentists and above the OSHA level for 6 (6.45%) dentists. Evaluation of operators’ response before use of EPD The paediatric operators were assessed in questionnaire 1 on their knowledge of the noise around them, their exposure to it and their use of EPD. shows the responses of the operators to some of the questions asked. They responded that the exposure to noise ranged from low (4 operators) and medium (61 operators) to high (28 operators), and that communication with other personnel nearby was difficult. Although 67 (72.04%) were aware of the consequences of loud noise, only 10 (10.75%) had knowledge about ear protection devices. Figures and show corresponding responses to multiple-choice questions. Air compressors, followed by high-speed turbines (air rotors) and low-speed micromotors, were the three most noise-generating pieces of equipment in the operatory, as shown in . Comparison of the symptoms/complaints of operators pre- and post-use of EPD is depicted in . Most of the symptoms observed before the use of EPD were relieved after its use. Symptoms such as irritation at the end of the day, recurring headache and difficulty in focusing, which were seen in 72 (77.42%), 63 (67.74%) and 52 (55.91%) operators before the use of EPD, were reduced to 7 (7.52%), 4 (5.48%) and 8 (8.6%) operators, respectively. The only exception was difficulty in understanding speech, which was reported by 11 (11.83%) operators before the use of EPD and 13 (13.98%) after. Although the changes were clinically notable, the differences were not statistically significant (P = 0.43). Evaluation of operators’ responses after use of EPD The second questionnaire evaluated the change in the perception of operators after the use of EPD. 85 (91.40%) operators found that their work experience improved after the use of EPD, and 90 (96.77%) found EPD helpful in reducing the noise. While 8 (8.6%) operators felt discomfort while wearing them, 85 (91.40%) were comfortable. 36 (38.71%) operators found difficulty in communicating with their patients, whereas 57 (61.29%) did not find any difficulty. The problems that the operators felt prior to the use of EPD were resolved for 86 (92.47%) operators. A total of 92 (98.92%) operators confirmed that they would use EPD in the future.
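The exact tabulation behind the reported P value is not given, so the sketch below only illustrates the kind of chi-squared comparison named in the methods, reconstructed for a single symptom (irritation at the end of the day: 72 of 93 operators before EPD use versus 7 of 93 after); because the same operators answered both questionnaires, a paired test such as McNemar's would also be defensible:
# Hypothetical reconstruction of a 2 x 2 table from the counts reported in the text
irritation <- matrix(c(72, 93 - 72,   # pre-EPD: symptom reported / not reported
                        7, 93 - 7),   # post-EPD: symptom reported / not reported
                     ncol = 2, byrow = TRUE,
                     dimnames = list(c("pre_EPD", "post_EPD"),
                                     c("symptom", "no_symptom")))
chisq.test(irritation)   # compares the pre- and post-EPD proportions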
The statistical methods used to analyse noise knowledge and practice in the form of questionnaires 1 and 2 did not reveal any significant results, as the knowledge of operators could not be correlated with the noise exposure in the operatory. DISCUSSION The present study was a multi-centric study with a final sample size of 93, which was comparable with the sample size studied in recent studies where it was 100 and 60.
The overall noise in paediatric dental setups is approximately 87 dB, which is higher than the level recommended by NIOSH. Such a high level of noise exposure over a long time period can lead to hearing loss. While high-frequency noise can cause acute injury to the inner ear, lower-level disturbances over time can impair the neural pathways between the inner ear and spiral ganglion neurons, resulting in hearing loss. A single exposure to noise levels of 140 dB can result in hearing damage, and prolonged exposure to noise levels at or above 85 dB can induce permanent hearing loss. The noise level in the operatory was continually measured for 8 working hours over the course of 5 days in this investigation, and the lowest, maximum and average levels were all recorded. The operators were able to self-evaluate the noise that they were regularly exposed to. The average noise level in the present study was below the OSHA-recommended level, similar to the study by Bates M, where noise exposure had not exceeded the OSHA level. In the present study, 65.59% of dentists considered the level of noise at the workplace to be medium, which was similar to the study by Gonçalves et al., where 49% (80) of the dentists thought that the noise levels were in the medium range at their workplace, and Lazarotto-Schettini et al., where most dentists thought that the noise levels were in the medium range. After working hours, at the end of the day, irritation (77.42%) followed by recurring headache (67.74%), difficulty in focusing (55.91%) and tinnitus (22.58%) were the most common findings in the present study, similar to the study by Lazarotto-Schettini et al., where irritability (46.3%) and difficulty in understanding speech (40.7%) followed by tinnitus (35.1%) were the most reported findings. In contrast to the present study, Cavalcanti et al. found tinnitus (40%), dizziness (32%) and intolerance to high sound levels (20%) to be most common. About 98.92% of dentists did not wear EPD as a protective measure against noise exposure in the present study. This value was similar to those reported by Gonçalves et al., Moimaz et al. and Lazarotto-Schettini et al. Although most of the dentists were aware of EPDs, they preferred not to wear them, as they felt it would be uncomfortable and an additional task. They also implied that it would cause more irritation or that they would have difficulty in hearing. Only 8.6% of dentists found difficulty in focusing, 7.53% reported feeling irritated at the end of the day, and 13% had difficulty in understanding speech while using EPD. Notably, 92.47% of dentists found that the problems faced before the use of EPD were relieved after use, similar to the findings of Tikka et al. and Ma et al. Use of EPD helps in reducing the effect of noise on the auditory system. It decreases the possibility of the non-auditory side effects of high-frequency noise, which include headaches, nausea, weariness, tinnitus and hypertension. 3M ear plugs reduce external noise by up to 29 dB. These inexpensive EPDs not only reduce the noise around the operators and bring it near the OSHA- and NIOSH-recommended levels but also limit the difficulties faced at the end of the day. Studies by Chowanadisai et al., Morata et al. and Tikka et al. found that a protective impact of 10–15 dB is significant for a worker exposed to noise because, in over 90% of cases, even a 10 dB reduction will bring noise levels within an acceptable range.
Dentists must be educated and made aware of noise levels in the dental operatory and informed about their adverse effects, which not only include hearing problems in the long term but also hamper daily life activities. The present study’s limitation is that it was conducted over a short time period; therefore, long-term longitudinal investigations should be performed. If the study had been conducted over a long time frame, the exposure time per day per operator could have been better evaluated. Moreover, along with maximum decibels per day, average noise in the peak hours could have been one of the objectives. Subsequently, how many hours exceeded the recommended limit of noise exposure could be determined. While brief, modest exposures to loud noise may initially appear harmless, over time the cumulative effect can result in ONIHL. CONCLUSION The paediatric department is a potentially hazardous working environment in terms of noise, with noise levels frequently exceeding the recommended safety thresholds and posing a significant risk for both auditory and non-auditory health effects. The average noise exposure for dental operators consistently surpassed the NIOSH-recommended levels, with a notable percentage also exceeding OSHA limits. High decibel levels for short time periods lead to a high chance of hearing loss, so the use of EPD is essential to minimise ONIHL. Although dentists are well versed in the adverse effects of noise in the operating room, they have limited knowledge of EPD and rarely use it. There was a change in the perception of operators regarding the use of EPD, and they were willing to use EPD in the future. EPD effectively reduces ambient noise by up to 25 dB, bringing noise levels within an acceptable range. EPDs are an effective tool for mitigating the risks associated with noise exposure in dentistry. Future consideration: a study with a large sample size over a long period could be considered in the future. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest. Availability of data and materials All the data was available in the institutions and the material was easily available in the online market. Author contributions Anshula Deshpande (guarantor): Conceptualization; Literature search and Data curation; Formal analysis; Investigation; Methodology; Project administration; Writing − original draft; Writing − review & editing; Supervision. Simron Baishya: Conceptualization; Literature search and Data curation; Formal analysis; Investigation; Methodology; Writing − original draft; Writing − review & editing. Sonali Saha: Literature search and Data curation; Investigation; Writing − review & editing. Vasudha Sodani: Literature search and Data curation; Investigation; Writing − review & editing. Riddhika Shah: Literature search and Data curation; Investigation; Writing − review & editing. Aishwarya Antala: Literature search and Data curation; Investigation; Writing − review & editing. Ethical approval and informed consent Ethical approval was obtained from the Sumandeep Vidyapeeth Institutional Ethics Committee with approval number SVIEC/ON/Dent/RP/Aug/23/3 before the commencement of the study. All the operators gave their consent to participate in this study.
Toward Integration of Pharmacogenomic Tests into Daily Clinical Practice: A General Survey for Polish Healthcare Workers
7c09a38f-e552-4545-b83e-c1c5e2edc293
10274214
Pharmacology[mh]
Pharmacogenomics (PGx) is an interdisciplinary translational science for primary care that focuses on the utilization of an individual’s genomic profile to predict therapy response and prevention of adverse drug reactions (ADRs) or severe adverse effects in patients, while improving therapeutic outcomes . PGx modifies patients’ prescriptions to adjust treatment decisions while considering the patients’ genomic profiles. In recent years, there were rapid developments in preparing PGx guidelines and annotations for many gene-drug interactions . However, while PGx tests have been described as an important area in clinical centers, still there are many challenges for the incorporation of such tests into the routine clinical setting . There are many obstacles for PGx tests to reach clinical utility and validity. These include, but are not limited to, a lack of background knowledge of general practitioners and physicians on the interpretation of a given PGx test result, absence of a comprehensive standardized program and/or recommendation to run PGx test in clinics (how the target patients must be selected, for example), small number of insurance providers for supporting the costs of such tests, and the need to provide a special infrastructure for cutting-edge panel-based PGx testing that can be implemented through any clinical center with the need for it . Notably, an interactive discussion and exchange on personalized medicine through clinicians and other healthcare providers and also researchers would help fill the gap between the current efforts in investigations and the requirements in clinical setting to ensure that PGx testing with clinical utility could be brought to every patient in need . Future research may emphasize preparation of more region-specific pharmacovariant (genomic variants in drug-related genes) evidences and target panel sequencing approaches aimed at distinct ethnic groups . Furthermore, widespread and easy access to local genomic databases plus the PGx data for people in a portable format, such as a specific card with a QR code linked to the database (eg, Ubiquitous Pharmacogenomics Consortium. Ubiquitous Pharmacogenomics [U-PGx] project’s PGx passport ), might be essential. To deal with the above-mentioned topics, timely novel and original studies and evaluations would be required. However, initial steps seem to be the use of a general analytical survey that measures the primary knowledge of doctors and clinicians along with their desire to order clinical PGx tests that would be beneficial for both the community and individuals across populations. The present research aimed to evaluate the level of awareness of PGx testing in clinicians and healthcare workers in the Republic of Poland. We aimed to demonstrate the associated difficulties and requirements for the implementation of such tests as part of the daily clinical practice through a specific questionnaire for the participants as well. Knowing about the main barriers of PGx testing is a fundamental phase for these genetic tests to be considered by the national healthcare system. To the best of our knowledge, this is the first direct assessment of the backgrounds and attitudes of Polish clinicians and healthcare workers toward the integration of PGx tests into a routine clinical context. 
Anonymous questionnaires including information on participants’ level of education and whether they were involved in the healthcare system, and particularly if they were clinicians, were distributed online through every medical university and clinical center through an online platform of the Medical University of Białystok or via Poland’s medical chamber (Naczelna Izba Lekarska). According to the Bioethics Committee of the Medical University of Białystok, no institutional review board approval was applicable for our anonymous, publicly available survey on participants’ opinions of clinical PGx testing. Target participants were clinicians, physicians, and healthcare workers (employees/students). The healthcare workers who were paraclinical and/or administrative staff managing healthcare units were employees with executive roles in public clinical centers. Apart from medical personnel and Ph.D. students, the level of education for other contributors was divided into 2 main categories of “higher” and “medium” and was chosen by participants. The sorting was based on the Polish educational system nomenclature, in which higher education level means someone was educated in a university and possesses an academic degree such as a B.Sc., M.Sc., M.D., Ph.D., or Pharm.D., while medium education level indicates education up to high school. The main questions were as follows: (1) Whether participants had heard about PGx before (yes/no); (2) based on complementary explanations on the questionnaire’s page or pre-existing background knowledge, whether participants believed it was worth implementing genetic testing for identification of actionable pharmacovariants to avoid possible ADRs (yes/no/I don’t know); and (3) what participants considered the main barrier(s) of PGx clinical testing for its integration into routine clinical practice in Poland. Options for the third question were as follows: (i) insufficient knowledge of doctors about the interpretation of the results of PGx tests; (ii) lack of standards and/or guidelines for the performance of PGx testing in hospitals; (iii) no reimbursement of the cost of performing PGx tests; (iv) lack of specialized infrastructure for PGx testing in hospitals; and (v) all of the above reasons. Furthermore, there was a separate yes/no question especially for clinicians at the end of our survey: Would you use PGx testing in daily practice, if available? Statistical Analysis Statistical analysis was performed in R v.4.1.2 (free open source, https://www.R-project.org ). The sample size with an effect size of 50% and taking into account an error type 1 of 5% and a power of 90% was approximately 315 people. The analysis of the responses allowed us to determine whether the answers for particular questions were associated with descriptive variables, such as education level and employment in healthcare service. The data set was highly unbalanced. In particular, there were very few respondents with a medium education level; there were also few answers other than “yes” for the second question in the questionnaire: Is it worth implementing genetic testing for identification of actionable pharmacovariants to avoid possible ADRs? Therefore, we applied Fisher’s exact test for association between the answers and other variables instead of approximate methods like a chi-squared test. We tested the association between the answers and each descriptive variable as well as pairs of variables (education+work as healthcare provider). 
However, we did not find any significant pairwise interactions; therefore, the results of the pairwise tests are not mentioned further. Additionally, we tested if the answers for the 2 last questions depended on whether the respondent had heard about pharmacogenomics before.
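A minimal sketch of this type of analysis is given below; it is not the study's actual R script, and the cell counts are hypothetical values chosen only to be broadly consistent with the reported marginals (315 respondents, about two-thirds working in healthcare and about two-thirds having heard of PGx):
# Hypothetical 2 x 2 table: employment in healthcare vs prior knowledge of PGx
heard_pgx <- matrix(c(140, 70,   # healthcare workers: heard / not heard
                       63, 42),  # other respondents: heard / not heard
                    ncol = 2, byrow = TRUE,
                    dimnames = list(c("healthcare_worker", "other"),
                                    c("heard_of_PGx", "not_heard")))
fisher.test(heard_pgx)   # exact test used because some groups/answers are small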
A total of 315 completed questionnaires were received from participants throughout the Republic of Poland. Each online answer sheet was meticulously evaluated to check if it was completed properly (for example, there were no cases of 2 selected options for 1 question, and that only clinicians completed the extra question at the end of the survey). All the answer sheets (100%) were confirmed to be correctly filled out. The education level profile of the respondents is shown in . Most participants had a higher education level; there were 7 participants with a medium education level and 70 students/Ph.D. students. Two-thirds of participants worked as healthcare providers. Question 1: Have You Ever Heard About Pharmacogenomic Tests? also show the distribution of prior knowledge about pharmacogenomics. About two-thirds of participants had heard about PGx tests before. Fisher’s exact test did not detect any significant dependence between the group of respondents and the answer. Question 2: Is It Worth Doing Pharmacogenomic Tests? For this question, “yes” was the most common answer in all groups of participants . Only 3 respondents answered “no” to this question. Employment in healthcare services was not significantly connected with support for the use of PGx . Conversely, there was a significant association between the answer to the question and both the level of education and prior knowledge about PGx. In particular, the respondents who had not heard about PGx before were more likely to answer “I don’t know” to the question. The respondents at the medium educational level were more likely to give a negative answer to the question (although the uncertainty of this effect was big due to the small number of such participants and only 1 negative answer). To investigate the details, we repeated the tests after removing some groups of answers/respondents . Removal of the answer “I don’t know” eliminated the dependence between the answer and the prior knowledge about PGx. The dependence on the education level disappeared after the removal of the group with medium education level. Question 3: Barriers to Implementation of Pharmacogenetic Tests and , show the frequency of particular answers to this question in different groups of participants. The most common answer in all the groups was “all of the above reasons”. The only association was found between education level and the answer. Namely, respondents with a higher education level were more likely to answer “all the barriers are important” than were other groups, and this difference was significant. However, students were more likely to give single answers (instead of “all the reasons”) for the third question, which asked about obstacles. Finally, we added 3 tests in the R script for the associations we found in the data. The result proved all the associations were significant as well . The main objective of this study was the evaluation of the background knowledge and attitudes of Poland’s clinicians and healthcare providers toward the implementation of the PGx test in the primary care setting. The use and distribution of such general questionnaires for reaching public opinion on specific clinical affairs (including genetic testing and particularly PGx tests) have been previously utilized in different parts of the world .
As expected, in almost all such surveys, overall positive attitudes toward the incorporation of (pharmaco)genetic tests into local clinical settings were seen for people with higher educational levels and relative background knowledge in the field . During the present study, the same was also observed in the participants with higher levels of education, including doctors, clinicians, professional healthcare providers, medical students, and Ph.D. students. Whether in detail or not, most of the individuals had heard about PGx tests before (64.4%). Regarding the use of PGx tests for prevention of ADRs in patients, again most of the opinions were positive (93.3%). Increased awareness of the benefits of PGx incorporation into clinical practice in our target population means that more healthcare and legislative stakeholders would be knowledgeable as well. Once important decision makers become interested in implementing PGx, it would then be more readily accepted by the healthcare society. Educational initiatives for this group of individuals in payor organizations, alongside healthcare administrative workers, also need to be developed and implemented. While these individuals need to be targeted, it must be kept in mind that this could include non-clinical personnel as well. The perceived benefits of PGx implementation have been shown in previous studies to be the most important direct predictor of the intention to adopt PGx testing . Focusing educational initiatives should include factors such as specific patient case examples, data on PGx implementation effects on healthcare resource utilization, and financial benefits to payors and healthcare systems. Meanwhile, some of the educated but unaware participants (never heard about PGx before) were doubtful if it was worth doing such tests or not (“I don’t know” option). Addressing this may require structured and targeted instruction on the clinical utility of PGx tests in medical, pharmacy, nursing, and other health professional school curricula. In 2005, a recommendation was issued for such schools to incorporate PGx as an integral part of their core pharmacology curricula . Also, in a 2019 global survey, it was observed that significantly fewer programs in graduate schools reported PGx education as a part of pharmacology (15.4%) compared with M.D. (59.1%), pharmacy (50%) and nursing (66.7%) programs. Overall, most programs (63%) did offer at least the minimum recommended number of 3 to 4 hours of instruction . However, there is still room for improvement in this area. Since PGx is a hot topic for the implementation of personalized medicine by healthcare systems, both basic and advanced courses, particularly for healthcare providers, should be arranged if the system is interested in reducing the ADR burden and improving the cost effectiveness of drug therapy in their patient population . The next question was the selection of the most important obstacle(s) to performing PGx clinical tests throughout Poland. The response “all the above reasons” was chosen most often (67.6%). This may indicate that although most of the healthcare staff do understand and accept the importance of PGx technology and principles, the entire system needs more preparation for integration of such tests in everyday practice. 
Even though adding “all the above reasons” to our questionnaire may have biased the answers to this question, all of the listed barriers are observed throughout Poland’s healthcare system, and the option was included so that participants could indicate that all the reasons exist together. However, according to this result, it seems Poland is not completely ready for routine implementation of PGx clinical tests. After the above (most popular) response, “having no reimbursement of the cost of performing PGx tests” and “insufficient background knowledge of doctors about the interpretation of the test results” were the most frequently selected answers (14.2% and 8.8%, respectively). Although including PGx tests in the list of services covered by insurance companies may require more effort , there are some strategies that could help draw the attention of such companies to the need for supporting and covering PGx tests. For instance, running the tests in population-specific local cohorts can add insight into ethnic group-dependent pharmacovariants, cost-effectiveness, clinical utility, and the necessity of running the diagnostic tests in the relevant clinical centers. Such results could convince insurance providers to cover the test in their panels. In fact, other strategies to improve the clinical outcomes of PGx tests in commercial markets have also been investigated and reported . Such policies will promote the broader and faster integration of PGx tests throughout communities. To raise the education level of physicians and clinicians, adding PGx-oriented educational materials, as mentioned earlier, should be considered and implemented. The educational initiatives for clinicians would need to be focused on PGx principles, terminology, interpretation of results, use of clinical guidelines, and patient case examples . While PGx clinical evidence can easily be found in major PGx databases and guidelines (PharmGKB; Clinical Pharmacogenetics Implementation Consortium; Dutch Pharmacogenetics Working Group), efforts are needed to educate practicing clinicians on the interpretation of PGx test results. This is essential to ensure that patients get the most benefit from the technology. On the other hand, more evidence-based clinical data (from local and international research groups) would make it more likely that the tests find their way into daily clinical settings, bringing benefits to both patients and healthcare systems. Through this, more PGx tests would also be ordered by the relevant physicians and clinicians. This was evident when we assessed the extra question for clinicians: half of the participants were in this group, and almost 90% agreed they would order PGx tests if they were available. Finally, to address the 2 other barriers (lack of distinct guidelines for running PGx tests in hospitals and limited availability of the technologies necessary for PGx genotyping), governmental authorities and policy-making institutions must collaborate, encourage investment by stakeholders, and task research institutions with providing updated instructions for the standardization of the tests . To overcome the barriers to genotyping of drug-related genes, new high-throughput sequencing technologies able to decode rare, population-specific pharmacovariants are recommended over orthogonal tests that cover only a few known markers .
Today, pre-emptive and/or reactive PGx testing is performed in many European countries through EU-funded implementation projects such as U-PGx and SafePolyMed . We also performed an EU-funded comprehensive study on the pharmacogenetics and pharmacogenomics of local cardiovascular patients with ADRs at the University Clinical Hospital in Białystok . By taking the initial steps on this path, we may soon witness Poland joining such actions within Europe. The present study’s observations provide motivation for performing PGx testing throughout the Republic of Poland, where PGx is still not well known in healthcare settings. Moreover, decisions on including personalized medicine and PGx clinical tests within the medical/pharmacy study curriculum have already been accepted as a topic for discussion at upcoming events between head deputies and university rectors . Indeed, modern medical education, as a response to the health needs of society, must include individualized treatment as part of its long-term plans in Poland, and perhaps the whole of Europe. The current report can provide invaluable insight to the Ministry of Health in Poland as drug programs are developed to decrease related expenditures , especially since our results confirmed the main outcomes of similar studies performed in other parts of the world . The themes that emerged were that participants appreciated the clinical utility and value of PGx tests, were aware of the implementation challenges in clinical practice, and recognized the need for awareness among healthcare professionals and patients. Limitations To keep the data as private as possible and reduce participant hesitancy to share responses, we decided not to record where the respondents were working or being educated. However, knowing the workplace and place of study could also have been helpful in evaluating the depth and breadth of the responses. Also, the survey data collection was performed over only 4 months because we wanted to assess participants’ knowledge and attitudes in a very current timeframe; a longer period could have led to more collected data and a more meaningful result. Finally, only 1 question relied on participants’ background knowledge, and the depth of actual knowledge could not be assessed in a quick survey, which raises the issue of replacing the term “knowledge” with “awareness” of PGx. This study revealed a positive trend and attitude among Poland’s clinicians and other healthcare system employees and students toward the integration of PGx tests into routine clinical practice.
While the awareness and interest of the participants may encourage healthcare and legislative stakeholders to provide the facilities and resources needed for wide-scale clinical implementation of PGx tests, some major barriers remain. These need to be removed if the entire country is to receive the benefits of personalizing drug therapy using genetic information.
Pros and Cons of Informed Consent in Gynecology and Obstetrics
a87ebbaa-09b7-4389-b81b-ce091517f3b1
9989236
Gynaecology[mh]
Obtaining informed consent is a fundamental aspect of medical ethics to protect patients’ autonomy and human dignity. Adequate practice of informed consent is complex and has not only personal but also ethical, legal, and administrative implications. This may mean context-specific adaptation, particularly in the field of obstetrics and gynecology. Medical specialists must ensure that the information provided is appropriate to the patient’s mental capacity and that patients understand what a medical examination entails, what a particular diagnosis means and, if there are treatment alternatives, what the expected benefits, potential complications and risks, consequences of refusing treatment, and medical costs are. For instance, in the case of cesarean delivery on maternal request, physicians should point out the safety of normal vaginal delivery and describe the potential complications and risks of the intended surgery, including side effects of anesthesia, wound infection, thromboembolism, abnormal adherence of the placenta (placenta accreta) or abnormal invasion of the placenta (placenta increta and percreta), which may lead to hysterectomy and blood transfusion. Patients should also be made aware that some insurance companies may not cover the costs of surgery without an obstetric indication for a cesarean section. Given the above, patients should be able to opt for a medical procedure freely and voluntarily, without any threats or coercion. Both patients and caregivers should pay special attention to confidentiality statements. It should be standard practice that patients are allowed to ask questions, and physicians should answer in clear and simple terms. Physicians also need to reassure patients that they will always provide care without prejudice, even when their medical recommendations are not accepted. Consent to treatment is voluntary and can be withdrawn at any time, even if this leads to fetal complications and pregnancy termination. Furthermore, patients can choose to leave the hospital during treatment against medical advice. In such situations, physicians should use prevention strategies and follow well-documented guidelines to avoid professional liability. It is important that resident physicians are educated about informed consent. For example, explicit consent is required prior to any physical or intimate examination. All paraclinical procedures should be explained verbally and performed only after consent is obtained. However, some patients may disagree with the suggested treatment plan on religious grounds; in this case, mediation by a religious figure may help resolve the issue. In Iran, according to the Islamic Penal Code, consent from the father or legal guardian is required for medical procedures when the female patient is under the age of 18. Additional consent from the husband is required when she is under 18 and there are fertility issues involved that may affect marital life. When the female patient is over 18, both her consent and the spouse’s consent are required in situations when medical procedures permanently affect fertility (e.g., tubal ligation or hysterectomy). Apart from the legal aspects of informed consent, any vaginal intervention requires a thorough description and documentation of the procedure used. Awareness of physicians about the principles of informed consent is essential, as it helps them to better communicate with patients and prevent complaints and potential liabilities.
We would like to thank our colleagues at Vali-e-Asr Reproductive Health Research Center for their valuable contribution to this article. B.HR, M.A: Study design. F.M, M.A: Data interpretation. All Authors contributed to the drafting and revising of the manuscript critically for important intellectual content. All authors have read and approved the final manuscript and agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. None declared.
Similar but different assembly processes of bacterial and micro-eukaryotic communities in an urban river
dfb29ecd-49a3-4d94-a91d-4dde64dc46c0
11865445
Microbiology[mh]
Bacteria and micro-eukaryotes are crucial components of river ecosystems and play important roles in the degradation of pollutants, biogeochemical processes, enhancement of the self-purification capacity of water environments, and the restoration of riverine ecosystems . Therefore, understanding the diversity and distribution of bacterial and micro-eukaryotic communities is vital for improving river environments and realizing their ecological benefits. The mechanisms of microbial community assembly have always been a core issue in research on aquatic ecology . They are crucial to the diversity, distribution, function, biogeographical patterns, and succession of microbial communities – . Based on niche theory, deterministic processes such as environmental filtering (pH, temperature, dissolved oxygen, organic matter, salinity, etc.) and interspecific interactions (competition, cooperation, or predation) dominate the composition and distribution of microbial communities , . In contrast, neutral theory postulates that stochastic processes such as birth, death, migration, speciation, and dispersal limitation shape the structure of microbial communities , . In fact, deterministic and stochastic processes are not mutually exclusive and can coexist within the same system or context, and it is generally acknowledged that microbial community assembly is jointly driven by both . In recent years, the mechanisms underlying microbial community assembly in multiple environments have been studied. However, these studies have primarily focused on low-mobility ecosystems such as oceans , soils , wetlands , lakes , and activated sludges . In contrast to these low-mobility ecosystems, river environments and their microbial communities exhibit more complex and dynamic characteristics . Furthermore, studies have revealed that seasonality is a crucial driving factor of microbial community diversity and assembly processes within riverine ecosystems – . Environmental factors including dissolved oxygen concentration, water temperature, and light change markedly and in concert with the seasons, directly or indirectly affecting the structure, diversity, functions, and assembly processes of riverine microbial communities , . Nevertheless, research on the seasonal succession and assembly mechanisms of microorganisms in river ecosystems remains limited, especially for micro-eukaryotes . Therefore, understanding the seasonal succession and assembly processes of bacteria and micro-eukaryotes in rivers will facilitate a deeper comprehension of the mechanisms maintaining microbial diversity and ecological functions in river ecosystems. Unlike natural river channels, urban rivers possess not only ecological functions such as flood control, water storage, and local climate regulation but also economic functions such as sewage reception and irrigation, as well as humanistic values such as landscaping and recreation . On the one hand, most of the water in urban rivers comes from relatively stable effluent discharged from sewage treatment plants, resulting in limited seasonal variation in water quality and quantity, and higher water temperatures . On the other hand, urban river channelization may cause habitat homogenization, resulting in an overall decline in biodiversity . Whether these differences from natural rivers lead to distinct spatial and seasonal variation of microbial communities in urban rivers needs further study.
The Xiangjianghe River (XJH), a tributary of the Wujiang River within the Yangtze River basin, is located in the northern part of China’s Guizhou Province. It flows through the urban area of Zunyi City, which has more than one million residents. Human activities may affect the ecological stability of the XJH, and its ecological health is vital to sustainable development in this area. In the present study, the microbial diversity patterns, community networks, and community assembly processes of bacteria and micro-eukaryotes in the XJH across four seasons were investigated by 16 S and 18 S rRNA amplicon sequencing to address the following questions: (1) Do the bacterial and micro-eukaryotic communities exhibit spatiotemporal variability across different seasons? (2) What are the main influencing factors that affect the bacterial and micro-eukaryotic communities? (3) Do deterministic or stochastic processes dominate the community assembly of bacteria and micro-eukaryotes? The present study provides a comprehensive insight for better understanding bacterial and micro-eukaryotic community patterns in urban river environments. Sampling and environmental factors measurement A total of 84 surface water samples were collected along the Xiangjianghe River (XJH) from seven sites (3 replicates) in May, July, and September 2022, and January 2023, representing spring, summer, autumn, and winter, respectively (Fig. ). Sites 3 and 5 are located in the tributary, and sites 1, 2, 4, 6, and 7 are located in the mainstream of the XJH. Water samples were collected in sterile bottles and transported to the laboratory at 4 °C within 6 h for subsequent analysis. Water temperature (WT), pH, oxidation-reduction potential (ORP), dissolved oxygen (DO), and electrical conductivity (EC) were measured in situ using a YSI Professional Plus meter. Total nitrogen (TN) was determined by a UV spectrophotometric method after alkaline potassium persulfate digestion. Total phosphorus (TP) was determined by the ammonium molybdate spectrophotometric method. Chemical oxygen demand (COD Mn ) was measured using the potassium permanganate titration method. DNA extraction, PCR amplification, and Illumina MiSeq sequencing For microbial community DNA extraction, 1000 ml water samples were filtered through 0.22 μm filters within 6 h after collection, and the membranes were stored in a − 80 °C ultra-low temperature freezer before further treatment. Microbial DNA was extracted using the E.Z.N.A. ® Soil DNA Kit (Omega Bio-tek, Norcross, GA, U.S.) according to the manufacturer’s instructions. The V3-V4 region of the 16 S rRNA gene and the V4 region of the 18 S rRNA gene were amplified using the primer pairs 338/806R and 528 F/706R, respectively. The PCR reactions were performed in triplicate, with each 20 µL mixture containing 4 µL of 5 × FastPfu Buffer, 2 µL of 2.5 mM dNTPs, 0.8 µL of 5 µM primer, 0.4 µL of FastPfu Polymerase, and 10 ng of template DNA. Subsequently, the AxyPrep DNA Gel Extraction Kit (Axygen Biosciences, Union City, CA, USA) and QuantiFluor™ -ST (Promega, USA) were used for the purification and quantification of PCR products. The purified products were pooled in equimolar amounts and paired-end sequenced (2 × 300 bp) on an Illumina MiSeq platform (Illumina, San Diego, USA) at Majorbio Bio-Pharm Technology Co. Ltd. (Shanghai, China).
Statistical analysis All bacterial and micro-eukaryotic sequence analyses were performed on the i-sanger cloud platform of Majorbio BioTech Co., Ltd ( http://www.i-sanger.com/ ). Operational taxonomic units (OTUs) were clustered at a 97% similarity cutoff using UPARSE (version 7.1, http://drive5.com/uparse/ ) with a novel “greedy” algorithm that performs chimera filtering and OTU clustering simultaneously. All chloroplast and mitochondrial sequences, as well as OTUs with fewer than 20 total sequences, were removed. The data were normalized by subsampling to the lowest read count in order to equalize sequencing depth across all samples, after which the distinct sequences were selected for subsequent analysis. The bacterial and micro-eukaryotic taxonomies were assigned by the RDP Classifier algorithm against the silva138/16s_bacteria and silva138.1/18s_eukaryota databases, respectively, using a confidence threshold of 70%. The Kruskal-Wallis H test was used to determine the differences in Chao1 and Shannon indices between different seasons and sampling sites. The βNTI (β nearest-taxon index) and RC Bray (modified Raup-Crick index) were calculated to evaluate the impact of stochastic and deterministic processes on the assembly of microbial communities across different spatial and temporal scales. The neutral community model (NCM) was also applied to quantify the influence of stochastic processes in shaping the microbial community. Co-occurrence characteristics of the microbial community in the four seasons were revealed by network analysis based on Spearman correlation coefficients (|r| > 0.8, p < 0.05). OTUs with relative abundance greater than 1% were retained for better visualization. The visualization and topological analysis of the networks were conducted in Gephi (version 0.9.2). In order to identify the keystone taxa, the nodes were classified into four topological roles (Module hubs, Connectors, Network hubs, and Peripherals) according to the values of within-module connectivity (Zi) and among-module connectivity (Pi). In this study, the analyses of co-occurrence patterns, the NCM, and niche breadth were performed in R (version 4.3.1), while others, such as the α-diversity indices (Chao1 and Shannon), non-metric multidimensional scaling (NMDS), microbial spatiotemporal distribution characteristics, βNTI/RC Bray , and Zi/Pi analysis, were performed with the online tools of the Majorbio Cloud Platform ( https://cloud.majorbio.com/page/tools/ ).
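As a rough illustration of the α-diversity comparison described above, the following R sketch computes Shannon and Chao1 indices with the vegan package and compares them across seasons with the Kruskal-Wallis test. The OTU table and season labels are simulated placeholders, not the study data.

library(vegan)

set.seed(1)
otu    <- matrix(rpois(84 * 200, lambda = 5), nrow = 84)   # toy table: 84 samples x 200 OTUs
season <- factor(rep(c("spring", "summer", "autumn", "winter"), each = 21))

shannon <- diversity(otu, index = "shannon")               # Shannon index per sample
chao1   <- estimateR(otu)["S.chao1", ]                     # Chao1 richness per sample

kruskal.test(shannon ~ season)                             # seasonal differences in diversity
kruskal.test(chao1 ~ season)                               # seasonal differences in richness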
Diversity of microbial community In total, 4,064,995 high-quality 16 S rRNA sequences and 5,537,316 high-quality 18 S rRNA sequences were obtained. The sequencing coverage rates ranged from 98.4 to 99.8% and the rarefaction curves of all samples reached a plateau (Fig. ), indicating that the sequencing depth was sufficient. The Chao1 and Shannon indices were calculated to assess the richness and diversity of the microbial communities (Fig. ). For bacteria, no significant difference was found in the Chao1 index between the four seasons ( p > 0.05). The Shannon index in winter was higher than that in other seasons, and significantly higher than that in summer ( p < 0.05). For micro-eukaryotes, however, the Chao1 index in spring was significantly higher than that in autumn and winter ( p < 0.001), and the Shannon index in spring was significantly higher than that in autumn ( p < 0.01). In addition, the Chao1 and Shannon indices of bacteria and micro-eukaryotes also displayed significant differences across sampling sites. The maximum Chao1 index of bacteria was found at S6 for spring, summer, and winter, and at S5 for autumn. The minimum values of the Chao1 and Shannon indices of bacteria were found at S4 for all four seasons, except for the Shannon index in autumn. The maximum Chao1 index of micro-eukaryotes was found at S6 for spring, S5 for summer and autumn, and S3 for winter. The minimum Chao1 index of micro-eukaryotes was found at S2 for spring and summer, and S4 for autumn and winter. Similar trends were also found in the Shannon index of micro-eukaryotes for spring and winter. Microbial community composition The community composition of bacteria and micro-eukaryotes exhibited distinct temporal and spatial variations. For bacteria, the number of OTUs in each sample ranged from 510 to 1425, with an average of 957. A total of 1741 core OTUs present in all four seasons were found, and samples from spring, summer, autumn, and winter had 4, 1, 32, and 6 unique OTUs, respectively (Fig. a). The NMDS results revealed that the bacterial community was grouped by season (Fig. c). Although there were overlaps between spring and winter, as well as between summer and autumn, the clustering still showed clear seasonal patterns. Notably, the spring and winter samples were separated from the summer and autumn samples. The samples of S4 in summer, autumn, and winter were separated far from the other samples (Fig. S3a). The mean bacterial relative abundance at the phylum level is presented in Fig. e. Proteobacteria, Bacteroidota, and Actinobacteriota were predominant (with mean relative abundances ranging from 57.6 to 96.0%) in most samples. In particular, the relative abundance of Cyanobacteria (23.6%) at S4 in summer was higher than that of Bacteroidota (12.0%) and Actinobacteriota (10.2%).
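For readers unfamiliar with the ordination referred to above, the following R sketch shows a typical NMDS workflow on Bray-Curtis dissimilarities, reusing the toy otu table and season factor from the previous sketch (both hypothetical). The PERMANOVA line is a common complementary test added here for illustration, not necessarily one used in the study.

library(vegan)

bray <- vegdist(otu, method = "bray")         # Bray-Curtis dissimilarity between samples
nmds <- metaMDS(bray, k = 2, trymax = 50)     # two-dimensional NMDS ordination
nmds$stress                                   # goodness-of-fit check

plot(nmds$points, col = as.integer(season), pch = 19,
     xlab = "NMDS1", ylab = "NMDS2")          # quick look at seasonal grouping
adonis2(bray ~ season)                        # PERMANOVA test of seasonal differences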
The mean relative abundance of Proteobacteria in spring (46.9%) was higher than that in the other seasons (41.7% in summer, 37.0% in autumn, and 43.2% in winter), while the mean relative abundances of Bacteroidota (28.6%) and Cyanobacteria (5.5%) in summer, Actinobacteriota (24.6%) and Verrucomicrobiota (3.7%) in autumn, and Campylobacterota (7.3%) and Firmicutes (6.9%) in winter were higher than those in the other seasons. Kruskal-Wallis test results also showed that there were significant differences in the mean relative abundance of abundant bacterial taxa among the different seasons (Fig. g). At the genus level, Limnohabitans (13.3%), Flavobacterium (11.2%), and hgcI_clade (5.5%) were the dominant bacteria (Fig. a). The abundance of Flavobacterium in spring (11.3%) and winter (13.8%) was much higher than that in summer (9.0%) and autumn (10.7%), while the abundance of hgcI_clade in spring (4.8%) and winter (2.4%) was much lower than that in summer (6.8%) and autumn (8.1%). For micro-eukaryotes, the average number of OTUs was 584. A total of 951 core OTUs were found across the four seasons, and 40, 20, 101, and 56 unique OTUs were found in spring, summer, autumn, and winter, respectively (Fig. b). Compared to bacteria, micro-eukaryotic community composition exhibited more pronounced spatiotemporal differences. The samples formed four clearly separated clusters of micro-eukaryotic community composition (Fig. d), indicating significant differences between the four seasons. In addition, the samples at S2 in summer, and at S4 in autumn and winter, were separated far from the other samples (Fig. S3b). The difference for each site was also significant. The top 10 micro-eukaryotic taxa by mean relative abundance at the phylum level are shown in Fig. f. Diatomea, Ciliophora, unclassified_k__Chloroplastida, and unclassified_k__Stramenopiles were dominant in all samples. Unclassified_k__Chloroplastida was the most abundant taxon in spring (25.9%), Diatomea was the most abundant taxon in summer (27.7%) and autumn (32.2%), and Ciliophora was the most abundant taxon in winter (32.9%). The relative abundance of Diatomea at S2 in spring (28.1%), summer (76.5%), and winter (51.6%) was higher than that at the other sites. In addition, 8 of the 10 most abundant taxa showed significant differences among the four seasons, while the other two abundant taxa, Diatomea and unclassified_k__Stramenopiles, showed no significant differences among the seasons (Fig. h). At the genus level, Cyclotella (11.1%), Ochromonas (6.3%), and Cladophora (4.7%) were the dominant micro-eukaryotes (Fig. b). The abundance of Cyclotella in spring (2.7%) was much lower than that in summer (14.9%), autumn (16.5%), and winter (10.3%), while the abundance of Cladophora in spring (14.3%) was much higher than that in summer (0.5%), autumn (1.7%), and winter (2.2%). The abundance of unclassified_f__Choreotrichia at S4 in all four seasons was much higher than that at the other sampling sites. Co-occurrence patterns of microbial community Network analysis was conducted to assess how seasonality affected microbial community complexity (Fig. a). The topological properties and network correlations are summarized in Table . The topological characteristics demonstrate that the bacterial and micro-eukaryotic community coexistence patterns vary significantly between the four seasons. The top 5 major modules in the networks of bacteria and micro-eukaryotes accounted for 81.64–87.88% and 85.69–91.12% of the nodes across the four seasons, respectively.
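As a sketch of how such a Spearman-based co-occurrence network can be constructed and summarized, the following R code applies the thresholds described in the Methods (|r| > 0.8, p < 0.05) to a small synthetic abundance table and reports a few topological metrics with igraph; the study itself used Gephi for visualization and topological analysis, and all data and object names here are hypothetical.

library(Hmisc)
library(igraph)

set.seed(2)
base      <- matrix(rpois(84 * 30, lambda = 10), nrow = 84)        # 30 synthetic abundance profiles
otu_abund <- base[, rep(1:30, each = 3)] + rpois(84 * 90, 1)       # 90 OTUs, correlated in triplets

rc  <- rcorr(as.matrix(otu_abund), type = "spearman")              # Spearman r and p for all OTU pairs
adj <- rc$r
adj[abs(rc$r) <= 0.8 | rc$P >= 0.05 | is.na(rc$P)] <- 0            # keep |r| > 0.8 and p < 0.05
diag(adj) <- 0

g <- graph_from_adjacency_matrix(adj, mode = "undirected", weighted = TRUE, diag = FALSE)
g <- delete_vertices(g, V(g)[degree(g) == 0])                      # drop isolated nodes
E(g)$weight <- abs(E(g)$weight)                                    # absolute weights for module detection

c(nodes = vcount(g), edges = ecount(g), avg_degree = mean(degree(g)),
  clustering = transitivity(g, type = "average"),
  modularity = modularity(cluster_fast_greedy(g)))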
The number of nodes and edges, average degree, and average clustering coefficient of the bacterial network in winter were higher than those in the other three seasons, indicating that the bacterial network was most complex in winter, followed by autumn, spring, and summer. The lowest value of graph density and the highest values of modularity and average path length were also found in summer. The network of micro-eukaryotes in spring was much more complex and compact, with significantly more nodes and edges, higher graph density, average degree, and average clustering coefficient, and significantly lower modularity and average path length than in the other seasons. Most of the edges in these networks were positive, suggesting that positive correlations accounted for most of the interactions within the bacterial and micro-eukaryotic communities. The proportions of positive correlations for bacteria and micro-eukaryotes were ordered as autumn (86.26%) > winter (85.24%) > spring (67.44%) > summer (67.26%), and winter (80.61%) > summer (74.03%) > autumn (69.58%) > spring (66.31%), respectively. According to the values of within-module connectivity (Zi) and among-module connectivity (Pi), the nodes (OTUs) in the network were classified into the following four categories: module hubs (Zi > 2.5, Pi < 0.62), network hubs (Zi > 2.5, Pi > 0.62), peripherals (Zi < 2.5, Pi < 0.62), and connectors (Zi < 2.5, Pi > 0.62). The module hubs, network hubs, and connectors are commonly defined as keystone species. No network hubs were observed, and more than 92% of the nodes were peripherals, in the networks of both bacteria (Fig. b) and micro-eukaryotes (Fig. c). For bacteria, there were 1, 2, 5, and 1 module hubs, and 2, 5, 2, and 49 connectors in spring, summer, autumn, and winter, respectively (Fig. b, Table ). The nodes of the module hubs belonged to dominant phyla such as Bacteroidota, Proteobacteria, Actinobacteriota, Patescibacteria, and Acidobacteriota. For micro-eukaryotes, no module hub was found in spring and autumn, while 5 and 4 module hubs were found in summer and winter, respectively. They belonged to Cryptomycota, Labyrinthulomycetes, Ciliophora, Protalveolata, unclassified_k__Chloroplastida, and Cercozoa. In addition, 2, 10, 48, and 34 connectors were found in spring, summer, autumn, and winter, respectively (Fig. c, Table S3). Correlations between microbial communities and environmental factors The relationships between microbial communities and environmental factors were analyzed to discern significant linkages. Among the selected environmental factors in the present study, TN, WT, pH, ORP, and DO varied significantly among seasons (Table S4), and TN, TP, COD, and EC varied significantly among sampling sites (Table S5). The Mantel test revealed that WT and ORP were significantly correlated with the community composition of both bacteria and micro-eukaryotes ( r > 0.2, p < 0.01), and the strongest relationship was found between WT and the bacterial community ( r > 0.4, p < 0.01) (Fig. a). In addition, in order to identify the main factors affecting the bacterial and micro-eukaryotic communities, correlation analysis was conducted between the top 10 most abundant phyla and the environmental factors. TP, WT, and ORP were the main influencing factors for bacteria (Fig. b). TP was significantly positively correlated with Campylobacterota, Firmicutes, and Patescibacteria ( p < 0.05).
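A minimal sketch of the Mantel test mentioned above, using vegan: the community matrix is the toy otu table from the earlier sketches, and the water temperature and ORP vectors are random placeholders rather than the measured values.

library(vegan)

WT  <- rnorm(84, mean = 18, sd = 6)      # placeholder water temperature for the 84 samples
ORP <- rnorm(84, mean = 150, sd = 40)    # placeholder oxidation-reduction potential

bray <- vegdist(otu, method = "bray")    # community dissimilarity (toy otu table from above)
mantel(bray, dist(WT),  method = "spearman", permutations = 999)   # WT vs community composition
mantel(bray, dist(ORP), method = "spearman", permutations = 999)   # ORP vs community composition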
WT was significantly positively correlated with Cyanobacteria, while significantly negatively correlated with Firmicutes ( p < 0.01). ORP was significantly positively correlated with Bdellovibrionota ( p < 0.05), while significantly negatively correlated with Proteobacteria ( p < 0.01), Campylobacterota ( p < 0.05), and Firmicutes ( p < 0.001). Environmental factors were mainly negatively correlated with the micro-eukaryotic community (Fig. c). WT was significantly negatively correlated with Chytridiomycota, Cryptomycota, and Protalveolata ( p < 0.05). ORP was significantly negatively correlated with Protalveolata ( p < 0.01). EC was significantly negatively correlated with unclassified_k__Crytophyceae ( p < 0.01) and Dinoflagellata ( p < 0.05). Assembly process of microbial communities The neutral community model (NCM) was employed to assess the potential contribution of stochastic processes to the community assembly of bacteria and micro-eukaryotes (Fig. a). Most of the taxa for bacteria (73.1–78.6%) and micro-eukaryotes (63.9–69.9%) fell within the dashed lines. In addition, the NCM explained the bacterial community well, with R 2 values of 0.610, 0.689, 0.512, and 0.546 in spring, summer, autumn, and winter, respectively, indicating the important role of stochastic processes in bacterial community assembly. The migration rate (m) was highest in summer (0.661), followed by spring (0.474), winter (0.328), and autumn (0.260). The R 2 values (0.108–0.485) and migration rates (0.044–0.125) across the four seasons for micro-eukaryotes were lower than those for bacteria, suggesting that stochastic processes were less influential in micro-eukaryotes than in bacteria. Furthermore, the seasonal trend of niche breadth for the bacterial and micro-eukaryotic communities was similar to that of the migration rate (Fig. c). Bacterial communities exhibited relatively wider niche breadth than micro-eukaryotic communities. The null model based on βNTI and RC Bray was applied to further determine the relative contributions of stochastic and deterministic processes (Fig. b). The results showed that stochastic processes were dominant in all seasons for both bacteria and micro-eukaryotes. Deterministic processes accounted for a negligible proportion of the variation in bacterial communities (0–0.5% for heterogeneous selection and 0–1.0% for homogeneous selection). Dispersal limitation was the most important process affecting bacterial community assembly in spring (64.8%), autumn (71.4%), and winter (81.9%), while the undominated fraction was the most important in summer (45.2%). Dispersal limitation was dominant for micro-eukaryotic community assembly, with proportions of 65.2%, 72.9%, 72.4%, and 55.2% in spring, summer, autumn, and winter, respectively. The proportion of stochastic processes for micro-eukaryotic communities was higher in summer than in the other seasons. Within the deterministic processes, the proportion of heterogeneous selection (4.3–17.6%) was markedly higher than that of homogeneous selection (0–0.5%) in all four seasons. Compared with bacteria, the contribution of deterministic processes in micro-eukaryotes was relatively higher.
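To make the partitioning framework above concrete, the following R sketch implements the standard decision rules for classifying pairwise comparisons from βNTI and RC Bray values (selection when |βNTI| > 2, otherwise dispersal limitation, homogenizing dispersal, or undominated according to RC Bray). The input values are hypothetical, and the computation of βNTI itself, which requires the phylogeny and null-model randomizations (e.g. with the picante package), is omitted here.

classify_assembly <- function(bNTI, RCbray) {
  ifelse(bNTI >  2, "heterogeneous selection",
  ifelse(bNTI < -2, "homogeneous selection",
  ifelse(RCbray >  0.95, "dispersal limitation",
  ifelse(RCbray < -0.95, "homogenizing dispersal", "undominated"))))
}

pairs <- data.frame(bNTI   = c(2.4, -0.8, 1.1, -2.6, 0.3),   # hypothetical pairwise values
                    RCbray = c(0.10, 0.97, -0.98, 0.20, 0.40))
pairs$process <- classify_assembly(pairs$bNTI, pairs$RCbray)
prop.table(table(pairs$process))                             # relative contribution of each process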
Spatiotemporal distribution characteristics of microbial community structure The α-diversity and taxonomic composition of bacteria and micro-eukaryotes in the XJH were all affected by seasonal variations. Similar results were also found in other urban rivers , . However, this contrasts with the finding that the α-diversity of microbial communities in the Yarlung Zangbo River was not affected by seasonal variation . Several possible reasons might explain this phenomenon. First, differences in environmental conditions may result in differences in microbial community composition and diversity. The distribution of bacteria and micro-eukaryotes in the XJH could be influenced by human activities such as sewage discharge, urban runoff, and river channelization , whereas the Yarlung Zangbo River is a typical natural river that is less disturbed by human activities. Second, nutrients (nitrogen, phosphorus, etc.) and organic matter are considered crucial factors affecting microbial abundance and diversity . Natural rivers usually have fewer nutrient sources than urban rivers, and nutrients vary more strongly across seasons in urban rivers than in natural rivers. Third, natural rivers possess greater self-purification abilities , whereas urban rivers may have diminished self-purification capacity due to high pollution loads, channelization, and human interference, resulting in significant variations in microbial community structure.
Some other studies have reported that microbial diversity is higher in summer than in other seasons , . However, in the present study, the highest α-diversity of bacteria and micro-eukaryotes was found in spring and winter. Similar results have also been reported in other studies , . This could be attributed to precipitation. The XJH belongs to the subtropical monsoon humid climate zone, and heavy precipitation usually occurs in summer in this region. Although precipitation can wash nutrients and microorganisms from urban surfaces into surrounding rivers, which would lead to higher microbial diversity , the higher water flow and water volume are likely to strongly dilute the microorganisms and nutrients. Indeed, the TN and TP concentrations in summer were much lower than those in winter (Table S4), suggesting that nutrients were diluted by heavy precipitation in summer; the effect of exogenous input appears smaller than that of dilution by precipitation. In addition, numerous aquatic macrophytes emerged in this river in summer; these could compete with microorganisms for nutrients, thereby reducing the diversity of bacteria and micro-eukaryotes. Spatial variation is usually not obvious on a small spatial scale because of river connectivity . However, significant spatial variation in α-diversity was found in this study area. In particular, the Chao1 index of bacteria and micro-eukaryotes at S4 was lower than at the other sites, probably due to the effects of human activities; high-intensity human activity can reduce microbial diversity . S4 is located upstream of a dam used for flood control. Most microorganisms in rivers are attached to suspended particles, and as these particles settle because of the dam construction, the quantity of microbes in the surface water decreases . In addition, S4 is located in the central urban area of Zunyi, where exogenous pollution through surface runoff, untreated domestic wastewater, and water sports such as swimming and bamboo drifting could reduce bacterial and micro-eukaryotic diversity. The abundances of the top three dominant bacterial phyla (Proteobacteria, Bacteroidota, and Actinobacteriota) differed significantly between seasons ( p < 0.05), consistent with several other studies , , . These phyla are common and typical freshwater bacteria in rivers. Temperature variation across seasons has been reported to influence the temporal variability of bacterial assemblages . Diatomea is the largest group of micro-eukaryotes and is widespread in flowing rivers. Interestingly, although the abundance of Diatomea showed obvious differences between sampling sites, no significant difference was found between seasons, indicating that seasonal variation might not be the main factor affecting the distribution of Diatomea. The NMDS results revealed that both bacterial and micro-eukaryotic community composition exhibited seasonal variation. For bacteria, the samples from summer and autumn clustered together and separated from those of spring and winter, which might be attributed to the relatively small temperature difference between summer and autumn. For micro-eukaryotes, however, samples from different seasons did not overlap, suggesting that micro-eukaryotes may have adaptation and assembly mechanisms different from those of bacteria.
Co-occurrence networks of bacteria and micro-eukaryotes in different seasons Interactions within microbial communities contribute more to ecological processes in water ecosystems than the relative abundance and diversity of species. They affect microbial community composition to a certain extent and play important roles in maintaining the stability of ecosystem functions and structures . Co-occurrence network analysis can be used effectively to characterize the interactions between microbial groups , . Positive and negative edges between two nodes in co-occurrence networks represent reciprocal and competitive relationships, respectively. The proportions of positive correlations among bacteria and micro-eukaryotes in all seasons were significantly higher than those of negative correlations, indicating the importance of synergistic cooperation among microorganisms in this urban river habitat. Most bacteria and micro-eukaryotes resist the interference of external environmental variations through cooperative relationships with other species, which is consistent with other studies , , . The number of nodes and edges and the average degree of the bacterial network were higher in winter than in other seasons, indicating a high degree of complexity in winter. However, the complexity of the micro-eukaryotic network in winter was lower than in other seasons, which might be due to competitive relationships between bacteria and micro-eukaryotes. Increased co-occurrence network complexity is usually thought to lead to higher community stability . However, the modularity of the bacterial and micro-eukaryotic networks was higher in summer than in other seasons, which was not consistent with the complexity, indicating that network complexity does not necessarily represent network stability . Network modularity can enhance network stability under the interference of human activities . Furthermore, the different co-occurrence characteristics between seasons might be affected by other environmental factors such as WT and nutrients. Keystone species play an essential role in the assembly and ecological functions of microorganisms in co-occurrence networks. Although most of the keystone species belonged to the dominant phyla, some low-abundance keystone species belonging to Dependentiae, Synergistota, Gastrotricha, Vertebrata, Hyphochytriomycetes, Labyrinthulomycetes, Mucoromycota, and Cnidaria were also found (Table ). This suggests that some low-abundance species could play non-negligible roles in maintaining microbial community stability. Previous studies have shown that more keystone species may promote network stability to some extent , . The number of keystone species in winter for bacteria, and in autumn and winter for micro-eukaryotes, was significantly higher than in other seasons. However, it cannot be concluded from the topological properties of the co-occurrence networks that the microbial networks are most stable in winter. In addition, the keystone species varied significantly between seasons in this study. Only one keystone species of micro-eukaryotes ( Paraphysomonas ) was found in both autumn and winter (Table S3). Furthermore, no keystone species of bacteria was shared among the four seasons. Therefore, the stability of microbial networks could be determined by multiple factors, such as network complexity, species interactions, keystone species, and external environmental interference.
Environmental factors and assembly mechanisms of the microbial communities

The spatial variations of WT, pH, ORP, and DO were not obvious, probably because of water connectivity in this small-scale urban river. However, the concentrations of TN, TP, COD, and EC were relatively higher downstream than upstream, which may reflect domestic sewage and surface runoff that carry abundant nutrients into the river in the city area; increased human activity along the river can raise nutrient contents and change the proportion and forms of nutrients . Environmental variation determines the temporal and spatial distribution of microbial communities in riverine ecosystems, and nutrients, DO, pH, WT, and metals have been identified as the main environmental factors influencing microbial communities . In the XJH, WT and ORP were identified as the most critical factors affecting bacterial and micro-eukaryotic communities. WT promotes the natural succession of microbial communities by directly influencing the growth, reproduction, and metabolic capacity of microorganisms – , and the seasonal variation in ORP is itself influenced by its temperature dependency . Furthermore, other measured environmental factors such as TN, TP, COD, pH, and EC also appear to have significant impacts on bacterial and micro-eukaryotic communities. TN, TP, and COD underpin microbial metabolism and play pivotal roles in modifying the trophic state of aquatic environments, thereby influencing the composition and dynamics of the microbial community , . pH is considered to mediate the availability of ions and to influence the cellular osmotic pressure and enzyme synthesis of microorganisms . EC indicates the overall concentration of dissolved ions in aquatic environments and plays an important role in structuring micro-eukaryotic communities . The significant negative relationship between EC and most micro-eukaryotes in the present study suggests that EC may act as an inhibitory factor for micro-eukaryotes.

Previous studies have shown that environmental factors cannot completely explain microbial variation because of the influence of dispersal and drift processes . The NCM and null model were therefore applied to reveal the assembly processes of the bacterial and micro-eukaryotic communities in the XJH. The fitted values of the NCM suggest that stochastic processes play important roles in the assembly of both communities, and, consistent with our findings, other studies have confirmed that stochastic processes dominate the construction of microbial communities in river ecosystems , , . In addition, river channelization can accelerate flow velocity . The Zunyi urban section of the XJH has been channelized for landscaping and flood control, so the dominance of stochastic processes in this study area may be attributed to the relatively high flow velocity, which is conducive to microbial dispersal. The migration rates (m) of bacteria and micro-eukaryotes were higher in spring and summer than in autumn and winter, indicating higher dispersal rates of the microbial communities in those seasons. This might be attributed to higher river connectivity in spring and summer: frequent rainfall events in these seasons can increase river connectivity and thereby enhance the migration and movement of organisms along the river .
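The migration rate m discussed here comes from fitting Sloan's neutral community model, which relates each taxon's occurrence frequency across samples to its mean relative abundance through a beta distribution. The sketch below shows a simplified version of the standard least-squares fit; it is illustrative only and not the authors' exact implementation.

```python
import numpy as np
from scipy import optimize, stats

def fit_sloan_ncm(counts):
    """Fit Sloan's neutral community model to a samples x taxa count table.

    Returns the estimated immigration rate m, the product Nm, and the
    goodness of fit R^2 of the beta-distribution prediction.
    """
    counts = np.asarray(counts, dtype=float)
    rel = counts / counts.sum(axis=1, keepdims=True)
    p = rel.mean(axis=0)                    # mean relative abundance per taxon
    freq = (counts > 0).mean(axis=0)        # occurrence frequency per taxon
    keep = (p > 0) & (freq > 0)
    p, freq = p[keep], freq[keep]

    N = counts.sum(axis=1).mean()           # mean community size (reads/sample)
    d = 1.0 / N                             # detection limit

    def expected_freq(p, Nm):
        # Probability that a taxon with mean abundance p exceeds the
        # detection limit under neutral immigration-drift dynamics.
        return 1.0 - stats.beta.cdf(d, Nm * p, Nm * (1.0 - p))

    (Nm_hat,), _ = optimize.curve_fit(
        expected_freq, p, freq, p0=[N], bounds=(0, np.inf)
    )
    resid = freq - expected_freq(p, Nm_hat)
    r2 = 1.0 - np.sum(resid**2) / np.sum((freq - freq.mean())**2)
    return {"m": Nm_hat / N, "Nm": Nm_hat, "R2": r2}
```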
In addition, the migration rates of bacteria in all four seasons were significantly higher than those of micro-eukaryotes, indicating that bacterial dispersal may occur more frequently, as a compensating mechanism for the random loss of individual cells, than dispersal of micro-eukaryotic species . The contribution ratios of stochastic and deterministic processes further confirmed that stochastic processes dominated the assembly of bacterial and micro-eukaryotic communities in the XJH, driven mainly by dispersal limitation and undominated processes such as drift. Moreover, deterministic processes may be difficult to detect within this limited sampling period, which may be too short to capture the evolutionary dynamics of the bacterial and micro-eukaryotic communities . The contribution of deterministic processes across the four seasons was only 0.5–2.4% for bacteria and can essentially be ignored, whereas the community variation explained by deterministic processes for micro-eukaryotes was 4.3–17.6%, significantly higher than for bacteria. This suggests that deterministic and stochastic processes affect bacteria and micro-eukaryotes differently. Microorganisms differ in body size, metabolic capacity, and dispersal ability, which affects the relative contributions of stochastic and deterministic processes. Smaller organisms such as bacteria are less strongly filtered by the environment than the relatively larger micro-eukaryotes, because the former are more likely to show metabolic plasticity and greater environmental tolerance . Liu et al. likewise found that bacteria had higher adaptability to environmental changes than micro-eukaryotes . Micro-eukaryotes have more complex cellular structures and longer lifespans than bacteria, usually need more time for evolution and speciation, and are therefore more susceptible to deterministic factors ; deterministic processes may consequently have a greater influence on the assembly of micro-eukaryotes than of bacteria. Furthermore, microbial community structure varied with the heterogeneity of environmental factors in the XJH, so heterogeneous selection dominated the deterministic component of micro-eukaryotic assembly. The wider niche breadth of the bacterial community also indicates that it was less affected by deterministic processes . The results also show that, although stochastic processes significantly influenced the community assembly of bacteria and micro-eukaryotes, the NCM did not fully fit either community, indicating that other assembly processes and mechanisms, such as species interactions and environmental filtering, may coexist . Unmeasured variables, including abiotic factors such as metals and organic compounds , , and biological factors such as lytic bacterial viruses, protists, zooplankton, and aquatic plants – , might also account for part of the unexplained variation in the microbial communities.

Both the α-diversity and the taxonomic composition of bacteria and micro-eukaryotes in the XJH were affected by seasonal variation, as has also been found in other urban rivers , . This contrasts, however, with the report that the α-diversity of microbial communities in the Yarlung Zangbo River was not affected by seasonal variation . Several possible reasons might explain this difference.
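Niche-breadth comparisons of the kind invoked above are commonly based on Levins' index, where a taxon spread evenly across many samples has a high B and a taxon restricted to a single sample has B = 1. A minimal sketch, assuming a samples x taxa count matrix, is:

```python
import numpy as np

def levins_niche_breadth(counts):
    """Levins' niche breadth B for each taxon, plus the community-level mean.

    counts: samples x taxa matrix; B_j = 1 / sum_i(P_ij^2), where P_ij is the
    proportion of taxon j's total abundance found in sample i.
    """
    counts = np.asarray(counts, dtype=float)
    totals = counts.sum(axis=0)                 # total abundance per taxon
    keep = totals > 0
    p_ij = counts[:, keep] / totals[keep]       # per-sample proportion of each taxon
    b = 1.0 / np.sum(p_ij ** 2, axis=0)
    return b, b.mean()                          # taxon-level B and community mean
```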
First, differences in environmental conditions may lead to differences in microbial community composition and diversity. The distribution of bacteria and micro-eukaryotes in the XJH could be influenced by human activities such as sewage discharge, urban runoff, and river channelization , whereas the Yarlung Zangbo River is a typical natural river that is little disturbed by human activities. Second, nutrients (nitrogen, phosphorus, etc.) and organic matter are considered crucial factors affecting microbial abundance and diversity . Natural rivers usually lack the nutrient sources of urban rivers, and nutrients vary more strongly across seasons in urban rivers than in natural rivers. Third, natural rivers possess greater self-purification abilities , whereas urban rivers may have diminished self-purification capacity because of high pollution loads, channelization, and human interference, resulting in larger seasonal variation in microbial community structure.
This study demonstrates that the diversity and taxonomic composition of bacterial and micro-eukaryotic communities in the XJH were affected by both seasonal and spatial variation, with micro-eukaryotic community composition exhibiting stronger spatiotemporal differences than that of bacteria. The co-occurrence patterns of bacterial and micro-eukaryotic communities differed significantly among the four seasons, and the proportion of positive correlations among bacteria and micro-eukaryotes in all seasons was significantly higher than that of negative correlations, indicating the importance of synergistic cooperation among microorganisms in this urban river habitat. WT and ORP were significantly correlated with the composition of both bacteria and micro-eukaryotes and were the primary environmental factors influencing community assembly. The results of the NCM and null model indicate important roles of stochastic processes in the community assembly of bacteria and micro-eukaryotes, with dispersal limitation the most important process for both groups, while deterministic processes had a greater influence on the assembly of micro-eukaryotes than of bacteria. Furthermore, relatively wider niche breadth was found in micro-eukaryotes compared to bacteria. Overall, the assembly processes of bacterial and micro-eukaryotic communities in this urban river were similar but exhibited different characteristics. These observations provide scientific references for further research on the spatiotemporal variation and assembly mechanisms of microorganisms in urban rivers. However, additional biological and environmental factors that could influence microbial assembly, larger study areas with more sampling sites, and the mechanisms underlying the differences between bacteria and micro-eukaryotes remain to be investigated.
Immunohistochemical Expression of Autophagy-Related Marker
Breast cancer (BC) is the most commonly diagnosed cancer and the leading cause of cancer-related death in women worldwide . It accounts for 23% of all malignancies, with infiltrating duct carcinoma being the most frequent histological subtype . Based on gene expression profiles, BC has been classified into 5 subgroups: luminal A, luminal B, Her2/neu amplified, basal like and normal like. Each group has distinct biological features and clinical outcomes, suggesting that BC progresses through different molecular pathways among patients . The heterogeneity of BC subtypes has been suggested by some investigators to be a function of cancer stem cells (CSCs) , based on the phenotypes shared between CSCs and BC cells, leading some authors to hypothesize that CSCs could be incorporated into the molecular staging of BC . CSCs are defined by the expression of specific cell-surface markers that can be used to distinguish them from other tumor or normal cells; CD44 is one of these markers . CD44, a member of the adhesion molecule family, is a cell-surface transmembrane protein that distinguishes breast cancer stem cells (BCSCs) from breast non-CSCs . It is closely associated with cancer metastasis, chemotherapy resistance and poor prognosis . CSCs are characterized by self-renewal, unlimited proliferation and differentiation potential, and thereby by tumor therapy resistance and metastasis . One of the pivotal processes strongly associated with CSC maintenance and aggressiveness is autophagy. Autophagy, a conserved catabolic pathway, is crucial for the preservation of cell homeostasis during nutrient and oxygen deprivation . Autophagy plays a dual role in cancer development; it can both promote and suppress cancer progression and metastasis. These opposite functions have been interpreted as autophagy being a double-edged sword in cancer, preventing tumor initiation but favoring the progression of established tumors, which has challenged researchers to further explore its impact on oncogenesis and tumor progression . The membrane-bound microtubule-associated protein chain 3 (MAP-LC3) is one of the most specific biomarkers of autophagy. In mammals, LC3 is expressed in 3 isoforms: A, B, and C. The B isoform, LC3B, has broad tissue specificity and is widely used in autophagy-related studies . The role of LC3B in different types of cancer, including BC, remains controversial. Flynn and Schiemann and Tang et al. reported the common expression of LC3B in various malignancies and its relation to cancer progression and worse outcome, while others recorded its relation to suppression of tumorigenesis . The relationship between autophagy and CSCs in BC also remains unclear; some studies demonstrated that BCSCs exhibiting a high percentage of autophagic activity carry a poor prognosis , while other studies showed opposite results, with the LC3B autophagic marker found to suppress the function of BCSCs . Autophagy could therefore represent a promising target for counteracting CSC aggressiveness, and it would be crucial to carefully assess the dependence/sensitivity of each specific type of cancer to autophagy, as well as the impact of autophagy modulation on selected cancer therapies . To date, our knowledge about the effect of autophagy on BCSCs is limited. Here we hypothesize that autophagy may have an impact on BCSCs in different molecular subtypes of BC.
To assess the credibility of our hypothesis, we decided to study the expression of LC3B and CD44 in different molecular subgroups of BC by immunohistochemistry.

Specimens
This study was conducted retrospectively on 50 specimens of IDC/NST of the breast. Specimens were selected from the archives of the Surgical Pathology Laboratories, Assuit University Hospital, Faculty of Medicine and South Egypt Cancer Institute, Assuit University. The available clinicopathologic data were obtained from the hospital medical records, and all patients received treatment according to their stages and luminal subtypes following the national guidelines of the breast cancer patient protocol. Representative hematoxylin and eosin-stained slides were re-examined for each specimen for detailed histopathological features, including histologic type, grade (Nottingham modification of the Scarff-Bloom-Richardson grading, 1998), tumor infiltrating lymphocytes, the presence or absence of lymphovascular emboli, the presence or absence of perineural invasion, and tumor stage using the TNM classification published by the American Joint Committee on Cancer/Union for International Cancer Control (8th edition, 2018) . According to the molecular classification, the specimens were divided into 2 groups: 25 specimens of ER +ve (both luminal A and B) and 25 specimens of ER -ve (HER2/neu +ve and triple negative) BC. All procedures performed in the current study were approved by the national research ethics committee (reference number 17100680, 18/3/2019) in accordance with the 1964 Helsinki declaration and its later amendments.

Immunohistochemical staining
Immunohistochemical staining was performed using the avidin-biotin immunoperoxidase method. Tissue sections 4 μm thick were cut from formalin-fixed, paraffin-embedded tissue blocks. Sections were dewaxed and then rehydrated through a descending graded ethanol series down to distilled water. To block endogenous peroxidase, the rehydrated sections were treated with peroxidase blocking reagent (6% hydrogen peroxide) for 10 min. For epitope retrieval, sections were microwaved in Tris-EDTA solution, pH 9, for 1 hour. Sections were then incubated with the primary antibodies for one hour. The antibody used for CD44 immunostaining was a primary CD44/H-CAM rabbit unconjugated polyclonal antibody (catalog #PAS-29590, Thermo Scientific, dilution 1/200). LC3B immunostaining was carried out using an LC3B polyclonal unconjugated rabbit primary antibody (catalog #PA1-16931, Thermo Scientific, dilution 1/300). Secondary staining kits were used according to the manufacturer's instructions using Ultra Tek Anti-Polyvalent Biotinylated Antibody. Counterstaining was done with hematoxylin, and slides were examined by light microscopy. Sections of appendix and pancreas were used as positive controls for the CD44 and LC3B markers, respectively.

Immunohistochemical evaluation
All stained specimens were independently viewed and scored by two pathologists blinded to the patients' clinical data.

CD44 scoring method
The immunostaining score for CD44 was calculated based on the proportion of stained tumor cells: 0-10% as negative, 11-25% as weakly positive, 26-50% as moderately positive, and 51-100% as strongly positive. Patients with negative and weakly positive expression were combined as the lower expression group, and patients with moderately positive and strongly positive expression were combined as the higher expression group .
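As a minimal illustration of the grouping rule just described (a hypothetical helper added for clarity, not part of the study's workflow):

```python
def cd44_group(percent_stained):
    """Map the percentage of stained tumour cells to the CD44 score and
    to the pooled lower/higher expression group used in this study."""
    if percent_stained <= 10:
        score = "negative"
    elif percent_stained <= 25:
        score = "weakly positive"
    elif percent_stained <= 50:
        score = "moderately positive"
    else:
        score = "strongly positive"
    group = "higher" if score in ("moderately positive", "strongly positive") else "lower"
    return score, group

print(cd44_group(40))   # ('moderately positive', 'higher')
```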
LC3B scoring method
The stained sections were scored taking into consideration both the intensity of staining and the percentage of stained cells within each tissue section. The intensity of staining was scored as 0, no staining; 1, faint; 2, moderate; and 3, strong. The percentage of positively stained cells was scored as 0, no staining; 1, 1–25% positive cells; 2, 26–50%; 3, 51–75%; and 4, 76–100%. The sum of the staining intensity and extent scores was used as the final score for LC3B (0–7). Patients with final staining scores of 6–7 were classified as LC3B high expression and the remainder as LC3B low expression . All procedures performed in the current study were approved by the IRB and/or national research ethics committee in accordance with the 1964 Helsinki declaration and its later amendments. Informed consent was obtained from all individual participants included in the study.

Statistical analysis
All statistical calculations were done using SPSS (Statistical Package for the Social Sciences; SPSS Inc., Chicago, IL, USA) version 22. Data were described as frequencies (number of cases) and relative frequencies (percentages). For comparing categorical data, the Chi-square (χ2) test was performed, and the exact test was used instead when the expected frequency was less than 5. P-values were two-tailed and considered significant at the 0.05 level. The correlation between CD44 and LC3B was assessed using Spearman's rho correlation test.

Clinicopathological, histopathological characteristics and molecular subtypes of the studied specimens
The patients' clinicopathological and histopathological characteristics and molecular subtypes are summarized in . Briefly, all studied cases were grade II IDC/NST of the breast. The age range was 27-65 years (mean, 50±10 years), with twenty-six [52%] of cases < 50 years old. As regards molecular subtype, fourteen specimens [28%] were luminal A, eleven [22%] luminal B, eleven [22%] Her2/neu overexpressing, and fourteen [28%] triple negative.

Immunohistological findings of the studied specimens
Expression of CD44 and LC3B: CD44 was expressed in IDC cells with a cytoplasmic and membranous staining pattern. Forty-six out of fifty specimens [46/50] were positive for the CD44 marker. The specimens were divided into two subgroups, with twenty-five specimens [50%] showing high expression and twenty-five [50%] showing low expression . LC3B was expressed in IDC cells with a cytoplasmic staining pattern. All fifty specimens showed positivity for LC3B in varying proportions and intensities; they were divided into two groups, with eleven specimens [22%] showing low LC3B expression and thirty-nine [78%] showing high LC3B expression .

Relationship between CD44 & LC3B expression and clinicopathological characteristics
A statistically significant positive association was detected between high expression of both CD44 & LC3B and lymph node metastasis (p value = 0.001 and 0.010, respectively) . A statistically significant positive association was also detected between CD44 & LC3B expression and pathologic stage: patients with advanced stage (stage 3, 4) had high CD44 & LC3B expression (p value = 0.045 and 0.004, respectively) .
There was no statistically significant relationship between CD44 & LC3B expression and other clinicopathologic parameters such as patient age, tumor site and size, LVI, PNI, and DCIS .

Relationship between CD44 & LC3B expression and molecular subtypes of the studied specimens
A statistically significant positive association was detected between CD44 & LC3B expression and the different molecular subtypes. Patients with the triple negative subtype had higher CD44 & LC3B expression (44% and 35.9%, respectively) than the Her2/neu (24% and 17.9%, respectively) and luminal subtypes (luminal A: 20% and 23.1%; luminal B: 12% and 23.9%) (p value = 0.044 and 0.048, respectively) .

Correlation between CD44 and LC3B tumor marker expression in the studied specimens
A significant moderate positive correlation was found between the percentages of the CD44 and LC3B markers (r = 0.366, p = 0.009) .

Breast cancer is a complex disease with large heterogeneity, leading to highly variable clinical behavior and response to therapy. The mechanisms resulting in this heterogeneity are not well understood; one possible explanation is the presence of BCSCs . The implication of the CSC model in BC has been suggested to account for potential differences in drug sensitivity and individual risk of recurrence and metastasis, so understanding the CSC model may improve our ability to deliver targeted therapy . Anticancer resistance of CSCs, cancer recurrence and metastasis are attributed to several factors, including the process known as autophagy . MAP1-LC3 has been identified as a marker of autophagy that plays an important role in the development of BC .

Relationship between CD44 expression and different clinicopathological factors
In this study, forty-six out of fifty specimens [92%] showed positive CD44 expression. These results agree with those of Farida and Yuliantini, who found that CD44 expression was positive in [88.6%] of examined BC cases . In the present study, high CD44 expression was significantly associated with the presence of axillary lymph node metastasis, in agreement with the results of previous studies such as Wei et al. , Rustamadji et al. , and Tsang et al. . The relation between high CD44 expression and LN metastasis can be explained as follows: CD44 is a class I transmembrane glycoprotein that serves as the primary receptor for hyaluronic acid (HA) and binds other extracellular matrix components, such as collagen, laminin, and fibronectin, and has been shown to promote growth, invasion, and metastatic dissemination of BC cells . However, other studies disagree with our results, finding no significant association between CD44 expression and LN status in the BC cases studied . Pathologic stage is the most useful predictor of BC behavior. In the current study, a statistically significant positive association was detected between high CD44 expression and advanced stage (stage 3, 4). This finding is in accordance with the results of Hassn Mesrati et al. and Roosta et al. . Lymph node status is one of the important determinants of pathologic stage; since CD44 facilitates LN metastasis, its high expression is associated with advanced stage.

Relationship between CD44 expression and different molecular subtypes of BC
We found a statistically significant association between CD44 expression and the different molecular subtypes of BC. More frequent CSC phenotypes (higher CD44 expression) were found in triple negative tumors [44% of specimens].
This finding is in agreement with the results of previous studies by Louhichi et al. , who found that the CD44+ subpopulation was much higher in basal-like BC than in non-basal-like subtypes . CD44+ cells show a high capacity for proliferation, migration, invasion and tumorigenesis, providing a highly hydrated environment that favors cancer cell progression and invasion (important features of basal-like BC) . In contrast, Chang et al. , Farida and Yuliantini , and de Beca et al. found no significant association between the stem cell marker CD44 and the different molecular subtypes of BC .

Relationship between LC3B expression and different clinicopathological parameters
The results of this work showed that the level of LC3B expression was up-regulated in later stages of BC: patients with advanced stage (stage 3, 4) showed high LC3B expression in [69.2%] versus [18.2%] of patients with low stage. These results are consistent with those of a previous study by Zhao et al. . Autophagy serves as a protective mechanism for cancer cells against stress, especially during the later stages of tumor development, when oxygen and nutrient supplies are limited. Under these stressful conditions it provides amino acids, nucleotides and lipids for ATP production and molecular synthesis, thereby promoting tumor cell viability . In contrast, Mustafa et al. found no significant association between LC3B expression and the different pathologic stages of BC . As regards lymph node status, a significant positive association was detected between high LC3B expression [79.5%] and the presence of lymph node metastasis. This finding agrees with the results of Zhao et al. and disagrees with those of Mustafa et al. . These results are partially explained by the ability of autophagy to help tumor cells survive, proliferate and disseminate to regional lymph nodes and secondary sites . In addition, autophagy has been shown to be induced during extracellular matrix (ECM) detachment and to protect detached tumor cells from detachment-induced cell death, allowing tumor cells to survive and metastasize .

Relationship between LC3B expression and different molecular subtypes of BC
Regarding molecular subtypes of BC, we found a significant association between LC3B expression and the different molecular subtypes. Patients with the triple negative subtype had higher LC3B expression [35.9%] than the remaining BC subtypes. The results of the current study agree with those of Choi et al. and disagree with those of Mustafa et al. . This finding can be explained as follows: TNBC cells show higher levels of hypoxia (reflected microscopically by central necrosis and high mitotic activity) than other BC subtypes, and hypoxia is a main stimulus for the induction of autophagy and autophagy-related markers, so the TNBC subtype showed higher LC3B expression than the other subtypes . This indicates that autophagy might promote the viability and progression of TNBC cells .

Correlation between the CSC marker CD44 and the autophagy-related marker LC3B
Autophagy allows BCSCs to adapt to different stressful conditions and to maintain their activity , thereby protecting BCSCs from the cytotoxic effects of anti-cancer therapy . This study found a significant moderate positive correlation between the percentages of the CD44 and LC3B markers in the studied specimens. This result coincides with a study on pancreatic cancer, which observed that pancreatic CSCs exhibited elevated autophagy in both clinical specimens and cell lines, and with another study done by Wong et al.
to elucidate the interplay of autophagy and CSCs in hepatocellular carcinoma . Several medications can exert effects on autophagy, and these are also critical in BC therapy . The results of this study suggest that the majority of autophagy inhibitors may play an important role against both BC and BCSCs, improving the outcome of BC patients.

In conclusion, the results of this study revealed that expression of both the stem cell marker CD44 and the autophagy-related marker LC3B is significantly associated with lymph node metastasis, advanced pathological stage and the triple negative molecular subtype of BC. A statistically significant positive correlation was also found between the two markers. No relationships were found between CD44 & LC3B expression and other clinicopathologic parameters such as patient age, tumor site and size, LVI, PNI, TILs, and DCIS.

Conceptualization: HEB; Data curation: HEB; Formal analysis: HEB, SH; Investigation: HEB, SH, MAH; Methodology: HEB; Project administration: HEB, SH, MAH; Resources: HEB, SH; Supervision: HEB, MAH; Validation: HEB; Visualization: HEB; Writing – original draft: HEB, SH; Writing – review & editing: HEB; Approval of final manuscript: all authors
Key interventions and outcomes in perioperative care pathways in emergency laparotomy: a systematic review
Major emergency abdominal surgery is a complex clinical arena serving a heterogenous patient population with variable physiological status. This high-risk cohort requires time-sensitive, definitive care to mitigate the impact of their physiological and pathological status on post-operative outcomes. The burden of emergency surgery is significant, with reported rates of post-operative morbidity and mortality of 14–47% and 10–20% respectively . Considerable efforts have been made in recent times to improve these outcomes through the introduction of structured and standardised care pathways that attenuate the physiological stress of emergency laparotomy and improve post-operative clinical outcomes. Initiatives such as ELPQuiC (Emergency Laparotomy Quality Improvement Care Bundle) have demonstrated the feasibility of implementing dedicated EmLap pathways in the early peri-operative period in the emergency setting to improve post-operative mortality . Modified Enhanced Recovery After Surgery (ERAS) protocols in the emergency setting have demonstrated improvements in broader clinical outcomes, including reduced length of stay, fewer post-operative complications and improved gastrointestinal function . These perioperative pathways often comprise several components, which interact to exert their overall effects. As demonstrated by the EPOCH trial, it is the combination of high-fidelity component interventions and overall compliance with the perioperative pathway that drives overall improvement . Understanding the design and delivery of perioperative pathways in the EmLap setting is essential to evaluate their clinical and cost-effectiveness, and to facilitate broader adoption and implementation. Surgical and perioperative interventions are often poorly reported, lacking detailed and in-depth descriptions . There is growing recognition of the importance of intervention reporting. The Template for Intervention Description and Replication (TIDieR) checklist and guide was developed in 2014 to provide a structure for assessing the completeness of intervention descriptions , with the overarching purpose of describing interventions in sufficient detail to allow their replication. Use of the TIDieR checklist has led to enhanced and in-depth reporting of complex interventions, which in turn has improved implementation across clinical practice and trials . Detailed reporting of the types of interventions delivered across EmLap perioperative pathways, as well as key aspects of each component, including mode of delivery, frequency, intensity and overall duration, is essential to ensure effective and time-sensitive treatment is delivered. Comprehensive reporting of all aspects of perioperative pathways is important in clinical studies to ensure appropriate assessment of clinical effectiveness and onward implementation into clinical practice. Incorrect implementation leads to the initiation of ineffective or lesser treatment, with implications for the patient, potentially impacting their clinical outcomes, and for wider healthcare resources. The aim of this systematic review was to identify the current design and make-up of perioperative pathways in the EmLap setting, including their component interventions and associated reported clinical and patient-reported outcomes, and to assess their design and reporting against the TIDieR checklist.
This systematic review was conducted according to a pre-specified protocol based on guidance from the Centre for Reviews and Dissemination and the Cochrane Handbook, and is reported in line with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement . The protocol was registered with the international prospective register of systematic reviews, PROSPERO (CRD42021277211).

Eligibility criteria
All randomised and non-randomised cohort studies reporting outcomes on perioperative care pathways (PCP) in adult patients (≥ 18 years old) undergoing major emergency abdominal surgery were included. Perioperative care pathways were defined as multimodal perioperative care bundles, perioperative protocols, dedicated clinical pathways or ERAS protocols comprising a number of components. Studies were excluded if they reported on perioperative care protocols/pathways in the trauma or elective setting or did not include clinical outcomes.

Search strategy
The OVID SP versions of MEDLINE (1950 to 31st December 2023), EMBASE (1980 to 31st December 2023) and the Cochrane Central Register of Controlled Trials were searched using the following search terms: 'emergency surgery', 'laparotomy', 'enhanced recovery', 'fast track', 'multimodal', 'care bundles', 'perioperative protocols' and 'care pathways', separated by the Boolean operator 'AND'. Reference lists of included articles were hand-searched to identify any additional studies. All citations were collated within EndNote X7.8® (USA) and duplicates were removed. All relevant titles and abstracts were screened by two reviewers (DH and BG), and the full text versions of potentially eligible abstracts were retrieved. Only studies that fulfilled all eligibility criteria were included; any conflicts were resolved through discussion.

Study quality
Methodological quality assessment of included studies was undertaken using the Risk of Bias In Non-Randomised Studies of Interventions (ROBINS-I) tool for non-randomised studies and the Cochrane risk of bias tool for randomised controlled trials (RCTs) .

Data analysis
A narrative description of all perioperative pathways was produced to identify the design of each pathway, including the delivery and timing of component interventions. To assess the completeness of intervention reporting and its replicability, each PCP was assessed against the TIDieR checklist. To assess the consistency of outcome reporting, the frequency of each definition and any inconsistencies in definitions across individual studies were reported. Descriptive data were expressed using basic statistics including proportions and averages. All data were entered into Microsoft Excel (Microsoft, Redmond, Washington, USA) for analysis.
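As an illustration of how per-study TIDieR completeness can be tabulated from an extraction spreadsheet, a minimal sketch is shown below; the item names are abbreviated and the scores are hypothetical, not the data extracted in this review.

```python
import pandas as pd

# Hypothetical extraction: rows are studies, columns are TIDieR items,
# 1 = adequately reported, 0 = not reported.
tidier = pd.DataFrame(
    {
        "brief_name": [1, 1, 1], "why": [1, 1, 1], "materials": [1, 0, 0],
        "procedures": [1, 0, 1], "who_provided": [0, 1, 0], "how": [1, 0, 0],
        "where": [0, 0, 0], "when_how_much": [1, 1, 0], "tailoring": [0, 0, 0],
        "modifications": [0, 0, 0], "planned_fidelity": [1, 0, 0],
        "actual_fidelity": [0, 0, 0],
    },
    index=["Study A", "Study B", "Study C"],
)

completeness = tidier.mean(axis=1) * 100      # % of items reported per study
item_frequency = tidier.mean(axis=0) * 100    # % of studies reporting each item
print(completeness.round(1))
print((completeness < 50).sum(), "studies reported fewer than half of the items")
```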
A total of thirty studies outlining 26 unique pathways in EmLap were included in this review . These comprised 10 randomised controlled trials (RCTs), 1 pilot RCT, 4 prospective cohort studies, 1 propensity-matched cohort study, 5 retrospective cohort studies, 8 before-and-after studies and 1 case-control study (Table ; Fig. ). Outcomes were reported in 44,055 patients undergoing major emergency abdominal surgery. Care pathways were defined in different ways: 16 studies reported on emergency ERAS protocols, 7 on care bundles, 3 on the implementation of a perioperative protocol, 2 on protocolised care pathways, 1 defined its care pathway as intermediate care, and 1 defined its PCP as a quality improvement programme. The earliest reported perioperative pathway was in 2011; 3 studies predated the introduction of the TIDieR checklist and 27 were published following its introduction.

Study bias
The majority of RCTs were at low overall risk of bias: 6 RCTs were identified as low risk, 4 were considered to have some concerns and 1 was considered to be high risk (Fig. a). The majority of the 19 non-randomised studies were moderately biased, with 16 identified as moderate risk and 3 considered to be seriously biased (Fig. b). Key areas of concern included confounding variables, participant selection, measurement of outcomes and selection of reported results.

Peri-operative pathway design
Twenty-six unique pathways were identified, with a total of 400 component interventions delivered across all studies. These component interventions were classified into 24 domains (Fig. ) across three distinct time points: pre-, intra- and post-operatively.
There was significant overlap in the delivery of domain interventions across perioperative timepoints. Six domains, namely multimodal analgesia, goal-directed fluid therapy, antibiotics, monitoring, thromboembolism prophylaxis and post-operative nausea and vomiting (PONV), were delivered across all three timepoints. Urgent radiology was the only domain intervention delivered exclusively in the pre-operative phase. Risk stratification, timely intervention, prescriptive anaesthetic strategy and prescriptive surgical strategy were delivered during the pre- and intra-operative phases of PCPs. No exclusively intra-operative domain interventions were identified. Five domains were delivered exclusively during the post-operative phase: early nutrition, chest physiotherapy, early mobilisation, early removal of drains and discharge/follow-up criteria. Three domain interventions were delivered in the pre- and post-operative phases: medical optimisation, review and escalation policies and stress ulceration prophylaxis. Maintaining normothermia was the only domain delivered in the intra- and post-operative phases.

Twenty-one studies reported on EmLap care pathways with a pre-operative phase, consisting of a median of 6 individual components (Table ). A total of 108 pre-operative component interventions were mapped to 14 broad pre-operative intervention domains. There was significant variation in the coverage of domains delivered in the pre-operative phase, with the sepsis screening/antibiotic prophylaxis domain the most commonly employed; 14 (66.7%) studies reported component interventions within this domain. Twenty-two studies reported PCPs with an intra-operative phase, consisting of a median of 3 individual components. One hundred and ten intra-operative component interventions were mapped to 12 intra-operative intervention domains (Table ). Commonly covered intra-operative domains were prescriptive surgical strategy ( n = 13, 59.1%), prescriptive anaesthetic strategy ( n = 10, 45.5%), normothermia ( n = 12, 54.5%), goal-directed fluid therapy ( n = 10, 45.5%) and analgesia ( n = 14, 63.6%). Twenty-five studies reported PCPs with a post-operative phase, consisting of a median of 8 components (Table ). A total of 191 individual component interventions were identified and mapped to 18 post-operative intervention domains. The most commonly employed post-operative domain interventions were early nutrition, early mobilisation, early removal of drains and analgesia.

PCPs TIDieR checklist
The intervention characteristics of PCPs according to the TIDieR framework are outlined in Table . The majority of studies ( n = 20, 66.6%) did not report the TIDieR framework items, with thirteen studies reporting less than 50% of all items. Three studies reported 90% of the items within the TIDieR framework, covering all components of the PCP intervention apart from the item on modifications. No in-depth detail was provided across any PCP regarding the component interventions delivered, and no data were provided on component interventions in specific patient or clinical groups. The PCP used by Burcharth et al. was designed specifically in keeping with the TIDieR framework . The most commonly reported TIDieR item across all studies was Item 2 (why): the rationale, theory, or goal of the elements essential to the intervention.
Poorly reported items included Item 5: Who provided the interventions ( n = 8, 30.8%), Item 7: Where ( n = 7, 26.9%), and Item 9: Tailoring ( n = 5, 19.2%). Item 10: Modifications was not reported in any study.

PCPs reported outcomes
Seventeen studies reported a primary outcome: 6 reported post-operative mortality, 3 length of stay (LoS), 3 outcomes related to complications, 2 composite post-operative outcomes, 1 compliance, 1 cost and 1 gastrointestinal function. A total of 250 individual outcomes were extracted from the 30 studies and mapped to 13 overarching categories: mortality, length of stay (LoS), readmission, reoperation, complications, gastrointestinal function, invasive tube removal, analgesic requirements, mobility, cost-effectiveness, compliance rates, post-operative treatment, recovery and function and quality of life (QoL) (Table ). Clinical outcomes such as morbidity, mortality and LoS were most commonly reported, whereas outcomes relating to analgesic requirement, compliance, mobility, recovery, function and QoL were poorly reported across all studies. Post-operative mortality was the most frequently reported outcome measure, reported by 24 (80%) studies. However, there was significant heterogeneity in the definitions and timing of this outcome measure, with 8 different definitions identified. The most commonly used definition was overall 30-day mortality; other definitions included in-hospital and risk-adjusted mortality, as well as mortality at 90 days, 180 days and 1 year post-operatively. Post-operative morbidity was reported by 23 (76.7%) studies in 27 different ways, at timepoints ranging from 3 to 180 days post-operatively. Seven studies reported specific complications including pulmonary complications, acute kidney injury, ileus, surgical site infection, post-operative bleeding, trocar site hernia, urinary tract infection, septic shock, anastomotic leak, peritonitis and abscess. Two grading systems were used to grade the severity of complications across 7 studies: the Clavien-Dindo classification and the Post-operative Morbidity Score. Outcomes in the gastrointestinal function domain were reported across 12 (40%) studies using 8 different definitions, and no standardised definition of gastrointestinal function was identified. Patient-reported outcomes such as recovery and function and QoL were poorly reported, with 6 identified outcome measures across 2 (6.6%) studies.
We highlight the heterogeneous nature of current perioperative pathway design in the EmLap setting, with multiple component interventions delivered in a variable manner. Our review identified 400 individual components mapping to 24 domains, with variable quality intervention and outcome reporting as measured by the TIDieR checklist. The overall lack of intervention description and reporting for EmLap perioperative pathways limits understanding of their effectiveness, implementation and generalisability. EmLap perioperative pathways consist of several interacting components, with little understanding of the underlying interaction due to the variable quality evidence base underpinning each component intervention. This leads to significant heterogeneity in the type of interventions employed, with this systematic review identifying 26 unique perioperative pathways. Although the interventions delivered within these pathways mapped to 24 broad overarching domains, the overall delivery and reporting of individual interventions within these domains was heterogeneous and inconsistent across different pathways.
The TIDieR framework provides a standardised and robust manner to report complex interventions to enable broader adoption and implementation. However, adherence to this framework is variable in EmLap perioperative pathways. Reporting focused on key aspects of the TIDieR framework, including the rationale for implementation and evaluation, with key procedures outlined in 92.3% ( n = 24) of studies and the materials required to deliver these procedures in 57.7% ( n = 15). Despite the majority of studies reporting on key procedures and materials, these descriptions were often minimal or lacked sufficient detail, and therefore are unlikely to facilitate broader adoption or implementation. Several key details regarding intervention description and reporting are underreported, including who delivered the intervention ( n = 8, 30.8%), where the intervention was delivered ( n = 7, 26.9%), tailoring of interventions ( n = 5, 19.2%), modifications ( n = 0, 0%), and planned and actual adherence ( n = 5, 19.2%). These key reporting criteria are often underreported across a range of complex interventions in multiple disease settings, with the focus largely being on the actual intervention delivered. Key details on the broader reporting standards of intervention delivery are essential for implementation of complex interventions such as an EmLap perioperative pathway, which is often delivered by several key members of the multidisciplinary team, at different timepoints and stages of the pathway, to a broad and heterogeneous clinical population. Three studies were identified to have excellent compliance with the TIDieR framework reporting. Vester-Andersen et al. demonstrated variable compliance of 14.3–85.8% to key components of their complex intervention to improve post-operative EmLap care using intermediate care. However, when compared to standard care, the overall compliance to interventions was much higher due to the key reporting and educational components of the TIDieR framework . Using a similar approach, Burcharth et al. were able to demonstrate overall compliance of 83% to 15 component interventions . Boden et al. assessed the feasibility of implementing a complex intervention of intensive physiotherapy aimed at reducing postoperative pneumonia and improving physical recovery . Through the use of the TIDieR framework the authors identified key interventions with poor compliance and implementation in clinical practice and the associated barriers/challenges. These three studies demonstrate the value of the TIDieR framework, using in-depth intervention description and reporting to ensure the delivery of effective and feasible interventions within the EmLap setting. Through robust and standardised reporting of interventions, complex interventions can be appropriately implemented into clinical practice. Transparent reporting is essential for pathway effectiveness research due to the complex nature of developing and implementing clinical pathways, which is further amplified in the emergency setting. This limits healthcare resource wastage through the early identification of undeliverable interventions and by ensuring the delivery of concise, high-fidelity, clinically effective interventions within complex clinical settings. This work contributes to the growing evidence-base in perioperative pathways in EmLap by identifying the content of these pathways and by identifying their associated reporting outcomes.
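As a simple illustration of how reporting completeness against the 12-item TIDieR checklist can be summarised per study, the following Python sketch scores a hypothetical reporting matrix; the item labels paraphrase the published checklist and the per-study entries are invented for illustration, not taken from the included studies.

TIDIER_ITEMS = [
    "brief name", "why", "what (materials)", "what (procedures)", "who provided",
    "how", "where", "when and how much", "tailoring", "modifications",
    "planned adherence/fidelity", "actual adherence/fidelity",
]

# Hypothetical per-study reporting: which checklist items were explicitly described
reported = {
    "Pathway study A": {"why", "what (procedures)", "who provided", "where"},
    "Pathway study B": {"brief name", "why", "what (materials)", "what (procedures)",
                        "how", "when and how much", "planned adherence/fidelity"},
}

for study, items in reported.items():
    pct = 100 * len(items & set(TIDIER_ITEMS)) / len(TIDIER_ITEMS)
    note = "fewer than half of the items reported" if pct < 50 else "adequate on this crude count"
    print(f"{study}: {pct:.0f}% of TIDieR items reported ({note})")

Such a per-study completeness score is only a crude summary, but it makes explicit the kind of threshold (for example, reporting fewer than 50% of items) used to describe the included pathways above.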
However, our work is limited by the overall quality of the existing evidence-base, consisting primarily of moderately biased, non-randomised studies. We only identified ten RCTs for inclusion in this review. The disproportionate number of non-randomised studies is associated with inherent biases, including selection bias and outcome reporting bias. This has a potential impact on determining the clinical effectiveness of the interventions and perioperative pathways reported within these studies. It is also important to note the limitations of the TIDieR checklist, as it has been designed for the generic use of intervention reporting across medicine and surgery, leading to broad descriptors and the lack of thresholds to define adequate reporting. Perioperative pathways in the EmLap setting are complex interventions, with variable design and structure, spanning the entire perioperative pathway. This review identified 26 unique pathways delivering 400 individual component interventions across 24 domains, with a variety of outcome metrics used to assess their clinical effectiveness. These pathways are multimodal, consisting of multiple component interventions. Currently, they are reported, and therefore implemented, in a variable manner. Future studies in EmLap perioperative pathways should ensure in-depth reporting of the design and delivery of the pathway, including an in-depth description of component interventions, using existing frameworks such as the TIDieR framework. This will help identify component interventions that are valuable, effective and feasible for implementation in the EmLap setting.
Relationship Between Low Education and Poor Periodontal Status among Mexican Adults Aged ≥50 Years
a5013f45-f627-485a-82e4-268ceb92e3ca
11619841
Dentistry[mh]
This is a retrospective cross-sectional study. Its research protocol was reviewed and approved by the Ethics Committee of the Iztacala Faculty of Higher Studies at the National Autonomous University of Mexico (CE/FESI/032023/1587), and the study itself was conducted in accordance with the Declaration of Helsinki. Data Collection The periodontal status of the adults participating in the present study was evaluated using data taken from a series of annual reports (2019–2022) by the Sistema de Vigilancia Epidemiológica de Patologías Bucales (SIVEPAB or the Epidemiological Monitoring System for Oral Pathologies), which is administered by the Ministry of Health General Directorate for Epidemiology. Among the responsibilities of SIVEPAB is the collection of data on patients seeking dental care, mainly from “primary care services”, at one of the 442 sentinel units located in the 32 states of Mexico. The present study used non-probability convenience sampling methods. The inclusion criteria for participants in the present study were being a patient 50 years of age or more, of either gender, who did not present missing data in the database. Patients with third molars were excluded. Independent Variable: Level of Education The variable “years of education” was used to compare those adults who had completed ≤ 9 years of formal education with those who had completed more than 9 years, which, in Mexico, corresponds to primary and secondary school combined. The independent variable was dichotomized into ≤ 9 years and > 9 years. Dependent Variable: Periodontal Status The present study used the CPI, which measures the prevalence and severity of periodontal disease, and also indicates the corresponding treatment needs. Moreover, the CPI was used to document probing depth, on a scale of 0 to 4, corresponding to CPI = 0 (healthy), CPI = 1 (bleeding on probing), CPI = 2 (calculus), CPI = 3 (pocket depth of 4 to 5 mm), and CPI = 4 (pocket depth of ≥ 6 mm). A complete examination of the oral cavity was performed. The oral cavity was divided into sextants with the presence of at least two functional teeth. According to the CPI criteria, if no first or second molar was present in a sextant, all the teeth present were examined. All the adult participants were examined by dentists, using a periodontal probe WHO, in a dental chair equipped with a light source. For each participant, the final CPI score corresponded to the most severe score of the CPI readings obtained from the six sextants. Covariables The present study used the following sociodemographic variables, with a model fit for potential confounders: age (in years), categorised into two groups (50–64 years and ≥ 65 years); gender (male/female); smoking, categorised into two groups (never/former smoker and current); diabetes (yes/no); and oral hygiene, evaluated using the Simplified Oral Hygiene Index (OHI-S). The index is used to evaluate plaque and calculus debris on selected tooth surfaces. The vestibular and lingual surfaces of six permanent teeth were examined. Oral hygiene was classified as poor (OHI-S score ≥2) or good (OHI-S score <2). Sample Size The sample size was calculated using the formula for two independent proportions with 80% power, with a 0.17 difference in proportions detected between the two groups, and a bilateral p-value of 0.05. 
Assuming that 48% of the participants of the population investigated present the factor of interest (CPI 3-4), the present study required a sample size of 204 per group, i.e., a total sample size of 408, assuming equal-sized groups. Statistical Analysis All statistical analyses were carried out using the Stata 15 program (Stata; College Station, TX, USA). Chi-squared tests were used to determine associations among the variables of age, gender, oral hygiene, smoking, level of education, and diabetes for the groups obtained using the CPI. Multinomial regression was used to analyse the association among the independent variables (age, gender, oral hygiene, level of education, smoking, and diabetes) and the dependent variable “periodontal status” (CPI categories), which was expressed as an odds ratio (OR) with 95% confidence intervals (CI). Possible interactions between diabetes and level of education were analysed. The participants were classified into four groups according to their periodontal status: CPI = 0; CPI = 1; CPI = 2; and CPI = 3/4. In all analyses, two-tailed values of p < 0.05 were considered statistically significant.
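For orientation, the two-proportion sample-size statement above can be checked with the usual normal-approximation formula, as in the Python sketch below. This is illustrative only: the report does not state the assumed proportion of the factor of interest in the comparison group, so the inputs guessed here will not necessarily reproduce the 204 participants per group quoted above.

from math import ceil
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for comparing two independent proportions."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided test
    z_beta = norm.ppf(power)
    numerator = (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return ceil(numerator / (p1 - p2) ** 2)

# Assumed inputs (illustrative): 48% with CPI 3-4 in one group and a 0.17 absolute difference
print(n_per_group(0.48, 0.48 + 0.17))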
Characteristics of the Study Population The present study was conducted on 2098 adults ages ≥ 50 years, with a mean age of 62.0 (±9.4) years, of whom 59.5% were women. In terms of the number of years of education, 83.1% had ≤ 9 years and 16.9% > 9 years. According to the OHI-S, 62.1% of the adults had poor oral hygiene. In terms of the worst finding for periodontal status of the participants, 39.9% presented periodontal pockets ≥ 4 mm, 20.8% had calculus, and 12.8% showed bleeding, while only 26.4% were classified as healthy. The prevalence of diabetes in the participants was 23.4% and was higher in women than in men (27.1% vs 17.8%; p < 0.001). The characteristics of the participants, according to their periodontal status, are shown in . The male participants presented a higher percentage of periodontal pockets ≥ 4 mm than did female participants (p = 0.025); smokers with poor oral hygiene and diabetes presented the highest percentage of periodontal pockets ≥ 4 mm. Similarly, participants with a low level of education (≤ 9 years) presented a higher number of periodontal pockets ≥ 4 mm than those with >9 years of education (p < 0.001). shows that, among participants with diabetes, the percentage of periodontal pockets ≥ 4 mm was similar across the categories of educational level (p = 0.191). The percentage of periodontal pockets ≥ 4 mm was lowest in participants with > 9 years of education and without diabetes (p < 0.001).
shows that participants who currently smoke have more periodontal pockets ≥4 mm compared to participants who have never smoked or were former smokers (p = 0.008). shows the results obtained using the multinomial logistic regression models. Poor oral hygiene, a low level of education (≤ 9 years), and the presence of diabetes were statistically significantly associated with the presence of calculus: OR = 5.04 (3.63 – 7.00), p < 0.001; OR = 4.38 (3.05 – 6.25), p < 0.001, and OR = 1.53 (1.09 – 2.14), p = 0.012, respectively. Other indicators, such as age and gender, were associated with the presence of calculus. Similarly, age ≥ 65 years (OR = 1.33 [1.03 – 1.72], p = 0.025), poor oral hygiene (OR = 6.86 [5.08 – 9.26], p < 0.001), a low level of education (≤ 9 years) (OR = 4.84 [3.51 – 6.66], p < 0.001), smoking (OR = 1.51 [1.05 – 2.18], p = 0.025), and the presence of diabetes (OR = 1.73 [1.28 – 2.32]; p < 0.001) were statistically significantly associated with the presence of periodontal pockets ≥ 4 mm.
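The type of model behind these estimates can be sketched as follows: a minimal, hypothetical Python example on synthetic data (not the SIVEPAB records) that fits a multinomial logistic regression with CPI = 0 as the reference category and exponentiates the coefficients to odds ratios.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
data = pd.DataFrame({
    "low_education": rng.integers(0, 2, n),  # 1 = 9 or fewer years of schooling
    "poor_hygiene": rng.integers(0, 2, n),   # 1 = OHI-S >= 2
    "diabetes": rng.integers(0, 2, n),
})
data["cpi_group"] = rng.integers(0, 4, n)    # 0, 1, 2, 3 (3 standing in for CPI 3/4)

X = sm.add_constant(data[["low_education", "poor_hygiene", "diabetes"]])
fit = sm.MNLogit(data["cpi_group"], X).fit(disp=False)
print(np.exp(fit.params))  # odds ratios for each non-reference CPI category versus CPI = 0

On real data the exponentiated coefficients, together with their confidence intervals, correspond to the ORs reported in the paragraph above.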
The results of the present study showed that adults age ≥ 50 years with a low level of education present worse periodontal health (periodontal pockets ≥ 4 mm in depth) than those adults with a higher level of education, after the regression model was fit for possible confounders. Level of education is one of the intermediary social determinants found to be related to the well-being of older adults. In general terms, an individual’s health improves with their level of education, because they develop habits, skills, and resources that enable them to improve their own health. According to the literature, individuals with a high level of education have a high level of personal control and simply more information. They know more about health, tending to adopt a healthy lifestyle and carry out preventive actions to take care of themselves. Even in developed countries, such as the United States, it has been observed that adults with a lower level of education present poorer health than other populations. Level of education plays a significant role in the processes affecting oral health. Yamamoto et al reported that patients with a low level of education have worse periodontal health and a lower number of teeth than those with a high level of education. Similarly, a meta-analysis conducted in this area of research found that a low level of education was associated with a higher risk of periodontitis in adults aged 35 years or over. The present study found that nearly 40% of adults ≥ 50 years of age presented periodontal pockets ≥ 4 mm in depth, with approximately 83.0% of this cohort having ≤ 9 years of education. Possible reasons why Mexican adults aged 50 years and over present more education-related oral health inequalities include, first and foremost, the treatment costs of dental visits, wherein it has been reported that older adults without education are 73% less likely (OR = 0.27; p < 0.001) to visit the dentist in the last year. Second, those with a lower level of education tend to have lower incomes and cannot afford dental treatment. Another possible explanation for the relationship found by the present study is the high levels of inequality in access to and use of oral healthcare services. Sánchez-García et al reported that approximately half of older Mexican adults with social security coverage have used oral healthcare services in the last 12 months. Therefore, the level of education in adults ages ≥ 50 years may affect their access to and use of oral healthcare services due the social inequalities they face. Oral healthcare strategies are required to assist in the diagnosis, prevention, and reduction of oral diseases, via self-care or simple evidence-based measures for the entire population, in order to significantly reduce the burden of disease and the negative impact on quality of life. It should be noted that education helps to promote and maintain healthy lifestyles and positive options, support personal relationships, and improve personal and family well-being, as well as that of the population. The present study found that poor oral hygiene increases the probability of bleeding, calculus, and periodontal pockets ≥ 4 mm in depth in adults ages 50 years or over. Plaque and the presence of calculus have been considered factors in the occurrence of gingivitis and its progression into periodontitis, which is mainly due to inadequate oral hygiene. , Some studies have found that people with periodontitis present a higher percentage of calculus. 
The present study observed that 62.1% of adults sampled presented poor hygiene, an association that was significant for the CPI = 2 category (OR = 5.04; p < 0.001), meaning that poor oral hygiene generates a greater progression of the disease. The association between diabetes and severe periodontitis has been widely studied in different populations. The present study found that, in Mexican adults, the presence of diabetes was related to the presence of calculus and periodontal pockets ≥ 4 mm in depth. Pranckeviciene et al. found an association between severe periodontitis and diabetes (OR = 1.83; p = 0.047). The association reported between diabetes and severe periodontitis results from the accumulation of plaque and calculus and a lack of toothbrushing, a lack of periodontal treatment, and a lack of glycemic control in the long-term. It is known that smoking is a risk factor for severe periodontitis due to its vasoconstricting effect, an effect that results in a reduction of the efficacy of the defense mechanisms of the gums. Research has demonstrated that smokers are more likely than non-smokers to have bone loss, mobility, and tooth loss, as well as increased pocket probing depth. Other studies have reported an association between smoking and severe periodontitis (OR = 1.40; p = 0.028). The present study found an association between smoking and the presence of periodontal pockets ≥ 4 mm in depth (OR = 1.51; p = 0.025). Finally, the present study observed that adults aged ≥ 65 years were more likely to present periodontal pockets ≥ 4 mm in depth (OR = 1.33; p = 0.025). Other studies have reported that periodontitis is common in people aged over 65 years. Therefore, a higher prevalence of periodontal disease in adults, combined with the increased proportion of older people in the global population, may have an impact on the need for healthcare and dental services in the coming years. Similarly, long-term planning is required for oral health services to meet the needs of the continually increasing older adult population. The present study has various limitations. First, the results were based on cross-sectional data, which did not allow causal inferences. Second, the data collected is not representative of the general population and could lead to the overestimation of the prevalence of oral diseases, thus minimising the prevalence of periodontal disease in the Mexican population. Third, while six sites on each tooth were examined when the periodontal pockets were probed, periodontal probing was not calibrated and the variations between dentists at the sentinel units were not evaluated. Fourth, selection bias could be present because the participants of the present study had to attend the dental service offered at each sentinel unit. On the other hand, the present research is one of the few studies to have examined the relationship between level of education and periodontal disease in populations aged 50 years or over. The present cross-sectional study showed that a low level of education is associated with a worse periodontal status in adults ≥ 50 years of age. Similarly, poor oral hygiene, smoking, diabetes, and being over 65 years old are factors related to periodontal status.
These results suggest the importance of periodontal education from an early age onward, as well as the need for effective strategies and interventions for reducing oral health inequalities, in order to thus reduce the gaps in access to oral healthcare over the course of an older adult’s life as they age.
Introduction and reflections on the comparative physiology of sleep and circadian rhythms
16981239-6c48-4bfd-ad2f-606bb714cbfb
11233284
Physiology[mh]
A year without touch: a reflection on physician–patient interaction during COVID-19
0317cf5c-09e6-4254-a7cb-338e2e197cd8
7891479
Pediatrics[mh]
Danish general practitioners as gatekeepers for gynaecological patients in regions with different density of resident specialists in gynaecology: in which situations and to whom do they refer? A cross-sectional study
da26bb59-75bc-4b21-920c-fe44fccb9aa7
10088933
Gynaecology[mh]
In many European countries, the General Practitioner (GP) acts as a professional medical front line person between the wishes and needs of the population on the one hand and access to the specialised healthcare system on the other hand . This gatekeeper system and GPs having a list of patients enrolled at their practice to ensure continuity of care has been seen as part of a comprehensive healthcare system and as a tool to ensure equal access for those in need of care . In the course of a year, 86% of the Danish population comes into direct contact with their GP . The composition of the population enrolled at the GPs list and those who actually contact the GP have an impact on the likelihood of referral to the various specialties . Nevertheless, in Danish as well as in international studies, referral percentages are very similar, with 4–6% of GP contacts being referred to a resident specialist or to a Hospital/Outpatient Clinic (HOC) . The GP referral patterns to resident specialists vary. A wide range of external conditions such as local access to resident specialist, social conditions and the general morbidity of those enrolled at the GP practice have been shown to have an impact on the proportion of patients that are referred . Therefore, referrals occur for very different reasons and at different points in time during a patient contact. In addition, in Denmark there is an unequal distribution between health care regions of specialists, which might shift the referral pattern towards hospital care. Within the gynaecological specialty, the GP can refer patients either to a HOC or to a Resident Specialist in Gynaecology (RSG). It is unknown in which situations the GP refers gynaecological patients and, also, whether these patients are referred to an RSG or to the HOC. There is also a lack of knowledge as to whether the density of RSG influences the referral pattern; moreover, it is not known whether differences in the density of RSGs results in an inequality in the specialist treatment of gynaecological diseases. The present study investigated the referral patterns for GPs referring gynaecological patients to the RSG or to the HOC in specific situations according to density of RSG. Further, we examined whether patients were referred to the HOC or to the RSG, or whether they were treated by the GP her/himself depending on the density of RSGs for six benign gynaecological diagnoses. Setting The Danish health care system is divided into five administrative regions which are defined geographically as the Capital Region (population ∼1.9 million), the Region of Zealand (∼0.8 million), the Southern Region (∼1.2 million), the Central Region (∼1.3 million), and the Northern Region (∼0.6 million). These regions govern primary and secondary health care services provided by GPs, hospitals, and resident specialists. GPs serve as gatekeepers to secondary care, including referrals to resident specialists and inpatient and outpatient hospital care. The Danish healthcare system is based on free and equal access to treatment and is mainly tax financed . Each region politically decides how many resident specialists they require within each discipline, such as in gynaecology and obstetrics, but the number of female individuals per RSG varies considerably between regions, going from approximately 20,000 in the Capital Region of Denmark to approximately 145,000 in the North Denmark Region . Design This was as a cross-sectional study based on questionnaire data from GPs. 
Study population A total of 100 GPs were randomly selected from each of the five Danish regions with the help of a distribution key based on the total number of doctors in the respective region. Five hundred GPs were invited to take part in the questionnaire study. Questionnaires The anonymised questionnaire comprised questions about demographic data of the GP, including age and sex. Furthermore, it asked in which situations the GP referred gynaecological patients and to whom (HOC or RSG). Six benign gynaecological diagnoses were provided as examples: (i) excessive and frequent menstruation with regular cycle, (ii) Lichen simplex chronicus, (iii) postmenopausal bleeding, (iv) menopausal and perimenopausal disorder, (v) dyspareunia, and (vi) insertion of (intrauterine) contraceptive device (IUD). The GP was asked which diagnoses (s)he treated her/himself or referred to a RSG or to the HOC. The questionnaires were field tested before use. Three GPs were interviewed regarding their understanding of the questions, and the questionnaire was thereafter completed by five additional GPs. As the GPs deemed the questions understandable, no changes were made. For a list of questions, see Appendix Table A1. Data collection The GPs received the questionnaire by postal mail in September 2020. A cover letter containing information on the study and a postage paid return envelope were enclosed with each questionnaire. The returned questionnaires were entered into Research Electronic Data Capture (REDCap) by two independent persons and merged by a third person. Study data were collected and managed using REDCap hosted at the University of Southern Denmark. Data analysis Characteristics of responding GPs were reported as numbers and proportions for each of the five regions. Differences between the responding GPs in each region were tested using Pearson’s Chi-Squared test. Referral patterns of gynaecological patients from GPs overall and for six specific reasons were reported as numbers and proportions. Associations between GPs’ reason for referring to RSG, HOC or both and density of RSG were calculated as odds ratios (OR) with 95% confidence intervals (CI) using generalized linear models for the binomial family. Likewise, the associations between GP referral to RSG, HOC or keeping patients in the GP’s practice, and density of RSG were calculated for the six specific diagnoses. Data analyses were conducted using STATA statistical software 16 (StataCorp, College Station, TX, USA). Ethical approval According to EU's General Data Protection Regulation (article 30), the project was listed at The Record of Processing Activities for Research Projects in Southern Denmark Region (j. no: 19/19630). According to the Consolidation Act on Research Ethics Review of Health Research Projects, Consolidation Act number 1083 of 15 September, 2017 section 14 (2) notification of questionnaire surveys or medical database research projects to the research ethics committee system is only required if the project involves human biological material. Therefore, this study was conducted without an approval from the committees (J.no.: S-20192000-78).
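As an illustration of the kind of binomial generalized linear model described in the data analysis above, the following Python sketch fits a logit-link model on synthetic data and exponentiates the coefficients to odds ratios with 95% confidence intervals; the variable names and data are invented for illustration and do not come from the survey.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
gps = pd.DataFrame({
    # 1 = GP practising in the region with the highest RSG density (illustrative coding)
    "highest_density_region": rng.integers(0, 2, n),
})
# Synthetic outcome: whether the GP mainly refers gynaecological patients to an RSG
gps["refers_to_rsg"] = rng.binomial(1, 0.5 + 0.2 * gps["highest_density_region"])

model = smf.glm("refers_to_rsg ~ highest_density_region", data=gps,
                family=sm.families.Binomial()).fit()
summary = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
summary.columns = ["OR", "2.5%", "97.5%"]
print(summary)

The exponentiated coefficient for the region indicator is the odds ratio for referral to an RSG associated with practising in a high-density region, which is the form in which the associations are reported in the results below.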
Of the 500 GPs who received a questionnaire, 347 GPs (69.4%) replied. Of these, 61.4% were female. Regarding age, 51.2% were younger than 50 years, and 76.3% were younger than 60 years. The majority (58.8%) had more than 10 years of professional experience as GPs and most commonly worked in practices with two to three doctors (45.2%). Most practices had both female and male GPs (52.3%). There were no statistically significant differences in any GP characteristics between regions . Referral patterns in specific situations As shown in , 62.9% of GPs referred gynaecological patients to RSG and 9.6% to hospitals/outpatient clinics and 27.5% replied that they referred equally to both. In case of suspected malignancy or suspected severe illness, GPs referred mainly to the HOC. The majority of GPs prefer to refer their patients to RSG with regard to waiting time, patients’ wish, service and distance. In addition, 85.1% of GPs responded that they would prefer to refer patients to RSG, if waiting time and distance were the same as for HOCs.
shows that in regions with a lower density of RSGs than the highest, GPs less frequently referred patients to the RSG. In relation to waiting time and distance, as the density of RSG decreased, the probability of being referred to hospital increased. Referral patterns according to diagnosis As can be seen from , with regard to the six benign gynaecological diagnoses, GPs were more likely to refer to the RSG than to the HOC and more likely to carry out the treatment themselves than to refer patients to the HOC in all other diagnoses than Postmenopausal bleeding. Apart from the diagnoses of Menopausal and perimenopausal disorders and the Insertion of IUD, the general practitioners were more likely to refer patients to RSG than to perform the treatment themselves. demonstrates, for the six benign gynaecological diagnoses, that GPs in the region with the lowest density of RSGs (Northern Region) referred to a RSG to a lesser extent than in the region with the highest density (Capital Region). On closer inspection of the table, this difference was significant for Excessive and frequent menstruation with regular cycle, Lichen simplex chronicus, Postmenopausal bleeding, Dyspareunia and Insertion of IUD. Insertion of IUD was more often treated by the GPs themselves in regions where the density of RSG was not the highest. The same applied to patients with Lichen simplex chronicus, although these patients were also referred to the HOC more frequently in regions with a lower density of RSG. Statement of principal findings This cross-sectional study showed that the referral patterns of GPs were highly dependent on the density of RSGs. The higher the density of RSGs, the more likely that gynaecological patients were referred to the RSG, and conversely, the lower the density of RSGs, the more likely that gynaecological patients were referred to the HOC. GPs most often refer their gynaecological patients to the HOC in cases of suspicion of cancer or other severe disease. Strengths and weaknesses of the study Because none of the previously existing questionnaires we could find on this topic addressed all the items we wanted to include in this study, we developed a study-specific questionnaire. This ensured that the relevant questions were included and that the context was given. We used paper questionnaires as it was not possible to obtain a list of email addresses of the GPs due to the General Data Protection Regulation (GDPR). Paper questionnaires have shown declining response rates over the past decade. A low response rate may induce selection bias because respondents may differ systematically from non-respondents, and the study population will thus not represent the target population . However, we achieved a fair response rate with a percentage of 69.4% and with 61.4% females compared to the Danish national average of 58.1% . Thus, the risk of selection bias must be considered low. However, because we did not have access to any information on the targeted study sample, we could not perform a responder – non-responder analysis. For logistic reasons, we selected and invited 100 GPs from each Danish region. This corresponds to 15% of all GPs in Denmark. However, as the number of GPs in the different regions is not the same in absolute numbers, this resulted in a different percentage of invitations between regions, ranging from 9.7% (Capital Region) to 35.1% (Northern Region).
Since GPs in Denmark, regardless of the region in which they practice, have the same education at the respective time in their career and the distribution of GPs in the regions is almost the same with regard to sex and age , we believe that this study sample is generalisable to the GP population in its entirety. The fact that we found no differences in GP characteristics over the regions strengthens the credibility of our results. The present study has been carried out in Denmark under Danish conditions in the health system. However, the results should be comparable with health systems that are similarly structured e.g. GP as gatekeeper, especially with the other Scandinavian countries thus we assume that the conditions would be similar due to the great cultural proximity. Findings in relation to other studies Women with gynaecological problems, who are referred to an RSG are always examined by a specialist but when referred to an HOC, they would often be examined by a doctor, who is not yet a specialist but still in training. To compensate for this, HOCs are organized such that doctors in training can always call in a specialist , although this depends on whether the examining doctor decides to call a specialist or not. Due to lack of experience, it may happen that the doctor in training comes to misjudgements and does not call a specialist although it would be indicated. Thus, this may delay the correct diagnosis of a serious disease . This difference means that unless all patients have equal access to relevant care, there would be an inequality in the quality of care depending on in which part of the country they live, which, in turn, can have an impact on the health of this group of the population. Our study demonstrated that GPs prefer to refer their gynaecological patients to RSG; only 9.6% of GPs refer their patients exclusively to the hospital, although most would refer their gynaecological patients directly to the hospital, if they suspect cancer or another severe diagnosis. We examined five geographic regions with different densities of RSG and found that the referral pattern depends on the density of RSG. These results are in agreement with previous studies that have shown that if the number of resident specialists increases, more patients are referred to a resident specialist and at the same time fewer patients are referred to hospitals . With regard to the diagnoses examined, the present study shows that the referral pattern is strongly dependent on the density of RSG in the local region and for five of the six gynaecological diagnoses examined, there was a significantly lower chance for the patient to be referred to an RSG in the region with the lowest density compared to the region with the highest density of RSG. The national average distance from the patient’s place of residence to the hospital is greater than the average distance from the patient’s place of residence to the RSG in the region with the highest density of RSG. This results in a longer transport time and more costs for the patients who live in the region with lowest density of RSG. This can have detrimental effects, as it has shown in previous studies that there is an association between travel distance and cancer prognosis . We also know that the distance to the hospital is linked to an increasing diagnostic interval for cancer . As far as we know, this has not been investigated in relation to the density of RSGs. 
However, since the RSG is a specialist, it is not unlikely that such studies would obtain similar results. When delays are discussed in the diagnosis of cancer, for example, patient delays, GP delays, and system delays are mentioned , but the density of resident specialists has not been taken into account, although it is known that increased availability of specialist care translates into higher referral rates . Possible mechanisms and implications for clinicians or policy makers In regions with a lower density of resident specialists in gynaecology, women are less frequently referred to a resident specialist in gynaecology. If there are regions in the same country with different densities of resident specialists in gynaecology, one must assume that the population will have an unequal opportunity to have a specialist examination. This results in an injustice in the healthcare system within the same country. Whether or not this inequality should be accepted is a political decision, but our results indicate that there are significant differences between regions that may have an impact on the gynaecologic treatment of women. Clearly, further studies are needed to determine the exact consequences of the difference in referral patterns in terms of treatment outcomes. However, the results from our study should already facilitate the future planning of health care in gynaecology with the aim of reducing inequality in the access to RSG.
Mechanisms of cooperation in the plants-arbuscular mycorrhizal fungi-bacteria continuum
4cb5e5d9-ff69-4bbc-8355-84eda80ba923
11879240
Microbiology[mh]
Cooperation between species has always been at the heart of ecosystems, with many organisms in nature maintaining a network of interactions and interdependencies. These interactions are often mutually beneficial but can also be mutually exclusive, playing a crucial role in the physiology and adaptation of organisms to their environment. Soil microorganisms constitute the most biologically diverse community of organisms in terrestrial ecosystems, with tens of millions of bacteria, archaea, fungi, viruses, and microeukaryotes coexisting underground. One gram of surface soil contains more than 10⁹ bacterial and archaeal cells, 200 meters of fungal hyphae, trillions of viruses, and thousands of protists. These organisms engage in a wide variety of social interactions, such as predation, parasitism, symbiosis, and mutualism, to survive and evolve. This may involve individuals of the same species, individuals of two or more species belonging to the same kingdom (e.g. bacteria versus bacteria) or different kingdoms (e.g. bacteria versus fungi), or even different domains (e.g. eukaryotes versus prokaryotes). Cross-kingdom cooperation can increase the stability and productivity of ecosystems by bringing phylogenetically distant species closer together. The benefits of cooperation tend to turn into trophism when one party gains access to resources from which the other party is excluded or to which it has only limited access. Plants do not exist in isolation in an ecosystem. A large number of microorganisms inhabit their various organs (e.g. leaves and roots), forming a unique microbiome that cooperates closely with the plant and influences its performance and the functioning of the ecosystem. These outcomes are the product of the plants themselves and of their associated microbiomes, which together form the holobiont. Arbuscular mycorrhizal (AM) fungi are essential members of the holobiont, forming symbiotic associations with more than two thirds of terrestrial plants and engaging in bi-directional exchanges: carbon (C) sources flow from plants to fungi, and mineral compounds (e.g. phosphorus, P, and nitrogen, N) flow from AM fungi to plants. In this way, plants and AM fungi each provide what they can supply at low cost in exchange for resources that would otherwise be impossible or more difficult to obtain. This mutualistic cooperation can extend to bacterial partners, as the extraradical hyphae of AM fungi are an ideal habitat for many soil-dwelling microorganisms. This dynamic interface is commonly referred to as the hyphosphere, where the physical, chemical, and biochemical characteristics differ from those of the bulk soil owing to the influence of hyphal exudates. The hyphae of AM fungi thus connect plant roots to soil microbial communities, forming the plants-AM fungi-bacteria continuum. Similar to the rhizosphere, the hyphosphere is a C-rich ecological niche that can, to some extent, break bacterial dormancy and modify the composition of bacterial communities. Although AM fungi can explore the soil widely for resource exchange with plants (e.g. minerals for C), their limited saprophytic capacity prevents them from exploiting the organic compounds that are omnipresent in the soil. Some bacteria colonizing the hyphosphere could compensate for this lack of saprophytic ability through their strong capacity to mineralize organic P and hydrolyze organic N from the soil in exchange for C and energy.
Recent studies have suggested the existence of two distinct modes of cooperation, namely direct and indirect reciprocity, between AM fungi and soil bacteria, which are attributed to their exchanges of metabolites. Metabolites can be released into the environment actively or passively. Species may secrete these compounds actively, in order to promote cooperation and adjust their adaptation strategies to their surroundings, or passively, because they cannot retain the compounds in their cytoplasm as a result of leakage or surplus nutrients. Direct reciprocity means that AM fungi release exudates at the surface of their hyphae which feed specific microbiota, and these microbiota in turn help the AM fungi to acquire nutrients, improving the fitness of the AM symbiosis. In such direct reciprocal exchange, however, there is a risk of "cheaters": certain soil bacteria benefit from the generosity of others without contributing themselves. Indirect reciprocity allows the bacteria that dominate the cooperation (i.e. "cooperators") to interfere with or suppress cheaters that seek access to resources or cause harm, thereby safeguarding their own interests and fitness. Direct and indirect reciprocity are therefore two mechanisms that maintain cooperation between unrelated species in the hyphosphere. In this review, we first explore the functional complementarities and differences in the symbiosis established between AM fungi and plants. Secondly, we analyze the resource exchange relationships between AM fungi and plants in the light of biological market theory and the "surplus C" hypothesis. Finally, we examine current knowledge about direct and indirect reciprocal cooperation between AM fungi and soil bacteria in the hyphosphere, which is driven by species fitness and resource availability. These mechanisms are major drivers of biological interactions below ground and have long been of interest to plant and microbial ecologists. Over the course of evolution, AM fungi have refined their biotrophic abilities, using their hosts as a source of C and as protective niches, whereas plants have developed multiple strategies for recruiting symbionts to cope with resource scarcity and environmental change. AM symbiosis establishes a "division of labor", or resource division network, that can even extend to soil bacteria: plants secrete C fixed by photosynthesis, which feeds their AM fungal partners, and the AM fungi and their associated bacteria transfer mineral nutrients to plants in order to obtain C for their metabolism. The mechanisms of C and mineral nutrient flow in the AM symbiosis have been extensively scrutinized and validated using genetic, molecular, and cellular tools. Here, we focus mainly on evolutionary history, biological market theory, and the surplus C hypothesis to study the transfer of resources in the symbiosis between plants and AM fungi and their stable coexistence in the ecosystem. This will lead to a better understanding of the reciprocal cooperation between AM fungi and soil bacteria in the hyphosphere.

Evolution is the origin of functional complementarity and differentiation in AM symbiosis

It is widely accepted that the formation of symbiosis between plants and soil fungi was instrumental in the transition of plants from aquatic to terrestrial environments. This hypothesis is supported by original descriptions from fossil records, paleobotanical data, and phylogenetic analyses based on fungal DNA sequences.
In early studies of symbiotic functions, the assistance provided to the host in acquiring P was established as the most emblematic feature of AM fungi. They acquire P beyond the nutrient-depleted zone of the rhizosphere to meet plant nutrient needs, both by accessing soil pores that are inaccessible to roots and by producing an enormous network of extraradical hyphae that extends several centimeters from the roots. Some fungi even deliver 70%–100% of the overall P obtained by plants. In addition to nutrient acquisition, plants colonized by AM fungi generally show greater tolerance to biotic and abiotic stresses, which is not simply a consequence of better nutritional status. The beneficial properties of AM symbiosis and their positive effects on plant health and fitness have been demonstrated in many species covering a wide variety of plant lineages, and the perpetuation of AM symbiosis also explains its ubiquitous distribution in the plant kingdom. However, contrary to popular opinion, AM symbiosis is not necessarily beneficial to plant fitness and can even have a negative impact on plant growth; more often than not, the expected effect on yield and mineral nutrition has not been observed. This is not surprising, as assessing the impact of AM fungal inoculation on plants is difficult because of its high variability and limited controllability in the field. The overall productivity of plants depends on the interaction of various factors, including environment, climate, plant genotype, soil type, and agricultural practices. The stable growth and development of AM fungi in agricultural fields, combined with the establishment of an efficient symbiotic relationship with plants, is the cornerstone of achieving positive effects. Appropriate farming practices, such as low or no tillage, plant diversification, and organic farming, can encourage the development of native AM fungal communities, and the introduction of AM fungi can also be an effective strategy for restoring soils that are naturally impoverished and low in AM fungi. Over the past decade, the exploration of AM fungal genomes has provided clues for unveiling key metabolic features of AM symbiosis. Mechanisms of convergent evolution may have shaped the genomes of AM fungi, with their transition from saprophytes to symbionts mainly involving the massive loss of plant cell wall-degrading carbohydrate-active enzymes (CAZymes), a lack of fatty acid synthases, a reduced ability to synthesize secondary metabolites, and the co-selection of genes present in the saprophytic ancestor to fulfill the new symbiotic function. In saprotrophic fungi, CAZymes play an essential role in the degradation of soil organic litter; these fungi obtain nutrients by decomposing dead or decaying animal and plant residues and other organic matter to maintain normal life activities. However, substantial evidence confirms that AM fungi are unable to reproduce and survive under non-symbiotic conditions. Lipids serve as energy and C building blocks for AM fungal growth, but the genes encoding multidomain cytosolic fatty acid synthase subunits, which are required for fungal de novo fatty acid synthesis, appear to be absent from the genomes of all AM fungi, as observed in Rhizophagus irregularis, Gigaspora margarita, and Gigaspora rosea.
Deletion of multiple fatty acid biosynthetic enzyme genes such as FatM and RAM2 in plants severely impairs AM fungal colonization of roots, indicating that AM fungi are fatty acid auxotrophs whose growth and development depend on lipids received from the host plant. In addition, invertase-encoding and thiamine-encoding genes are absent from the AM fungal genomes sequenced to date, further supporting their biotrophic status. These functional complementarities and differentiations have allowed the relationship between plants and AM fungi to be preserved under natural selection and to evolve gradually into a typical mutualistic symbiosis.

Application of biological market theory in AM symbiosis

The evolution and development of the AM symbiosis are characterized by the transfer and exchange of resources, a typical feature of a biological market. Biological market theory thus provides a conceptual framework for analyzing the cooperative exchange of resources between plants and AM fungi, allowing us to explore the fitness rewards they obtain by providing resources in changing environments. In such a market, individuals tend to profit by distinguishing potential trading partners based on the quality and quantity of the commodities they offer. In the symbiosis between plants and AM fungi, the commodities exchanged are C provided by the plants and mineral nutrients provided by the AM fungi. These entities have no common currency of exchange, which requires an exchange rate between C and mineral nutrients to express the relative value of these commodities; the partner offering the best exchange rate is the most favored. Biological market theory provides a balanced view of partner selection based on the perspectives of the different individuals, rather than adopting the traditional plant-centric model. For both plants and AM fungi, costs and benefits directly affect fitness: if the benefits outweigh the costs, individual fitness increases, and if several individuals in a population experience the same balance between costs and benefits, the population grows. In addition, biological market theory allows individuals to manipulate or influence exchange rates when the availability of resources fluctuates, and to make judicious choices about whether to enter into cooperative relationships with other individuals. The actual price of a given exchange reflects a combination of dynamic factors, including supply and demand as well as the nutrient acquisition efficiencies of the partners. Both plants and AM fungi have been shown to exploit changes in available resources to impose favorable trading conditions, and higher returns encourage them to invest more in exchanges. For example, plant nutritional and physiological status largely influences C allocation. Under P-deficient conditions, host plants activate a series of adaptive responses regulated by phosphate starvation response (PHR) proteins, including key genes required for AM fungi to colonize the roots and exchange nutrients with plants. Conversely, high-P conditions strongly inhibit the movement of PHR proteins towards the nucleus and/or their binding to the promoters of target genes, which hinders root colonization by AM fungi and reduces C allocation. For AM fungi, P trading strategies with plants across the extraradical hyphal network are not uniform. Heterogeneity and drastic changes in P resources in the environment can intensify the flow of P from hyphae to the host.
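As a concrete illustration of the exchange-rate and partner-choice logic introduced above, the following sketch allocates a plant's tradable C among fungal partners in proportion to the P return each offers. The partner names, exchange rates, and the proportional allocation rule are hypothetical and serve only to make the idea tangible; they are not taken from the cited studies.

```python
# Toy illustration of the exchange-rate logic of biological market theory.
# The fungal partners, their P-per-C "prices", and the allocation rule are all
# hypothetical; the point is only that the partner offering the best exchange
# rate attracts the largest share of plant C.

plant_carbon_budget = 10.0  # arbitrary units of photosynthate available for trade

# Hypothetical fungal partners and the amount of P each returns per unit of C received.
exchange_rates = {"fungus_A": 0.8, "fungus_B": 0.3, "fungus_C": 0.1}

# Preferential allocation: C is split in proportion to each partner's exchange rate.
total_rate = sum(exchange_rates.values())
allocation = {f: plant_carbon_budget * r / total_rate for f, r in exchange_rates.items()}

for fungus, carbon in allocation.items():
    phosphorus_returned = carbon * exchange_rates[fungus]
    print(f"{fungus}: receives {carbon:.2f} C, returns {phosphorus_returned:.2f} P")
```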
AM fungi respond to high variation in P resources by increasing the total P distributed to host plants, decreasing allocation to storage in the fungal network, and differentially moving P within the network from nutrient-rich to nutrient-poor patches. When available P resources decline sharply, AM fungi compensate by moving P within the hyphae closer to the host roots. If available P resources increase, they also store the excess nutrients until root demand rises, in order to obtain larger C returns. Although biological market theory is well supported in mycorrhizal research, numerous experiments confirm that the C and P exchange between plants and AM fungi does not always adhere to the predictions of this theory, and partners of relatively low or even no value can also persist stably. In natural ecosystems, most plants are colonized by multiple AM fungi, and AM fungi are commonly shared by neighboring plant species. These AM fungi can connect different plant species, some of which can be chlorophyll-free (i.e. myco-heterotrophic species) and therefore obtain their nutrients from AM fungi, whereas the AM fungi themselves receive C compounds from the surrounding autotrophic plants, which is contrary to biological market theory. Consequently, greater caution should be exercised when using biological market theory to explain the mutualism between plants and AM fungi. It is imperative to integrate the specific characteristics of AM symbiosis into evolutionary models, in particular the relationship between complex hyphal networks and the diversity of plant root traits, the strength of AM fungal sinks and plant sources, residual cargo exchange, and the environmental conditions for AM symbiont growth.

Application of the "surplus C" hypothesis in AM symbiosis

Unlike biological market theory, which assumes that plants actively identify fungal partners and adapt their C allocation strategies accordingly, the surplus C hypothesis proposes that C transfer by plants may simply be the consequence of a surplus of C produced by photosynthesis. The surplus C hypothesis is based on the source-sink dynamics of plants and avoids the question of the existence of myco-heterotrophic species, which is difficult to interpret within the framework of biological market theory. It refers to the process of transferring C from tissues or organs of relatively high C content or concentration to those of low content, irrespective of the underlying mechanism governing the transfer. Consequently, when there is a surplus of C in the leaves or the whole plant, plants may not actively exchange it with AM fungi for mineral nutrients but instead simply discharge it. This imposes no cost on the host plants, has no potentially harmful effects on their growth or other functions, and does not even require active regulation. The amount of C transferred to AM fungi then depends on the plant's metabolic processes and the strength of the fungal C sink, and appears to be independent of the transfer of fungal mineral nutrients. Plant nutrient deficiencies and even growth limitations may not disrupt phloem loading, which allows any surplus photosynthetic products to be translocated from the source leaves to sink organs such as branchlets, stems, and roots, mitigating the toxicity of photosynthetic C accumulation in leaves. The surplus C hypothesis does not, however, take into account the form, mechanism, or regulatory processes involved in the C transfer from plants to AM fungi.
This may be why the hypothesis is not widely accepted by the research community. If C transfer were simply a matter of surplus, the monosaccharides resulting from sucrose cleavage would be expected to constitute the primary form of C assimilated by AM fungi; yet lipids serve as the main C source with which AM fungi fulfill their growth and development requirements. De novo biosynthesis of fatty acids occurs in root cells via a set of conserved biochemical reactions. It is an energy-consuming process regulated by the mycorrhizal-specific transcription factors RAM1 and WRI5a, and the synthesized fatty acids must also be transported to the AM fungi via the mycorrhizal-specific fatty acid transporters STR/STR2 situated on the plant plasma membrane. If AM fungi do not provide mineral nutrients or other fitness rewards, they can therefore be a burden for plants. In addition, a large number of studies have demonstrated that the composition of root exudates varies considerably under different nutrient conditions, exerting a profound stimulatory effect on the rhizosphere soil microbial community, which is commonly regarded as the result of plants actively adapting to their environment. For example, where the availability of soil P is limited, plants can excrete considerable amounts of carboxylates, especially malic and citric acid. The excretion of these compounds may function as a strategy used by plants to acquire P, rather than as a means of discharging surplus C. Overall, biological market theory serves as an evolutionary framework to elucidate the origin and development of mycorrhizal symbiosis, whereas the surplus C hypothesis has limited association with evolutionary history and primarily emphasizes the intensity of source-sink dynamics. Given that both perspectives have merit, they need to be balanced in studies of the nutrition and ecology of mycorrhizas.
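The contrast between the two views can be summarized in a toy calculation: under the surplus C hypothesis the fungus receives whatever C exceeds plant demand regardless of the mineral return, whereas under a market-style rule the transfer scales with the fungal exchange rate. All quantities and both rules below are illustrative assumptions, not models taken from the literature discussed here.

```python
# Minimal contrast between the two views of C transfer discussed above.
# All numbers and rules are invented for illustration.

def surplus_c_transfer(photosynthate, plant_demand):
    """Surplus C view: the fungus receives whatever C exceeds plant demand,
    independent of any mineral return."""
    return max(photosynthate - plant_demand, 0.0)

def market_transfer(photosynthate, plant_demand, p_returned_per_c):
    """Market-style view: the plant trades more of its spare C when the fungal
    exchange rate (P returned per unit C) is higher."""
    spare_c = max(photosynthate - plant_demand, 0.0)
    willingness = min(p_returned_per_c, 1.0)   # saturating willingness to trade
    return spare_c * willingness

for rate in (0.1, 0.5, 0.9):   # poor, intermediate, and generous fungal partners
    print(f"exchange rate {rate}: surplus rule -> {surplus_c_transfer(12, 8):.1f} C, "
          f"market rule -> {market_transfer(12, 8, rate):.1f} C")
```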
Mutualism between plants and AM fungi takes place in the peri-arbuscular space of roots, where the intraradical hyphae of AM fungi obtain C to sustain their metabolic activities. Cooperation between AM fungi and bacteria occurs in the hyphosphere, where AM fungi provide C to soil bacteria through their vast hyphal networks. In this plants-AM fungi-bacteria continuum, multiple factors can affect cooperation and stable coexistence in a complex environment. Plants may establish symbiosis with different AM fungal partners depending on their root traits and photosynthetic efficiency. Alterations in AM fungal communities or AM fungal species lead to different hyphosphere bacterial communities, generally because of differences in the composition of their exudates and/or in their developmental and metabolic characteristics. The availability of nutrients in the environment influences the AM symbiosis by altering the cooperative strategies of plants and AM fungi, which in turn influences the composition of exudates, thereby attracting or repelling soil bacteria and mediating direct and indirect reciprocity in the hyphosphere. Some beneficial bacteria (e.g. Rahnella aquatilis and Devosia sp.) can act in synergy with AM fungi to improve plant performance via fine-tuned communication at the hyphosphere and peri-arbuscular interfaces. In contrast, certain bacteria compete with AM fungi for nutrients and thereby reduce AM fungal fitness, although such competitors may in turn be counteracted by the antagonism of other, beneficial bacteria. Direct reciprocity and indirect reciprocity are two mechanisms that maintain cooperation between AM fungi and soil bacteria in the hyphosphere. Here, we discuss these potential mechanisms through a number of case studies.

Carbon and mineral resource exchange is the driving force behind direct reciprocity

Many studies have shown that AM fungi select their bacterial partners actively rather than cooperating at random, possibly because of their limited exo-enzymatic repertoire. The soil bacteria that rely on hyphal exudates for survival often have the ability to mineralize organic P and/or hydrolyze organic N, and their abundance in the hyphosphere differs substantially from that in the bulk soil. The exchange of C and mineral nutrients leads to direct reciprocity between AM fungi and bacteria, a common mode of cooperation in the hyphosphere involving many fungal and bacterial species. Experiments conducted in vitro provide conclusive evidence of this reciprocity. Each gram of hyphae (dry weight) of Rhizophagus irregularis can release approximately 30 mM of C-containing compounds within four weeks, mainly in the form of low molecular weight sugars, amino acids, and carboxylates. Different soil bacteria have different C metabolism profiles and may preferentially use different forms of these compounds. For example, Rahnella aquatilis (a phosphate-solubilizing bacterium) prefers fructose to glucose, whereas Pseudomonas fluorescens (a complete denitrifying bacterium) prefers carboxylates.
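The C-for-P loop underlying direct reciprocity can be sketched as a simple feedback: hyphal C exudation fuels bacterial growth, bacterial phosphatase activity mineralizes otherwise inaccessible organic P, and part of the released P reaches the fungus. The rates and pool sizes below are invented for illustration; only the qualitative loop reflects the mechanism described here.

```python
# Schematic feedback loop for the direct C-for-P reciprocity described above.
# Rates and pool sizes are illustrative assumptions, not measured values.

def run(exudation_rate, steps=5):
    bacteria, fungal_p = 1.0, 0.0      # relative bacterial biomass, P acquired by the fungus
    organic_p = 10.0                   # soil organic P pool inaccessible to the fungus alone
    for _ in range(steps):
        c_released = exudation_rate                     # C offered on the hyphal surface
        bacteria += 0.5 * c_released * bacteria         # C-limited bacterial growth
        mineralized = min(0.1 * bacteria, organic_p)    # phosphatase-driven P release
        organic_p -= mineralized
        fungal_p += 0.7 * mineralized                   # fraction of released P reaching the hyphae
    return bacteria, fungal_p

for rate in (0.0, 0.5, 1.0):
    b, p = run(rate)
    print(f"exudation {rate}: bacterial biomass {b:.2f}, P gained by fungus {p:.2f}")
```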
However, if the C-containing compounds released by the hyphae were secreted passively (without consuming energy), this would represent surplus C for the AM fungus, analogous to the surplus C hypothesis in plants, and would not in itself support reciprocal cooperation. Intriguingly, the release of hyphal exudates may not be a purely passive process but a targeted response, occurring upstream of the passive process of root exudation, that could be controlled by the AM fungi and consume energy. Under different nutritional conditions, the growth of AM fungi can cause significant changes in the composition of their exudates and even the release of signaling molecules. AM fungi may thus actively provide soil bacteria with the C sources they need for growth and stimulate their saprophytic abilities, thereby increasing the availability of mineral nutrients to the AM fungi. This was well shown with R. aquatilis, which can swim through the thick water film surrounding the extraradical hyphae of R. irregularis towards patches of organic P, where it extends the ability of R. irregularis to use this otherwise inaccessible P source efficiently. AM fungal hyphae have also been reported to help the migration of Sinorhizobium meliloti to the roots of leguminous plants, thereby triggering nodulation. Specific signals (i.e. flavonoids) released by the hyphae connected to the legume serve as chemoattractants; conversely, Sinorhizobium meliloti stimulates the flow of cytoplasm/protoplasm within the hyphae, likely increasing the release of nutrients and signals. In another study, two chitinolytic Paenibacillus sp. isolates that rely on fungal exudates for growth increased the efficiency with which the extraradical hyphae of R. irregularis used chitin as an N source. Experimental designs targeting AM fungal and bacterial cooperative mechanisms such as those described above typically consider isolated cells of a single species. Although this reductionist approach aims to simplify the process, it creates a situation that rarely occurs in nature. In the natural environment, microorganisms thrive in complex communities, where the fitness of individual species depends on interactions with other species in the population. In the hyphosphere, a variety of soil bacteria that interact closely with AM fungi can exist stably and form the hyphobiome. At the order level, this bacterial network is dominated by Myxococcales, Betaproteobacteriales, Fibrobacterales, Cytophagales, and Chloroflexales. The core bacterial members are strongly associated with phosphatase activity, which can mineralize a large amount of the organically bound P in soil. High-throughput stable isotope probing experiments provide further evidence that bacteria colonizing the hyphosphere can increase the ability of AM fungi to acquire organic nutrients from the soil, as these beneficial bacteria carry a large number of genes encoding carbohydrate-degrading enzymes in their genomes. This is crucial for maintaining the mutualistic symbiosis between AM fungi and plants, as mineral nutrients are the cargo the fungi exchange with plants for C, which determines the growth and development of the AM fungi. If the AM symbiosis is compromised, the extraradical hyphae may experience C limitation, hindering their ability to explore the soil and thereby affecting reciprocal cooperation in the hyphosphere. In addition, some bacteria (e.g.
Paenibacillus validus and Pseudomonas monteilii) dwelling in the hyphosphere can promote spore germination, hyphal growth, and branching by producing growth-promoting factors, thereby improving the colonization efficiency of AM fungi within roots. These bacteria are also thought to enhance AM fungal resistance and facilitate the establishment of mycorrhizal symbiosis. The hyphosphere, a critical and active zone of the soil, therefore goes some way towards resolving the problems of resource scarcity and fitness faced by plants, AM fungi, and bacteria in the ecosystem, and promotes their direct reciprocity. However, the internal and external conditions that affect this direct reciprocity remain unclear, and the extent to which AM fungi actively provide available C to soil bacteria, rather than secreting it passively as surplus C, still needs to be explored.

Antibiotic secretion by "cooperators" to suppress "cheaters" is the driving force behind indirect reciprocity

Indirect reciprocity in the hyphosphere may be dominated by specific bacterial species and is often accompanied by the production of antibiotics or other antimicrobial compounds. Streptomyces sp. isolated from the hyphosphere can use different forms of hyphal exudates to grow rapidly and plays a decisive role in the early stages of bacterial community formation. Streptomyces sp. D1 produces phosphatases that efficiently mineralize organic P, thereby increasing P availability for AM fungi. In low-P soils, soil bacteria tend to compete with AM fungi for P to prioritize their own resource needs. For those bacteria that contribute little or nothing to P availability (i.e. cheaters such as Pseudomonas sp. H2 and Paenarthrobacter sp. 31), the acquisition of AM fungi-derived C may not maintain the stability of reciprocal cooperation and may even inhibit the synthesis and transfer of poly-P in the extraradical hyphae of AM fungi. Intriguingly, Streptomyces sp. D1 strongly alters the hyphosphere bacterial community by inhibiting certain bacteria with low phosphatase efficiency, while generally not affecting bacteria with high organic P mobilization capability, as the latter do not contribute to the P shortage. Cultured cells and exudates of Streptomyces sp. D1 inhibited the growth of Pseudomonas sp. H2 by nearly 40% through the synthesis and secretion of albaflavenone (a bactericidal antibiotic). Examination of the genome of Streptomyces sp. D1 revealed a large number of genes related to antibiotic synthesis, including type I, II, and III polyketide synthases, non-ribosomal peptide synthases, and terpene synthases. The expression of genes of the terpenoid backbone biosynthesis pathway, which produces the precursors of albaflavenone, is generally increased during interactions in the hyphosphere. This evidence suggests that Streptomyces sp. D1 actively produces antibiotics to punish cheaters in the hyphosphere bacterial community and thereby maintain the stability of the mutualistic cooperation between AM fungi and soil bacteria. Streptomyces sp. may thus be a species with a cooperative advantage that can limit cheating behavior and negate the fitness advantage of cheaters. Cheaters frequently undermine the stability of the community by snatching resources and space.
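A toy frequency model illustrates how antibiotic-mediated punishment of the kind described for Streptomyces sp. D1 can reverse the fate of cheaters. The growth rates below are hypothetical; the only figure borrowed from the text is the roughly 40% growth inhibition of the cheater.

```python
# Toy frequency dynamics for the "cooperator punishes cheater" scenario above.
# Growth rates are invented; only the ~40% inhibition figure comes from the text.

def simulate(antibiotic_active, generations=20):
    coop, cheat = 0.5, 0.5                  # initial frequencies in the hyphosphere community
    for _ in range(generations):
        coop_growth = 1.00                  # cooperator pays a cost for phosphatase production
        cheat_growth = 1.10                 # cheater skips that cost and grows faster
        if antibiotic_active:
            cheat_growth *= 0.60            # ~40% inhibition by albaflavenone-like compounds
        coop, cheat = coop * coop_growth, cheat * cheat_growth
        total = coop + cheat
        coop, cheat = coop / total, cheat / total
    return cheat

print(f"cheater frequency without punishment: {simulate(False):.2f}")
print(f"cheater frequency with punishment:    {simulate(True):.2f}")
```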
In the hyphosphere, cheating bacteria have the capability to compete for available C and P, garnering benefits from the resources generated by other collaborators while not contributing to the costs of producing those resources. For the majority of species, fitness is generally achieved by increasing benefits or reducing costs. If cheaters proliferate in an uncontrolled manner, they impose a burden on cooperative species, and as cheaters invade, the proportion of resource producers in a population tends to decline towards the brink of population collapse. True cheating occurs when the relative fitness of the cheater rises above, and that of the cooperator falls below, the average fitness of the population, which has been extensively verified in other studies. In addition, some soil bacteria (i.e. "destroyers", such as Burkholderia) have the potential either to feed on hyphae or to generate antifungal agents that antagonize AM fungi and suppress hyphal growth. However, in complex microbial communities, the question arises as to how to accurately distinguish the identities of cooperators, cheaters, and even destroyers. Their functions and roles, as well as the underlying reciprocal mechanisms, still need to be explored and verified through more ingenious experimental designs and innovative methods. Spatial structure is one of the fundamental mechanisms influencing the resilience of cooperation. It allows cooperating species to isolate themselves from cheaters, thus facilitating the formation of indirect reciprocity. Spatial structures at multiple length scales exist in bacterial biofilms, which are multicellular aggregates held together by a matrix of biopolymers and proteins. Spatial association within a species can benefit that species, whereas the distance between species determines the transport of dispersible resources. The interaction networks within biofilms depend largely on their spatial structure, that is, the arrangement in space of the different microbial species, which affects resource acquisition and the stable survival of bacteria in the ecosystem. If different strains and species mix spatially within a biofilm, bacterial cells interact closely with other species, often in an antagonistic manner, and natural selection tends to favor the species that dominate over their competitors. In the hyphosphere, the expression of genes involved in biofilm formation and regulation in R. aquatilis is activated by R. irregularis, which may play an important role in the early stage of the formation of a stable reciprocity. Biofilm formation aids the proliferation of the species, inhibits colonization by competitors, and protects scarce resources from cheaters. Very recently, an indirect reciprocity has been reported among R. irregularis, Bacillus velezensis, and mycoparasitic fungi, which is closely related to the formation of a biofilm by B. velezensis on the hyphal surface of R. irregularis. Stimulated by R. irregularis C and signals released at the hyphal surface, B. velezensis can release several antimicrobial compounds in its biofilm (e.g. bacilysin, a bioactive secondary metabolite with antimicrobial activity), which form a chemical barrier protecting the AM fungi from microbial aggressors and compensate for the low natural potential of AM fungi to produce antibiotics. This indicates that B. velezensis and R.
irregularis dominate the reciprocal cooperation and have the potential to prevent destroyers from interfering with their stable coexistence. In addition, these mutual benefits also extend to the plant, which gains enhanced protection against Botrytis cinerea through induced systemic resistance. Although examples are still rare, indirect reciprocity represents a novel form of cross-kingdom cooperation in the hyphosphere, steering the stability and development of plant-associated microbial communities in a favorable direction.
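The stabilizing effect of spatial segregation discussed above can be illustrated with a minimal one-dimensional neighbourhood model, in which cooperators pay a cost to deliver a benefit to their immediate neighbours. The arrangement, payoff values, and neighbourhood rule are all illustrative assumptions, not parameters from the cited work.

```python
# Minimal sketch of why spatial segregation can stabilize cooperation.
# 'C' cells pay a cost and give a benefit to their two neighbours; 'D' cells pay nothing.

def payoffs(arrangement, benefit=1.0, cost=0.4):
    """Return mean payoff of cooperators ('C') and cheaters ('D') on a ring."""
    n = len(arrangement)
    scores = [0.0] * n
    for i, cell in enumerate(arrangement):
        if cell == "C":
            scores[i] -= cost
            for j in ((i - 1) % n, (i + 1) % n):   # wrap-around neighbourhood
                scores[j] += benefit
    mean = lambda s: sum(scores[i] for i in range(n) if arrangement[i] == s) / arrangement.count(s)
    return mean("C"), mean("D")

segregated = list("CCCCDDDD")   # cooperators clustered together
mixed = list("CDCDCDCD")        # cooperators interleaved with cheaters

for label, arr in (("segregated", segregated), ("mixed", mixed)):
    c_payoff, d_payoff = payoffs(arr)
    print(f"{label}: cooperator payoff {c_payoff:.2f}, cheater payoff {d_payoff:.2f}")
```

In the clustered arrangement cooperators mostly reward one another, whereas in the mixed arrangement cheaters capture the benefits, which is the intuition behind biofilm-mediated segregation.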
The experimental design and optimization targeting AM fungal and bacterial cooperative mechanisms such as those described above typically consider isolated cells of a single species . Although this reductionist approach aims to simplify the process, it creates a situation that rarely occurs in nature. In the natural environment, microorganisms thrive in complex communities, where the fitness of individual species depends on interactions with other species in the population. In the hyphosphere, a variety of soil bacteria that interact closely with AM fungi can stably exist and form the hyphobiome. At the order level, the bacterial network is dominated by Myxococcales, Betaproteobacteriales, Fibrobacterales, Cytophagales, and Chloroflexales . The core bacterial members are essentially correlated with their phosphatase activity, which can mineralize a large amount of organically bound P in soil . High-throughput stable isotope probing experiments provided further evidence that bacteria colonizing the hyphosphere can lead to an increase in the ability of AM fungi to acquire organic nutrients from the soil, as these beneficial bacteria have a large number of genes encoding carbohydrate-degrading enzymes in their genomes . This is crucial for maintaining the mutualistic symbiosis between AM fungi and plants, as mineral nutrients are the cargo they exchange with plants for C, which determines the growth and development of AM fungi . If AM symbiosis is compromised, extraradical hyphae may experience C limitation, hindering their ability to explore the soil, which affects reciprocal cooperation in the hyphosphere. In addition, some bacteria (e.g. Paenibacillus validus and Pseudomonas monteilii ) dwelling in the hyphosphere can promote spore germination, hyphal growth, and branching by producing growth promoting factors, thereby improving the colonization efficiency of AM fungi within roots . These bacteria are also thought to have the ability to enhance AM fungal resistance and facilitate mycorrhizal symbiosis establishment . Therefore, the hyphosphere, a critical and active zone of soil, addresses the issue of plant, AM fungal and bacterial resource scarcity and fitness in the ecosystem to a certain extent and promotes their direct reciprocity . However, the internal and external conditions that affect this direct reciprocity are unclear. The extent to which AM fungi provide available C to soil bacteria, rather than passive secretion due to the surplus C, still needs to be further explored. Indirect reciprocity in the hyphosphere may be dominated by specific bacterial species, often accompanied by the production of antibiotics or antimicrobial compounds . Streptomyces sp. isolated from the hyphosphere can use different forms of hyphal exudates to grow rapidly and plays a decisive role in the early stages of bacterial community formation . Streptomyces sp. D1 has the ability to produce phosphatases that are efficient in mineralizing organic P, thereby increasing P availability for AM fungi. In low-P soils, soil bacteria tend to compete with AM fungi for P to prioritize their own resource needs . Therefore, for those bacteria (i.e. the cheaters such as Pseudomonas sp. H2 and Paenarthrobacter sp. 31) that contribute little or no to P availability, their acquisition of AM fungi-derived C may not maintain the stability of reciprocal cooperation, and even inhibits the synthesis and transfer of poly-P in the extraradical hyphae of AM fungi. Intriguingly, Streptomyces sp. 
D1 strongly alters the hyphosphere bacterial communities by inhibiting certain bacteria with low phosphatase efficiency. Conversely, Streptomyces sp. D1 generally does not impact the bacteria with high organic P mobilization capability, as these bacteria do not contribute to P resource shortage. Cultured cells and exudates of Streptomyces sp. D1 inhibited the growth of Pseudomonas sp. H2 by nearly 40%, through the synthesis and secretion of albaflavenone (a bactericidal antibiotic). A review of the genome of Streptomyces sp. D1 found that it contains a large number of genes related to antibiotic synthesis, including types I, II, and III polyketide synthases, non-ribosomal peptide synthases, and terpene synthases. The expression of terpenoid backbone biosynthesis-related genes that synthesize albaflavenone precursor substances is generally increased during the interactions in the hyphosphere. This evidence suggests that Streptomyces sp. D1 actively produces antibiotics to punish cheaters in the hyphosphere bacterial community and thereby maintain the stability of mutualistic cooperation between AM fungi and soil bacteria. Streptomyces sp. may be a species with cooperative advantages that can limit cheater behavior and negatively affect the fitness advantage of the cheater. Cheaters frequently undermine the stability of the community by snatching resources and space. In the hyphosphere, cheating bacteria possess the capability to compete for available C and P, garnering benefits from the resources generated by other collaborators while refraining from contributing to the costs of producing these resources. For the majority of species, fitness gains are generally achieved by increasing benefits or reducing costs. If cheaters proliferate in an uncontrolled manner, they impose a burden on cooperative species, and with the invasion of cheaters, the proportion of resource producers in a population tends to decline, pushing the population toward the brink of collapse. True cheating occurs when the relative fitness of the cheater rises above, and that of the cooperator falls below, the average fitness of the population, which has been extensively verified in other studies. In addition, some soil bacteria (i.e. "destroyers", such as Burkholderia) have the potential to either feed on hyphae or generate antifungal agents that antagonize AM fungi and suppress the growth of hyphae. However, in complex microbial communities, the question arises as to how to accurately distinguish the identities of cooperators, cheaters, and even destroyers. Their functions and roles, as well as the underlying reciprocal mechanisms, still need to be explored and verified via more ingenious experimental designs and innovative methods. Spatial structure is one of the fundamental mechanisms influencing the resilience of cooperation. It allows cooperating species to isolate themselves from cheaters, thus facilitating the formation of indirect reciprocity. Spatial structures at multiple length scales exist in bacterial biofilms, which are multicellular aggregates formed by a matrix composed of biopolymers and proteins. Spatial association within a species can benefit its members, whereas distances between species determine the transport of dispersible resources. The interaction networks within biofilms depend largely on the spatial structure of the biofilms, i.e., the arrangement in space of different microbial species, which affects the resource acquisition and stable survival of bacteria in the ecosystem.
If different strains and species mix spatially within the biofilms, bacterial cells interact closely with other species, often in an antagonistic manner, and natural selection tends to favor those species that dominate over competitors. In the hyphosphere, the expression of genes involved in biofilm formation and regulation in R. aquatilis is activated by R. irregularis, which may play an important role in the early stage of the formation of stable reciprocity. The formation of biofilm aids the proliferation of the species, inhibits colonization by competitors, and protects scarce resources from cheaters. Very recently, an indirect reciprocity has been reported among R. irregularis, Bacillus velezensis, and mycoparasitic fungi, which is closely related to the formation of biofilm by B. velezensis on the hyphal surface of R. irregularis. Stimulated by the C and signals released by R. irregularis at the hyphal surface, B. velezensis can release several antimicrobial compounds in its biofilm (e.g. bacilysin, a bioactive secondary metabolite with antimicrobial activity), which form a chemical barrier protecting the AM fungus from microbial aggressors and compensating for the low natural potential of AM fungi to produce antibiotics. This indicates that B. velezensis and R. irregularis dominate reciprocal cooperation and possess the potential to inhibit destroyers from interfering with their stable coexistence. In addition, these mutual benefits are also extended to the plant via the provision of enhanced protection against Botrytis cinerea via the induction of systemic resistance. Although examples are rare, indirect reciprocity represents a novel form of cross-kingdom cooperation in the hyphosphere, contributing to the stability and development of plant-associated microbial communities in a favorable direction. The plant-AM fungi-bacteria continuum represents a model of multi-level cross-kingdom cooperation. Plants, through their intimate association with AM fungi, modulate beneficial direct and indirect reciprocity between AM fungi and soil bacteria in the hyphosphere that improves the adaptability of all parties to the environment. Exploring these fascinating interactions helps shed light on the complex ecological and evolutionary implications of social relationships in the plant-AM fungi-bacteria continuum. However, the study of plant-mediated reciprocity between AM fungi and bacteria in the hyphosphere has only just begun, and more experimental evidence is needed to explore the role of different forms of interaction between them in the stable development of microbial communities. The information behind these interactions can be explored from broader interdisciplinary perspectives and through the application of cutting-edge technologies. For example, genetics and evolutionary theory can be used to thoroughly investigate the molecular mechanisms, regulatory networks, and horizontal gene transfer underlying plant-AM fungal-bacterial cooperation. Raman microspectroscopy and "transparent soil" microcosms allow direct visualization of the physiological status, phenotypes, and functions of plants, AM fungi, and bacteria in otherwise opaque solid matrices. In addition, multidimensional and multimodal datasets of microorganisms, reciprocal models or theoretical frameworks, and even predictive tools based on artificial intelligence can be applied, paving the way for future studies into cooperation between different species along this continuum.
Multimodal deep learning approaches for precision oncology: a comprehensive review
cfb2c5c0-4263-4bdb-908d-f5e24e32eb0b
11700660
Internal Medicine[mh]
Precision oncology Cancer remains one of the foremost causes of mortality globally. The 2020 report from the International Agency for Research on Cancer identified ~18.1 million new cancer cases and 9.6 million cancer-related deaths across 185 countries, both figures rising alarmingly. In the USA, the economic burden of cancer was estimated at ~$124.5 billion in 2010, with projections rising to $157.8 billion by 2020. The emergence of novel cancer therapies, such as targeted therapies and immunotherapies, underscores the potential for curative outcomes through early detection and effective treatment. Consequently, early diagnosis, precise tumor classification, and personalized treatment are critical for improving survival rates, enhancing quality of life for cancer patients, and alleviating the societal economic burden. DL in precision oncology Over the past two decades, advances in computing technology have propelled deep learning (DL) to the forefront of precision oncology. For instance, DL models used in low-dose computed tomography (CT) lung cancer screening have successfully reduced the pool of candidates while maintaining high inclusion rates and positive predictive values. Natural language processing (NLP) techniques are increasingly applied to extract valuable insights from electronic health records (EHRs), aiding clinicians in decision-making. DL has demonstrated exceptional performance in tasks such as biological sequence classification and cancer subtyping, with artificial intelligence (AI) systems even surpassing human experts in certain diagnostic areas. Moreover, DL has shown promise in predicting cancer prognosis. For example, a DL-based model leveraging pathological biomarkers was able to stratify colorectal cancer (CRC) patients into distinct prognostic groups, minimizing overtreatment in low-risk patients and identifying those who would benefit from more aggressive therapies. DL also holds potential in personalized treatment planning and predicting therapeutic responses. While these unimodal DL applications have achieved significant success, the rapid advancement of computing and biomedical technologies, along with the explosive growth of clinical data, highlights the urgent need for integrated multimodal data analysis to fully harness clinical information and gain deeper insights into cancer mechanisms. Clinical value of multimodal fusion analysis Vast amounts of multimodal data are generated throughout the clinical process of cancer care. Multimodal analysis techniques leverage the unique characteristics of each modality to develop models that offer a more comprehensive understanding and reasoning. This approach closely aligns with real-world clinical practices, particularly for complex diseases. Common multimodal fusion models include the integration of various medical imaging types, such as whole slide image (WSI) and CT, as well as diverse magnetic resonance imaging (MRI) sequences, fostering opportunities for innovative fusion strategies. Additionally, the amalgamation of multi-omics data and the fusion of molecular omics with imaging data further exemplify this trend. Cross-modal fusion that encompasses imaging, molecular, and clinical data represents advanced stages of multimodal analysis. Numerous studies have shown that multimodal methods outperform single-modality approaches in specific tasks.
Nevertheless, designing effective fusion methods presents several challenges, including the high-dimensional nature of multimodal data, issues with data incompleteness and modality imbalance, and the need for real-time processing. Moreover, uncovering the biological significance of multimodal features remains a significant hurdle. Existing reviews have highlighted key DL applications in cancer diagnosis, prognosis, and treatment selection , with many focusing on specific data types or cancer categories , exploring the taxonomy of MDL models for biomedical data integration , or discussing DL-based multimodal feature fusion for identifying cancer biomarkers . However, these reviews typically focus on single-modal data or are limited to multi-omics data, without offering a comprehensive overview of cross-scale multimodal data fusion. Consequently, a thorough review of MDL methods across the entire precision oncology continuum is still lacking. Given the rapid expansion of medical multimodal data and the swift evolution of MDL technologies, this paper aims to survey various modalities involved in precision oncology and the cutting-edge MDL models employed for data integration, thereby establishing a paradigm for the effective utilization of big data in cancer management . Structure of the work The paper is structured as follows: it begins with a discussion of literature search strategies, followed by an overview of publicly available multimodal oncology datasets and an introduction to key DL technologies. Next, we examine modality representation and fusion techniques, survey MDL applications in precision oncology, and explore the opportunities and challenges in integrating oncology big data. The paper concludes with a forward-looking perspective on future developments. Search methods A systematic search was conducted in September 2024 across PubMed, MEDLINE, and Web of Science Core Collection for peer-reviewed articles published in English, with no date restrictions. Search terms included medical topics (e.g. cancer, tumors, lesions), methodologies (e.g. deep learning, artificial intelligence, convolutional neural networks, machine learning), and data types (e.g. multimodal, multi-omics, data fusion). Two independent researchers performed the search to ensure accuracy; disagreements were resolved by a third investigator with domain expertise. Initial screening was based on titles and abstracts, followed by full-text review. Only original research involving human subjects and with full-text availability was included, excluding reviews, posters, and comments. A total of 651 articles met the inclusion criteria and were analyzed, with selected studies discussed to provide insights into MDL applications in oncology. Public multimodal oncology resources Cancer is a highly complex, heterogeneous biological process, requiring diverse data sources for accurate diagnosis, treatment, and prognosis. Commonly used data types—either individually or in combination—include radiomics, pathomics, acoustic and endoscopic imaging, genomics, clinical data, dermoscopy, multimodal data, and emerging real-world data . Here, we summarized credible publicly available multimodal oncology resources and representative MDL studies utilizing these datasets for the readers’ convenience. 
Notably, The Cancer Genome Atlas (TCGA) ( https://portal.gdc.cancer.gov/ ) and The Cancer Imaging Archive (TCIA) ( https://www.cancerimagingarchive.net/ ) are extensive databases encompassing thousands of samples across various cancer types and medical centers. They provide rich multimodal data and analytical tools essential for cancer research. In addition to large-scale public databases, several specialized multimodal datasets are available. For instance, Lung-CLiP (Lung Cancer Likelihood in Plasma) provides clinical, demographic, and genome-wide single-nucleotide variation (SNV) and copy number variation (CNV) data for lung cancer cases, as well as code for reproducing the corresponding results. DNA Evaluation of Fragments for Early Interception (DELFI) offers cell-free DNA (cfDNA) fragmentation profiles and clinical data for 296 lung cancer patients. Adaptive Support Vector Machine (ASVM) integrates cfDNA fragmentome, CNVs, and clinical data for 423 patients across eight cancer types. The HAM10000 dataset consists of 10 015 multicenter dermatoscopic images with corresponding clinical data aimed at improving melanoma detection. These resources are invaluable for developing and evaluating MDL algorithms for patient profiling. Overview of DL techniques DL algorithms leverage modular structures to perform complex functions. For prevalent DL architectures that are applicable across diverse data types, please see . The table of abbreviations and their full forms can be found in . Owing to privacy and other constraints, obtaining medical data is often subject to various limitations. Missing modalities and labels are common in multimodal datasets, which contrasts sharply with DL models' appetite for large amounts of labeled data. Fortunately, several techniques have shown promise in reducing the reliance on extensive data labeling while maintaining model performance and data security. Transfer learning Transfer learning (TL) has emerged as a powerful tool in the field of DL-based medical data analysis. By transferring knowledge from one domain to another, TL facilitates the resolution of analogous tasks. The common attributes found in natural images, such as colors, edges, corners, and textures, can aid medical image tasks like registration, segmentation, and classification. By maintaining most of the pretrained model weights and fine-tuning only the last layers, both generalized and domain-specific features are learned, thus saving much annotated data, time, and computational resources. Federated learning Federated learning (FL) is an innovative approach to safeguarding the privacy and security of medical data. It allows multiple participants (referred to as clients) to collaboratively train a global model without sharing their local data. In this framework, a central server coordinates multiple training rounds to produce the final global model. At the beginning of each round, the server distributes the current global model to all clients. Each client then trains the model on their local data, updates it, and returns the modified model to the server. The server aggregates these updates to enhance the global model, thus completing one training cycle. Throughout this process, participants' data remain on their devices, and only encrypted model updates are exchanged with the server, ensuring data confidentiality. Supervise or not? The efficacy of DL models hinges on the quality and quantity of training data.
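To make the federated averaging cycle described above concrete before turning to the question of supervision, the following is a minimal Python sketch of a few communication rounds with two simulated clients and a toy linear model; the client datasets, the model, and the fedavg helper are illustrative assumptions rather than the API of any particular FL framework, and the encryption of exchanged updates is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Client-side step: refine the global model on local data only."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the local MSE loss
        w -= lr * grad
    return w

def fedavg(client_weights, client_sizes):
    """Server-side step: sample-size-weighted average of the client models."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two illustrative clients whose raw data never leave their own "sites".
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
    clients.append((X, y))

global_w = np.zeros(3)
for _ in range(10):                              # ten communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = fedavg(updates, [len(y) for _, y in clients])

print("aggregated global weights:", np.round(global_w, 2))
```

In a real deployment the clients would train full DL models and exchange encrypted parameter updates with a coordinating server rather than averaging them in a single process, but the round structure is the same.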
In precision oncology, four primary learning paradigms—supervised, weakly supervised, self-supervised, and unsupervised learning—have emerged as pivotal techniques. While each method possesses distinct characteristics, they are interconnected and can be complementary in certain applications. Supervised learning Supervised learning (SL) involves training models on labeled datasets, where each data point is associated with a corresponding target variable. The model learns to map input features to output labels, minimizing the discrepancy between predicted and actual values. SL excels in predictive accuracy, making it widely used for tasks such as classifying tumor subtypes and predicting patient outcomes . However, SL requires substantial labeled datasets, which can be challenging to obtain in healthcare, and it assumes a specific data distribution, potentially limiting generalization ability to unseen data. Weakly supervised learning Weakly supervised learning (WSL) addresses the scarcity of labeled data by leveraging partially labeled or noisy datasets. A prominent technique is multiple instance learning, which operates on bags of instances where only the bag is labeled . The model learns to identify patterns within instances to make predictions for the entire bag. WSL can also use labeling functions to create training sets. However, weakly supervised labels are often less accurate than those from human experts, necessitating careful consideration. Self-supervised learning Self-supervised learning (SSL) allows the model to generate its own labels from the data. Users create a pretext task related to the primary task of interest. By solving this pretext task, pseudo-labels are produced based on specific input attributes, enabling the model to learn representations transferable to the primary task, even with limited labeled data . SSL is especially useful when labeled data are scarce or costly to acquire; however, the design of the pretext task is crucial for ensuring the relevance of the learned representations. Unsupervised learning Unsupervised learning (USL) operates on unlabeled data, identifying patterns and structures without explicit supervision. Statistical methods are employed to uncover underlying relationships. Techniques such as clustering analysis, dimensionality reduction, and association rule learning exemplify USL . USL offers the advantage of discovering novel knowledge without relying on labeled data. However, its results can be non-unique and less interpretable, making it less suitable for applications where accuracy is paramount. The choice of learning paradigm in precision oncology depends on the specific application, data availability and quality, and the desired level of accuracy and interpretability. SL is ideal for tasks with ample labeled data and clear target variables, while WSL is beneficial when data are limited or noisy. SSL is effective for pretraining models on large unlabeled datasets, and USL is valuable for exploratory data analysis. By understanding the strengths and limitations of each paradigm, researchers can select the most suitable approach for their specific research questions. Integration techniques for multimodal data with DL Multimodal modeling addresses the complexities of unstructured multimodal data, such as images, text, and omics data. It faces two primary challenges: first, effectively representing data from each modality; and second, integrating data from diverse modalities. 
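Before moving on to multimodal integration, the multiple instance learning idea mentioned under weakly supervised learning can be made concrete with a short PyTorch sketch: only the bag (for example, a whole slide) carries a label, its instances (patches) are scored and pooled with attention, and the pooled embedding is classified; every dimension and layer size here is an arbitrary illustrative choice rather than a published architecture.

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Bag-level classifier: score instances, pool them by attention, classify the bag."""
    def __init__(self, in_dim=64, hidden=32):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )
        self.classifier = nn.Linear(in_dim, 1)

    def forward(self, bag):                      # bag: (n_instances, in_dim)
        scores = self.attention(bag)             # one score per instance
        weights = torch.softmax(scores, dim=0)   # attention weights over instances
        pooled = (weights * bag).sum(dim=0)      # weighted bag embedding
        return torch.sigmoid(self.classifier(pooled)), weights

model = AttentionMIL()
bag = torch.randn(200, 64)                       # e.g. 200 patch embeddings from one slide
prob, attn = model(bag)
print(prob.item(), attn.shape)                   # bag-level probability, per-instance weights
```

The attention weights double as a crude interpretability signal, indicating which instances drove the bag-level prediction.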
This section provides an overview of current technical approaches to these challenges. Multimodal representation Multimodal representation involves extracting semantic information from diverse data forms into real-valued vectors. Medical data encompass structured, semi-structured, and unstructured formats. Effective data representation methods are vital for revealing relational insights, thereby facilitating accurate computer-aided diagnosis and prognosis. This representation can be categorized into unimodal and cross-modal approaches, as detailed below. Unimodal representation Unimodal representation, or marginal representation, focuses on distilling key information from a single modality through various encoding techniques. For textual data, exemplified by EHRs, word embeddings transform phrases into dense vectors that capture their semantic meanings, ensuring similar phrases are closely clustered in a low-dimensional feature space. For imaging modalities like CT, MRI, and WSI, data are converted into 2D or 3D pixel matrices, suitable for convolutional neural networks (CNNs). In ultrasound, endoscopic, and other video data, individual frames are segmented and encoded similarly to static images. For genomic and transcriptomic data, one-hot encoding is commonly employed. Cross-modal representation Cross-modal representation, or joint representation, integrates features from multiple modalities, capturing complementary, redundant, or cooperative information. Canonical correlation analysis (CCA) is a traditional method for cross-modal information representation, mapping multimodal data—such as images and text—into a shared latent space by identifying linear combinations of multidimensional variables . While CCA enhances multimodal model performance, its linear assumptions and sensitivity to noise constrain its effectiveness. Recent advancements focus on multimodal interaction mining and model efficiency. For example, Zhen et al . developed a spectral hashing coding strategy for rapid cross-modal retrieval by employing spectral analysis of various modalities . Cheerla et al . implemented an attention network to extract cross-modal features from gene expression data, pathological images, and clinical information, projecting them into a joint feature space for representation learning . Zhao et al . proposed a hierarchical attention encoder-reinforced decoder network to generate natural language answers in open-ended video question answering . Despite these advancements, current research inadequately addresses interference and adverse effects from modality-specific information irrelevant to target tasks. Additionally, existing encoding methods, often derived from natural language or image processing, may be overly simplistic for the specialized context of medical data, leading to complex, redundant structures and low parameter efficiency in developing multimodal learning frameworks. Multimodal fusion Multimodal feature fusion strategies can be broadly categorized into three types: data-level fusion, model-based fusion, and decision-level fusion. When classified by the stage at which fusion occurs, these correspond to early fusion, intermediate fusion, and late fusion, respectively . Early fusion Early fusion is the most straightforward approach for integrating multimodal data, wherein features from diverse modalities are concatenated and directly input into a DL model . This technique treats the resulting vector as a unimodal input, preserving the original model architecture. 
Joint representations of multimodal inputs are learned directly, bypassing explicit marginal representations. Early fusion can be further divided into two categories: direct modeling and AutoEncoder (AE) methods. In direct modeling, multimodal inputs are processed similarly to unimodal inputs. For example, Misra et al. developed a multimodal fusion framework to classify benign and malignant breast lesions by processing brightness-mode (B-mode) and strain-elastography-mode (SE-mode) ultrasound images through separate CNNs for feature extraction, which were subsequently ensembled using another CNN model. AE methods initially learn lower-dimensional joint representations, which are then employed for further supervised or unsupervised modeling. For instance, Allesøe et al. utilized a Variational AutoEncoder (VAE) model to integrate multi-omics data, identifying drug–omics associations across multimodal datasets for type 2 diabetes patients. While early fusion effectively captures low-level cross-modal relationships without requiring marginal representation extraction, it may struggle to discern high-level relationships and is sensitive to differences in the sampling rates of various modalities. Intermediate fusion Intermediate fusion involves initially learning each modality independently before integrating them within an MDL framework. This method focuses on generating marginal representations prior to fusion, allowing for greater flexibility. Intermediate fusion can be categorized into homogeneous fusion and heterogeneous fusion based on the networks used for marginal representation. In homogeneous fusion, identical neural networks are employed to learn marginal representations across modalities, making it suitable for homogeneous modalities. Heterogeneous fusion is applied when modalities differ significantly, necessitating distinct neural networks for representation learning. Furthermore, both fusion types can be divided into marginal and joint categories based on representation handling. Marginal intermediate fusion concatenates learned marginal representations as inputs to fusion layers, while joint intermediate fusion encodes more abstract features from multiple modalities prior to integration. In marginal homogeneous intermediate fusion, identical neural networks learn marginal representations, which are later combined for decision-making. For example, Gu et al. employed a 3D U-Net to encode positron emission tomography (PET) and CT images as separate channels, integrating them during the decoding phase to generate pulmonary perfusion images. Marginal heterogeneous intermediate fusion uses distinct network types for different modalities. The Pathomic Fusion model, for instance, extracted histological features via CNNs or a graph convolutional neural network, while genomic features were captured using a feed-forward network. These multimodal features were then fused through a gating-based attention mechanism combined with the Kronecker product function. Joint homogeneous fusion begins with concatenating marginal representations, followed by joint representation learning from this composite. For example, Yuan et al. constructed two identical convolutional–long short-term memory (Conv-LSTM) encoders to extract features from PET and CT, respectively, and these features were concatenated and transformed by an LSTM module for each sample.
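As a minimal sketch of the intermediate fusion pattern just described, the PyTorch snippet below uses two modality-specific encoders, a small convolutional branch for an image and a feed-forward branch for an omics vector, whose marginal representations are concatenated and passed to a shared fusion head; the architecture and all dimensions are illustrative placeholders, not a reconstruction of any of the cited models.

```python
import torch
import torch.nn as nn

class IntermediateFusionNet(nn.Module):
    """Marginal heterogeneous intermediate fusion: image branch + omics branch."""
    def __init__(self, n_genes=1000, n_classes=2):
        super().__init__()
        self.image_encoder = nn.Sequential(      # toy CNN branch for a 1x64x64 image
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(8 * 4 * 4, 32)
        )
        self.omics_encoder = nn.Sequential(      # feed-forward branch for expression data
            nn.Linear(n_genes, 64), nn.ReLU(), nn.Linear(64, 32)
        )
        self.fusion_head = nn.Sequential(        # joint layers after concatenation
            nn.Linear(32 + 32, 32), nn.ReLU(), nn.Linear(32, n_classes)
        )

    def forward(self, image, omics):
        z_img = self.image_encoder(image)        # marginal representation of the image
        z_omx = self.omics_encoder(omics)        # marginal representation of the omics vector
        return self.fusion_head(torch.cat([z_img, z_omx], dim=1))

model = IntermediateFusionNet()
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 1000))
print(logits.shape)                              # (4, 2): one prediction per fused sample
```

Swapping the plain concatenation for a gating mechanism or a Kronecker-product interaction, as in some of the models discussed above, would change only the fusion head.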
Joint heterogeneous intermediate fusion employs different networks for each modality, subsequently deriving joint representations from concatenated marginal representations. Hu et al. illustrated this by using a ResNet-Trans network for CT features and a graph to model relationships between clinical and imaging features, learning joint representations with a graph neural network for lymph node metastasis prediction. In summary, intermediate fusion strategies offer significant flexibility in determining optimal fusion depth and sequence, potentially revealing more accurate relationships between modalities. However, implementing intermediate fusion requires considerable computational expertise and resources. Late fusion Late fusion, inspired by ensemble classification, consolidates predictions from individual sub-models trained on distinct data modalities to make a final decision. This can be accomplished through various methods, including voting, averaging, or meta-learning. For example, Saikia et al. compared majority voting and weighted voting approaches for predicting human papillomavirus status using PET-CT images. Sedghi et al. improved prostate cancer detection by averaging outputs from temporal enhanced ultrasound and MRI-based U-Nets. Qiu et al. introduced an attention-based late fusion strategy to integrate complementary information from WSIs and CNVs for lung cancer classification. While late fusion facilitates comprehensive marginal representation learning from unimodal models, the reduced interaction between modalities may lead to irrelevant multimodal features and complicate model interpretation. Each fusion strategy has unique advantages and limitations. The optimal approach depends on various factors, including data heterogeneity, researcher intuition, biological implications, the presence of missing values or noise, experimental evidence, computational resources, or a combination of these elements. MDL applications in precision oncology The integration of AI at various stages can correlate clinical laboratory tests and examination data with oncological phenotypes. The adaptability of clinical tasks involving multimodal data varies across different contexts. This section delves into cutting-edge MDL applications in cancer management, emphasizing image analysis, cancer detection, diagnosis, prognosis, and treatment. Image registration and segmentation Image processing represents a core application of ML in oncology, with key tasks including multimodal image registration and segmentation. The integration of PET and CT images for lesion identification and tumor volume delineation is prevalent in clinical practice, yet it remains challenging. Gu et al. utilized a 3D U-Net architecture, leveraging PET and CT images as dual channels within a marginal homogeneous intermediate fusion strategy, significantly enhancing the accuracy of pulmonary perfusion volume quantification compared to methods relying solely on metabolic data. The complexity of understanding spatial correspondences increases when input modalities exhibit substantial discrepancies in appearance. To mitigate this, Song et al. proposed a contrastive learning–based cross-modal attention block that correlates features extracted from transrectal ultrasound (TRUS) and MRI. These correlations were integrated into a deep registrator for modality fusion and rigid image registration. Additionally, Haque et al.
correlated hematoxylin and eosin–stained WSIs with mass spectrometry imaging (MSI) data to facilitate modality translation, aiming to predict prostate cancer directly from WSIs. Segmentation is another classical challenge in image analysis, critical for accurate diagnosis, therapeutic selection, and efficacy evaluation. However, segmenting soft tissue tumors, particularly brain tumors, poses significant challenges due to their complex physiological structures. Zhao et al. introduced an innovative glioma tumor segmentation method that integrates fully convolutional networks (FCNs) and recurrent neural networks (RNNs) within a unified framework, achieving segmentation results characterized by both appearance and spatial consistency. They trained three segmentation models using 2D MRI patches from axial, coronal, and sagittal views, merging results through a voting-based fusion strategy. Beyond tissue or organ segmentation, cell segmentation is fundamental for various downstream biomedical applications, including tumor microenvironment exploration and spatial transcriptomics analysis. In a challenge aimed at advancing universal cell segmentation algorithms across diverse platforms and modalities, Lee et al. employed SegFormer and a multiscale attention network as the encoder and decoder, achieving superior performance in both cell recognition and differentiation across multiple modalities. Relevant research mentioned in this section is summarized in for further inspection. Cancer detection, diagnosis, and metastasis prediction Early detection is paramount for timely treatment and favorable prognosis. Currently, MDL methods offer clinicians unprecedented opportunities to comprehensively assess patients' tumor status. For instance, Li et al. proposed a VAE-based framework that integrates single-cell multimodal data, utilizing SNV features alongside gene expression characteristics to classify tumor cells. Liu et al. introduced AutoCancer, which integrates feature selection, neural architecture search, and hyperparameter optimization, demonstrating strong performance in cancer detection using heterogeneous liquid biopsy data. Precision in tumor diagnosis is a vital area for medical AI applications. Gao et al. employed CNN and RNN as encoders for multiphase contrast-enhanced CT (CECT) and corresponding clinical data. These feature sets were concatenated to differentiate malignant hepatic tumors. Park et al. found that incorporating metadata, such as the maximum standardized uptake value (SUVmax) and lesion size, enhanced the performance of unimodal CT and PET models. Khan et al. combined CT features with pathological features using fully connected layers to classify liver cancer variants. Wu et al. developed a clinically aligned platform for grading ductal carcinoma in situ, treating each angle of ultrasound images as a separate modality and deriving final predictions through max pooling across all angles. Wang et al. constructed multiple models for ovarian lesion classification with ultrasound, menopausal status, and serum data. Their trimodal model achieved superior predictive accuracy compared to both dual-modality and single-modal approaches. Similarly, OvcaFinder was created for ovarian cancer identification, integrating ultrasound images, radiological scores, and clinical variables. Du et al. aimed to enhance real-time gastric neoplasm diagnosis by constructing and comparing five models based on multimodal endoscopy data.
Their results indicated that the multimodal model using the intermediate fusion strategy yielded the best performance. Carrillo-Perez et al. presented a late fusion model combining histology and RNA-Seq data for lung cancer subtyping, demonstrating that this integrative classification approach outperformed reliance on unimodal data. Qiu et al. integrated pathology and genomics data for cancer classification. Their weakly supervised design and hierarchical fusion strategy maximized the utility of WSI labels and facilitated efficient multimodal interactions. Wang et al. employed a late fusion approach to integrate clinical and dermoscopy images for malignant melanoma detection. Another study combined skin lesion images with patient clinical variables, constructing a multiclass classification model. Nodal involvement and distant metastasis are critical for definitive diagnosis, therapeutic decision-making, and prognosis in cancer patients. Hu et al. integrated CT and clinical features using a ResNet-Trans and graph neural network (GNN)–based framework, showcasing promise in predicting lymph node metastasis (LNM) in non–small cell lung cancer (NSCLC) patients. Zhong et al. developed a PET-CT-based cross-modal biomarker to predict occult nodal metastasis in early-stage NSCLC patients, indicating the superiority of their multimodal model over single-modal approaches. Overall, tumor detection, diagnosis, and metastasis prediction involve a diverse array of tumor data modalities, encompassing both the fusion of similar modalities and the integration of highly heterogeneous modalities. CNNs are commonly employed in diagnostic models, where supervised learning techniques prevail. Intermediate fusion strategies are frequently utilized, with comparative studies indicating that intermediate fusion often surpasses early and late fusion in efficacy. Moreover, the interpretability of features in ML models remains a crucial factor influencing their potential for clinical translation. Prognosis prediction The ability to predict recurrence and survival time in cancer patients is crucial for selecting and optimizing treatment regimens, particularly in advanced-stage tumors. Enhancing prognosis prediction through the integration of multiple early tumor indicators could significantly improve the accuracy of clinical interventions, leading to better patient outcomes and reduced waste of medical resources. Recently, MDL has garnered significant attention in tumor prognosis prediction. For instance, Li et al. developed a two-stage framework that decouples multimodal feature representation from the fusion process, demonstrating advantages in predicting the postoperative efficacy of cytoreductive surgery for CRC. Miao et al. integrated radiomic features with clinical information, revealing relationships between body composition changes, breast cancer metastasis, and survival. In another study, Fu et al. introduced a heterogeneous graph-based MDL method that encodes both the spatial phenotypes from imaging mass cytometry (IMC) and clinical variables, achieving remarkable performance in prognosis prediction across two public datasets. Malnutrition is also a critical factor in cancer prognosis; Huang et al. combined non-enhanced CT features with clinical predictors to develop models for assessing nutritional status in gastric cancer, thereby enhancing preoperative survival risk prediction. Huang et al.
constructed an ensemble model based on EfficientNet-B4, utilizing both PET and CT data to predict progression in lung malignancies and overall survival (OS). Their findings indicated that this dual-modality model outperformed the PET-only model in accuracy and sensitivity, although no significant differences were observed compared to the CT-only model. FL presents a promising solution to the challenges posed by small medical datasets and stringent privacy concerns. For instance, FedSurv is an asynchronous FL framework that employs a combination of PET and clinical features to predict survival time for NSCLC patients. In certain cancers, such as lymphoma, predicting interim outcomes is vital for adjusting therapeutic regimens and improving quality of life. Cheng et al. proposed a multimodal approach based on PET-CT that employs a contrastive hybrid learning strategy to identify primary treatment failure (PTF) in diffuse large B-cell lymphoma (DLBCL), providing a noninvasive tool for assessing PTF risk. Distant recurrence significantly contributes to poor prognosis in cancer patients, yet predicting this risk remains challenging despite insights into correlated factors. To this end, Volinsky-Fremond et al. designed a multimodal prognostic model that combines WSIs and tumor stage information to predict recurrence risk and assess the benefits of adjuvant chemotherapy in endometrial cancer, outperforming existing state-of-the-art (SOTA) methods. Their success can be attributed to the utilization of a Vision Transformer (ViT) for representation learning of WSIs, alongside a three-arm architecture that integrates prognostic information from WSIs, molecular phenotypes predicted directly from WSIs, and tumor stage. Combining MRI or CT with WSI enables a comprehensive analysis of patient prognosis from both macroscopic and microscopic perspectives. For instance, Li et al. presented a weakly supervised framework that employs a hierarchical radiology-guided co-attention mechanism to capture interactions between histopathological characteristics and radiological features, facilitating the identification of prognostic biomarkers with multimodal interpretability. Chen et al. calculated the Kronecker product of unimodal feature representations to encode pairwise feature communications across modalities, controlling each modality's contribution through a gating-based attention mechanism, thereby yielding an end-to-end framework that combines histological and genomic data for survival outcome prediction. In summary, patient survival is influenced by numerous factors, and the intricate interplay among these variables poses significant challenges for prognostic accuracy, even with extensive clinical data on tumors. MDL presents a novel method for integrating diverse indicators related to tumor prognosis, offering the potential to discover new prognostic biomarkers from cross-scale data. Recent models employing attention mechanisms have enhanced the interpretability of multimodal features, driving advancements in AI applications for clinical use. Treatment decision and response monitoring Neoadjuvant chemotherapy, targeted therapy, and immunotherapy are increasingly integral to cancer management. The modern demand for more effective treatments underscores the need for accurate, personalized tests over one-size-fits-all approaches.
For example, programmed death ligand-1 (PD-L1) expression status, evaluated via immunohistochemistry (IHC), serves as a clinical decision-making tool for immune checkpoint blockade (ICB) therapy. However, many treatments lack specific clinical indicators, highlighting an urgent need to identify biomarkers that can predict treatment benefits and support personalized therapy. This review examines recent advancements in MDL-based treatment decision-making and response monitoring . To establish appropriate treatment paradigms and prognostic assessments for intramedullary gliomas, Ma et al . employed a Swin Transformer to segment lesions from multimodal MRI data. They combined extracted radiomic features with clinical baseline data to predict tumor grade and molecular phenotype . Tumor mutational burden (TMB) has emerged as a promising indicator of the efficacy and prognosis of ICB therapy in tumors. Huang et al . developed a surrogate method for predicting TMB from WSIs in CRC by training a multimodal model that incorporates WSIs alongside relevant clinical data . Esteva et al . created an integration framework for histopathology and clinical data to predict clinically relevant outcomes in prostate cancer patients, demonstrating enhanced prognostic accuracy compared to existing tools and providing evidence for treatment personalization . Zhou et al . introduced a cascade multimodal synchronous generation network for MRI-guided radiation therapy, optimizing time and costs by generating intermediate multimodal sMRI and sCT data, incorporating attention modules for multilevel feature fusion . Estimating the efficacy of therapeutic approaches is equally crucial. While some cancer patients may experience significant improvement during early targeted therapy, resistance can develop, rendering treatment ineffective and exposing patients to adverse side effects. Given the variability and dynamic nature of treatment responses, current research focuses on developing effective prediction methods, particularly noninvasive approaches. Pathologic complete response (pCR) is a recognized metric for evaluating the efficacy of neoadjuvant chemotherapy and serves as an indicator of disease-free survival and OS. Joo et al . developed a fusion model integrating clinical parameters and pretreatment MRI data to predict pCR in breast cancer, outperforming unimodal models . Zhou et al . combined PET-CT, clinical variables, and IHC scores within a multimodal framework to predict the efficacy of bevacizumab in advanced CRC patients, utilizing a 2.5D architecture for feature extraction . Gu et al . applied a DenseNet-121-based multimodal framework to integrate ultrasound and clinicopathological data for stratifying responses to neoadjuvant therapy in breast cancer . Rabinovici-Cohen et al . predicted post-treatment recurrence in breast cancer patients by combining MRIs, IHC markers, and clinical data within a heterogeneous multimodal framework, demonstrating the advantages of multimodal fusion . Accurate prediction of treatment outcomes before, during, and after therapy is essential for developing optimal individualized strategies, ultimately enhancing progression-free survival and OS for cancer patients. Current MDL methodologies exhibit remarkable advantages in integrating multisource data, often surpassing single-modality models in accuracy, positioning them favorably to advance clinical decision-making and efficacy evaluation in oncology.
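Finally, returning to the fusion taxonomy above, the decision-level (late) fusion rules that several of the surveyed models rely on, such as majority voting and weighted averaging of per-modality predictions, reduce to a few lines; the class probabilities and weights below are made-up numbers purely for illustration.

```python
import numpy as np

# Hypothetical class probabilities from three independently trained unimodal models
p_ct   = np.array([0.72, 0.28])   # CT-based sub-model
p_pet  = np.array([0.55, 0.45])   # PET-based sub-model
p_clin = np.array([0.60, 0.40])   # clinical-variable sub-model

# Majority voting over the per-model hard decisions
votes = [p.argmax() for p in (p_ct, p_pet, p_clin)]
majority = max(set(votes), key=votes.count)

# Weighted averaging, e.g. with weights reflecting each sub-model's validation performance
weights = np.array([0.5, 0.3, 0.2])
p_fused = weights @ np.stack([p_ct, p_pet, p_clin])

print("majority vote:", majority)
print("weighted-average probabilities:", p_fused, "->", int(p_fused.argmax()))
```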
The 2020 report from the International Agency for Research on Cancer identified ~18.1 million new cancer cases and 9.6 million cancer-related deaths across 185 countries, both figures rising alarmingly . In the USA, the economic burden of cancer was estimated at ~$124.5 billion in 2010, with projections rising to $157.8 billion by 2020 . The emergence of novel cancer therapies, such as targeted therapies and immunotherapies, underscores the potential for curative outcomes through early detection and effective treatment . Consequently, early diagnosis, precise tumor classification, and personalized treatment are critical for improving survival rates, enhancing quality of life for cancer patients, and alleviating the societal economic burden. Over the past two decades, advances in computing technology has propelled deep learning (DL) to the forefront of precision oncology. For instance, DL models used in low-dose computed tomography (CT) lung cancer screening have successfully reduced the pool of candidates while maintaining high inclusion rates and positive predictive values . Natural language processing (NLP) techniques are increasingly applied to extract valuable insights from electronic health records (EHRs), aiding clinicians in decision-making . DL has demonstrated exceptional performance in tasks such as biological sequence classification and cancer subtyping , with artificial intelligence (AI) systems even surpassing human experts in certain diagnostic areas . Moreover, DL has shown promise in predicting cancer prognosis. For example, a DL-based model leveraging pathological biomarkers was able to stratify colorectal cancer (CRC) patients into distinct prognostic groups, minimizing overtreatment in low-risk patients and identifying those who would benefit from more aggressive therapies . DL also holds potential in personalized treatment planning and predicting therapeutic responses . While these unimodal DL applications have achieved significant success, the rapid advancement of computing and biomedical technologies, along with the explosive growth of clinical data, highlights the urgent need for integrated multimodal data analysis to fully harness clinical information and gain deeper insights into cancer mechanisms. Vast amounts of multimodal data are generated throughout the clinical process of cancer care. Multimodal analysis techniques leverage the unique characteristics of each modality to develop models that offer a more comprehensive understanding and reasoning. This approach closely aligns with real-world clinical practices, particularly for complex diseases. Common multimodal fusion models include the integration of various medical imaging types, such as whole slide image (WSI) and CT , as well as diverse magnetic resonance imaging (MRI) sequences , fostering opportunities for innovative fusion strategies. Additionally, the amalgamation of multi-omics data and the fusion of molecular omics with imaging data further exemplifies this trend . Cross-modal fusion that encompasses imaging, molecular, and clinical data represents advanced stages of multimodal analysis . Numerous studies have shown that multimodal methods outperforms single-modality approaches in specific tasks . Nevertheless, designing effective fusion methods presents several challenges, including the high-dimensional nature of multimodal data, issues with data incompleteness and modality imbalance, and the need for real-time processing. 
Moreover, uncovering the biological significance of multimodal features remains a significant hurdle. Existing reviews have highlighted key DL applications in cancer diagnosis, prognosis, and treatment selection , with many focusing on specific data types or cancer categories , exploring the taxonomy of MDL models for biomedical data integration , or discussing DL-based multimodal feature fusion for identifying cancer biomarkers . However, these reviews typically focus on single-modal data or are limited to multi-omics data, without offering a comprehensive overview of cross-scale multimodal data fusion. Consequently, a thorough review of MDL methods across the entire precision oncology continuum is still lacking. Given the rapid expansion of medical multimodal data and the swift evolution of MDL technologies, this paper aims to survey various modalities involved in precision oncology and the cutting-edge MDL models employed for data integration, thereby establishing a paradigm for the effective utilization of big data in cancer management . The paper is structured as follows: it begins with a discussion of literature search strategies, followed by an overview of publicly available multimodal oncology datasets and an introduction to key DL technologies. Next, we examine modality representation and fusion techniques, survey MDL applications in precision oncology, and explore the opportunities and challenges in integrating oncology big data. The paper concludes with a forward-looking perspective on future developments. A systematic search was conducted in September 2024 across PubMed, MEDLINE, and Web of Science Core Collection for peer-reviewed articles published in English, with no date restrictions. Search terms included medical topics (e.g. cancer, tumors, lesions), methodologies (e.g. deep learning, artificial intelligence, convolutional neural networks, machine learning), and data types (e.g. multimodal, multi-omics, data fusion). Two independent researchers performed the search to ensure accuracy; disagreements were resolved by a third investigator with domain expertise. Initial screening was based on titles and abstracts, followed by full-text review. Only original research involving human subjects and with full-text availability was included, excluding reviews, posters, and comments. A total of 651 articles met the inclusion criteria and were analyzed, with selected studies discussed to provide insights into MDL applications in oncology. Cancer is a highly complex, heterogeneous biological process, requiring diverse data sources for accurate diagnosis, treatment, and prognosis. Commonly used data types—either individually or in combination—include radiomics, pathomics, acoustic and endoscopic imaging, genomics, clinical data, dermoscopy, multimodal data, and emerging real-world data . Here, we summarized credible publicly available multimodal oncology resources and representative MDL studies utilizing these datasets for the readers’ convenience. Notably, The Cancer Genome Atlas (TCGA) ( https://portal.gdc.cancer.gov/ ) and The Cancer Imaging Archive (TCIA) ( https://www.cancerimagingarchive.net/ ) are extensive databases encompassing thousands of samples across various cancer types and medical centers. They provide rich multimodal data and analytical tools essential for cancer research. In addition to large-scale public databases, several specialized multimodal datasets are available. 
For instance, Lung-CLiP (Lung Cancer Likelihood in Plasma) provided clinical, demographic, and genome-wide single-nucleotide variation (SNV) and copy number variation (CNV) data for lung cancer cases, as well as codes for reproduction of corresponding results . DNA Evaluation of Fragments for Early Interception (DELFI) offers cell-free DNA (cfDNA) fragmentation profiles and clinical data for 296 lung cancer patients . Adaptive Support Vector Machine (ASVM) integrates cfDNA fragmentome, CNVs, and clinical data for 423 patients across eight cancer types . The HAM10000 dataset consists of 10 015 multicenter dermatoscopic images with corresponding clinical data aimed at improving melanoma detection . These resources are invaluable for developing and evaluating MDL algorithms for patient profiling. DL algorithms leverage modular structures to perform complex functions. For prevalent DL architectures that are applicable across diverse data types, please see . The table of abbreviations and their full forms can be found in . Due to privacy and other reasons, obtaining medical data often faces distinct kinds of limitations. Missing modalities and labels are common in multimodal datasets, which contrasts sharply with DL models’ eagerness for large amounts of labeled data. Fortunately, several techniques have shown promise to reduce the reliance on extensive data labeling while maintaining model performance and data security. Transfer learning Transfer learning (TL) has emerged as a powerful tool in the field of DL-based medical data analysis . By delivering knowledge from one domain to another, TL facilitates the resolution of analogous tasks. The common attributes found in natural images, such as colors, edges, corners, and textures, can aid in medical image tasks like registration, segmentation, and classification. By maintaining most of the pretrained model weights and just fine-tuning the last layers, both generalized and domain-specific features are learned, thus saving much annotated data, time, and computational resources. Federated learning Federated learning (FL) is an innovative approach to safeguarding the privacy and security of medical data. It allows multiple participants (referred to as clients) to collaboratively train a global model without sharing their local data . In this framework, a central server coordinates multiple training rounds to produce the final global model. At the beginning of each round, the server distributes the current global model to all clients. Each client then trains the model on their local data, updates it, and returns the modified model to the server. The server aggregates these updates to enhance the global model, thus completing one training cycle. Throughout this process, participants’ data remain on their devices, and only encrypted model updates are exchanged with the server, ensuring data confidentiality. Transfer learning (TL) has emerged as a powerful tool in the field of DL-based medical data analysis . By delivering knowledge from one domain to another, TL facilitates the resolution of analogous tasks. The common attributes found in natural images, such as colors, edges, corners, and textures, can aid in medical image tasks like registration, segmentation, and classification. By maintaining most of the pretrained model weights and just fine-tuning the last layers, both generalized and domain-specific features are learned, thus saving much annotated data, time, and computational resources. 
Federated learning (FL) is an innovative approach to safeguarding the privacy and security of medical data. It allows multiple participants (referred to as clients) to collaboratively train a global model without sharing their local data . In this framework, a central server coordinates multiple training rounds to produce the final global model. At the beginning of each round, the server distributes the current global model to all clients. Each client then trains the model on their local data, updates it, and returns the modified model to the server. The server aggregates these updates to enhance the global model, thus completing one training cycle. Throughout this process, participants’ data remain on their devices, and only encrypted model updates are exchanged with the server, ensuring data confidentiality. The efficacy of DL models hinges on the quality and quantity of training data. In precision oncology, four primary learning paradigms—supervised, weakly supervised, self-supervised, and unsupervised learning—have emerged as pivotal techniques. While each method possesses distinct characteristics, they are interconnected and can be complementary in certain applications. Supervised learning Supervised learning (SL) involves training models on labeled datasets, where each data point is associated with a corresponding target variable. The model learns to map input features to output labels, minimizing the discrepancy between predicted and actual values. SL excels in predictive accuracy, making it widely used for tasks such as classifying tumor subtypes and predicting patient outcomes . However, SL requires substantial labeled datasets, which can be challenging to obtain in healthcare, and it assumes a specific data distribution, potentially limiting generalization ability to unseen data. Weakly supervised learning Weakly supervised learning (WSL) addresses the scarcity of labeled data by leveraging partially labeled or noisy datasets. A prominent technique is multiple instance learning, which operates on bags of instances where only the bag is labeled . The model learns to identify patterns within instances to make predictions for the entire bag. WSL can also use labeling functions to create training sets. However, weakly supervised labels are often less accurate than those from human experts, necessitating careful consideration. Self-supervised learning Self-supervised learning (SSL) allows the model to generate its own labels from the data. Users create a pretext task related to the primary task of interest. By solving this pretext task, pseudo-labels are produced based on specific input attributes, enabling the model to learn representations transferable to the primary task, even with limited labeled data . SSL is especially useful when labeled data are scarce or costly to acquire; however, the design of the pretext task is crucial for ensuring the relevance of the learned representations. Unsupervised learning Unsupervised learning (USL) operates on unlabeled data, identifying patterns and structures without explicit supervision. Statistical methods are employed to uncover underlying relationships. Techniques such as clustering analysis, dimensionality reduction, and association rule learning exemplify USL . USL offers the advantage of discovering novel knowledge without relying on labeled data. However, its results can be non-unique and less interpretable, making it less suitable for applications where accuracy is paramount. 
The choice of learning paradigm in precision oncology depends on the specific application, data availability and quality, and the desired level of accuracy and interpretability. SL is ideal for tasks with ample labeled data and clear target variables, while WSL is beneficial when data are limited or noisy. SSL is effective for pretraining models on large unlabeled datasets, and USL is valuable for exploratory data analysis. By understanding the strengths and limitations of each paradigm, researchers can select the most suitable approach for their specific research questions. Multimodal modeling addresses the complexities of unstructured multimodal data, such as images, text, and omics data.
It faces two primary challenges: first, effectively representing data from each modality; and second, integrating data from diverse modalities. This section provides an overview of current technical approaches to these challenges. Multimodal representation Multimodal representation involves extracting semantic information from diverse data forms into real-valued vectors. Medical data encompass structured, semi-structured, and unstructured formats. Effective data representation methods are vital for revealing relational insights, thereby facilitating accurate computer-aided diagnosis and prognosis. This representation can be categorized into unimodal and cross-modal approaches, as detailed below. Unimodal representation Unimodal representation, or marginal representation, focuses on distilling key information from a single modality through various encoding techniques. For textual data, exemplified by EHRs, word embeddings transform phrases into dense vectors that capture their semantic meanings, ensuring similar phrases are closely clustered in a low-dimensional feature space. For imaging modalities like CT, MRI, and WSI, data are converted into 2D or 3D pixel matrices, suitable for convolutional neural networks (CNNs). In ultrasound, endoscopic, and other video data, individual frames are segmented and encoded similarly to static images. For genomic and transcriptomic data, one-hot encoding is commonly employed. Cross-modal representation Cross-modal representation, or joint representation, integrates features from multiple modalities, capturing complementary, redundant, or cooperative information. Canonical correlation analysis (CCA) is a traditional method for cross-modal information representation, mapping multimodal data—such as images and text—into a shared latent space by identifying linear combinations of multidimensional variables . While CCA enhances multimodal model performance, its linear assumptions and sensitivity to noise constrain its effectiveness. Recent advancements focus on multimodal interaction mining and model efficiency. For example, Zhen et al . developed a spectral hashing coding strategy for rapid cross-modal retrieval by employing spectral analysis of various modalities . Cheerla et al . implemented an attention network to extract cross-modal features from gene expression data, pathological images, and clinical information, projecting them into a joint feature space for representation learning . Zhao et al . proposed a hierarchical attention encoder-reinforced decoder network to generate natural language answers in open-ended video question answering . Despite these advancements, current research inadequately addresses interference and adverse effects from modality-specific information irrelevant to target tasks. Additionally, existing encoding methods, often derived from natural language or image processing, may be overly simplistic for the specialized context of medical data, leading to complex, redundant structures and low parameter efficiency in developing multimodal learning frameworks.
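The classical CCA approach mentioned above can be illustrated with a small sketch using scikit-learn. Two synthetic "modalities" are generated from shared latent factors and projected into a common two-dimensional space; the feature counts, noise levels, and component number are arbitrary choices made only for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 200
shared = rng.normal(size=(n, 2))                 # latent factors common to both modalities
X_img = shared @ rng.normal(size=(2, 30)) + 0.1 * rng.normal(size=(n, 30))  # e.g., imaging features
X_txt = shared @ rng.normal(size=(2, 12)) + 0.1 * rng.normal(size=(n, 12))  # e.g., report/text features

cca = CCA(n_components=2)                        # map both views into a 2-D shared space
Z_img, Z_txt = cca.fit_transform(X_img, X_txt)

# Correlation between paired canonical variates indicates how well the
# two modalities align in the learned joint space.
for k in range(2):
    r = np.corrcoef(Z_img[:, k], Z_txt[:, k])[0, 1]
    print(f"canonical component {k}: correlation = {r:.2f}")
```

The attention-based methods cited above pursue the same goal of aligning modalities in a shared space, but replace these linear projections with learned neural encoders.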
Multimodal feature fusion strategies can be broadly categorized into three types: data-level fusion, model-based fusion, and decision-level fusion. When classified by the stage at which fusion occurs, these correspond to early fusion, intermediate fusion, and late fusion, respectively . Early fusion Early fusion is the most straightforward approach for integrating multimodal data, wherein features from diverse modalities are concatenated and directly input into a DL model . This technique treats the resulting vector as a unimodal input, preserving the original model architecture. Joint representations of multimodal inputs are learned directly, bypassing explicit marginal representations. Early fusion can be further divided into two categories: direct modeling and AutoEncoder (AE) methods. In direct modeling, multimodal inputs are processed similarly to unimodal inputs. For example, Misra et al . developed a multimodal fusion framework to classify benign and malignant breast lesions by processing brightness-mode (B-mode) and strain-elastography-mode (SE-mode) ultrasound images through separate CNNs for feature extraction, which were subsequently ensembled using another CNN model .
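A minimal sketch of the direct-modeling variant of early fusion just described is given below: per-modality feature vectors are concatenated into a single input before any modeling. The synthetic radiomic and clinical features, labels, and the simple classifier are hypothetical stand-ins; in practice the concatenated vector would typically feed a DL model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
imaging_feats = rng.normal(size=(n, 20))     # e.g., radiomic features
clinical_feats = rng.normal(size=(n, 5))     # e.g., age, lab values
labels = (imaging_feats[:, 0] + clinical_feats[:, 0] > 0).astype(int)

# Early fusion: simple feature concatenation before any modeling.
fused = np.concatenate([imaging_feats, clinical_feats], axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("early-fusion accuracy:", clf.score(X_te, y_te))
```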
AE methods initially learn lower-dimensional joint representations, which are then employed for further supervised or unsupervised modeling. For instance, Allesøe et al . utilized a Variational AutoEncoder (VAE) model to integrate multi-omics data, identifying drug–omics associations across multimodal datasets for type 2 diabetes patients . While early fusion effectively captures low-level cross-modal relationships without requiring marginal representation extraction, it may struggle to discern high-level relationships and is sensitive to differences in the sampling rates of various modalities. Intermediate fusion Intermediate fusion involves initially learning each modality independently before integrating them within an MDL framework . This method focuses on generating marginal representations prior to fusion, allowing for greater flexibility. Intermediate fusion can be categorized into homogeneous fusion and heterogeneous fusion based on the networks used for marginal representation . In homogeneous fusion, identical neural networks are employed to learn marginal representations across modalities, making it suitable for homogeneous modalities. Heterogeneous fusion is applied when modalities differ significantly, necessitating distinct neural networks for representation learning. Furthermore, both fusion types can be divided into marginal and joint categories based on representation handling. Marginal intermediate fusion concatenates learned marginal representations as inputs to fusion layers, while joint intermediate fusion encodes more abstract features from multiple modalities prior to integration. In marginal homogeneous intermediate fusion, identical neural networks learn marginal representations, which are later combined for decision-making. For example, Gu et al . employed a 3D U-Net to encode positron emission tomography (PET) and CT images as separate channels, integrating them during the decoding phase to generate pulmonary perfusion images . Marginal heterogeneous intermediate fusion uses distinct network types for different modalities. The Pathomic Fusion model, for instance, extracted histological features via CNNs or a graph convolutional neural network, while genomic features were captured using a feed-forward network. These multimodal features were then fused through a gating-based attention mechanism combined with the Kronecker product function . Joint homogeneous fusion begins with concatenating marginal representations, followed by joint representation learning from this composite. For example, Yuan et al . constructed two identical convolutional–long short-term memory (Conv-LSTM) encoders to extract features from PET and CT, respectively, and these features were concatenated and transformed by an LSTM module for each sample . Joint heterogeneous intermediate fusion employs different networks for each modality, subsequently deriving joint representations from concatenated marginal representations. Hu et al . illustrated this by using a ResNet-Trans network for CT features and a graph to model relationships between clinical and imaging features, learning joint representations with a graph neural network for lymph node metastasis prediction . In summary, intermediate fusion strategies offer significant flexibility in determining optimal fusion depth and sequence, potentially revealing more accurate relationships between modalities. However, implementing intermediate fusion requires considerable computational expertise and resources.
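Under simplified, hypothetical assumptions, the sketch below shows the general shape of a marginal intermediate-fusion network: each modality has its own encoder, the marginal representations are concatenated inside the network, and a shared head produces the prediction. The modality names, dimensions, and layer sizes are invented for illustration and do not correspond to any of the published models discussed above.

```python
import torch
import torch.nn as nn

class IntermediateFusionNet(nn.Module):
    """Marginal intermediate fusion: per-modality encoders produce marginal
    representations, which are concatenated and passed to a shared fusion head."""
    def __init__(self, img_dim=128, omics_dim=500, hidden=64, n_classes=2):
        super().__init__()
        self.img_encoder = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
        self.omics_encoder = nn.Sequential(nn.Linear(omics_dim, hidden), nn.ReLU())
        self.fusion_head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, n_classes)
        )

    def forward(self, img_feats, omics_feats):
        z_img = self.img_encoder(img_feats)        # marginal representation 1
        z_omics = self.omics_encoder(omics_feats)  # marginal representation 2
        fused = torch.cat([z_img, z_omics], dim=1) # fusion happens inside the network
        return self.fusion_head(fused)

# Toy forward pass on a batch of 4 hypothetical patients.
model = IntermediateFusionNet()
logits = model(torch.randn(4, 128), torch.randn(4, 500))
print(logits.shape)   # torch.Size([4, 2])
```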
Late fusion Late fusion, inspired by ensemble classification, consolidates predictions from individual sub-models trained on distinct data modalities to make a final decision . This can be accomplished through various methods, including voting, averaging, or meta-learning. For example, Saikia et al . compared majority voting and weighted voting approaches for predicting human papillomavirus status using PET-CT images . Sedghi et al . improved prostate cancer detection by averaging outputs from temporal enhanced ultrasound and MRI-based U-Nets . Qiu et al . introduced an attention-based late fusion strategy to integrate complementary information from WSIs and CNVs for lung cancer classification . While late fusion facilitates comprehensive marginal representation learning from unimodal models, the reduced interaction between modalities may lead to irrelevant multimodal features and complicate model interpretation. Each fusion strategy has unique advantages and limitations. The optimal approach depends on various factors, including data heterogeneity, researcher intuition, biological implications, the presence of missing values or noise, experimental evidence, computational resources, or a combination of these elements.
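For completeness, the following sketch shows decision-level (late) fusion in its simplest averaging form: two unimodal models are trained separately on synthetic data and their predicted probabilities are combined. The equal weighting is an arbitrary choice; weighted voting or a meta-learner could be substituted, as described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300
ct_feats = rng.normal(size=(n, 15))          # hypothetical CT-derived features
path_feats = rng.normal(size=(n, 25))        # hypothetical pathology-derived features
labels = ((ct_feats[:, 0] + path_feats[:, 0]) > 0).astype(int)
train, test = slice(0, 200), slice(200, n)

# Train one unimodal model per modality.
ct_model = LogisticRegression(max_iter=1000).fit(ct_feats[train], labels[train])
path_model = LogisticRegression(max_iter=1000).fit(path_feats[train], labels[train])

# Late fusion: combine the two models' probabilities at the decision level.
p_ct = ct_model.predict_proba(ct_feats[test])[:, 1]
p_path = path_model.predict_proba(path_feats[test])[:, 1]
p_fused = 0.5 * p_ct + 0.5 * p_path          # simple averaging of decisions
fused_pred = (p_fused >= 0.5).astype(int)
print("late-fusion accuracy:", (fused_pred == labels[test]).mean())
```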
The integration of AI at various stages can correlate clinical laboratory tests and examination data with oncological phenotypes. The adaptability of clinical tasks involving multimodal data varies across different contexts.
This section delves into cutting-edge MDL applications in cancer management, emphasizing image analysis, cancer detection, diagnosis, prognosis, and treatment. Image registration and segmentation Image processing represents a core application of ML in oncology, with key tasks including multimodal image registration and segmentation. The integration of PET and CT images for lesion identification and tumor volume delineation is prevalent in clinical practice, yet it remains challenging. Gu et al . utilized a 3D U-Net architecture, leveraging PET and CT images as dual channels within a marginal homogeneous intermediate fusion strategy, significantly enhancing the accuracy of pulmonary perfusion volume quantification compared to methods relying solely on metabolic data . The complexity of understanding spatial correspondences increases when input modalities exhibit substantial discrepancies in appearance. To mitigate this, Song et al . proposed a contrastive learning–based cross-modal attention block that correlates features extracted from transrectal ultrasound (TRUS) and MRI. These correlations were integrated into a deep registrator for modality fusion and rigid image registration . Additionally, Haque et al . correlated hematoxylin and eosin–stained WSIs with mass spectrometry imaging (MSI) data to facilitate modality translation, aiming to predict prostate cancer directly from WSIs . Segmentation is another classical challenge in image analysis, critical for accurate diagnosis, therapeutic selection, and efficacy evaluation. However, segmenting soft tissue tumors, particularly brain tumors, poses significant challenges due to their complex physiological structures. Zhao et al . introduced an innovative glioma tumor segmentation method that integrates fully convolutional networks (FCNs) and recurrent neural networks (RNNs) within a unified framework, achieving segmentation results characterized by both appearance and spatial consistency. They trained three segmentation models using 2D MRI patches from axial, coronal, and sagittal views, merging results through a voting-based fusion strategy . Beyond tissue or organ segmentation, cell segmentation is fundamental for various downstream biomedical applications, including tumor microenvironment exploration and spatial transcriptomics analysis. In a challenge aimed at advancing universal cell segmentation algorithms across diverse platforms and modalities , Lee et al . employed SegFormer and a multiscale attention network as the encoder and decoder, achieving superior performance in both cell recognition and differentiation across multiple modalities . Relevant research mentioned in this section is summarized in for further inspection. Cancer detection, diagnosis, and metastasis prediction Early detection is paramount for timely treatment and favorable prognosis. Currently, MDL methods offer clinicians unprecedented opportunities to comprehensively assess patients’ tumor status . For instance, Li et al . proposed a VAE-based framework that integrates single-cell multimodal data, utilizing SNV features alongside gene expression characteristics to classify tumor cells . Liu et al . introduced AutoCancer, which integrates feature selection, neural architecture search, and hyperparameter optimization, demonstrating strong performance in cancer detection using heterogeneous liquid biopsy data . Precision in tumor diagnosis is a vital area for medical AI applications. Gao et al .
employed CNN and RNN as encoders for multiphase contrast-enhanced CT (CECT) and corresponding clinical data. These feature sets were concatenated to differentiate malignant hepatic tumors . Park et al . found that incorporating metadata, such as the maximum standardized uptake value (SUVmax) and lesion size, enhanced the performance of unimodal CT and PET models . Khan et al . combined CT features with pathological features using fully connected layers to classify liver cancer variants . Wu et al . developed a clinically aligned platform for grading ductal carcinoma in situ , treating each angle of ultrasound images as a separate modality and deriving final predictions through max pooling across all angles . Wang et al . constructed multiple models for ovarian lesion classification with ultrasound, menopausal status, and serum data. Their trimodal model achieved superior predictive accuracy compared to both dual-modality and single-modal approaches . Similarly, OvcaFinder was created for ovarian cancer identification, integrating ultrasound images, radiological scores, and clinical variables . Du et al . aimed to enhance real-time gastric neoplasm diagnosis by constructing and comparing five models based on multimodal endoscopy data. Their results indicated that the multimodal model using the intermediate fusion strategy yielded the best performance . Carrillo-Perez et al . presented a late fusion model combining histology and RNA-Seq data for lung cancer subtyping, demonstrating that this integrative classification approach outperformed reliance on unimodal data . Qiu et al . integrated pathology and genomics data for cancer classification. Their weakly supervised design and hierarchical fusion strategy maximized the utility of WSI labels and facilitated efficient multimodal interactions . Wang et al . employed a late fusion approach to integrate clinical and dermoscopy images for malignant melanoma detection . Another study combined skin lesion images with patient clinical variables, constructing a multiclass classification model . Nodal involvement and distant metastasis are critical for definitive diagnosis, therapeutic decision-making, and prognosis in cancer patients. Hu et al . integrated CT and clinical features using a ResNet-Trans and graph neural network (GNN)–based framework, showcasing promise in predicting lymph node metastasis (LNM) in non–small cell lung cancer (NSCLC) patients . Zhong et al . developed a PET-CT-based cross-modal biomarker to predict occult nodal metastasis in early-stage NSCLC patients, indicating the superiority of their multimodal model over single-modal approaches . Overall, tumor detection, diagnosis, and metastasis prediction involve a diverse array of tumor data modalities, encompassing both the fusion of similar modalities and the integration of highly heterogeneous modalities. CNNs are commonly employed in diagnostic models, where supervised learning techniques prevail. Intermediate fusion strategies are frequently utilized, with comparative studies indicating that intermediate fusion often surpasses early and late fusion in efficacy. Moreover, the interpretability of features in ML models remains a crucial factor influencing their potential for clinical translation. Prognosis prediction The ability to predict recurrence and survival time in cancer patients is crucial for selecting and optimizing treatment regimens, particularly in advanced-stage tumors.
Enhancing prognosis prediction through the integration of multiple early tumor indicators could significantly improve the accuracy of clinical interventions, leading to better patient outcomes and reduced waste of medical resources. Recently, MDL has garnered significant attention in tumor prognosis prediction . For instance, Li et al . developed a two-stage framework that decouples multimodal feature representation from the fusion process, demonstrating advantages in predicting the postoperative efficacy of cytoreductive surgery for CRC . Miao et al . integrated radiomic features with clinical information, revealing relationships between body composition changes, breast cancer metastasis, and survival . In another study, Fu et al . introduced a heterogeneous graph-based MDL method that encodes both the spatial phenotypes from imaging mass cytometry (IMC) and clinical variables, achieving remarkable performance in prognosis prediction across two public datasets . Malnutrition is also a critical factor in cancer prognosis; Huang et al . combined non-enhanced CT features with clinical predictors to develop models for assessing nutritional status in gastric cancer, thereby enhancing preoperative survival risk prediction . Huang et al . constructed an ensemble model based on EfficientNet-B4, utilizing both PET and CT data to predict progression in lung malignancies and overall survival (OS). Their findings indicated that this dual-modality model outperformed the PET-only model in accuracy and sensitivity, although no significant differences were observed compared to the CT-only model . FL presents a promising solution to the challenges posed by small medical datasets and stringent privacy concerns. For instance, FedSurv is an asynchronous FL framework that employs a combination of PET and clinical features to predict survival time for NSCLC patients . In certain cancers, such as lymphoma, predicting interim outcomes is vital for adjusting therapeutic regimens and improving quality of life. Cheng et al . proposed a multimodal approach based on PET-CT that employs a contrastive hybrid learning strategy to identify primary treatment failure (PTF) in diffuse large B-cell lymphoma (DLBCL), providing a noninvasive tool for assessing PTF risk . Distant recurrence significantly contributes to poor prognosis in cancer patients, yet predicting this risk remains challenging despite insights into correlated factors. To this end, Volinsky-Fremond et al . designed a multimodal prognostic model that combines WSIs and tumor stage information to predict recurrence risk and assess the benefits of adjuvant chemotherapy in endometrial cancer, outperforming existing state-of-the-art (SOTA) methods. Their success can be attributed to the utilization of Vision Transformer (ViT) for representative learning of WSIs, alongside a three-arm architecture that integrates prognostic information from WSIs, molecular phenotypes predicted directly from WSIs, and tumor stage . Combining MRI or CT with WSI enables a comprehensive analysis of patient prognosis from both macroscopic and microscopic perspectives. For instance, Li et al . presented a weakly supervised framework that employs a hierarchical radiology-guided co-attention mechanism to capture interactions between histopathological characteristics and radiological features, facilitating the identification of prognostic biomarkers with multimodal interpretability . Chen et al . 
designed a multimodal prognostic model that combines WSIs and tumor stage information to predict recurrence risk and assess the benefits of adjuvant chemotherapy in endometrial cancer, outperforming existing state-of-the-art (SOTA) methods. Their success can be attributed to the utilization of Vision Transformer (ViT) for representation learning of WSIs, alongside a three-arm architecture that integrates prognostic information from WSIs, molecular phenotypes predicted directly from WSIs, and tumor stage . Combining MRI or CT with WSI enables a comprehensive analysis of patient prognosis from both macroscopic and microscopic perspectives. For instance, Li et al . presented a weakly supervised framework that employs a hierarchical radiology-guided co-attention mechanism to capture interactions between histopathological characteristics and radiological features, facilitating the identification of prognostic biomarkers with multimodal interpretability . Chen et al . calculated the Kronecker product of unimodal feature representations to encode pairwise feature communications across modalities, controlling each modality’s contribution through a gating-based attention mechanism, thereby yielding an end-to-end framework that combines histological and genomic data for survival outcome prediction . In summary, patient survival is influenced by numerous factors, and the intricate interplay among these variables poses significant challenges for prognostic accuracy, even with extensive clinical data on tumors. MDL presents a novel method for integrating diverse indicators related to tumor prognosis, offering the potential to discover new prognostic biomarkers from cross-scale data. Recent models employing attention mechanisms have enhanced the interpretability of multimodal features, driving advancements in AI applications for clinical use. Treatment decision and response monitoring Neoadjuvant chemotherapy, targeted therapy, and immunotherapy are increasingly integral to cancer management. The modern demand for more effective treatments underscores the need for accurate, personalized tests over one-size-fits-all approaches. For example, programmed death ligand-1 (PD-L1) expression status, evaluated via immunohistochemistry (IHC), serves as a clinical decision-making tool for immune checkpoint blockade (ICB) therapy. However, many treatments lack specific clinical indicators, highlighting an urgent need to identify biomarkers that can predict treatment benefits and support personalized therapy. This review examines recent advancements in MDL-based treatment decision-making and response monitoring . To establish appropriate treatment paradigms and prognostic assessments for intramedullary gliomas, Ma et al . employed a Swin Transformer to segment lesions from multimodal MRI data. They combined extracted radiomic features with clinical baseline data to predict tumor grade and molecular phenotype . Tumor mutational burden (TMB) has emerged as a promising indicator of the efficacy and prognosis of ICB therapy in tumors. Huang et al . developed a surrogate method for predicting TMB from WSIs in CRC by training a multimodal model that incorporates WSIs alongside relevant clinical data . Esteva et al . created an integration framework for histopathology and clinical data to predict clinically relevant outcomes in prostate cancer patients, demonstrating enhanced prognostic accuracy compared to existing tools and providing evidence for treatment personalization . Zhou et al . introduced a cascade multimodal synchronous generation network for MRI-guided radiation therapy, optimizing time and costs by generating intermediate multimodal sMRI and sCT data, incorporating attention modules for multilevel feature fusion . Estimating the efficacy of therapeutic approaches is equally crucial. While some cancer patients may experience significant improvement during early targeted therapy, resistance can develop, rendering treatment ineffective and exposing patients to adverse side effects. Given the variability and dynamic nature of treatment responses, current research focuses on developing effective prediction methods, particularly noninvasive approaches. Pathologic complete response (pCR) is a recognized metric for evaluating the efficacy of neoadjuvant chemotherapy and serves as an indicator of disease-free survival and OS. Joo et al .
developed a fusion model integrating clinical parameters and pretreatment MRI data to predict pCR in breast cancer, outperforming unimodal models . Zhou et al . combined PET-CT, clinical variables, and IHC scores within a multimodal framework to predict the efficacy of bevacizumab in advanced CRC patients, utilizing a 2.5D architecture for feature extraction . Gu et al . applied a DenseNet-121-based multimodal framework to integrate ultrasound and clinicopathological data for stratifying responses to neoadjuvant therapy in breast cancer . Rabinovici-Cohen et al . predicted post-treatment recurrence in breast cancer patients by combining MRIs, IHC markers, and clinical data within a heterogeneous multimodal framework, demonstrating the advantages of multimodal fusion . Accurate prediction of treatment outcomes before, during, and after therapy is essential for developing optimal individualized strategies, ultimately enhancing progression-free survival and OS for cancer patients. Current MDL methodologies exhibit remarkable advantages in integrating multisource data, often surpassing single-modality models in accuracy, positioning them favorably to advance clinical decision-making and efficacy evaluation in oncology.
Recent advancements in medical imaging and sequencing technologies have led to an exponential increase in biomedical multimodal data. As the demand for precise tumor diagnosis and personalized treatment continues to rise, effectively harnessing this wealth of data presents a significant challenge in clinical oncology. The impressive success of DL in domains such as computer vision and NLP has catalyzed its application in tumor diagnosis and treatment within medical AI. Substantial evidence indicates that multimodal fusion modeling of biomedical data outperforms single-modality approaches in performance metrics . Consequently, we propose that MDL methods have the potential to serve as powerful tools for integrating multidisciplinary diagnostic and therapeutic data in oncology. This paper first introduces the key data modalities relevant to clinical tumor management , discussing their clinical significance. We also provide a summary of publicly available multimodal oncology datasets, ranging from large-scale databases like TCGA and TCIA to specialized datasets, such as HAM10000, which focus on specific tumor types or populations. These resources offer valuable data for researchers in the field. We then outline fundamental DL concepts and common network architectures , guiding researchers in selecting appropriate frameworks and methods for constructing MDL models.
A review of SOTA modality-specific and multimodal representation techniques follows, with a focus on early, intermediate, and late fusion strategies. Evidence suggests that intermediate fusion models often outperform early or late fusion approaches , so we provide an in-depth discussion of this method, categorizing it into homogeneous and heterogeneous fusion, as well as marginal versus joint fusion. This categorization allows readers to choose suitable representation and fusion strategies based on the heterogeneity and computational demands of their multimodal data. Finally, we explore cutting-edge applications of MDL in oncology, covering areas such as multimodal data processing, tumor detection and diagnosis, prognosis prediction, treatment selection, and response monitoring. These applications highlight the latest advancements and emerging trends in MDL for precision oncology. However, challenges remain, as detailed below. Challenge 1: scarcity of large open-source multimodal datasets and annotated information Stringent ethical reviews constrain the acquisition of medical data, leading to a shortage of multimodal datasets, which contradicts AI’s reliance on big data. To improve model generalization, training on multicenter datasets is often necessary, but privacy concerns and labor-intensive data collection methods impede data sharing. FL has emerged as a promising solution, allowing for distributed model training without direct data exchange . In FL, only model parameters are shared and aggregated, addressing data privacy issues. As FL technology evolves, real-time data circulation among medical centers will become more feasible, enabling large-scale, standardized biomedical multimodal datasets. Another common issue with multimodal datasets is modality incompleteness. For example, full multimodal MRIs typically consist of pre-contrast T1, T2, fluid-attenuated inversion recovery (FLAIR), and post-contrast T1 scans. Missing sequences due to factors such as acquisition protocols, scanner availability, or patient-specific issues complicate joint modeling. In practice, prioritizing modality completeness or diversity depends on the task; for instance, when crucial modalities are missing, completeness should take precedence, whereas modality diversity may be more beneficial in other cases. Approaches to address missing modalities include modality synthesis, knowledge distillation, latent feature models, and domain adaptation techniques . However, challenges such as long training times and model complexity remain, underscoring the need for more efficient solutions. To enhance clinical applicability, we recommend prioritizing clinically accessible and affordable modalities over complex datasets, enabling broader adoption of multimodal AI in precision oncology. Furthermore, the scarcity of fully annotated multimodal datasets remains a bottleneck for MDL model development. While vast amounts of unlabeled cross-modal data are available, labeled data are limited and often noisy. Improving annotation reliability through partial label information is critical. Our review identifies two weakly supervised annotation approaches: (1) active learning, which selects reliable labels from pseudo-clusters and iterates from “easy” to “hard” annotations, and (2) data- and knowledge-driven annotation, which enhances accuracy by leveraging data characteristics and prior knowledge.
These approaches can improve annotation efficiency and model robustness, advancing the application of MDL in precision oncology. Challenge 2: insufficient fine-grained modeling in MDL and the need for model optimization Precision oncology is a multifaceted process involving various stages, and the increasing complexity of new therapies continues to challenge AI applications in oncology. While current MDL efforts focus on tasks such as tumor segmentation, detection, diagnosis, prognosis, and treatment decision support, many areas remain underexplored, and fine-grained models are often lacking. To improve generalization across tasks, it is essential to integrate diverse techniques and domain-specific expertise, enhancing the capability of pretrained models. The high heterogeneity and cross-scale nature of multimodal medical data pose significant challenges for efficient integration. Fusion strategies are typically categorized into early, intermediate, and late fusion. Early fusion, while intuitive, often fails to establish deep interactions between modalities, leading to suboptimal information utilization. Intermediate fusion generates more diverse fused features but increases model complexity, which can lead to overfitting. Late fusion, typically used at the decision level as an ensemble method, becomes less efficient as the number of modalities grows, resulting in linear increases in parameter count, training inefficiency, and greater sensitivity to modality noise. Overall, existing methods struggle to balance intra-modality processing with inter-modality fusion, resulting in performance bottlenecks and increased computational costs. Promising solutions include compressing multimodal architectures, such as multitask models that can train on diverse data types (e.g. images, videos, and audio) simultaneously. Hybrid fusion approaches, which combine the strengths of different fusion strategies, also hold potential. However, the effectiveness of these models, originally designed for natural images or audiovisual data, remains to be fully validated in the context of biomedical data. Challenge 3: poor interpretability of MDL Explainability has become a major concern in medical AI. The high dimensionality and heterogeneity of multimodal data exacerbate this issue, and the latent embeddings generated after data fusion frequently lack clear connections to the original modalities, further hindering transparency. Current efforts to enhance explainability typically leverage domain knowledge. Techniques such as ablation studies, feature clustering, and activation maps illuminate key decision areas, helping researchers and clinicians better understand the decision-making processes . Some studies have explored graph-based methods, particularly in radiomics and omics, to illustrate relationships between data components and offer more intuitive explanations . Model-agnostic methods, such as Local Interpretable Model-Agnostic Explanations, approximate complex model behaviors with simpler local models to improve interpretability . Debate continues within the academic community about whether AI models should inherently possess explainability or rely on post hoc interpretability techniques (e.g. saliency maps or attention mechanisms). Future AI applications should prioritize biologically inspired explainable models that enhance performance while providing clear, understandable rationales for their decisions, thereby fostering clinician trust. 
Challenge 3: poor interpretability of MDL

Explainability has become a major concern in medical AI. The high dimensionality and heterogeneity of multimodal data exacerbate this issue, and the latent embeddings generated after data fusion frequently lack clear connections to the original modalities, further hindering transparency. Current efforts to enhance explainability typically leverage domain knowledge. Techniques such as ablation studies, feature clustering, and activation maps illuminate key decision areas, helping researchers and clinicians better understand the decision-making processes . Some studies have explored graph-based methods, particularly in radiomics and omics, to illustrate relationships between data components and offer more intuitive explanations . Model-agnostic methods, such as Local Interpretable Model-Agnostic Explanations, approximate complex model behaviors with simpler local models to improve interpretability . Debate continues within the academic community about whether AI models should inherently possess explainability or rely on post hoc interpretability techniques (e.g. saliency maps or attention mechanisms). Future AI applications should prioritize biologically inspired explainable models that enhance performance while providing clear, understandable rationales for their decisions, thereby fostering clinician trust. Incorporating clinical domain knowledge into model design and developing user-friendly interaction platforms can help mitigate the “black-box” nature of AI tools, ultimately improving diagnostic accuracy. Additionally, better visualization tools will be essential for addressing interpretability challenges. These tools will visually represent the internal workings of models, making it easier for users to grasp decision rationales.

Challenge 4: static models and group-level modeling in MDL

Current MDL models are often static, resulting in delays in tumor prediction and hindering timely assessments of tumor progression, drug resistance, and toxicity. Future models should incorporate dynamic medical domain knowledge, establishing real-time MDL frameworks to improve data processing and integration. Additionally, mechanisms to manage modality inconsistency and missing data would broaden the applicability of these models. While most existing models are group based, precision oncology requires personalized treatment plans tailored to individual patients. Therefore, patient-specific MDL models are a critical future direction. Other challenges include the evaluation of MDL models in clinical contexts and the security of multimodal data collection, transmission, and sharing. Multidisciplinary collaboration is needed to solve these issues.

Challenge 5: evaluation of MDL models in clinical settings

Evaluating the clinical effectiveness of MDL models presents a novel and formidable challenge. Due to the inherent complexity of MDL architectures, the high heterogeneity and dimensionality of the data, and the diverse nature of the modalities involved, clinical assessment goes beyond simply measuring model accuracy. It must also account for time and data costs, as well as the incremental information gain provided by the inclusion of additional modalities. These factors are critical for optimizing cost-effectiveness and harnessing the synergistic potential of multimodal data. Consequently, in addition to conventional evaluation metrics used for unimodal models, it is important to incorporate indicators such as modality-specific information gain (e.g. mutual information), clinical feasibility (as assessed by multidisciplinary expert panels), and the computational complexity that escalates with the increasing number of modalities (e.g. linear or exponential growth). As MDL methods continue to evolve and see broader clinical adoption, establishing a rigorous evaluation framework will be indispensable.
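As a rough illustration of the modality-specific information gain mentioned above, the sketch below estimates how much label-relevant information a second (hypothetical) modality adds by comparing per-feature mutual information with and without it. Summing per-feature estimates ignores redundancy between features, so this is only a crude proxy for joint mutual information, not a validated evaluation metric.

```python
# Rough sketch: incremental information a second modality adds about the label,
# estimated with per-feature mutual information on synthetic data.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=500)                          # binary outcome
modality_a = y[:, None] + rng.normal(0, 1.0, (500, 5))    # informative modality
modality_b = rng.normal(0, 1.0, (500, 5))                 # mostly noise

mi_a = mutual_info_classif(modality_a, y, random_state=0).sum()
mi_ab = mutual_info_classif(np.hstack([modality_a, modality_b]), y, random_state=0).sum()
print(f"MI(A; y) ~ {mi_a:.3f}, MI(A+B; y) ~ {mi_ab:.3f}, gain ~ {mi_ab - mi_a:.3f}")
```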
In summary, our research highlights the growing role of MDL in precision oncology, fueled by the rapid expansion of biomedical big data and advancements in DL. Multimodal fusion methods offer substantial value for cancer management by integrating diverse modalities to provide comprehensive and accurate insights. However, the full potential of multimodal data remains underexplored. Key improvements are needed in handling data heterogeneity, refining fusion strategies, and optimizing network architectures for clinical scenarios.

Key Points

This review synthesizes recent advances in MDL for precision oncology, covering applications in image processing, diagnosis, prognosis prediction, treatment decisions, and response monitoring. We also discuss publicly available multimodal datasets and provide a comparative analysis of deep learning techniques, modality representation, and fusion strategies.

Multimodal models generally show improved performance over unimodal ones, with attention-based architectures and intermediate fusion strategies often proving effective. However, because data noise, evaluation metrics, and statistical methods vary between studies, definitive conclusions on the superiority of specific methods are not yet established.

Despite progress, challenges in integrating multimodal data persist. Effective fusion methods and adaptive MDL frameworks are crucial to overcoming issues such as data heterogeneity, incompleteness, and feature redundancy, paving the way for the broader adoption of precision oncology.
Supplementary_materials_bbae699
Brief Video-Delivered Intervention to Reduce Anxiety and Improve Functioning in Older Veterans: Pilot Randomized Controlled Trial
62489348-08f7-4115-be57-24e62c7de890
11667142
Patient Education as Topic[mh]
Background Anxiety disorders are pervasive among older adults and especially common in older military veterans. These disorders include generalized anxiety disorder (GAD), social anxiety disorder, panic disorder, agoraphobia, and unspecified anxiety disorders . A meta-analytic review estimated that nearly 1 in 10 (9.1%) older military veterans met the criteria for one of these anxiety disorders . Not only do late-life anxiety symptoms and disorders contribute to detrimental outcomes such as functional and cognitive decline , but the presence of an anxiety disorder (ie, GAD) is a risk factor for suicide ideation among older non–combat-exposed veterans . Thus, access to mental health services for anxiety in older veterans is critical. Numerous barriers impede older veterans with anxiety from receiving mental health services. These barriers include mental health stigma and beliefs, lack of knowledge about available mental health services, mobility limitations, transportation challenges, and residing in rural areas . Another barrier is the impact of anxiety diagnoses on referral for treatment. Older veterans with anxiety are more likely to receive nonspecific anxiety disorder diagnoses and less likely to receive mental health services compared with younger veterans who receive specific anxiety diagnoses and more subsequent mental health services . Thus, accessible and brief nonpharmacological interventions for late-life anxiety are needed to address the barriers to accessing mental health care faced by older veterans with anxiety disorders. Furthermore, conclusions from a recent systematic review and meta-analysis suggest that these brief interventions may show promise for reducing anxiety symptoms in older adults . One such brief behavioral intervention, progressive relaxation , also known as progressive muscle relaxation (PMR), has been shown to be an efficacious behavioral intervention for the treatment of late-life anxiety . PMR is often included as a component within cognitive behavioral therapy (CBT) for anxiety treatments, as well as being a component of digital mental health interventions such as self-management mobile apps. We focused on this skill and developed a 4-week video-delivered intervention that teaches PMR and diaphragmatic breathing and encourages the application of these skills to help patients engage in activities in which anxiety or other types of stress may arise. Activity engagement was selected as a treatment target as discomfort and distress from anxiety symptoms prompt individuals to use coping strategies such as escape from and avoidance of anxiety-evoking situations in the short term . This process (ie, negative reinforcement) makes it more likely for avoidance to be relied upon in the long term and is particularly detrimental to older patients because reduced engagement in activities often leads to isolation and functional decline . This intervention—Breathing, Relaxation, and Education for Anxiety Treatment in the Home Environment (BREATHE)—is organized with weekly video lessons, daily practice videos, and telephone coaching to encourage adherence to the practices. It was initially tested in a proof-of-concept study comparing the 4-week BREATHE intervention to a waitlist control in older adults with anxiety disorders . The BREATHE intervention was found to be superior to the waitlist control in reducing anxiety, depressive, and somatic symptoms; however, the attrition in BREATHE (35%) warranted further investigation. 
To address these attrition issues, we sought feedback on BREATHE from older veterans and made iterative revisions to the intervention in a second study . The revised BREATHE intervention was then tested in a small feasibility study with 10 older veterans with anxiety disorders in which BREATHE was found to be feasible and acceptable in that 90% (n=9) completed the intervention and 100% felt that BREATHE somewhat or completely met their expectations. While BREATHE is a guided self-management approach to anxiety, it differs from other similar approaches that either rely on manuals and bibliotherapy or are entirely internet delivered (see the study by Cremers et al for a review). BREATHE falls between these 2 approaches, uses familiar technology (web-based videos or DVDs), and focuses on a single skill (PMR) rather than multiple skills (eg, CBT).

Objectives

In this study, we conducted a pilot randomized controlled trial (RCT) comparing 2 guided self-management interventions in older veterans with anxiety disorders. This study was conducted in part to determine the feasibility of conducting an RCT by documenting willingness to be randomized, engagement with the interventions assigned, and dropout. We also examined the preliminary efficacy of BREATHE compared with a psychoeducation intervention (Healthy Living for Reduced Anxiety) on anxiety symptoms and functioning in older veterans. We hypothesized that (1) BREATHE would result in greater reduction in anxiety symptoms compared with psychoeducation at 12 weeks and (2) BREATHE would result in significantly greater improvements in functioning compared with psychoeducation at 12 weeks. Our exploratory aims were to examine whether home practices and treatment engagement were related to patient characteristics or to intervention outcomes and examine participants’ perceptions of the home practices via qualitative interviews.
A 2-group pilot RCT (ClinicalTrials.gov NCT02400723) was conducted over the course of 12 weeks. The interventions lasted 4 weeks, with data collected at baseline, week 4 (end of treatment), week 8 (4 weeks after treatment), and week 12 (8 weeks after treatment).

Ethical Considerations

This study was reviewed and approved by the Stanford University Institutional Review Board (IRB-32454), the institutional review board of record for the US Department of Veterans Affairs (VA) Palo Alto Health Care System. Informed consent was obtained at the baseline visit. Participants were paid US $60 for the initial assessment and the 12-week assessment. Participants were paid US $10 for completing the telephone assessments at weeks 4 and 8. Data were deidentified prior to data entry and analysis.

Recruitment

Participants were recruited from a large VA health care system via posted flyers and brochures. Informational letters were also sent to patients who had at least one encounter during the past year that was related to a diagnosis of anxiety. Recruitment took place from February 2019 to March 2020 (before the COVID-19 pandemic) and from July 2020 to November 2020 (after the onset of the COVID-19 pandemic). Participants were eligible if they were aged ≥60 years and proficient in English and exhibited a diagnosis of an anxiety disorder (ie, GAD, panic disorder, agoraphobia, social anxiety disorder, or unspecified anxiety disorder). Participants were excluded if they were currently enrolled in other intervention research studies or were currently involved in individual therapy or group therapy more frequently than once per month. Those with possible cognitive impairment per a brief cognitive assessment or those who reported a diagnosis of bipolar disorder, psychotic disorder, schizophrenia, or other serious psychiatric disorders were excluded during the telephone screening process. If participants were taking psychotropic medications, they needed to be on a stable dose for 1 month before enrollment.

Measures

The following sections describe measures for inclusion and exclusion criteria assessment, outcome assessment, and measurement of covariates.

Assessment of Inclusion and Exclusion Criteria

The Short Blessed Test (SBT), derived from the Blessed Orientation-Memory-Concentration Test, was included as a brief cognitive assessment to ascertain possible cognitive impairment . Participants obtaining scores of ≥6 on the SBT were excluded from the study . The Structured Clinical Interview for the DSM-V (SCID-5 ) was administered to assess mental health diagnoses and exclude participants with serious psychiatric disorders, as mentioned previously. Those participants exhibiting an anxiety disorder were included if they met all other inclusion criteria.

Demographic Questionnaire

A questionnaire was administered at baseline to assess basic demographic, employment, and health information.
Additional questions inquired about participants’ previous experience with relaxation, breathing training, meditation, tai chi, and any other similar techniques. Anxiety Measures The Geriatric Anxiety Scale (GAS ) is a 30-item measure of anxiety that served as a primary outcome measure. The first 25 items measure the frequency of somatic, cognitive, and affective anxiety symptoms and are summed to obtain a total score. The last 5 items assess specific anxiety and fear content and are not included in the total score. Items are scored on a scale from 0 ( not at all ) to 3 ( all of the time ), with total scores ranging from 0 to 75; higher scores indicate greater anxiety. The GAS and its somatic, cognitive, and affective symptom subscales have good internal consistency and convergent validity compared with other anxiety measures . The GAS was administered at baseline, week 4, week 8, and week 12. The Hamilton Anxiety Rating Scale (HAM-A ) assesses the severity of anxiety using clinician ratings of 14 items on a 5-point scale and was included as a secondary outcome measure of anxiety. It has adequate internal consistency, high interrater reliability, and good to adequate concurrent validity . The HAM-A was administered at baseline and week 12. The Patient-Reported Outcomes Measurement Information System 7-item anxiety scale assesses the frequency of experiencing anxiety symptoms within the previous week . Psychometric support for this measure has been found in adult populations and in our previous work . The Patient-Reported Outcomes Measurement Information System anxiety scale was administered at baseline and week 12. The Anxiety Control Questionnaire (ACQ ) is a 30-item self-report measure assessing one’s perceived ability to control anxiety-evoking situations and emotional reactions to these situations. Gerolimatos et al found good internal consistency in older adults. The ACQ was administered at baseline and week 12. Functioning The Activity Card Sort (ACS) was selected as a measure of activity engagement as it assesses the presence and the loss of activities using 80 photographs that depict instrumental, leisure, and social activities. We used an interactive sorting task to calculate lifestyle-adjusted function, which excludes activities that people have never performed in their lifetime . The lifestyle-adjusted function is the number of easy activities divided by the sum of easy activities, hard activities, and no-longer-performed activities. Thus, the lifestyle-adjusted function score accurately reflects loss, gain, or changes in activity participation. This measure was obtained at baseline and week 12. Due to this being an in-person task, the lifestyle-adjusted function scores were only obtained from participants who completed the study before the COVID-19 pandemic. We also used an individualized scoring of the ACS . In this approach, we asked participants to select 5 activities that they would like to do more frequently if not experiencing anxiety. We ascertained participants’ goals for the number of times they would like to do the activity, and we obtained the frequency of performing the activity at baseline, week 4, week 8, and week 12. The Veterans RAND 12-item Health Survey (VR-12) examines health-related quality of life and generates 2 scores: a physical component summary and a mental component summary (MCS). The MCS was used as a functioning outcome measure. The VR-12 was administered at baseline, week 4, week 8, and week 12. 
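For readers unfamiliar with these instruments, a minimal sketch of how the GAS total and the ACS lifestyle-adjusted function score described above could be computed is shown below; the item values and activity counts are hypothetical and not drawn from study data.

```python
# Minimal sketch of the two scores described above, using hypothetical values.
def gas_total(item_scores):
    """GAS total: sum of the first 25 items, each rated 0-3 (range 0-75)."""
    assert len(item_scores) >= 25 and all(0 <= s <= 3 for s in item_scores[:25])
    return sum(item_scores[:25])

def lifestyle_adjusted_function(easy, hard, no_longer_performed):
    """ACS lifestyle-adjusted function: easy / (easy + hard + no longer performed).

    Activities never performed in the participant's lifetime are excluded
    before counting, so only previously relevant activities enter the ratio.
    """
    return easy / (easy + hard + no_longer_performed)

print(gas_total([1] * 25 + [0] * 5))                       # 25
print(round(lifestyle_adjusted_function(30, 10, 10), 2))   # 0.6
```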
Additional Measures The 9-item Patient Health Questionnaire (PHQ-9 ) was used to assess participants’ depression symptoms. The questionnaire asks about symptoms during the previous 2 weeks, and each item is scored from 0 ( not at all ) to 3 ( nearly every day ), with total scores ranging from 0 to 27. This measure was administered at baseline and week 12. The Cumulative Illness Rating Scale–Geriatric measures medical illness burden. Retrospective chart review was used to obtain the ratings and drew on recorded history, physical examination, and laboratory tests, consistent with previous work . A semistructured interview and brief survey about the BREATHE or Healthy Living intervention was administered to participants at week 12. The survey included a question in which participants ranked the intervention components (video lessons, practices, and coaching calls) from most helpful to least helpful. The semistructured interview encompassed 7 questions about different domains, including changes made in one’s life as a result of the intervention, effects on one’s well-being, changes in activities and function, when improvement was first noticed, sustainability of the practices, recommended changes to the intervention, and whether the intervention would be recommended to other patients. Herein, we focus on questions related to the home practices in the BREATHE participants. Procedures Telephone Screening and Baseline Assessment Participants completed a brief telephone screen to assess for potential cognitive impairment using the SBT, concurrent psychotherapy, presence of serious mental illness, or recent changes to psychotropic medications (if taking any). Eligible participants based on the telephone screen were invited to a baseline assessment that included obtaining informed consent. Baseline assessments were conducted in person up to February 2020 and via telephone after the onset of the COVID-19 pandemic. During the baseline assessment, a structured psychiatric interview was conducted to ascertain the presence of a current anxiety disorder (ie, GAD, panic disorder, agoraphobia, social anxiety disorder, or unspecified anxiety disorder). Participants who did not meet the criteria for an anxiety disorder or individuals who had a potential psychotic disorder or bipolar disorder were excluded at baseline. The remainder of the assessment consisted of completing the clinical interviews (SCID-5 and HAM-A), ACS, and questionnaires (demographics and health questionnaire, GAS, PHQ-9, ACQ, and VR-12). Randomization Eligible participants were randomized to BREATHE or psychoeducation in a blocked randomization scheme with blocks of varying sizes (2 to 8), as recommended by the statistician who created the randomization scheme. Each assignment was concealed in an envelope that a research team member opened in sequential order at the time of randomization. Interventions BREATHE Intervention In the BREATHE intervention, participants watched 1 lesson video each week and then were instructed to practice relaxation 1 to 2 times a day. Participants were able to select whether they viewed the videos from a DVD or from a website. Participants also received weekly calls from their BREATHE coaches during which adherence was ascertained (ie, the lesson video viewed and practices completed). Anxiety ratings for each practice were reviewed, questions were addressed, and encouragement for practice adherence was provided. During weeks 2 to 4, patients were instructed to apply the skills during everyday life. 
In total, 2 team members (CG and LA) with a master’s to PhD level of training in psychology served as coaches for both the BREATHE and Healthy Living interventions. Coaches used a coaching manual (available upon request) for each intervention and questions tailored to each week’s content to guide the coaching calls. Both BREATHE and Healthy Living coaches addressed any participant challenges with using the DVD or website. provides an overview of the intervention components, including the mean duration of the completed coaching calls. Psychoeducation (Healthy Living for Reduced Anxiety) In the psychoeducation intervention (ie, active control), participants viewed 30-minute video lessons once a week for 4 weeks. The videos provided information about what anxiety is, coping with anxiety and sleep tips, benefits of exercise and a gentle stretching routine, and healthy eating. A Healthy Living coach (see the aforementioned description) called participants each week to ascertain adherence (ie, the lesson video viewed) and answer specific questions about the materials. Practice assignments consisted only of brief supplemental readings . Assessments at Weeks 4, 8, and 12 At weeks 4 and 8, a total of 4 questionnaires were completed with participants via phone: the GAS, PHQ-9, VR-12, and activity goal frequency (ACS). At week 12 (8 weeks after completion of the 4-week intervention), participants returned for a posttreatment assessment. The posttreatment assessment included the GAS, PHQ-9, HAM-A, ACS, VR-12, and ACQ. A feedback survey for the BREATHE and Healthy Living conditions was completed followed by a semistructured interview. The survey asked participants to rank the helpfulness of each part of the program: video lessons, practices for BREATHE or readings for Healthy Living, and coaching calls. Then, the survey had 5 statements with which participants rated their agreement on a Likert-type scale ranging from strongly disagree (1) to strongly agree (5). The items asked about the usability of the DVDs, website log-in, and watching of web-based videos and about the frequency of coaching calls and duration of the program. The semistructured interview was recorded with participants’ permission, and the recordings or notes (if not recorded) were then transcribed. During the COVID-19 pandemic, the posttreatment assessment was completed by telephone. Assessors were not blinded to participant condition. Power An a priori sample power analysis was calculated with an α of .05 and power of 0.80. On the basis of previous research, the controlled effect size (Hedges g ) for relaxation therapy’s effect on anxiety was 0.90 (95% CI 0.44-1.44) . The total estimated sample size was 30 (15 per group; Cohen f= 0.41). Because of expected attrition and use of self-directed treatment, we estimated a smaller effect of the primary measure of anxiety and, thus, aimed for a sample of at least 26 participants per group ( f= 0.35). Statistical Analysis Descriptive statistics were used to help characterize the sample; t tests (2-tailed) and chi-square analyses were used to test whether the 2 groups differed on any characteristics. Correlation analyses were conducted to examine whether baseline anxiety or medical comorbidity were associated with homework completion. Analyses were conducted using SPSS Statistics (version 29.0; IBM Corp) . 
To examine the hypotheses regarding the primary outcomes of anxiety (GAS) and functioning (VR-12 MCS), mixed-effects models, also known as linear growth models or multilevel models , were used. Missing data points due to participant dropout were handled assuming that the data were missing at random and conditional on observed information. Mixed models were used to examine the change in outcomes across 4 time points (baseline [T1], 4 weeks and end of treatment [T2], 8 weeks [T3], and 12 weeks [T4]). Growth models with just time were estimated first followed by the fully specified models that included a between factor of treatment group (BREATHE vs psychoeducation); a within factor of time; and an interaction of treatment by time, which was the estimate of the treatment effect over the course of the study. A total of 3 mixed-effects models with 2 time points (baseline [T1] and 12 weeks [T4]) were conducted on (1) anxiety as measured using the HAM-A, (2) perceived anxiety control as measured using the ACQ, and (3) lifestyle-adjusted functioning using the ACS. Sensitivity analyses were conducted to evaluate the change in anxiety symptoms (GAS) from baseline to 4 weeks. Rapid qualitative analysis was used to investigate perceptions of the home practices among the participants assigned to the BREATHE intervention. Interview transcripts were summarized using templates, and then domain summaries were created by 2 authors trained in qualitative techniques (CG and CC). The summaries were reviewed for themes.
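The analyses were run in SPSS; purely as an illustration of the growth-model structure described above (treatment group, time, and their interaction, with a random intercept per participant), a hypothetical Python equivalent on simulated data might look as follows. The column names and simulated values are illustrative only and do not reflect study data.

```python
# Illustrative sketch (not the authors' code) of a treatment-by-time growth model;
# the group:time interaction term estimates the treatment effect over time.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n, times = 48, [0, 4, 8, 12]
subjects = np.repeat(np.arange(n), len(times))
group = np.repeat(rng.integers(0, 2, n), len(times))      # 0 = psychoeducation, 1 = BREATHE
time = np.tile(times, n)
intercepts = np.repeat(rng.normal(30, 5, n), len(times))  # subject-level variation
gas = intercepts - 0.3 * time - 0.4 * group * time + rng.normal(0, 3, n * len(times))

df = pd.DataFrame({"subject": subjects, "group": group, "time": time, "gas": gas})

# Random intercept per participant; dropout-induced missingness would be handled
# by the likelihood under a missing-at-random assumption.
model = smf.mixedlm("gas ~ group * time", df, groups=df["subject"])
result = model.fit()
print(result.summary())
```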
Assessment of Inclusion and Exclusion Criteria The Short Blessed Test (SBT), derived from the Blessed Orientation-Memory-Concentration Test, was included as a brief cognitive assessment to ascertain possible cognitive impairment . Participants obtaining scores of ≥6 on the SBT were excluded from the study . The Structured Clinical Interview for the DSM-V (SCID-5 ), was administered to assess mental health diagnoses and exclude participants with serious psychiatric disorders, as mentioned previously. Those participants exhibiting an anxiety disorder were included if they met all other inclusion criteria. Demographic Questionnaire A questionnaire was administered at baseline to assess basic demographic, employment, and health information. Additional questions inquired about participants’ previous experience with relaxation, breathing training, meditation, tai chi, and any other similar techniques. Anxiety Measures The Geriatric Anxiety Scale (GAS ) is a 30-item measure of anxiety that served as a primary outcome measure. The first 25 items measure the frequency of somatic, cognitive, and affective anxiety symptoms and are summed to obtain a total score. The last 5 items assess specific anxiety and fear content and are not included in the total score. Items are scored on a scale from 0 ( not at all ) to 3 ( all of the time ), with total scores ranging from 0 to 75; higher scores indicate greater anxiety. The GAS and its somatic, cognitive, and affective symptom subscales have good internal consistency and convergent validity compared with other anxiety measures . The GAS was administered at baseline, week 4, week 8, and week 12. The Hamilton Anxiety Rating Scale (HAM-A ) assesses the severity of anxiety using clinician ratings of 14 items on a 5-point scale and was included as a secondary outcome measure of anxiety. It has adequate internal consistency, high interrater reliability, and good to adequate concurrent validity . The HAM-A was administered at baseline and week 12. The Patient-Reported Outcomes Measurement Information System 7-item anxiety scale assesses the frequency of experiencing anxiety symptoms within the previous week . Psychometric support for this measure has been found in adult populations and in our previous work . The Patient-Reported Outcomes Measurement Information System anxiety scale was administered at baseline and week 12. The Anxiety Control Questionnaire (ACQ ) is a 30-item self-report measure assessing one’s perceived ability to control anxiety-evoking situations and emotional reactions to these situations. Gerolimatos et al found good internal consistency in older adults. The ACQ was administered at baseline and week 12. Functioning The Activity Card Sort (ACS) was selected as a measure of activity engagement as it assesses the presence and the loss of activities using 80 photographs that depict instrumental, leisure, and social activities. We used an interactive sorting task to calculate lifestyle-adjusted function, which excludes activities that people have never performed in their lifetime . The lifestyle-adjusted function is the number of easy activities divided by the sum of easy activities, hard activities, and no-longer-performed activities. Thus, the lifestyle-adjusted function score accurately reflects loss, gain, or changes in activity participation. This measure was obtained at baseline and week 12. 
Due to this being an in-person task, the lifestyle-adjusted function scores were only obtained from participants who completed the study before the COVID-19 pandemic. We also used an individualized scoring of the ACS . In this approach, we asked participants to select 5 activities that they would like to do more frequently if not experiencing anxiety. We ascertained participants’ goals for the number of times they would like to do the activity, and we obtained the frequency of performing the activity at baseline, week 4, week 8, and week 12. The Veterans RAND 12-item Health Survey (VR-12) examines health-related quality of life and generates 2 scores: a physical component summary and a mental component summary (MCS). The MCS was used as a functioning outcome measure. The VR-12 was administered at baseline, week 4, week 8, and week 12. Additional Measures The 9-item Patient Health Questionnaire (PHQ-9 ) was used to assess participants’ depression symptoms. The questionnaire asks about symptoms during the previous 2 weeks, and each item is scored from 0 ( not at all ) to 3 ( nearly every day ), with total scores ranging from 0 to 27. This measure was administered at baseline and week 12. The Cumulative Illness Rating Scale–Geriatric measures medical illness burden. Retrospective chart review was used to obtain the ratings and drew on recorded history, physical examination, and laboratory tests, consistent with previous work . A semistructured interview and brief survey about the BREATHE or Healthy Living intervention was administered to participants at week 12. The survey included a question in which participants ranked the intervention components (video lessons, practices, and coaching calls) from most helpful to least helpful. The semistructured interview encompassed 7 questions about different domains, including changes made in one’s life as a result of the intervention, effects on one’s well-being, changes in activities and function, when improvement was first noticed, sustainability of the practices, recommended changes to the intervention, and whether the intervention would be recommended to other patients. Herein, we focus on questions related to the home practices in the BREATHE participants. The Short Blessed Test (SBT), derived from the Blessed Orientation-Memory-Concentration Test, was included as a brief cognitive assessment to ascertain possible cognitive impairment . Participants obtaining scores of ≥6 on the SBT were excluded from the study . The Structured Clinical Interview for the DSM-V (SCID-5 ), was administered to assess mental health diagnoses and exclude participants with serious psychiatric disorders, as mentioned previously. Those participants exhibiting an anxiety disorder were included if they met all other inclusion criteria. A questionnaire was administered at baseline to assess basic demographic, employment, and health information. Additional questions inquired about participants’ previous experience with relaxation, breathing training, meditation, tai chi, and any other similar techniques. The Geriatric Anxiety Scale (GAS ) is a 30-item measure of anxiety that served as a primary outcome measure. The first 25 items measure the frequency of somatic, cognitive, and affective anxiety symptoms and are summed to obtain a total score. The last 5 items assess specific anxiety and fear content and are not included in the total score. 
Items are scored on a scale from 0 ( not at all ) to 3 ( all of the time ), with total scores ranging from 0 to 75; higher scores indicate greater anxiety. The GAS and its somatic, cognitive, and affective symptom subscales have good internal consistency and convergent validity compared with other anxiety measures . The GAS was administered at baseline, week 4, week 8, and week 12. The Hamilton Anxiety Rating Scale (HAM-A ) assesses the severity of anxiety using clinician ratings of 14 items on a 5-point scale and was included as a secondary outcome measure of anxiety. It has adequate internal consistency, high interrater reliability, and good to adequate concurrent validity . The HAM-A was administered at baseline and week 12. The Patient-Reported Outcomes Measurement Information System 7-item anxiety scale assesses the frequency of experiencing anxiety symptoms within the previous week . Psychometric support for this measure has been found in adult populations and in our previous work . The Patient-Reported Outcomes Measurement Information System anxiety scale was administered at baseline and week 12. The Anxiety Control Questionnaire (ACQ ) is a 30-item self-report measure assessing one’s perceived ability to control anxiety-evoking situations and emotional reactions to these situations. Gerolimatos et al found good internal consistency in older adults. The ACQ was administered at baseline and week 12. The Activity Card Sort (ACS) was selected as a measure of activity engagement as it assesses the presence and the loss of activities using 80 photographs that depict instrumental, leisure, and social activities. We used an interactive sorting task to calculate lifestyle-adjusted function, which excludes activities that people have never performed in their lifetime . The lifestyle-adjusted function is the number of easy activities divided by the sum of easy activities, hard activities, and no-longer-performed activities. Thus, the lifestyle-adjusted function score accurately reflects loss, gain, or changes in activity participation. This measure was obtained at baseline and week 12. Due to this being an in-person task, the lifestyle-adjusted function scores were only obtained from participants who completed the study before the COVID-19 pandemic. We also used an individualized scoring of the ACS . In this approach, we asked participants to select 5 activities that they would like to do more frequently if not experiencing anxiety. We ascertained participants’ goals for the number of times they would like to do the activity, and we obtained the frequency of performing the activity at baseline, week 4, week 8, and week 12. The Veterans RAND 12-item Health Survey (VR-12) examines health-related quality of life and generates 2 scores: a physical component summary and a mental component summary (MCS). The MCS was used as a functioning outcome measure. The VR-12 was administered at baseline, week 4, week 8, and week 12. The 9-item Patient Health Questionnaire (PHQ-9 ) was used to assess participants’ depression symptoms. The questionnaire asks about symptoms during the previous 2 weeks, and each item is scored from 0 ( not at all ) to 3 ( nearly every day ), with total scores ranging from 0 to 27. This measure was administered at baseline and week 12. The Cumulative Illness Rating Scale–Geriatric measures medical illness burden. Retrospective chart review was used to obtain the ratings and drew on recorded history, physical examination, and laboratory tests, consistent with previous work . 
A semistructured interview and brief survey about the BREATHE or Healthy Living intervention was administered to participants at week 12. The survey included a question in which participants ranked the intervention components (video lessons, practices, and coaching calls) from most helpful to least helpful. The semistructured interview encompassed 7 questions about different domains, including changes made in one’s life as a result of the intervention, effects on one’s well-being, changes in activities and function, when improvement was first noticed, sustainability of the practices, recommended changes to the intervention, and whether the intervention would be recommended to other patients. Herein, we focus on questions related to the home practices in the BREATHE participants. Telephone Screening and Baseline Assessment Participants completed a brief telephone screen to assess for potential cognitive impairment using the SBT, concurrent psychotherapy, presence of serious mental illness, or recent changes to psychotropic medications (if taking any). Eligible participants based on the telephone screen were invited to a baseline assessment that included obtaining informed consent. Baseline assessments were conducted in person up to February 2020 and via telephone after the onset of the COVID-19 pandemic. During the baseline assessment, a structured psychiatric interview was conducted to ascertain the presence of a current anxiety disorder (ie, GAD, panic disorder, agoraphobia, social anxiety disorder, or unspecified anxiety disorder). Participants who did not meet the criteria for an anxiety disorder or individuals who had a potential psychotic disorder or bipolar disorder were excluded at baseline. The remainder of the assessment consisted of completing the clinical interviews (SCID-5 and HAM-A), ACS, and questionnaires (demographics and health questionnaire, GAS, PHQ-9, ACQ, and VR-12). Randomization Eligible participants were randomized to BREATHE or psychoeducation in a blocked randomization scheme with blocks of varying sizes (2 to 8), as recommended by the statistician who created the randomization scheme. Each assignment was concealed in an envelope that a research team member opened in sequential order at the time of randomization. Interventions BREATHE Intervention In the BREATHE intervention, participants watched 1 lesson video each week and then were instructed to practice relaxation 1 to 2 times a day. Participants were able to select whether they viewed the videos from a DVD or from a website. Participants also received weekly calls from their BREATHE coaches during which adherence was ascertained (ie, the lesson video viewed and practices completed). Anxiety ratings for each practice were reviewed, questions were addressed, and encouragement for practice adherence was provided. During weeks 2 to 4, patients were instructed to apply the skills during everyday life. In total, 2 team members (CG and LA) with a master’s to PhD level of training in psychology served as coaches for both the BREATHE and Healthy Living interventions. Coaches used a coaching manual (available upon request) for each intervention and questions tailored to each week’s content to guide the coaching calls. Both BREATHE and Healthy Living coaches addressed any participant challenges with using the DVD or website. provides an overview of the intervention components, including the mean duration of the completed coaching calls. 
Psychoeducation (Healthy Living for Reduced Anxiety) In the psychoeducation intervention (ie, active control), participants viewed 30-minute video lessons once a week for 4 weeks. The videos provided information about what anxiety is, coping with anxiety and sleep tips, benefits of exercise and a gentle stretching routine, and healthy eating. A Healthy Living coach (see the aforementioned description) called participants each week to ascertain adherence (ie, the lesson video viewed) and answer specific questions about the materials. Practice assignments consisted only of brief supplemental readings . Assessments at Weeks 4, 8, and 12 At weeks 4 and 8, a total of 4 questionnaires were completed with participants via phone: the GAS, PHQ-9, VR-12, and activity goal frequency (ACS). At week 12 (8 weeks after completion of the 4-week intervention), participants returned for a posttreatment assessment. The posttreatment assessment included the GAS, PHQ-9, HAM-A, ACS, VR-12, and ACQ. A feedback survey for the BREATHE and Healthy Living conditions was completed followed by a semistructured interview. The survey asked participants to rank the helpfulness of each part of the program: video lessons, practices for BREATHE or readings for Healthy Living, and coaching calls. Then, the survey had 5 statements with which participants rated their agreement on a Likert-type scale ranging from strongly disagree (1) to strongly agree (5). The items asked about the usability of the DVDs, website log-in, and watching of web-based videos and about the frequency of coaching calls and duration of the program. The semistructured interview was recorded with participants’ permission, and the recordings or notes (if not recorded) were then transcribed. During the COVID-19 pandemic, the posttreatment assessment was completed by telephone. Assessors were not blinded to participant condition. Participants completed a brief telephone screen to assess for potential cognitive impairment using the SBT, concurrent psychotherapy, presence of serious mental illness, or recent changes to psychotropic medications (if taking any). Eligible participants based on the telephone screen were invited to a baseline assessment that included obtaining informed consent. Baseline assessments were conducted in person up to February 2020 and via telephone after the onset of the COVID-19 pandemic. During the baseline assessment, a structured psychiatric interview was conducted to ascertain the presence of a current anxiety disorder (ie, GAD, panic disorder, agoraphobia, social anxiety disorder, or unspecified anxiety disorder). Participants who did not meet the criteria for an anxiety disorder or individuals who had a potential psychotic disorder or bipolar disorder were excluded at baseline. The remainder of the assessment consisted of completing the clinical interviews (SCID-5 and HAM-A), ACS, and questionnaires (demographics and health questionnaire, GAS, PHQ-9, ACQ, and VR-12). Eligible participants were randomized to BREATHE or psychoeducation in a blocked randomization scheme with blocks of varying sizes (2 to 8), as recommended by the statistician who created the randomization scheme. Each assignment was concealed in an envelope that a research team member opened in sequential order at the time of randomization. BREATHE Intervention In the BREATHE intervention, participants watched 1 lesson video each week and then were instructed to practice relaxation 1 to 2 times a day. 
An a priori power analysis was calculated with an α of .05 and power of 0.80. On the basis of previous research, the controlled effect size (Hedges g) for relaxation therapy’s effect on anxiety was 0.90 (95% CI 0.44-1.44). The total estimated sample size was 30 (15 per group; Cohen f=0.41). Because of expected attrition and the use of self-directed treatment, we estimated a smaller effect on the primary measure of anxiety and, thus, aimed for a sample of at least 26 participants per group (f=0.35). Descriptive statistics were used to help characterize the sample; t tests (2-tailed) and chi-square analyses were used to test whether the 2 groups differed on any characteristics. Correlation analyses were conducted to examine whether baseline anxiety or medical comorbidity was associated with homework completion. Analyses were conducted using SPSS Statistics (version 29.0; IBM Corp).

To examine the hypotheses regarding the primary outcomes of anxiety (GAS) and functioning (VR-12 MCS), mixed-effects models, also known as linear growth models or multilevel models, were used. Missing data points due to participant dropout were handled assuming that the data were missing at random, conditional on observed information. Mixed models were used to examine the change in outcomes across 4 time points (baseline [T1], 4 weeks and end of treatment [T2], 8 weeks [T3], and 12 weeks [T4]). Growth models with just time were estimated first, followed by fully specified models that included a between factor of treatment group (BREATHE vs psychoeducation); a within factor of time; and an interaction of treatment by time, which was the estimate of the treatment effect over the course of the study. A total of 3 mixed-effects models with 2 time points (baseline [T1] and 12 weeks [T4]) were conducted on (1) anxiety as measured using the HAM-A, (2) perceived anxiety control as measured using the ACQ, and (3) lifestyle-adjusted functioning using the ACS. Sensitivity analyses were conducted to evaluate the change in anxiety symptoms (GAS) from baseline to 4 weeks.

Rapid qualitative analysis was used to investigate perceptions of the home practices among the participants assigned to the BREATHE intervention. Interview transcripts were summarized using templates, and then domain summaries were created by 2 authors trained in qualitative techniques (CG and CC). The summaries were reviewed for themes.
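To make the modeling strategy concrete, the sketch below fits the kind of growth model described above (a random intercept per participant with fixed effects of time, group, and their interaction) to simulated data. This is an illustration only: the trial analyses were run in SPSS, and the variable names, numeric coding of time, and simulated values here are assumptions rather than study data.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated long-format data standing in for the trial (not real study data):
# 56 participants, GAS total score at T1-T4, random intercept per participant.
rng = np.random.default_rng(42)
rows = []
for pid in range(56):
    group = "BREATHE" if pid < 27 else "HealthyLiving"
    intercept = 30 + rng.normal(0, 6)           # participant-level intercept
    for t in range(4):                          # T1-T4 coded 0-3 for simplicity
        gas = intercept - 1.5 * t + rng.normal(0, 4)
        rows.append((pid, group, t, gas))
df = pd.DataFrame(rows, columns=["participant", "group", "time", "gas"])

# Growth model with time only, then the fully specified model adding the
# between factor of group and the group-by-time interaction (the estimate of
# the treatment effect over the course of the study).
m_time = smf.mixedlm("gas ~ time", df, groups="participant").fit()
m_full = smf.mixedlm("gas ~ time * group", df, groups="participant").fit()
print(m_full.summary())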
Overview

Of the 98 participants assessed for eligibility, 56 (57%) were eligible and were all subsequently randomized. shows the participant flow throughout the study. Randomized participants had a mean age of 71.36 (SD 6.19) years and ranged in age from 60 to 88 years. shows the characteristics of participants at baseline. Participants were diagnosed with a current anxiety disorder at baseline using the SCID-5. A total of 16% (9/56) of the participants had more than one concurrent anxiety disorder, 29% (16/56) had co-occurring depression, and 14% (8/56) had subthreshold posttraumatic stress disorder (PTSD; other specified trauma disorder). The most frequently occurring disorders were other specified anxiety disorder (30/56, 54%), GAD (18/56, 32%), social anxiety disorder (11/56, 20%), agoraphobia (2/56, 4%), and panic disorder (2/56, 4%).

Retention and Engagement

Completion of the assessment at 4 weeks (end of treatment) was 81% (22/27) for BREATHE and 93% (27/29) for Healthy Living (χ²=1.7; P=.19). Retention at week 12 was similar across groups, with 81.5% (22/27) of BREATHE participants and 82.8% (24/29) of Healthy Living participants completing the assessment at 12 weeks (T4). The groups did not differ with regard to which modality they used for the delivery of the videos (χ²=2.4; P=.30). Across both groups, 66% (37/56) used DVD delivery, 21% (12/56) used web delivery, 7% (4/56) used both, and 5% (3/56) had missing data on modality due to dropout from the study. Most of the weekly coaching calls were completed for both interventions, with no difference found in the number of weekly calls completed for each intervention (t54=0.92; P=.36). On average, BREATHE participants completed 3.48 (SD 1.05) of 4 (87%) weekly coaching calls, and Healthy Living participants completed 3.72 (SD 0.92) of 4 (93%). With regard to engagement with the intervention as self-reported to coaches, the Healthy Living group completed significantly more lessons (mean 3.55, SD 1.09) than the BREATHE group (mean 2.85, SD 1.43; t54=2.07; P=.04). In addition to the weekly lessons, the BREATHE group had daily home practices and 3 assignments to apply the skills in real life. Pearson correlation analyses found that greater baseline anxiety scores (GAS) were associated with fewer completed practices in the BREATHE group (r=–0.41; P=.03). In addition, greater severity of medical comorbidity (Cumulative Illness Rating Scale–Geriatric) was associated with fewer completed practices in BREATHE (r=–0.50; P=.009). On average, BREATHE participants completed 23.07 (SD 17.38) practices during the 4-week intervention. A total of 22% (6/27) of the participants completed 1 application of the skills, 19% (5/27) completed 2 applications of the skills, and 30% (8/27) completed ≥3 applications.

Participants shared feedback about the interventions through a postsurvey. Participants generally agreed to strongly agreed regarding the ease of use and the usability of the DVD and website to access the videos. Most participants agreed to strongly agreed that the coaching calls were frequent enough and the program length was sufficient.
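The retention and adherence comparisons above are standard contingency-table and correlation analyses. The sketch below shows how analogous tests could be run in Python on toy data; the counts and simulated values are placeholders chosen for illustration, not the study data, and the original analyses were conducted in SPSS.

import numpy as np
from scipy import stats

# Hypothetical counts standing in for week-4 assessment completion
# (rows: BREATHE, Healthy Living; columns: completed, not completed).
completion = np.array([[22, 5],
                       [27, 2]])
chi2, p, dof, expected = stats.chi2_contingency(completion, correction=False)

# Toy per-participant values for baseline anxiety (GAS) and completed practices
# in the BREATHE arm, illustrating the Pearson correlation reported above.
rng = np.random.default_rng(7)
baseline_gas = rng.normal(35, 10, 27)
practices = np.clip(60 - baseline_gas + rng.normal(0, 10, 27), 0, None)
r, p_r = stats.pearsonr(baseline_gas, practices)
print(f"chi-square({dof}) = {chi2:.2f}, P = {p:.3f}; r = {r:.2f}, P = {p_r:.3f}")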
Primary and Secondary Analyses

Treatment effects were examined using mixed-effects models conducted on total anxiety scores as measured using the GAS. The models were best fitted with fixed effects of time and group and a random intercept. shows the means for the BREATHE and Healthy Living groups across 4 time points (baseline [T1], 4 weeks and end of treatment [T2], 8 weeks [T3], and 12 weeks [T4]). Notably, there was substantial variability in baseline GAS scores, with some participants reporting low symptoms in the previous week despite meeting the criteria for an anxiety disorder. There were no differences in the rates of decline in anxiety between the groups; that is, the effects of time (P=.07), group (P=.64), and the treatment-by-time interaction (P=.07) were not significant. An inspection of the subscales revealed a significant time-by-group interaction for somatic anxiety symptoms (F3,145.6=2.81; P=.04) but not for affective or cognitive anxiety symptoms. This interaction indicated that the Healthy Living group experienced a significant decline in somatic anxiety over time compared with the BREATHE group. No difference was found between BREATHE and Healthy Living in their effect on mental functioning over time as measured using the VR-12. In addition, no difference was found between treatment groups with regard to depressive symptoms as measured using the PHQ-9, anxiety symptoms as measured using the HAM-A, lifestyle-adjusted functioning as measured using the ACS, or anxiety control as measured using the ACQ.

Effects at Week 4 and Association With Engagement Measures

As shown in , the BREATHE and Healthy Living groups did not differ in effects at week 4 (end of treatment), with the exception of GAS somatic scale scores, which were lower for Healthy Living compared with BREATHE at this time point. We also investigated whether changes in scores from baseline to week 4 were associated with home practice completion. Using Pearson correlation analyses, we found the association with practice completion to be nonsignificant (r=0.33; P=.13). Reduction in GAS scores from baseline to week 4 (end of treatment) was not associated with completion of lessons (rs=–0.09; P=.53) or coaching calls (rs=–0.009; P=.95) across the combined BREATHE and Healthy Living samples.

Individual Activity Goals

Individual activity goals set by participants were examined with regard to the number of goals identified, the types of goals, and the change in individualized goal frequencies from baseline to 4 weeks. On average, participants selected at least 4 activities to focus on (mean 4.7, SD 1.2; range 2-7). Of 263 total activities identified, the 3 most frequently selected activity categories were social activities (72/263, 27.4%); high-demand leisure activities (70/263, 26.6%) such as hiking or swimming; and low-demand leisure activities (63/263, 24%) such as photography, reading, or playing a musical instrument. The remaining activities were instrumental activities (35/263, 13.3%) or other activities identified by participants (23/263, 8.7%). Change scores for each activity goal were calculated from 0 to 4 weeks and then collapsed into 2 categories: increase in activity frequency (change score >0) or no change or decrease in activity frequency (change score ≤0). The BREATHE and Healthy Living groups did not differ in their distribution of increased activities at 4 weeks (χ²=0.2; P=.64). A total of 79% (23/29) of Healthy Living participants and 74% (20/27) of BREATHE participants reported an increase in frequency of one or more goals at 4 weeks. On average, participants in both groups attained ≥1 goal at 4 weeks (BREATHE: mean 1.6, SD 1.4; Healthy Living: mean 1.6, SD 1.3); consistent with an intention-to-treat approach, all participants were retained in these analyses, and individuals who were lost to follow-up were coded as not attaining their goal.
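As a concrete illustration of the change-score coding just described, the following sketch derives per-goal change scores, collapses them into increased versus not increased, and applies the intention-to-treat rule that missing follow-up counts as a goal not attained. The data frame, column names, and values are hypothetical.

import pandas as pd

# Hypothetical goal-frequency ratings (not study data): one row per
# participant per activity goal, rated at baseline and at week 4.
goals = pd.DataFrame({
    "participant":   [1, 1, 2, 2, 3, 3],
    "group":         ["BREATHE", "BREATHE", "HealthyLiving",
                      "HealthyLiving", "BREATHE", "BREATHE"],
    "freq_baseline": [1, 2, 0, 3, 2, 1],
    "freq_week4":    [3, 2, 1, 3, None, None],   # participant 3 lost to follow-up
})

# Change score per goal from baseline to week 4; a goal counts as attained
# when frequency increased (change score > 0). Comparisons with missing
# week-4 values evaluate to False, which implements the intention-to-treat
# rule of coding lost-to-follow-up participants as not attaining the goal.
goals["change"] = goals["freq_week4"] - goals["freq_baseline"]
goals["attained"] = goals["change"] > 0

# Goals attained per participant, then the mean per group.
per_participant = goals.groupby(["group", "participant"])["attained"].sum()
print(per_participant.groupby(level="group").mean())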
Qualitative Feedback Regarding Practice Routine and Challenges

Feedback regarding the experience of the practices in the BREATHE group, including the overall experience of the practices, the mechanics and frequency of practices, challenges encountered, and the application of skills to real-world situations, was analyzed to better understand participants’ experience with the BREATHE intervention (22/27, 81% completed the interviews). While Healthy Living participants relayed the same type of information, in this section, we focus on the BREATHE intervention as the condition of interest.

Experience With Practices

Participants expressed varied experiences with the practice components of the BREATHE intervention, that is, the diaphragmatic breathing and the PMR exercises. Overall, most participants ranked the practices as the most (9/21, 43%) or second most (9/21, 43%) helpful component of the intervention compared with the weekly coaching calls and video lessons. Some described the diaphragmatic breathing exercises as the most effective part of the intervention and the easiest skill to use outside of home. One participant (BR1) noted the following:

The breathing definitely was important. That’s probably a big part of it. The breathing practices, because I didn’t normally do this before, but by doing breathing exercises, it definitely helped me.

Participants described adjustments made to PMR exercises. These included tensing for less time (BR12); adjusting for physical issues (BR18), which could include imagined tensing as specified in the videos; and grouping some muscles together (eg, lower extremities and facial muscles; BR18). Another participant liked tensing from the waist up (ie, face and upper torso) “because I get a lot of tension there” (BR4). A smaller number of participants felt that the practices and skills were not helpful in that breathing would not solve the problems they were facing. One individual (BR5) explained the following:

I don’t think learning how to breathe a certain way is gonna make anything go away that I’m dealing with.

Another participant noted that tensing their muscles was not helpful “because I’m tense already” (BR3).

Mechanics and Frequency of Practices

The mechanics of how and when people practiced varied. Creating a practice habit through timing, frequency (eg, once or twice a day), or location seemed to help with accomplishing the practices without stress or feeling pressured. One participant (BR8) described the following:

...that the only time that I could do this was lying down—was in bed, lying down. And I thought—Oh, gosh, it’s not going to be as effective. But after life quieted down somewhat, I was able to attain my goal and start to do it twice a day. Attempt to do it twice a day. But it was still lying down, and I loved it.

For some, setting a schedule to practice was stressful and, instead, practicing during a block of time (eg, mornings and evenings) helped. As one participant (BR6) put it:

It’s like I like to do stretching exercises every day and I would kind of keep myself physically fit and this was more kind of a mental exercise of imposing discipline on myself to be able to do this twice a day.
With regard to the frequency of practice, participants described practicing once or twice a day for the first month or 2; however, many described that their practice frequency decreased after the first 4 weeks, coinciding with the end of the weekly check-in calls. For some participants, the relaxation procedures may have been too static (ie, the same video used to guide practice), and after several weeks, this contributed to diminished engagement. In contrast, some described returning to the practices to help with anxiety. One participant (BR16) explained the following:

I didn’t do it [progressive relaxation] as much for a while. And then out of necessity I said, “This is what I need to do,” and I have done it more in the past months.

Another individual (BR6) expressed an intention to continue practicing:

I will keep doing this ’cause I enjoy setting aside a time for a disciplined approach to trying to relax.

Personalization of the practices seemed to facilitate continued use of the techniques. One participant (BR1) described adjusting their practice over time:

I changed it from watching the videos to just actually sitting, closing my eyes, and going through the process. I felt most comfortable doing it that way.

The guidance helped one participant “keep my mind on track” (BR8).

Challenges

Participants identified several challenges with completing the practices. Setting time aside to practice daily was a challenge due to timing, forgetting (BR2 and BR15), traveling (BR7), having visitors (BR6), or other activities. One participant (BR2) described the following:

So, carving out the time is a little bit of a challenge. It’s not overwhelming, because the program is really not that demanding, really, when you think about the time. But you just get busy.

Health and other life challenges deprioritized the study for some participants (BR5). Technology difficulties were noted by 9% (2/22) of the participants, and another participant (1/22, 5%) encountered difficulties but described learning the technology. Documenting anxiety scores was identified as tedious by another individual.

Application of Skills in the Real World

Participants described multiple applications of the relaxation and breathing practices in the real world. The most common applications were using the skills when experiencing anxiety, tension, or stress or whenever a situation dictated it. Some described noticing tense or anxious feelings and then using the practices to alleviate the symptoms (BR10):

...when I began to feel periods of anxiety, I would either go to immediately and practice, or I’d set time aside [to practice].

Others described specific situations in which they applied the skills, including dentist appointments (BR4), at the hospital (BR9), or when having an operation (BR11). Some used the breathing or tensing and releasing tools while driving and in traffic or while waiting for VA appointments (eg, BR8 and BR13). Another veteran (BR16) described deep breathing when a task became more complicated or frustrating:

What helps me is that if I’m sitting there making breakfast and I see that it’s getting complicated, something is frustrating me that maybe I dropped something that immediately I will just close my eyes and breathe slow. And sometimes I can only breathe three and then, if I feel I need it, I just go into doing more. And it’s not a big conversation, I just do it.
Principal Findings

The findings of this pilot RCT of BREATHE compared with Healthy Living demonstrate the feasibility of conducting an RCT in that all participants were willing to be randomized and retention was similar between study arms at 12 weeks. Some variation in engagement with and acceptability of the interventions emerged, as discussed in the following sections, but this does not decrease the feasibility of conducting an RCT. The hypotheses were not supported with regard to a greater reduction in total anxiety symptoms or improvement in functioning (as measured using the VR-12 MCS score) in BREATHE compared with the psychoeducation control of Healthy Living. On the anxiety symptom subscales, participants in the Healthy Living condition experienced a greater reduction in somatic anxiety symptoms, but no differences were found for the other 2 subscales of cognitive and affective anxiety symptoms. There are several possible reasons for these findings. First, as observed in most previous late-life intervention studies, the inclusion of an active control may have diminished the apparent effect of relaxation. Furthermore, a growing body of research has focused on the importance of well-being–focused interventions, including yoga, nutrition, and other alternative interventions, and the presence of some of these elements may have contributed to the benefits that participants achieved in the Healthy Living intervention.
This intervention did include gentle stretching and some basic sleep and coping tips in addition to nutritional information, all of which may relate to physical and mental health while aging. It is possible that a single-component intervention was not effective for engaging participants across a 4-week period. Recent studies have demonstrated that CBT for depression and anxiety is effective at not only reducing anxiety but also preventing relapse over a 10-year period. In addition, the heterogeneity of anxiety disorders, as detected using structured interviews, and the range in anxiety severity based on baseline GAS scores may lead to attenuated findings with a single-component intervention rather than a multicomponent intervention that addresses varied facets of anxiety (ie, cognitive, somatic, and affective symptoms). Our exploratory aim examined factors related to home practices in the BREATHE condition and to intervention outcomes; these analyses revealed that greater self-reported anxiety symptom severity and greater medical comorbidity at baseline were associated with fewer progressive relaxation practices completed during the 4-week intervention. The qualitative findings helped us further probe the potential effect of practices. Qualitative findings suggested that, for some, practicing breathing, relaxation, or both was not viewed as enough to manage their anxiety and stress, dovetailing with the quantitative finding that participants with greater anxiety practiced less. Accordingly, BREATHE or Healthy Living may be better suited for individuals with mild to moderate anxiety, which should be tested directly in a future study. It is possible that older veterans with greater distress would have preferred having access to group or individual psychotherapy and may have had previous exposure to these modalities as a function of the integration of mental health into the Veterans Health Administration. Other challenges to practicing were recounted by participants, including difficulty finding the time to practice, having other things to do (eg, visitors and other activities), or worsening of health problems and experiencing acute health changes. Some participants reported a decline in practices after the coaching calls ended at the 4-week mark. Thus, the challenges with practices were 2-fold and included both adhering to the practices and maintaining a practice routine over time, which parallels the challenges of sustaining engagement with digital mental health interventions and with any behavioral intervention requiring practice. Perhaps improvements and variations in the types of relaxation are needed to maintain interest in practices over time. These findings raise the question of whether the home practice requirement is itself the issue. Integrating an ongoing relaxation practice into daily life while facing multiple medical conditions may have required extra effort and motivation from participants. The qualitative interview findings revealed that participants were able to personalize their practices based on their own needs, tailor their routine, and implement the relaxation procedures in their daily life when they needed them. Another possible future direction would be to provide multiple skills in a modular approach consistent with digital mental health interventions or to consider adding a session focused on motivational interviewing to the intervention.
Participants described using the skills in a myriad of settings, including when facing worries or anxiety, when in a stressful situation, or when simply waiting for an appointment, and they achieved one or more goals on average. Future studies should clarify the dose of PMR needed and whether preferences for psychotherapy compared with technology play a role in practice completion and overall adherence. Alternatively, while these interventions were designed to be widely accessible and scalable, it is possible that a more mechanistic consideration of breathing approaches for specific individuals may be needed (ie, precision medicine approaches). In addition, more engaging and adaptive technology–based interventions could be of benefit for late-life anxiety and lead to more sustained engagement with practices. Further refinement of goal setting to promote intervention engagement alongside skills to cope with anxiety is needed. On average, BREATHE and Healthy Living participants attained at least one goal at the end of treatment, but this could be strengthened using more directed, individualized outcome measures for the goals (eg, goal attainment scaling approaches).

Limitations

This study is not without limitations. One of the key limitations may be the heterogeneity of anxiety disorders and symptom presentation in a relatively small pilot study. Rather than including multiple types of anxiety disorders, focusing on specific symptoms (eg, worry that exceeds a particular threshold on a questionnaire) may be a better approach, as used in other studies. As this sample included veterans, we likely encountered higher rates of co-occurring PTSD compared with nonveteran samples. This comorbidity may have made the anxiety presentation more complex and difficult to treat using guided self-management interventions. Additional limitations include the small sample size (the study was designed as a pilot), the use of nonblinded assessors, and the absence of a validated measure of acceptability. Another limitation of our design was that only a portion of the outcome measures were assessed at the end of treatment (T2). This study took place during the earlier part of the COVID-19 pandemic, which led to some modifications in data collection methods and limited our ability to collect complete data, in particular from the lifestyle-adjusted ACS.

Comparison With Prior Work

These findings differ from those of earlier studies on progressive relaxation for late-life anxiety in that the effects of progressive relaxation were not as robust as previously reported. The use of a primarily male, ethnically and racially diverse military veteran sample may have contributed to some differences compared with earlier studies, in which older nonveteran White women made up much of the samples. The benefits derived from the Healthy Living intervention fit with the evidence for complementary and integrated health approaches for late-life anxiety and the Whole Health program in the Veterans Health Administration.

Conclusions

These findings suggest that guided self-management approaches to treating late-life anxiety in older veterans are feasible but that further refinement and study are needed to identify what works best for whom using a video-delivered format with remote coaching. Our findings suggest that a psychoeducation-based approach may help older adults with somatic anxiety symptoms.
While progressive relaxation was deemed feasible and enjoyable by most participants, those with mild to moderate anxiety symptom severity and fewer health problems were more likely to adhere to the recommended home practice. Thus, based on the qualitative feedback, the BREATHE intervention in particular might not be a good match for all older adults with anxiety disorders, because some participants needed a higher-intensity treatment and others experienced negative reactions to progressive relaxation, primarily “relaxation-induced anxiety”. Further work is needed to delineate the roles of intervention design factors and individual participant baseline characteristics in the effect of guided self-management approaches on late-life anxiety.
Selective neuroimmune modulation by type I interferon drives neuropathology and neurologic dysfunction following traumatic brain injury
96d460ab-ffef-4496-88c0-1742b3565bfc
10436463
Pathology[mh]
Traumatic brain injury (TBI) is a leading cause of death and disability through young adulthood; patients who survive their injury often develop chronic neurological disease . Despite the high burden of TBI, neuroprotective therapies do not exist. The current lack of treatment options stems from an incomplete understanding of the mechanisms that lead to ongoing neurologic dysfunction and degeneration after injury. Following TBI, cellular damage initiates acute immune reactivity, including activation of resident glial cells, infiltration of peripheral leukocytes, and increased production of soluble inflammatory mediators . While the initial surge in immune activation facilitates debris clearance and regeneration, chronic dysregulated immune cell reactivity may contribute to progressive neurodegeneration and pathology. Growing evidence suggests that microglia are key cellular mediators of chronic neurologic dysfunction following TBI, with microglial depletion resulting in decreased inflammation and reduced neuropathology . The specific mechanisms that promote chronic microglial and peripheral immune cell activation following traumatic brain injury remain unclear. One potential mechanism underlying sustained immune activation is type I interferon (IFN-I) signaling through the interferon-α/β receptor (IFNAR). IFNAR is encoded by two genes, IFNAR1 and IFNAR2, both of which are essential for receptor function. Type I IFNs are proteins that regulate the recruitment and effector functions of immune cells . Our lab has demonstrated that following TBI, the microglial transcriptome is highly enriched for type I interferon-stimulated genes (ISGs) . Others have shown upregulation of ISGs in the cortex and hippocampus following experimental TBI . The robust upregulation of the IFN-I pathway at both the tissue and immune cell level suggests that IFN-I signaling is a potential instigator of a sustained, dysregulated immune response and a source of neuropathology. Initial studies of IFN-I deficiency after TBI demonstrated reduced acute inflammatory gene and ISG expression, which was associated with improved neurologic function . However, these prior studies are limited by their emphasis on acute timepoints after TBI and by their lack of mechanistic data on how specific immune cell types respond to TBI. The objective of the present study is to determine the effect of IFN-I signaling on subacute and chronic neuroimmune activation, neuropathology, and neurologic function following TBI.
Animals
All studies were conducted on adult (2–5 months old) male C57BL/6J (#000664) or global IFNAR1-deficient mice (#028288; B6(Cg)-Ifnar1 tm1.2Ees/J, Ifnar1 null allele) purchased from Jackson Laboratory. The average age across all studies on the day of craniectomy was 2.5 months. Within each experiment, all groups were age matched. Average weight on the day of craniectomy was 26.0 ± 3.6 g. Mice were housed in the Animal Care Facility at the University of Iowa (Iowa City, IA) under a 12-h light-dark cycle with ad libitum access to food and water. After craniectomy and fluid percussion injury (FPI) or non-surgical control, all mice remained singly caged. All procedures performed in this study were in accordance with protocols approved by the Institutional Animal Care and Use Committee at the University of Iowa.
Fluid percussion injury
Lateral FPI was performed as previously described . On the day preceding injury, mice underwent craniectomy.
Animals were anesthetized with ketamine/xylazine (87 mg/kg ketamine and 12 mg/kg xylazine) via intraperitoneal injection. The head was then mounted in a stereotaxic frame, and a midline incision of the scalp was made for reflection of the skin and exposure of the underlying skull. A 3-mm OD handheld trephine (University of Pennsylvania Machine Shop) was used for craniectomy on the left parietal skull bone, centered between the lambda and bregma sutures and between the lateral skull edge and sagittal suture. A modified Luer-Lock hub was placed surrounding the craniectomy site and secured with cyanoacrylate glue (Loctite 760355). The hub was further secured with methyl-methacrylate dental cement (Jet Acrylic Liquid mixed with Perm Reline/Repair Resin) surrounding the bottom portion of the hub. The hub was filled with sterile saline and closed with a sterile intravenous cap to prevent dural exposure to the environment. The following day, mice underwent FPI. The pendulum angle of the FPI device was adjusted before use on each experimental group to achieve a peak pressure between 1.3 and 1.5 atmospheres (atm) when triggered against capped intravenous tubing. For experiments in this study, the pendulum angle varied between 10.8 and 11.8 degrees. Mice received 3% inhaled isoflurane in an induction chamber before being transferred to a nose cone, where the intravenous cap was removed and any air bubbles in the hub were eliminated. Once deeply anesthetized, mice were connected to the FPI device via 20-inch IV tubing and placed on their right side. The pendulum was released, generating a brief fluid pulse against the exposed dura. A Tektronix digital oscilloscope (TDS460A) was used to measure the duration and peak pressure of the fluid pulse. After injury, mice were placed on their backs, and their righting time was measured as an indicator of injury severity. After righting, mice were re-anesthetized with isoflurane, the Luer-Lock hub was removed, and the skin incision was sutured closed. After skin closure, anesthesia was discontinued, and animals were placed in a heated cage until recovered and ambulatory. Given that we were interested in studying moderate to severe traumatic brain injury, mice were included only if the duration of the righting reflex was > 4 min . Across all studies, the average righting time ± SD was 380 ± 79 s, which corresponded to an average peak pressure of 1.4 ± 0.05 atm. For Fig. , mice received sham injury, in which they underwent identical treatment through connection to the FPI device but were disconnected without triggering of the device. For the data presented in all the other figures, non-surgical controls were used; these mice received anesthesia and analgesia but did not undergo craniectomy.
RNAscope and immunohistochemistry
Mice were anesthetized with ketamine/xylazine and perfused with ice-cold saline followed by 4% paraformaldehyde (PFA) 7 or 31 days after TBI or control (no TBI). Dissected brains were post-fixed in 4% PFA at 4 °C overnight, then cryoprotected in a 30% sucrose solution until sinking. Brains were embedded in optimal cutting temperature (OCT) compound by the University of Iowa Central Microscopy Research Facility, and 10-µm coronal sections were prepared. Frozen sections from the injury epicenter were dried at 40 °C for 30 min prior to staining. The RNAscope Multiplex Fluorescent Detection Kit v2 was used per the manufacturer's instructions to stain for mRNA expression of Cxcl10 (ACD, 408921) or H2-K1 (ACD, 1049831-C1).
For all immunohistochemistry, slides were placed in a blocking/extraction solution (0.5% Triton X-100 and 10% goat serum in 1× PBS) for 1 h at room temperature. After blocking, tissues were incubated overnight at 4 °C in rabbit anti-IBA1 (1:200; Wako Chemicals 019-17941), rabbit anti-CD8α (1:500; Abcam 217344), or rabbit anti-NeuN (1:200; Abcam 177487) primary antibody diluted in blocking/extraction solution. Alexa Fluor-488 or -568 conjugated goat anti-rabbit secondary antibody (Life Technologies) was applied at a 1:500 dilution for 1 h at room temperature. Fluorescently stained tissue slices were imaged using a slide-scanning microscope (Olympus VS120). Regions of interest were demarcated using OlyVIA software (Olympus). ImageJ was used to perform proportional area analysis (IBA1+) and quantification of NeuN+ and Cxcl10+ cells.
RNA isolation and NanoString gene expression analysis
Seven or 31 days after control or TBI, mice were euthanized with isoflurane prior to decapitation and removal of the brains. Brain tissue was dissected, collected by region, and snap-frozen in liquid nitrogen. Total RNA was extracted from control or TBI brain regions using TRIzol (Invitrogen, Carlsbad, CA) per the manufacturer's instructions. RNA quantity and quality were evaluated with the Agilent 2100 Bioanalyzer. Gene expression was quantified using the NanoString nCounter Neuroinflammation panel. Data were normalized using nSolver software with a background threshold count value of 20. The housekeeping genes Gusb and Asb10 were flagged due to differential expression across treatment groups and were excluded. The geometric mean was used to compute the normalization factor. Differential gene expression analysis was carried out with DESeq2. The computed p-values for all genes were adjusted to control the FDR using the Benjamini–Hochberg procedure. Genes with a log2 fold change > 0.5 and an adjusted p-value < 0.05 were considered differentially expressed genes (DEGs).
Quantitative real-time PCR
First-strand complementary DNA (cDNA) was synthesized with SuperScript III reverse transcriptase (Invitrogen). Amplified cDNAs were diluted 1:15 in ultra-pure water and subjected to real-time polymerase chain reaction (PCR) on an Applied Biosystems Model 7900HT with TaqMan Universal PCR Mastermix (Applied Biosystems, Foster City, CA) and the following TaqMan probes: Irf7 (Mm00516793), Rsad2 (Mm00491265), Slfn8 (Mm00824405), Ddx58 (Mm01216853), Ly6a (Mm00726565), Ifih1 (Mm00459183), Ifit3 (Mm01704846), Ifitm3 (Mm00847057), Zbp1 (Mm01247052), Cxcl10 (Mm00445235), H2-K1 (Mm01612247), β2m (Mm00437762), Tap1 (Mm00443188), and Gapdh (Mm99999915). PCR reactions were conducted as follows: 2 min at 50 °C, 10 min at 95 °C, followed by 40 cycles of amplification at 95 °C for 15 s and 60 °C for 60 s. Biologic samples were run in duplicate or triplicate. Genes of interest were normalized to the endogenous control Gapdh . Data were analyzed using the comparative cycle threshold method, and results are expressed as fold difference from WT controls.
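For reference, the comparative cycle threshold calculation has the standard 2^(−ΔΔCt) form shown below; the notation here is generic rather than taken from this study's analysis scripts.

\[
\Delta C_t = C_t^{\text{target}} - C_t^{Gapdh}, \qquad
\Delta\Delta C_t = \Delta C_t^{\text{sample}} - \overline{\Delta C_t}_{\text{WT control}}, \qquad
\text{fold difference} = 2^{-\Delta\Delta C_t}
\]

Under this convention, a fold difference of 1 corresponds to expression equal to the mean of the WT control group, which is consistent with results being reported relative to WT controls.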
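To illustrate the DEG thresholding used in the NanoString analysis above (Benjamini–Hochberg FDR adjustment followed by the log2 fold change and adjusted p-value cutoffs), a minimal Python sketch is given below. The input file and column names are hypothetical, and the actual analysis was performed with nSolver and DESeq2 as described.

```python
import pandas as pd
from statsmodels.stats.multitest import multipletests

# Hypothetical per-gene results table (e.g., exported from the DESeq2 run):
# columns: gene, log2FoldChange, pvalue
results = pd.read_csv("nanostring_wt_tbi_vs_ifnarko_tbi.csv")

# Benjamini-Hochberg adjustment to control the false discovery rate
_, padj, _, _ = multipletests(results["pvalue"], alpha=0.05, method="fdr_bh")
results["padj"] = padj

# DEG criteria from the text: log2 fold change > 0.5 and adjusted p-value < 0.05
degs = results[(results["log2FoldChange"] > 0.5) & (results["padj"] < 0.05)]
print(f"{len(degs)} differentially expressed genes")
```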
Open field
Hyperactivity and anxiety-like behavior were assessed using the open field test. The open field apparatus (SD Instruments) consisted of a single enclosure 20 × 20 × 15 inches (0.51 × 0.51 × 0.38 m) with a center area of 10 × 10 inches (0.25 × 0.25 m). Fourteen days after control treatment or TBI, open field testing was performed. During testing, mice were placed individually in the center of the open field enclosure, and they moved freely for 5 min while their locomotor activity was recorded by a video camera mounted above the apparatus. Overall ambulatory movement was quantified by the total body distance traveled during the trial. Total distance traveled and time spent in the center were reported automatically by the ANY-Maze Video Tracking System (Stoelting, IL, USA). After each test, the apparatus was thoroughly cleaned with 75% ethanol.
Barnes maze
Cognitive function was assessed using the Barnes maze (SD Instruments). The Barnes maze was a white circular table, 36 inches in diameter, with 20 holes, 2 inches in diameter, evenly spaced around the perimeter. The table was brightly lit and open, motivating the test subjects to learn the location of the dark escape box located under one of the 20 holes. The maze was enclosed by four different visual cues hung on each wall surrounding the table. ANY-maze video tracking was used for data collection. Three weeks after control or TBI, acquisition trials were conducted (four trials per day) for 4 days, during which an escape box was placed under the target hole. Each trial ended when the mouse entered the target hole or after 80 s had elapsed. Mice that did not locate the escape box were guided to the target hole, where they entered the escape box. All mice were allowed to remain in the escape box for 15 s before removal from the apparatus. Average latency to the escape hole was recorded for each acquisition day. On day 5 of Barnes maze testing, a probe trial was conducted to assess memory. The escape box was removed from under the target hole, and mice were placed in the maze for 60 s. Each mouse underwent one probe trial, during which the time spent in a 2-cm-diameter zone around the target hole, the latency to first escape-zone entrance, and the distance to first escape-zone entrance were recorded. Mice that displayed > 20 s without video tracking (2 subjects) on the probe trial were excluded from probe trial data analysis.
Flow cytometry
Three or ten days after TBI, mice were euthanized and their brains were removed. Mononuclear cells were isolated by digesting brain tissue in Collagenase D/DNase (Sigma, 1108886601, D4513) for 45 min at 37 °C, dissociating through a 70-µm filter, and isolating the cells at the interface of a 70%/37% Percoll gradient (GE, 17-0891-01) after a 20 min, 25 °C spin at 2,000 r.p.m. Single-cell suspensions were plated and stained for 20 min at 4 °C with a combination of fluorescently labeled antibodies specific for surface markers. Cells were enumerated with counting beads (ThermoFisher, C36950) and then analyzed by flow cytometry. All data were acquired using an LSRFortessa™ Cell Analyzer (BD Bioscience) and analyzed using FlowJo software, v.10.8.1 (FlowJo LLC).
MRI
MRI images were obtained at baseline and 7 days post injury using the 7.0T GE 901 Discovery MRI Small Animal Scanner housed at the University of Iowa MR Research Facility. Mice were placed in an induction chamber with isoflurane (3%) until anesthetized; each mouse was then transferred to the scanner and placed in a specially designed imaging tray with a nose cone for continued anesthesia (1–1.5%). Diffusion tensor imaging (DTI) was performed, and each scan lasted approximately 30 min. DTI was used to evaluate the microstructural integrity of the white matter, including evaluation of fractional anisotropy (FA).
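For reference, FA is computed from the eigenvalues of the diffusion tensor; the standard definition (independent of the particular pipeline used here) is:

\[
\mathrm{FA} \;=\; \sqrt{\tfrac{3}{2}}\,
\sqrt{\frac{(\lambda_1-\bar{\lambda})^2+(\lambda_2-\bar{\lambda})^2+(\lambda_3-\bar{\lambda})^2}
{\lambda_1^{2}+\lambda_2^{2}+\lambda_3^{2}}},
\qquad \bar{\lambda}=\frac{\lambda_1+\lambda_2+\lambda_3}{3}
\]

FA ranges from 0 (fully isotropic diffusion) to 1 (diffusion restricted to a single axis), which is why reduced FA along white matter tracts is interpreted as a loss of axonal integrity.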
Voxelwise statistical analysis of the FA was carried out using a version of tract-based spatial statistics (TBSS) that was modified to work with mouse images . The mean FA image was created and thresholded at 0.2 to create a mean FA skeleton, which represents the centers of all tracts common to the group. Each subject's aligned FA data were then projected onto this skeleton, and the resulting data were fed into voxelwise cross-subject statistics. Permutation testing (500 permutations) was performed using the FSL randomise function to control for false positives, and threshold-free cluster enhancement with p < 0.05 was used to assess the significance of the contrasts. T-statistics were calculated between fiber tract skeletons from WT and IFNAR KO animals to identify changes following TBI relative to baseline scans. These t-statistics were then used as the input for calculation of the f-statistic, as previously described , to determine whether there was a significant interaction between genotype and the effect of injury on fiber tract integrity.
Statistical analysis
Statistical analysis was performed using Prism 9.5.1 (GraphPad Software), except for the NanoString and MRI analyses, which are described in detail above. Distribution normality was determined using the Shapiro-Wilk test. For experiments containing two groups, analysis was done using a two-tailed, unpaired t-test for normally distributed data or the Mann-Whitney U test for non-normally distributed data. For experiments utilizing four experimental groups consisting of IFNAR KO and WT mice (qPCR, flow cytometry, immunohistochemistry, and single-day behavior experiments), statistical analysis was done using two-way ANOVA accounting for the independent variables of genotype (IFNAR KO vs. WT) and injury status (control vs. TBI). If significant effects were detected in the interaction or in the main effects of injury or genotype, post-hoc multiple comparisons were done by Tukey's test. For multiday Barnes maze testing, a two-way repeated-measures ANOVA was used to evaluate the impact of genotype and injury group across time. This was followed by Tukey's test for post-hoc multiple comparisons if significant effects existed in the interaction or main effects. A value of p < 0.05 was considered statistically significant. All data are displayed as mean ± SEM. WT and IFNAR KO mice were randomly assigned to control or TBI groups. All testers were blinded to genotype and injury status. Results were combined from multiple TBI experiments for each of the studied endpoints to demonstrate reproducibility.
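As an illustration of the genotype-by-injury factorial design described above, a minimal sketch of a two-way ANOVA with Tukey post-hoc comparisons is given below. The data frame, group sizes, and values are hypothetical, and the actual analyses were run in Prism as stated.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical per-mouse measurements (e.g., relative gene expression or cell counts)
df = pd.DataFrame({
    "genotype": ["WT"] * 6 + ["KO"] * 6,
    "injury":   ["control", "control", "control", "TBI", "TBI", "TBI"] * 2,
    "value":    [1.0, 1.2, 0.9, 3.4, 2.9, 3.1, 1.1, 0.8, 1.0, 1.4, 1.2, 1.3],
})

# Two-way ANOVA with main effects of genotype and injury plus their interaction
model = smf.ols("value ~ C(genotype) * C(injury)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey post-hoc comparisons across the four genotype-by-injury groups,
# interpreted only if the interaction or a main effect is significant
df["group"] = df["genotype"] + "_" + df["injury"]
print(pairwise_tukeyhsd(df["value"], df["group"]))
```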
Activation of the type I interferon pathway is persistent and widespread in the brain after TBI
We previously demonstrated that the transcriptional response of microglia is highly enriched for expression of genes in the type I interferon pathway (IFN-I) at seven days following TBI . We now sought to characterize the time course and spatial localization of IFN-I pathway upregulation following TBI. We selected a panel of interferon-stimulated genes (ISGs) that were upregulated in microglia seven days post-injury (DPI), including Ifit3, Ifi204, Stat1, Irf7, and Axl . Using quantitative real-time PCR (qPCR), we evaluated expression of these ISGs in regional brain tissue from 1 DPI through 21 DPI. We found that in both the ipsilateral perilesional cortex and hippocampus, ISGs were elevated by 1 DPI, and several ISGs remained elevated at 21 DPI (Fig. , A and B). We next performed dual RNAscope and immunohistochemistry (IHC) to assess the spatial location and confirm the cellular source of Irf7, a key transcription factor in IFN-I pathway activation.
We found that at 7 DPI, reactive microglia were a substantial source of Irf7 expression in widespread regions of the brain (Fig. , C-F). These reactive microglia were present in the cortex and hippocampus as well as in two areas known to be affected by traumatic axonal injury, the corpus callosum and thalamus.
IFNAR deficiency modulates neuroinflammatory gene expression following TBI
Given the pleiotropic and immune-modulating functions of IFN-I, we next sought to assess the impact of IFNAR deficiency on neuroinflammatory gene expression following TBI . We used IFNAR1 KO mice, in which IFNAR function is completely lost. The NanoString neuroinflammation panel was used to evaluate the expression of 770 genes involved in primary immune function and inflammatory processes. TBI was induced in wild-type and IFN-I signaling-deficient mice by lateral fluid percussion injury (FPI). We used pairwise comparison of WT TBI vs. IFNAR KO TBI to determine differentially expressed genes (DEGs). There were 10 genes (Irf7, Rsad2, Slfn8, Oas1g, Ddx58, Ly6a, Ifih1, Ifitm3, Zbp1, Cxcl10) that were significantly decreased in IFNAR KO TBI compared to WT TBI (Fig. A). It is important to note that there were no significant DEGs when comparing WT and IFNAR KO uninjured controls. All 10 DEGs have been described as interferon-stimulated genes (ISGs). These genes have many functions, including positive regulation of anti-viral immune activation (Irf7), nucleic acid sensing (Ddx58, Ifih1, Oas1g, Zbp1), T cell activation and differentiation (Rsad2, Slfn8), and chemoattraction (Cxcl10). However, broad suppression of the inflammatory response to TBI did not occur: IFNAR KO mice still had 366 DEGs in response to TBI, and these included many inflammation-related genes that have been described as upregulated following TBI (Supplemental Fig. , A and B, Table ). In conclusion, we found that IFNAR deficiency resulted in the modulation of a specific subset of inflammatory genes following TBI.
To determine whether IFNAR signaling also impacts the chronic transcriptional response to TBI, we collected hippocampi from WT and IFNAR KO mice at thirty-one days post-injury (31 DPI) and performed qPCR (Fig. B). We found that expression of six of the ten DEGs identified at 7 DPI remained elevated in WT TBI mice at 31 DPI compared to WT controls. This increased expression was IFNAR-dependent, as IFNAR KO TBI mice had significantly lower expression of all six genes compared to WT TBI mice (Irf7, Rsad2, Slfn8, Ddx58, Zbp1, Cxcl10; p < 0.004). Overall, IFN-I pathway genes remained chronically elevated post-TBI, and this elevation was prevented by IFNAR deficiency. The specific functions of these IFN-I pathway genes may provide clues as to how IFNAR deficiency alters outcomes following TBI. One of the TBI-induced genes modulated by IFNAR deficiency, Cxcl10, is well recognized for its involvement in a variety of neurologic diseases . CXCL10 is a potent chemokine known to induce microglial reactivity and chemotaxis as well as recruitment of peripheral leukocytes to the central nervous system (CNS). To test the validity of our NanoString results and assess the spatial distribution of Cxcl10, we performed RNAscope in situ hybridization on brain tissue sections at 7 DPI. In accordance with our NanoString results, we observed upregulation of Cxcl10 in WT TBI mice, with minimal Cxcl10 expression in IFNAR KO TBI mice as well as in both WT and IFNAR KO control mice.
WT TBI animals had increased Cxcl10 expression in the perilesional cortex and the ipsilesional corpus callosum, hippocampus, and thalamus (Fig. , B-E). Interestingly, the thalamus displayed the most robust Cxcl10 expression of any region after WT TBI. Although IFNAR KO TBI mice did have detectable Cxcl10 expression in the perilesional cortex, corpus callosum, and hippocampus (Fig. , G-I), Cxcl10 staining was notably absent in the ipsilesional thalamus of IFNAR KO TBI subjects (Fig. J). Importantly, all regions examined had significantly fewer Cxcl10+ cells in IFNAR KO TBI mice compared to WT TBI mice (Fig. , K-N). Overall, Cxcl10 expression is increased throughout the brain 7 days following TBI, and this expression is substantially reduced in IFNAR-deficient mice following injury.
IFNAR deficiency reduces microgliosis following TBI
After injury, reactive microglia migrate to sites of injury, where they serve many functions including phagocytosis of debris, production of cytokines and chemokines, and antigen presentation . Given that microglia both produce and respond to type I interferons, we were interested in the effect of IFNAR deficiency on microgliosis following brain injury. To assess changes in microglial morphology and/or accumulation, we used IHC to stain for IBA1, a marker of microglia and macrophages, and calculated the proportional area of IBA1 staining 31 days following TBI. As expected, WT mice had increased IBA1 staining in the thalamus 31 days following TBI (Fig. ). IFNAR KO TBI mice had greater thalamic microglial staining compared to IFNAR KO controls, but this injury-induced response was significantly smaller than in WT TBI mice (Fig. F). Neither WT TBI nor IFNAR KO TBI mice showed a difference in hippocampal IBA1 staining compared to their respective uninjured controls (Fig. ). In summary, IFNAR deficiency decreased microgliosis 31 days after TBI.
IFNAR deficiency reduces expression of tissue and microglial-specific MHC class I genes following TBI
In addition to evaluating microglial proportional area, we were interested in assessing the impact of IFNAR deficiency on microglial function following TBI. A key function of microglia is antigen presentation . We previously observed that several MHC class I molecules were upregulated in microglia following TBI . As many antigen-presentation and processing molecules are type I interferon-stimulated genes, we hypothesized that IFNAR deficiency would reduce MHC class I molecule expression in microglia following TBI. We first examined the impact of IFNAR deficiency on hippocampal expression of the MHC class I genes H2-K1, β2m, and Tap1 at 7 and 31 DPI (Fig. , A-C). TBI resulted in increased hippocampal expression of H2-K1, β2m, and Tap1 in WT mice at both timepoints. Conversely, TBI-induced upregulation of these MHC class I molecules was prevented in IFNAR KO mice. IFNAR KO TBI mice showed no increase in H2-K1, β2m, or Tap1 expression at either time point compared to controls. Next, we combined fluorescent in situ hybridization with immunofluorescence to evaluate whether the IFNAR-dependent reduction in MHC class I antigen processing and presentation molecules occurred specifically in reactive microglia. We used RNAscope to stain for H2-K1 mRNA and IBA1 immunostaining to identify microglia in tissue sections obtained from WT and IFNAR KO mice at 7 DPI. H2-K1 expression was minimal in both WT and IFNAR KO control subjects.
Following TBI in WT mice, H2-K1 upregulation was seen in the ipsilesional cortex, corpus callosum, hippocampus, and thalamus (Fig. , D-H) and colocalized with IBA1. The ipsilesional thalamus displayed the greatest injury-induced expression. In contrast, IFNAR KO TBI mice had minimal H2-K1 staining compared to WT TBI animals. In IFNAR KO subjects, the perilesional cortex had the most H2-K1 expression, which similarly co-localized with IBA1 (Fig. , I-M). Overall, IFNAR deficiency decreased microglial MHC class I molecule expression following TBI, suggesting that IFNAR deficiency may alter antigen presentation, a key function of reactive microglia following TBI.
IFNAR deficiency reduces monocyte populations following TBI
The decrease in chemokine expression in IFNAR-deficient mice following TBI suggested that IFNAR deficiency may impact the recruitment of peripheral immune cells to the CNS. To better understand how IFNAR deficiency alters the dynamics of immune cell populations in the CNS after TBI, we used flow cytometry to enumerate myeloid cell populations from whole brains at 3 and 10 DPI. Our gating strategy is shown in Fig. A. Neither injury nor genotype significantly altered the number of microglia (Fig. B). The total population of CD11b+ CD45-high leukocytes was significantly increased at 3 DPI in WT mice. IFNAR KO TBI mice showed no significant increase in the number of CD11b+ CD45-high leukocytes at 3 or 10 DPI compared to genotype controls (Fig. C). To evaluate leukocyte identity, we stained for cell markers that demarcate neutrophils and monocytes. Neutrophils have previously been shown to rapidly infiltrate the brain after injury and to decline in number in the days following injury . Our results were consistent with this understanding: by 3 and 10 DPI, neither injury nor genotype significantly altered the number of neutrophils (Fig. D). Monocytes also infiltrate the brain after injury, classically peaking in number at 3–4 days post injury, and include both inflammatory (LY6C+) and patrolling (LY6C-) monocytes . In our model, WT TBI mice had increased LY6C+ and LY6C- monocytes at 3 DPI, whereas IFNAR KO TBI mice were no different from genotypic uninjured controls. These results are evidence of decreased monocyte accumulation in IFNAR KO TBI compared to WT TBI mice (Fig. , E and F).
IFNAR deficiency reduces T cell accumulation following TBI
Decreased expression of the chemokine CXCL10 and of MHC class I presentation molecules, in tandem with reduced infiltration of monocyte populations following TBI, led us to hypothesize that IFNAR deficiency may also affect T cell accumulation following TBI. To investigate this question, we used flow cytometry to quantify the total number of CD4+ and CD8+ T cells in the whole brain at 3 and 10 DPI (Fig. A). In agreement with prior studies , WT TBI mice trended towards elevated CD4+ T cell counts at both 3 and 10 DPI compared to uninjured genotypic controls. However, an increase in CD4+ T cells was not seen in IFNAR KO TBI mice (Fig. B). CD8+ T cells were significantly increased in WT TBI mice at 10 DPI. In contrast, there was no TBI-induced increase in CD8+ T cells in IFNAR KO mice compared to IFNAR KO controls (Fig. C). Overall, IFNAR deficiency reduced the accumulation of T cells in the brain following TBI, with the greatest effect on CD8+ T cells. To gain a better understanding of where CD8+ T cells act in the brain after injury, we stained for CD8α at 7 DPI.
CD8α+ T cells were present in the perilesional cortex and the ipsilesional hippocampus, corpus callosum, and thalamus (Fig. , D-H), suggesting that they are active in focal regions around the injury, in white matter tracts, and in subcortical structures such as the hippocampus and thalamus.
IFNAR deficiency reduces neuronal loss and white matter disruption following TBI
Neuronal death and degeneration are well documented after traumatic brain injury and may contribute to cognitive dysfunction . To determine whether IFNAR deficiency attenuates injury-induced neuronal loss, we stained for the neuronal marker NeuN in WT and IFNAR KO tissue sections obtained at 7 and 31 DPI. The thalamus was chosen as the region of interest in light of the IFNAR-dependent thalamic expression of Cxcl10 and H2-K1 and the microglial reactivity observed in the thalamus. At both 7 and 31 DPI, WT TBI mice had significantly fewer thalamic neurons compared to uninjured controls (Fig. ). IFNAR KO TBI mice showed no decrease in thalamic neuronal counts compared to uninjured controls at either timepoint and had significantly more thalamic neurons compared to WT TBI mice at 7 DPI (Fig. ). Overall, IFNAR deficiency attenuated thalamic neuronal loss at subacute and chronic timepoints following injury. Clinically, traumatic brain injury often involves both focal and diffuse injuries to the brain . Lateral FPI has the advantage of mirroring the mixed modality of brain injury that occurs in humans, making it possible to also evaluate how IFNAR deficiency affects white matter damage after TBI. Diffusion tensor imaging followed by tract-based spatial statistics was used to assess white matter disruption seven days following TBI. Fractional anisotropy, an indicator of axonal integrity, was significantly decreased in WT TBI mice compared to IFNAR KO TBI mice (3,803 significantly decreased voxels; p = 0.046) in perilesional white matter tracts (Fig. C). Based on this evidence, we conclude that IFNAR deficiency reduced injury-induced disruption of the white matter at seven days post-TBI.
IFNAR deficiency ameliorates TBI-induced neurobehavioral dysfunction
To determine the role of interferon signaling in injury-induced neurologic dysfunction, we performed behavior testing in WT and IFNAR KO mice 2–3 weeks following TBI or in uninjured controls (Fig. A). We first used the open field test to assess hyperactivity and anxiety phenotypes 14 days following TBI. As shown in Fig. B, WT TBI mice displayed injury-induced hyperactivity, with significantly increased total ambulation compared to WT control mice. In contrast, IFNAR KO mice did not develop injury-induced hyperactivity. To assess anxiety-like behavior, the total time in the center of the open field chamber was recorded for the duration of the trial. TBI did not increase anxiety-like behavior in either genotype, with no difference from controls in time spent in the center of the open field (Fig. C). Next, we assessed spatial learning and memory using the Barnes maze. Testing was initiated on day 20 post-injury and extended through day 24. Spatial learning was evaluated during four days of Barnes maze training (Fig. D). The main effect of time was significant, while neither injury nor genotype significantly altered training-phase performance. On the fifth day of Barnes maze testing, spatial memory was assessed during the probe trial (Fig. E). There were no significant differences in total time spent around the escape hole or latency to first entry.
The distance traveled en route to the first escape-zone entry was analyzed as a measure of path efficiency and showed a significant interaction between injury and genotype (two-way ANOVA, F(1, 57) = 4.689, p < 0.05). On post-hoc multiple comparison testing, WT TBI mice trended towards a less efficient path when compared to WT control mice. In contrast, IFNAR KO TBI mice showed no difference compared to uninjured controls. Collectively, neurobehavioral testing indicated that mice with IFNAR deficiency were protected from injury-induced neurologic dysfunction.
In contrast, IFNAR KO mice did not develop injury-induced hyperactivity. To assess anxiety-like behavior, the total time in the center of the open field chamber was recorded for the duration of the trial. TBI did not increase anxiety-like behavior in either genotype, with no difference from controls in time spent in the center of the open field (Fig. C). Next, we assessed spatial learning and memory using the Barnes maze. Testing was initiated on day 20 post-injury and extended through day 24. Spatial learning was evaluated during four days of Barnes maze training (Fig. D). The main effect of time was significant, while neither injury nor genotype significantly altered training phase performance. On the fifth day of Barnes maze testing, spatial memory was assessed during the probe trial (Fig. E). There were no significant differences in total time spent around the escape hole or latency to first entry. The distance traveled en route to the first escape-zone entry was analyzed as a measure of path efficiency and showed a significant interaction between injury and genotype (two-way ANOVA, F(1, 57) = 4.689, p < 0.05). On post-hoc multiple comparison testing, WT TBI mice trended towards a less efficient path when compared to WT control mice. In contrast, IFNAR KO TBI mice showed no difference compared to uninjured controls. Collectively, neurobehavioral testing indicated that mice with IFNAR deficiency were protected from injury-induced neurologic dysfunction. The IFN-I pathway is strongly upregulated across a wide spectrum of neurodegenerative diseases . This robust response is the subject of increasing study focused on the neurologic impact of IFN-I activation . In TBI, we and others have demonstrated significant IFN-I activation both at the tissue level and in specific immune cell populations, including microglia and astrocytes . However, the effects of IFN-I signaling on the neuroimmune response to TBI and the mechanisms by which this response contributes to TBI-induced neuropathology are poorly understood. In this study, we demonstrate that IFN-I signaling drives expression of a specific subset of neuroimmune genes, alters microglial reactivity, and results in the accumulation of peripheral leukocytes following TBI. Suppression of these neuroimmune responses by inactivation of the IFN-I receptor (IFNAR) was accompanied by reduced neuronal loss and white matter disruption and improved neurologic function following TBI. Our study highlights the therapeutic potential of IFNAR blockade and targeted immunomodulation following TBI. The neuroimmune response to TBI is temporally dynamic and context-specific, with roles in both injury recovery and secondary neurotoxicity. Unravelling this complexity requires ongoing study to better understand the molecular signals that drive these diverse effects. Past work evaluating the impact of IFN-I signaling on the neuroimmune response to TBI has focused on small panels of genes, looking at their expression predominantly in the acute phase following TBI . To overcome this limitation, we performed multiplex gene analysis using the NanoString Neuroinflammatory panel, allowing broad, unbiased screening of over 700 neuroinflammatory genes in WT and IFNAR KO mice following TBI. We also evaluated expression changes at both subacute and chronic timepoints.
Somewhat unexpectedly, we found that both IFNAR-deficient TBI and WT TBI mice—relative to their uninjured control counterparts—had over 300 injury-induced DEGs at 7 DPI, with many DEGs overlapping between the two genotypes. However, when comparing WT TBI and IFNAR-deficient TBI mice, we found that IFNAR deficiency potently suppressed a subset of 10 TBI-induced genes previously identified as type I interferon-stimulated genes. This selective modulation differs from the findings of a prior study on Alzheimer’s disease, in which IFNAR deficiency resulted in suppression of both ISGs and complement pathway genes . Our findings are consistent with work in a mouse model of Parkinson’s disease; in this model, deficiency in STING, an upstream regulator of IFN-I, selectively modulated ISGs but did not prevent upregulated expression of complement and other inflammatory pathway genes . These disease-specific responses highlight the importance of context in driving neuroimmune regulation. In the case of TBI, the acute neurologic insult is more severe than in neurodegenerative diseases and, consequently, may trigger inflammation through activation of multiple, redundant mechanisms. It is important to note, however, that in both our study and the study of a Parkinson’s disease model, the selective prevention of IFN-I pathway activation was sufficient to improve neuropathology and neurobehavioral outcomes . In fact, the selective transcriptional alteration mediated by IFNAR deficiency may be a vital asset for the development of highly specific IFN-I-targeted therapies for TBI. As knowledge of the diverse states and functions of CNS immune cells has increased, there has been growing recognition of the importance of targeted vs. broad manipulation of the immune response in the setting of neurologic disease. One cause of past failures of anti-inflammatory therapies following TBI may have been the broad immune cell suppression arising from these treatments, which would eliminate not only toxic but also reparative functions . Further study is needed to identify whether selective targeting of type I interferon signaling is sufficient and optimal for ameliorating harmful, dysregulated immune responses following TBI. As the primary prenatally seeded immune cells in the CNS, microglia are important effector cells of the neuroimmune response following brain injury. While there is growing evidence for a pathologic role of IFN-I-activated microglia in chronic neurodegenerative diseases, in TBI the effects of IFN-I signaling on microglial accumulation and diverse microglial phenotypes are less understood . One TBI study reported increased microglia accumulation in IFNAR-deficient mice 24 h post-injury , while another demonstrated decreased acute microgliosis in mice deficient in STING, an upstream stimulator of IFN-I . As the acute microglial response is likely necessary for debris clearance and other repair mechanisms, we aimed to study the effect of IFNAR signaling on the chronic microglial response. Additionally, we sought to address the impact of IFN-I on specific microglial functions. In our study, we found that IFNAR-deficient mice had significantly decreased regional microglial area at 31 DPI, with the greatest effect in the thalamus. We also demonstrated that type I IFN signaling drives microglial expression of MHC class I antigen processing and presentation factors following TBI. This alteration in microglial reactivity may be an important mechanism contributing to secondary injury following TBI.
The requirement of microglial MHC class I antigen presentation for cytotoxic CD8+ T cell infiltration has been shown in CNS viral infection models . Furthermore, a prior study in TBI demonstrated that depletion of CD8+ T cells resulted in improved neurologic function following TBI . In our study, we showed that the decreased expression of microglial MHC class I molecules in IFNAR KO mice was associated with decreased brain accumulation of CD8+ T cells, as well as with decreased neuronal loss, white matter injury, and neurobehavioral dysfunction following TBI. Our ongoing studies will more directly evaluate the impact of IFN-I-stimulated microglia on CD8+ T cell recruitment and subsequent neurotoxicity following TBI. To modulate the IFN-I signaling pathway therapeutically, it is essential to know whether and how IFN-I impacts the multiple mechanisms of neuropathology involved in brain injury and disease. For example, in models of Alzheimer’s disease and frontotemporal dementia, IFN-I-stimulated microglia cause pathologic synaptic loss . Additionally, in a Parkinson’s disease model, IFN-I pathway activation resulted in dopaminergic neuron loss . Following TBI, neuronal death, axonal injury, synaptic dysfunction, and white matter injury can all contribute to neurologic dysfunction. In this study, we demonstrated that thalamic neuron loss and white matter pathology were suppressed when IFN-I signaling was disrupted. This builds upon prior work that demonstrated a protective effect of type I IFN deficiency on hippocampal neurodegeneration and cortical volume loss following TBI . In our present study, loss of IFN-I signaling led to altered microglial reactivity, coupled with a decrease in brain monocytes and T cells following TBI. Previous studies using cell depletion strategies have implicated activated monocytes, microglia, and T cells as contributors to chronic neuropathology following TBI . As such, it is likely that the neuroprotective effects of IFNAR deficiency are due to impacts on both centrally- and peripherally-derived immune cells—the result of altered immune cell crosstalk that influences neuroprotective and neurotoxic states. Our future work will seek to identify the specific ISGs that mediate TBI neuropathology. One molecule of particular promise is CXCL10. CXCL10 is known to be upregulated after TBI, and in our study, its expression was potently inhibited by IFNAR deficiency . The exact role of CXCL10 in TBI neuropathology is unknown; however, one study found that a single dose of intranasal CXCL10-siRNA prior to TBI resulted in decreased infiltration of Th1 cells . Another study in an experimental model of demyelinating disease found that CXCL10 deficiency resulted in reduced microglial activation and amelioration of chronic neurotoxicity . CXCL10 blockade following TBI warrants further investigation to dissect its role in TBI pathology. ZBP1 is another candidate molecule: it was upregulated by TBI in WT mice, but its induction was prevented in our IFNAR KO TBI mice. While ZBP1 was initially identified as an innate immune sensor that drives IFN-I upregulation, it is now also recognized to induce inflammatory cell death and thus warrants investigation as a potential effector molecule of neuropathology following TBI . Our study has several limitations. To study the effect of IFNAR deficiency, we used a global IFNAR knockout mouse line. As with any global knockout mouse model, this carries the risk of altered neural and immunological development.
Previous studies have shown decreased hippocampal synapse number and synaptic plasticity in healthy IFNAR-deficient mice . Others have observed elevated levels of peripheral myeloid lineage cells in IFNAR-deficient mice . Additionally, while our studies did not find any significant behavioral differences when comparing IFNAR KO and wildtype controls, others have described spatial learning impairment in IFNAR-deficient mice . Future studies will utilize inducible IFNAR knockout and pharmacologic blockade to avoid the developmental effects seen in the global knockout and to determine whether IFN-I mediates differential effects on TBI pathogenesis at acute, subacute, and chronic timepoints. We will also use cell-type-specific IFNAR KO models to dissect the cell-specific contributions to type I interferon-driven neuropathology following TBI. Importantly, this will allow for the distinction between IFN-I-driven microglial and peripheral leukocyte-mediated effects. Finally, a limitation of this study is the exclusive use of male mice. It is well established that sex is a relevant biologic variable after TBI . Others have demonstrated sex differences in the neuroimmune response to TBI, including decreased inflammatory activation and reduced infiltration of myeloid cells in female mice . Further study is needed to evaluate IFN-I signaling in both sexes. In summary, this study shows that IFN-I signaling is broadly and persistently upregulated following TBI. The rapid and persistent upregulation of the IFN-I pathway suggests a broad therapeutic window for intervention. IFNAR deficiency results in selective modulation of both the central and peripheral neuroimmune responses. This includes reduced neuroinflammatory gene expression, decreased regional microglial reactivity and transcription of antigen presentation genes, and decreased monocyte and T cell accumulation. The neuroimmune modulation in IFNAR-deficient mice is associated with reduced neuronal loss, reduced white matter disruption, and amelioration of TBI-induced neurobehavioral dysfunction. Overall, blockade of type I IFN signaling after a traumatic brain injury is a promising approach for developing immune-modulating therapies that may confer neuroprotection following TBI. Below is the link to the electronic supplementary material. Supplementary Material 1: Nanostring list of differentially expressed genes Supplementary Material 2: Statistical analysis spreadsheet Supplementary Material 3: TBI results in differential expression of neuroinflammatory genes in both wildtype and IFNAR deficient mice
R WE ready for reimbursement? A round up of developments in real-world evidence relating to health technology assessment: part 15
73296a35-fff2-43fe-84b2-2608820632ea
11037032
Internal Medicine[mh]
Classification of rib fracture types from postmortem computed tomography images using deep learning
d5624286-2901-4941-b1da-f3118848f865
11790768
Forensic Medicine[mh]
Rib fractures are a common type of injury. They can result from blunt trauma in an accident, chest compression during cardiopulmonary resuscitation, or a pathological fracture in malignant disease. They are often associated with other injuries, such as hemo- or pneumothorax and lung contusions . Depending on the displacement, type, and extent, rib fractures can result in an unstable chest (flail chest) and—in combination with associated injuries—can significantly influence morbidity and mortality . Depending on trauma severity or case circumstances, conventional radiography is the primary technique used to look for rib fractures because of its general availability, low radiation dose, and affordable costs. However, the sensitivity of conventional radiographs for the detection of rib fractures (especially nondisplaced ones) is considered relatively low . In contrast, computed tomography (CT) shows much higher sensitivity in detecting rib fractures, providing more detailed two-dimensional images that might also be viewed in three dimensions . However, CT scans might not be available everywhere. In addition, they are more expensive, and they expose the patient to a higher radiation dose than conventional radiography . In forensic medicine, concerns regarding radiation dose can obviously be ignored, and postmortem computed tomography (PMCT) has already gained great acceptance worldwide as a valuable adjunct and sometimes even a replacement for conventional autopsies . Several recent studies have employed deep learning and image processing to automate rib fracture detection, adding to previous literature in which different groups proposed solutions for automating the detection of rib fractures on CT scans and radiographs . For example, one recent study focused on detecting rib fractures on CT scans and classifying them into six categories, including displaced versus nondisplaced, buckle, and segmental fractures . The authors trained a U-Net-based network using the RibFrac challenge dataset . The model proposed by Choi et al. can also determine the position of a fracture. In another study by Wang and Wang, the authors developed a modified U-Net architecture, combined with an attention module and a modified dilated convolution, to detect and segment rib fractures on CT scans . The authors relied on the same RibFrac challenge dataset to train their architecture. In a third study, Wu et al. utilized chest radiographs and employed a YOLOv3-based convolutional neural network (CNN) for rib fracture detection . In our study, we developed a model to automatically detect rib fractures and classify whether they are displaced or nondisplaced using two-dimensional planar views of the rib cage reconstructed from PMCT volumetric data. Ethics The data used in this retrospective cohort study are in accordance with Swiss laws and ethical standards. The ethics approval for this study was waived by the Ethics Committee of the Canton of Zurich (KEK ZH-No. 15–0686). Case selection A total of 340 consecutive autopsy cases were retrospectively retrieved from July 2017 to April 2018 from the archives of the Institute of Forensic Medicine, University of Zurich, Switzerland. We excluded cases with signs of advanced decomposition (using the RA-index defined by Egger et al. 
), corpses that had undergone organ explantation, cases of severe trauma with extensive damage to the corpse (e.g., amputation or exenteration), cases without whole-body PMCT, cases where rib fractures were not visible in the rib unfolding tool or located in the cartilaginous part of the rib, and cases that were still under investigation during this period. After these exclusion criteria were applied, a total of 195 cases remained (55 females, median age 64 years; 140 males, median age 54 years). Of the 195 cases, 85 showed acute rib fractures, 84 had no rib fractures, and 26 presented subacute and chronic fractures either in combination with acute fractures or independently. Both complete and incomplete rib fractures were included, independent of their location. They were classified as "displaced," "nondisplaced," "ad latus" (sideways), "ad axim" (with angulation), "ad longitudinem cum contractione" (in long axis compressed fracture), or "ad longitudinem cum distractione" (in long axis with gap between the fragments) fractures. Postmortem computed tomography data Whole-body imaging was performed on a 128-slice dual source CT scanner (SOMATOM Flash Definition, Siemens, Forchheim, Germany) using automated dose modulation software (CARE Dose4D™, Siemens, Forchheim, Germany); the slice thickness was 1 mm, and the increment was 0.5 mm. The images were reconstructed with both soft and hard kernels. A complete overview of the technical parameters used to acquire the CT scans can be found in Flach et al. . Image treatment prior to classification The rib fracture images were reconstructed from volumetric CT data using the Syngo.via rib unfolding tool CT Bone Reading (Siemens Healthineers GmbH, Erlangen, Germany) with standard bone window setting (center 450, width 1500) (see Fig. for more details). The tool used for this conversion was developed by Ringl et al. . Data mining To extract data containing fractures, we used 270 images of unfolded rib cages with fractures. Two readers, one a medical student under supervision and the other a board-certified forensic pathologist and radiologist, classified each fracture type as either "displaced" or "nondisplaced." The "displaced" fractures were further divided into "ad latus" (sideways), "ad axim" (with angulation), "ad longitudinem cum contractione" (in long axis compressed fracture), and "ad longitudinem cum distractione" (in long axis with gap between the fragments). Due to the very small number of "ad axim" fractures, we excluded them from further analysis. First, we cropped the images to 500 × 1000 pixels to eliminate the background and then upscaled the images to 300% of the original size with the INTER_AREA interpolation method from OpenCV, resulting in large images measuring 1500 × 3000 pixels. With this preprocessing step, we wanted to achieve an optimal size for dividing the image into sufficient image patches while still capturing all fractures. All fractures were marked using their respective x- and y-coordinates on the large image. For each large image containing one or more fractures, we applied data augmentation by shifting the sliding window from the centered x- and y-coordinates in all four cardinal directions (up, down, right, and left) in steps of 10 pixels. This resulted in a total of 16 additional samples next to the original sample (centered around the fracture).
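As a concrete illustration of the cropping, upscaling, and shift-based augmentation just described, the following Python/OpenCV sketch reproduces the main steps. It is a minimal sketch under stated assumptions, not the authors' code: the crop origin, the axis convention for the 500 × 1000 crop, and the 99 × 99 patch size (assumed to match the "no fracture" windows described below) are assumptions, and the function names (preprocess_unfolded_image, extract_patch, augment_fracture) are hypothetical.

```python
import cv2

PATCH_SIZE = 99                # assumed to match the 99 x 99 "no fracture" windows
SHIFTS = (10, 20, 30, 40)      # 10-pixel steps away from the fracture centre

def preprocess_unfolded_image(image):
    """Crop the unfolded rib-cage view to 500 x 1000 px and upscale it to 300 %."""
    cropped = image[:1000, :500]   # crop region and axis order are assumptions
    return cv2.resize(cropped, (1500, 3000), interpolation=cv2.INTER_AREA)

def extract_patch(image, cx, cy, size=PATCH_SIZE):
    """Return a size x size patch centred on (cx, cy), or None if it falls outside the image."""
    half = size // 2
    x0, y0 = cx - half, cy - half
    if x0 < 0 or y0 < 0 or x0 + size > image.shape[1] or y0 + size > image.shape[0]:
        return None
    return image[y0:y0 + size, x0:x0 + size]

def augment_fracture(image, cx, cy):
    """Centre patch plus 16 patches shifted in the four cardinal directions (10-px steps)."""
    patches = [extract_patch(image, cx, cy)]
    for step in SHIFTS:
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            patches.append(extract_patch(image, cx + dx, cy + dy))
    # Shifted patches that leave the image (or, in the study, no longer show the fracture) are discarded.
    return [p for p in patches if p is not None]
```

Each annotated fracture thus yields up to 17 patches (the centred patch plus 16 shifts), which were then manually curated as described next.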
For each fracture, we then manually removed the sample images where the data augmentation resulted in a loss of information (e.g., the fracture was no longer visible). The sample curation led to 11,759 "displaced" ("ad latus" 1785, "longitudinem cum contractione" 6801, and "longitudinem cum distractione" 3173) and 18,462 "nondisplaced" samples, for a total of 30,251 "fracture" images. To extract samples with the label "no fracture," we used 231 images of unfolded rib cages without any fractures. As for the images with fractures, we applied the same preprocessing steps (cropping and resizing) to the images without fractures. Employing a sliding window of size 99 × 99 pixels and shifting it 25 pixels in each direction along both the x- and y-axes, we obtained 231,926 small images, each 99 × 99 pixels in size. From these images, we randomly selected 30,251 "no fracture" images, resulting in a balanced dataset of 60,472 samples in total. Training, validation, and testing For our study, we used a Windows workstation (Windows 10, Nvidia GeForce GTX 1660 SUPER, 64 GB CPU RAM). We split our data into ~ 70% training and ~ 30% test data. Representations from the same fracture were kept together in each partition to prevent data leakage into the test set; thus, the partitions varied slightly in size. We then ran a 5-fold cross-validation on the training dataset with different hyperparameters. We selected the best hyperparameters (see "Model architecture and hyperparameters") by assessing the epochs with the highest validation score (F1 score). Finally, we trained our model with the best selection of hyperparameters on the full training dataset and validated the trained model on the test set. We assessed three levels of hierarchical taxonomy (see Fig. for more details): (1) performance of the model on the balanced binary task when classifying "no fracture" and "fracture," reported with the accuracy score (high-level task); (2) performance of the model on the imbalanced binary task when classifying "displaced" and "nondisplaced," reported with the F1, precision, and recall scores (mid-level task); and (3) performance of the model on the imbalanced multiclass task with the displaced classes "ad latus," "ad longitudinem cum contractione," and "ad longitudinem cum distractione," reported with the F1, precision, and recall scores (low-level task). Additionally, we defined two types of assessment: (1) performance measurement on the fracture representations (referred to as the "standard" assessment), as in simple image classification tasks; and (2) aggregation of the prediction values from multiple representations of the same fracture into a single prediction value. The aggregation procedure starts by running a custom-made function Y on the predicted values. The function Y is defined as $$Y = \begin{cases} 0, & \text{if } \sum_{i=1}^{n} \hat{y}_i = 0 \\ 1, & \text{otherwise} \end{cases}$$ where ŷ_i stands for the label value predicted by the model for representation i. The variable ŷ_i can take any integer value from 0 to c, where c represents the number of classes. Hence, Y = 0 only if every representation i was classified into class 0 (classified as "no fracture").
Otherwise, Y = 1 if at least one of the representations i was classified into a nonzero class (classified as "fracture"). Then, we used the maximum operator to determine the fracture type k when Y = 1: $$k = \max_{c}\left(\frac{1}{n}\sum_{i=1}^{n} \mathrm{logit}_{i}^{c}\right)$$ where logit_i^c stands for the model output value for class c before entering the Softmax function. In other words, the aggregated prediction value corresponding to a single fracture is the type of fracture (class) that has the highest weight over all its representations. This ensures that a fracture is detected even when only a weak signal is present. We referred to this type of assessment as "aggregated." Model architecture and hyperparameters We used the ResNet50 architecture pretrained on the ImageNet database, combined with two additional dense layers, each with 198 neurons, and with a dropout layer whose dropout rate was 0.5. Additionally, we included the EarlyStopping function to stop the training when the value of the validation loss function was minimal (patience = 15). We also used the ReduceLROnPlateau function to downscale the learning rate when the validation loss value was not improving (patience = 2) . The batch size was set to 16, and we used the categorical cross-entropy loss function with the Adam optimizer. We first froze the layers of the pretrained network and trained on our data for several epochs (max = 100 epochs, depending on early stopping) with a learning rate of 0.0001. Then, we unfroze the layers and fine-tuned the network for another few epochs (max = 100 epochs, depending on early stopping) with a learning rate of 8e − 05.
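A minimal sketch of this two-stage transfer-learning setup, assuming a Keras/TensorFlow implementation, is shown below. The framework itself, the exact placement of the dense and dropout layers, the input shape, the one-hot label encoding, and the five-class output head (no fracture, nondisplaced, and the three analyzed displaced subclasses) are assumptions for illustration rather than details taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, callbacks, optimizers

NUM_CLASSES = 5              # assumed: no fracture, nondisplaced, and three displaced subclasses
INPUT_SHAPE = (99, 99, 3)    # assumed patch size; grayscale patches replicated to 3 channels

def build_model():
    base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                          input_shape=INPUT_SHAPE, pooling="avg")
    x = layers.Dense(198, activation="relu")(base.output)
    x = layers.Dense(198, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return models.Model(base.input, outputs), base

def train(model, base, x_train, y_train, x_val, y_val):
    # Labels are assumed to be one-hot encoded to match the categorical cross-entropy loss.
    cbs = [callbacks.EarlyStopping(monitor="val_loss", patience=15, restore_best_weights=True),
           callbacks.ReduceLROnPlateau(monitor="val_loss", patience=2)]
    # Stage 1: freeze the pretrained backbone and train only the new head.
    base.trainable = False
    model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    model.fit(x_train, y_train, validation_data=(x_val, y_val),
              epochs=100, batch_size=16, callbacks=cbs)
    # Stage 2: unfreeze the backbone and fine-tune the whole network at a lower learning rate.
    base.trainable = True
    model.compile(optimizer=optimizers.Adam(learning_rate=8e-5),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    model.fit(x_train, y_train, validation_data=(x_val, y_val),
              epochs=100, batch_size=16, callbacks=cbs)
    return model
```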
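The fracture-level aggregation described above can likewise be sketched in a few lines of Python. This is an illustrative reading rather than the authors' code: the array layout is assumed, and restricting the arg max to the fracture classes (excluding class 0) when Y = 1 is our interpretation of how the maximum operator is applied.

```python
import numpy as np

def aggregate_fracture_prediction(logits):
    """Aggregate per-representation outputs for one fracture candidate.

    `logits` is an (n_representations, n_classes) array of pre-softmax model outputs,
    with class 0 meaning "no fracture". Returns the aggregated class label.
    """
    per_representation_labels = logits.argmax(axis=1)
    # Y = 0 only if every representation is predicted as class 0 ("no fracture").
    if np.all(per_representation_labels == 0):
        return 0
    # Otherwise (Y = 1), pick the fracture class with the highest mean logit across representations.
    mean_fracture_logits = logits[:, 1:].mean(axis=0)
    return int(mean_fracture_logits.argmax()) + 1
```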
We assessed the performance of our model in two different ways. First, we computed the metrics for the predictions on all representations in the test set (the "standard" assessment). Second, we aggregated the predictions of all representations in the test set to the fracture level and reported the metrics (the "aggregated" assessment). Figure shows the confusion matrices for all classes in terms of absolute and relative values and for each of the assessments. Most of the confusions occurred within the fracture classes, while fewer occurred in the class "no fracture." While "nondisplaced" was correctly predicted in 80–86% of cases (depending on the assessment), "ad latus" (sideways) was correctly predicted in only 17–18% of cases. The other two "displaced" subclasses, "ad longitudinem cum contractione" (in long axis compressed fracture) and "ad longitudinem cum distractione" (in long axis with gap between the fragments), were correctly predicted in 70–75% and 64–75% of cases, respectively. Table gives an overview of the performance of our model. In the balanced binary classification task with the classes "no fracture" and "fracture," our model achieved an accuracy score of 0.945 (0 worst score, 1 best score) on the "standard" assessment and an accuracy score of 0.993 on the "aggregated" assessment. When evaluating the model's performance on the imbalanced binary task with the classes "displaced" and "nondisplaced," we found an F1 score of 0.845, a precision score of 0.845, and a recall score of 0.846. When data were aggregated at the fracture level, the model achieved an F1 score of 0.856, a precision score of 0.857, and a recall score of 0.855. The third task was an imbalanced multiclass task over the "displaced" subclasses "ad latus" (sideways), "ad longitudinem cum contractione" (in long axis compressed fracture), and "ad longitudinem cum distractione" (in long axis with gap between the fragments). There, we found an F1 score of 0.661, a precision score of 0.736, and a recall score of 0.603 for the "standard" assessment and an F1 score of 0.707, a precision score of 0.769, and a recall score of 0.662 for the "aggregated" assessment. The aim of this study was to train a deep learning model able to detect and classify different types of rib fractures using a two-dimensional representation of the rib cage reconstructed from three-dimensional PMCT images. By applying our model, we investigated two types of assessment ("standard" and "aggregated") at three hierarchical taxonomy levels ("fracture" versus "no fracture," "displaced" versus "nondisplaced," and the "displaced" subclasses) with the corresponding scores. Our results show that the trained model can distinguish between "fracture" and "no fracture" samples to a large extent and with high accuracy (94.5%).
When data were aggregated at the fracture level, only three out of 591 fractures were classified as "no fracture." The model also performed reliably in distinguishing "displaced" from "nondisplaced" fractures, although to a slightly lesser extent. Within this task, we noted that the trained model performed slightly better in classifying "nondisplaced" than "displaced" fractures. This could be due either to the smaller sample size or to the possibility that the features of "displaced" fractures were more difficult for the model to capture. Finally, the most difficult task was distinguishing the "displaced" subclasses. In particular, the model performed worst for the subclass "ad latus" (sideways), which was often confused with "ad longitudinem cum contractione" (in long axis compressed fracture) or "nondisplaced." The scores for the aggregated assessment were generally higher than those for the standard assessment, which reflects our choice of metric design. We defined a single correct fracture prediction from all possible representations as sufficient to qualify as a "fracture" and be classified accordingly. As we mentioned in the introduction, three recent studies used deep learning techniques to automatically detect rib fractures on either CT scans or radiographs. These studies used different datasets, which makes it difficult to compare their performance with that of our model. However, we went one step further by identifying four different subclasses of "displaced" fractures. We also developed a method to display the position of each fracture. If multiple fractures are present on the same CT scan, they are labeled separately (see Fig. ). The analysis of two-dimensional representations of the rib cage instead of volumetric data already enables clinicians to make a quick and easy assessment of potential rib fractures. Building upon our previous work , we have shown how deep learning techniques can be used as an automation step to reliably locate and classify relevant fracture types on such large two-dimensional PMCT images and thus further simplify and support clinicians' work. Our model achieved an accuracy score of 0.945 on the balanced binary classification task with the classes "no fracture" and "fracture." The F1 score on the imbalanced binary task with the classes "displaced" and "nondisplaced" reached 0.845. Classifying the "displaced" subclasses remains challenging, especially the subclass "ad latus."
Script Concordance Tests for Formative Clinical Reasoning and Problem-Solving Assessment in General Pediatrics
ddbb17bc-c940-4092-a07d-f6087d6b968d
9485313
Pediatrics[mh]
By using this assessment, facilitators will be able to: 1. Explore medical students' clinical reasoning and medical problem-solving using common pediatric clinical scenarios. 2. Incorporate objective measures of medical student performance during formative feedback. 3. Evaluate students' clinical reasoning and medical problem-solving growth over time. Construct Clinical reasoning and problem-solving are complex constructs that are challenging to assess systematically. These competencies are commonly assessed via clinical performance–based assessment by attending physicians during clinical rotations, on written exams, and in standardized patient encounters. Script Concordance Tests (SCTs), introduced by Charlin and colleagues, have emerged as a valid and reliable alternative to traditional multiple-choice questions for assessing student clinical competency. SCTs consist of short clinical vignettes, typically with proposed diagnoses, diagnostic studies, treatments, and management options for patient care scenarios. SCTs frequently guide the learner through an evolving clinical scenario, commonly comprising three or more parts. Charlin and colleagues described the fundamental components of a three-part SCT item : • Part 1: diagnostic hypothesis, investigative action, or treatment option relevant to the situation. • Part 2: new information (e.g., a sign, condition, imaging study, or laboratory test result) that might influence the diagnostic hypothesis, investigative action, or treatment option. • Part 3: a Likert-type scale identifying how much more or less likely a student is to make the diagnosis, order imaging/lab, or select a specific treatment based on the qualifier in part 2. SCTs are constructed so that learners must answer each question independently before additional information is subsequently provided, requiring the examinees to determine the effect of that information on their decision regarding a diagnosis, test, or management option. In contrast to multiple-choice questions, SCTs evaluate a range of possible student responses to clinically ambiguous situations. Well-constructed SCTs capture some of the ambiguity associated with a clinical encounter, recognizing that there may be no single best response to a scenario. The progression towards competency-based medical education has prompted investigation into types of formative assessment that can reliably and efficiently evaluate medical students in real time. SCTs are not resource intensive, are suitable for online use, and are reusable, making them a compelling option for medical educators. SCTs are a means of assessment that can provide faculty with objective information on learners' clinical and problem-solving competency. Prior work has described statistically significant positive relationships between SCT performance, clerkship clinical skill evaluations, and USMLE Step 2 Clinical Knowledge (CK) scores. As more emphasis is placed upon medical student clerkship performance, objective measures for assessing learner skill and providing salient constructive feedback are needed. SCTs represent a promising form of assessment. Because SCTs evaluate judgment, they can be used to test clinical subjects where expert consensus on care delivery does not yet exist. Cases remain the same, and the vignettes do not need to become more complex, as SCT scores correlate with a subject's training level and can track gains in clinical knowledge throughout an entire career.
Intended Populations These SCTs are intended for medical students and are best employed during a pediatric clerkship or subinternship. The items may also be useful for nonphysician health professions students (doctor of nursing practice and physician assistant).
The authors (Meghan Lopez and Joseph C. Fantone III) selected SCT topics and collaboratively wrote each associated vignette, representing eight common clinical scenarios in general pediatrics, including genetic syndromes, rashes, abdominal masses, diarrhea, lymphadenopathy, otalgia, vomiting, and fever without a source. Many of these topics were also covered in faculty-taught in-house lectures during the pediatric clerkship. We have provided copies of each SCT to be administered to students and another copy with expert answers included ( and , respectively). Vignettes were designed in a three-part scaffolded fashion; that is, the learner was provided information, prompted to respond to a set of questions, then offered additional information and another set of questions. Each SCT comprised nine items (72 individual test questions across the eight SCTs) divided equally among three categories: 1. The likelihood of a diagnosis based on historical or physical exam elements. 2. Evaluation of the relevance of ordering a lab or diagnostic study. 3. Evaluation of the legitimacy of management strategies. Following SCT construction, we administered the eight SCTs to an expert panel comprising 10 board-certified general pediatricians who responded to the items. The scoring key was developed by tabulating response frequencies at each point in the Likert-type response scale for each item. We adopted an absolute difference (AD) scoring method, using Bland and colleagues' 3-point absolute-distance-from-the-mean method for each separate SCT topic, and computed scores for each question (72 total), each test (eight total), and an aggregate over all items completed by each student ( and ). Mean aggregate scores were used to ensure comprehensive understanding of each clinical vignette. The refined SCTs were administered during the required pediatric clerkship at the University of Florida College of Medicine to a convenience sample of third-year medical students from fall 2016 through spring 2020. Institutional review board approval was obtained prior to engaging in data analysis (UFIRB# 202000845). Content Validity During the item development process, we convened an expert panel of board-certified pediatricians who provided feedback on content validity. Each pediatrician had served in a faculty position at the University of Florida College of Medicine and had an average of 13.4 years ( SD = 8.7) of pediatric experience following completion of residency training. Thus, their experience in the field contributed to the content validity of our test items. We adopted the unitary theory of evidence, collecting validity data from varying sources to explore the utility of the SCTs. The unitary theory of evidence seeks to comprehensively establish validity beyond simple calculations. We explored the validity of the SCTs via the lenses of construct, content, and consequential validity.
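For readers unfamiliar with absolute-difference scoring, the short Python sketch below illustrates one plausible reading of the approach described above: each response is scored by its absolute distance from the expert panel's mean response, capped and rescaled so that 1.0 reflects full concordance. The cap, the rescaling, and the example data are assumptions for illustration only; the exact rule from Bland and colleagues may differ, and the function and variable names are hypothetical.

```python
import numpy as np

def ad_item_score(student_response, expert_responses, max_distance=3):
    """Illustrative absolute-difference (AD) score for a single SCT Likert item.

    The student's answer is compared with the mean of the expert panel's answers; the
    absolute distance is capped at `max_distance` (the assumed "3-point" cap) and rescaled
    so that 1.0 means full concordance with the panel and 0.0 means maximal divergence.
    """
    distance = min(abs(student_response - np.mean(expert_responses)), max_distance)
    return 1.0 - distance / max_distance

# Hypothetical data: panel responses (10 pediatricians) and one student's responses
# for two items, each on a -2..+2 Likert-type scale.
panel_by_item = [[1, 1, 2, 1, 0, 1, 2, 1, 1, 1], [0, -1, 0, 0, -1, 0, 0, 1, 0, -1]]
student_by_item = [1, -2]

item_scores = [ad_item_score(s, p) for s, p in zip(student_by_item, panel_by_item)]
aggregate_score = float(np.mean(item_scores))   # analogous to the aggregate percentage AD score
```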
Between fall 2016 and summer 2020, 21 cohorts consisting of 455 students completed the required pediatric clerkship at the University of Florida College of Medicine. Students were instructed that the SCTs were part of a clinical reasoning exercise and, apart from one clerkship rotation, were optional. Clerkship materials, including the SCTs, were housed within the pediatric clerkship course residing in the Canvas Learning Management System. All SCTs were made available to students at the start of the clerkship; each SCT was titled by subject matter (e.g., Clinical Reasoning Activity—Genetic Syndrome) and drew upon accumulated medical knowledge from the preclinical stages of the curriculum as well as learning materials and activities occurring in the early stages of the clerkship. Students could complete SCTs at any time during the clerkship. One hundred thirty-one students (29%) completed at least one SCT. Eighteen (12%) of those respondents' answers were discarded because the test attempts were incomplete, resulting in a final sample of 113. Students who participated in the SCTs completed, on average, four of the eight SCTs ( SD = 3.18). The mean aggregate percentage AD score across all SCTs was .84 ( SD = .08; ). On average, students took 5.5 minutes ( SD = 1.1) to complete an SCT; thus, completing the entire set would take a student approximately 45 minutes. Faculty completed SCTs in an average of 3.5 minutes ( SD = 1.1) and therefore took approximately 30 minutes to complete the set. We found the overall internal consistency using AD scoring to be .49 (95% CI, .42-.56). Pearson correlations comparing SCT performance to USMLE Step 1, NBME Pediatrics Shelf, NBME Pediatrics Shelf percentile, USMLE Step 2 CK, medical decision-making competency, and overall competency scores were all statistically significant at the p < .05 level or better . When compared to Pearson correlations calculated using student NBME Shelf exam scores, SCTs demonstrated a slightly stronger, though not significantly different, relationship to medical decision-making competency scores ( r = .267 vs. r = .265, p = .49) and overall competency scores ( r = .237 vs. r = .192, p = .36). We evaluated the SCTs by exploring overall scores, internal consistency, and their relationship with summative assessments, including the USMLE Step 1 score, NBME Pediatrics Shelf Exam raw and percentile scores, USMLE Step 2 CK score, clerkship medical decision-making competency score, and an overall clerkship competency score (measured on a 0–9 scale). Construct validity was demonstrated by the association between SCT scores and clinical evaluations of student medical decision-making by faculty throughout the rotation. Furthermore, the better performance of more experienced test takers (the pediatric attendings who also took the examination) indicates that higher levels of experience with pediatric clinical medicine correlated with better performance on the SCTs, further strengthening the construct validity of our assessment. Clerkships have historically struggled to objectively query medical student clinical knowledge and reasoning in an efficient manner. Within the pediatric clerkship, following the completion of this analysis in fall 2020, the SCTs became a required assignment. SCT results are used to facilitate formative feedback during the midclerkship evaluation. Consequential validity was demonstrated in our case by leadership at the University of Florida discussing and encouraging wider adoption of SCTs in curriculum committee meetings after completion of our project.
Leadership cited SCTs' potential to help guide midpoint formative feedback on student medical decision-making using an objective measure as the main motivation behind the push for universal and required SCT adoption. Exploring medical student decision-making and clinical reasoning in inherently ambiguous situations is an important yet challenging endeavor. SCTs provide an economical alternative to multiple-choice questions for querying learner skills. Multiple studies have explored the validity of SCTs and advocated for their incorporation into medical curricula. These pediatric SCTs were well suited for use in our pediatric clerkship. They demonstrated positive relationships with summative measures of consequence (NBME Pediatrics Shelf Exam, USMLE Step 2 CK, and competency scores); item properties and interest in wider institutional adoption provide evidence of validity and utility. While we are not asserting generalizability across settings, our results are congruent with Humbert and colleagues' exploration of SCTs and their relationship to summative measures in emergency medicine trainees. Based on our experience, we feel that SCTs are well suited for facilitating meaningful conversations with learners, but additional work is needed before we employ them as more consequential forms of assessment. SCTs can be reused without concern for item deterioration, providing a promising means of assessing competency growth over time, identifying gaps in performance, and evaluating the need for and appraising the progression of remediation. These unique qualities make SCTs promising facilitators of specific, low-cost, timely midclerkship feedback to learners. Students received feedback via real-time explanation of each individual question by the test writers. Furthermore, SCTs were strongly suggested to students who had shown gaps in clinical reasoning knowledge in previous clerkships. Our analysis of these data took place after the students had completed their clerkships. As a result, clerkship directors were unable to use our analysis for formative feedback of students. However, as mentioned above, students could compare their responses to those of faculty experts in real time as a form of immediate feedback on their performance. This point is relevant because it has been well documented that SCT scores positively correlate with increased levels of training and application of knowledge. Midclerkship feedback is a Liaison Committee on Medical Education requirement for clerkships longer than 4 weeks. We hope to implement SCTs formally in our midpoint formative feedback of students as an example of the clinical reasoning competency. To do this, we will need to make SCTs required. Finally, we postulate that in the future, with continued research and development, SCTs could be used as an evaluative measure of the quality of clinical education and the clinical learning environments associated with a clerkship. For example, a clerkship director at a medical school with multiple rotation sites could incorporate SCT performance data from each site to better inform needs for specific faculty or site development. Markert and colleagues offered evidence for a similar method of comparing the quality of clinical site instruction by contrasting student NBME scores and clerkship grades, along with other postgraduate measures. Limitations These SCTs have notable limitations. Their use was limited to a single institution, with a limited sample size.
Our exploration did not control for cohort maturation effects during the academic years, nor was student demographic information incorporated. SCTs were optional for most clerkship students, resulting in the possibility of selection bias. Furthermore, time between SCT completion and Step 2 CK completion was widely variable between students due to the varying order of third-year clerkships, which could be considered a confounding variable. In addition, SCT internal consistency was lower than desirable (.49). To achieve a Cronbach alpha value of .65, the number of cases would need to be increased from eight to 16, assuming similar difficulty, which is in line with recommendations by Gagnon and colleagues. However, we do not perceive a high Cronbach alpha value as a requirement for meaningful implementation of our learning tool, given that this is a formative assessment with no bearing on students' grades. Correlation coefficients between SCTs and summative assessments were lower than anticipated but were in line with the range of mini-Clinical Evaluation Exercise data reported by Mortaz Hejri and colleagues. We feel that a natural next step would involve collaborative multi-institutional SCT item development, implementation, and exploration.
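The projected doubling from eight to 16 cases is consistent with the Spearman–Brown prophecy formula; as an illustrative check (our reconstruction, not a calculation reported by the authors), lengthening the test by a factor of k = 2 from an observed alpha of .49 gives:

```latex
\alpha_{\text{projected}} = \frac{k\,\alpha}{1 + (k - 1)\,\alpha}
                          = \frac{2 \times 0.49}{1 + 0.49} \approx 0.66
```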
Conclusion SCTs administered during a pediatric clerkship were positively related to summative outcome measures including NBME performance, Step 2 CK scores, medical decision-making clerkship competency score, and overall clerkship competency score. Based upon this information, future University of Florida College of Medicine pediatric clerkship students will be required to complete SCTs, and additional SCTs will be developed to improve internal consistency. SCTs will continue to be used to facilitate robust midclerkship feedback. Additional work with regard to standardization of SCT grading, scaling, and student familiarity with the format is necessary before we can consider the use of SCTs as summative assessments. Further projects could feasibly explore the relationship between SCTs, summative outcomes, and student demographics, as well as the potential for SCTs as formative and summative assessment tools. SCTs Without Answers.docx SCTs With Expert Answers.pdf Scoring Guide.docx Scoring Spreadsheet.xslx All appendices are peer reviewed as integral parts of the Original Publication.
The oncology ribbon of reflection: a novel tool to encourage trainee self-reflection
7cc2838b-bb15-41bd-91e7-5913e7e54197
11139802
Internal Medicine[mh]
Oncology can be a challenging field for junior trainees, many of whom have not previously witnessed the non-restorative paradigms or inevitable decline associated with many oncologic patient journeys. Emotional and psychological reactions to these situations (which might range from breaking bad news repeatedly throughout the day, to coping with the death of a known patient, to counselling and prescribing therapies which are not intended to return a patient to their previous state of health) can be difficult to manage without properly guided reflection or, at the very least, tools to allow for this introspection. Reflection on stressful patient situations and related events is key for learning from the moment and coping with the strain of clinical medicine. Even though debriefing with a trusted mentor might be ideal, such an individual is not always accessible. Thus, it appears potentially helpful to have a self-guided reflection tool for trainees to use on their own; they may require more granular guidance to navigate into a frame of mind where they can engage in useful self-reflection than is provided by existing reflection models. Available models lack specificity relevant to medicine and, particularly, to oncology, and are either too general or too simplistic for the stressed learner, who may not have the wherewithal in the heat of the moment to derive the application of existing frameworks to their own situation. This suggests that having a tested framework specifically designed for learners in oncology is not only desirable but of paramount importance to ensure these trainees are sufficiently supported. We created a guided reflection tool in the shape of a cancer ribbon, an oncology symbol. The tool was based on several well-established reflective models, starting with Gibbs and incorporating similar elements of Kolb, Schoen, and Moon that pertained to a trainee rotating through oncology, and was developed under the guidance of course facilitators for a Master of Medical Education programme. The tool was introduced as a quality improvement initiative with trainees rotating through medical oncology (medical students to senior fellows). Learners were invited to use the tool for self-reflection or guided reflection with a mentor, in either a mental, verbal, or written format. Learners were then invited to give feedback on the tool after its use, and adjustments were made in line with feedback suggestions. In its initial iteration, the tool had a four-question, easy-to-follow design. These questions were derived by the study team based on their personal experience, as well as a conglomerate of the steps of the established models used for inspiration. At the present time, the tool has undergone three separate iterations, with feedback solicited from each group of learners who used the tool, as well as from faculty mentors who work with trainees in clinic and understand learners' needs and stressors. The tool and its questions were adjusted at each step, resulting in the final product presented here. This project received ethics exemption from the Western University Research and Ethics Board. As this was a quality improvement project, open-ended feedback was solicited both in written and oral format (whichever was preferable for the user) after the tool's use. Feedback on the tool has been overwhelmingly positive, both regarding its utility in practice and its design, reflecting that our tool fills a gap in available resources for concrete process guidance.
Suggestions to improve the tool included adding several steps for better reflection, adding colour to make the tool more engaging, rewording prompts for ease of understanding, and avoiding stacked (multiple) questions in one prompt. Feedback was similar from trainees of different levels (as well as from consultants, who also tested the tool in order to understand its purpose for trainees' use), which also reflects the flexibility of the tool for learners of different levels. The tool has been well received locally and is now being made available as a routine resource for trainees in oncology by programme and division leaders. Next steps for the tool's improvement may include offering suggestions for trainee support, such as coordinating with training programmes for in-person debriefing options. Recognizing that self-reflection should not always occur in isolation, as inner turmoil may affect the quality of the reflection, consideration could also be given to reviewing the outcome of the reflection with a peer group or vertical mentor. As one of the tool's main limitations is its oncology-specific focus, further options include providing discipline-specific reflection tools for trainees in non-oncologic specialties.
Onlay mesh
aaafc81e-8439-4ed3-b511-a4a0902950fe
11879325
Surgical Procedures, Operative[mh]
Umbilical hernia (UH) repair is a widely performed surgical procedure globally, yet an optimal technique for smaller UHs remains undefined. With UHs accounting for 10% of all abdominal wall hernias and approximately 4000 repairs performed annually in Sweden , identifying the optimal repair technique is crucial for improving patient outcomes. Traditionally, mesh repair has been reserved for larger UHs, whereas smaller UHs have been repaired using a simple suture or Mayo technique ; however, suture repair has shown disappointing recurrence rates, with some studies reporting rates of up to 21% , . According to the European classification from 2009, a small UH is defined as having a defect size of less than 2 cm . Previous research suggests that even smaller UHs may benefit from mesh repair, with lower recurrence rates compared with suturing. Studies, including observational cohort studies , retrospective studies , , older randomized clinical trials (RCTs) with small sample sizes , , and heterogeneous meta-analyses , have reported recurrence rates of 4–15% for suture repair compared with 0–5% for mesh repair. Notably, a large RCT for UHs sized 1–4 cm found significantly lower recurrence rates with mesh repair (4%) versus suturing (12%) . However, a non-significant decrease was observed for UHs sized 1–2 cm, possibly due to an insufficient sample size for that subgroup . That trial used a preperitoneal sublay mesh technique, which is technically challenging to perform in smaller UHs without enlarging the defect and may involve a higher risk of complications. Despite the first guidelines from 2020 recommending preperitoneal mesh for all UHs as small as 1 cm , many surgeons are hesitant to adopt this technique for smaller UHs due to its challenges. These guidelines also highlight the need for further higher-quality data to refine treatment recommendations, particularly for small UHs defined today as less than 1 cm. Also, a meta-analysis has underscored the uncertain role of mesh in very small UHs under 1 cm . Alternative approaches, such as ventral mesh patches, have been suggested, but are associated with an increased risk of wound complications/surgical-site occurrences (SSOs) and morbidity , , . Although mesh repair reduces recurrence rates, optimal mesh positioning and the increased risk of SSOs are key concerns. The appropriate mesh placement for smaller UHs remains uncertain. One alternative to enhance mesh use in smaller UHs is to use small onlay mesh over the sutured defect. This approach may offer acceptable strength in preventing hernia recurrence to a sublay placement, while reducing the risk of complications. Additionally, it can be easier to perform, regardless of the hernia size or the surgeon’s level of expertise. No RCTs have yet compared suture repair with small onlay mesh repair for smaller UHs. The Suture UMbilical MEsh Repair (SUMMER) trial is designed to provide high-level evidence for mesh repair of primary elective UHs less than or equal to 2 cm. This randomized, controlled, parallel-group, double-blind, multicentre trial aims to compare recurrence rates and SSOs after suture versus onlay mesh repair. The hypothesis is that mesh repair could reduce recurrence rates without increasing SSOs compared with suture repair. This paper presents early results from the trial, namely SSOs and pain intensity at 30 days post-surgery. 
Trial design and participants The SUMMER trial is a prospective, randomized, controlled, parallel-group, double-blind, multicentre study for primary elective UHs less than or equal to 2 cm. Conducted across six Swedish surgical units—four district hospitals (Södertälje Hospital, Enköping Hospital, Mora Lasarett, and Danderyds Hospital) and two elective care providers (Frölunda Hospital, Sophiahemmet Capio)—the trial randomized participants 1 : 1 to either suture repair or onlay mesh repair between February 2020 and January 2024. Eligible patients were recruited from surgical outpatient clinics based on specified inclusion and exclusion criteria . All participating surgeons received standardized training, including detailed video instructions on the surgical technique to ensure uniformity. Participants had a 30-day follow-up visit after surgery and will continue to be monitored at 1 and 3 years post-surgery. The study followed CONSORT guidelines and statements and was approved by the Regional Ethics Review Board in Stockholm, Sweden (DNR 2018/22-65 and DNR 2019/05-608). The trial is registered with ClinicalTrials.gov under the identifier NCT04231071. The trial study protocol was published before data analysis . Adults over 18 years old referred for a UH at participating surgical units were considered eligible to participate in the trial. Patients with a primary UH with a defect less than or equal to 2 cm, as assessed during the outpatient clinical examination, chosen for elective open repair, and who fulfilled the inclusion criteria without any exclusion criteria were given oral and written information about the trial. Written consent was obtained before surgery and randomization. A UH in this study was defined according to the European Hernia Society definition from 2009 as a midline abdominal wall defect from 3 cm above to 3 cm below the umbilicus . Patients screened for inclusion who met exclusion criteria were registered for their exclusion reasons and are presented in . Trial participants included but not operated on and randomized were also registered and are presented in . Surgical techniques All surgical procedures were conducted under general anaesthesia and without any preoperative antibiotics being given to the participants. No drains were inserted. In the suture repair group, an open incision was made in the umbilical area. The hernia sac was dissected and reduced. Resection of the hernia sac was permissible but discouraged. Measurement of the defect size was done before randomization intraoperatively, ensuring it to be less than or equal to 2 cm. When a trial participant fulfilled all the inclusion criteria, randomization to an allocation took place perioperatively. When allocated to suture repair, the defect in the aponeurosis was closed transversely with a running non-absorbable monofilament suture (2/0). The umbilical skin was adapted to the aponeurosis using an absorbable monofilament suture (3/0 or 4/0). The skin closure was completed with an intracutaneous running suture using the same suture material. In the mesh repair group, the procedure followed the steps outlined above for the suture repair. Additionally, after closing the defect, dissection of the subcutaneous tissue from the aponeurosis was performed to facilitate the placement of a 4 × 4 cm macroporous lightweight partially resorbable mesh (Ultrapro Advanced, © Ethicon US, LLC, 2020). The same mesh size was used in all trial participants, regardless of the size of the UH defect. 
The mesh was secured using five non-absorbable monofilament sutures (2/0)—one in the centre and one in each corner. To reduce the risk of nerve entrapment, the sutures were placed in a transverse direction. The subcutaneous tissue and the umbilical skin were adapted to the mesh using an absorbable monofilament suture (3/0 or 4/0). Skin closure was completed with an intracutaneous running suture using the same suture material. Any trial participants in whom an unintentional opening in the umbilical skin was created during the procedure were excluded from the study. Outcomes of interest The primary outcome of the trial is hernia recurrence at the site of the previous UH repair, evaluated at the 3-year follow-up. These data are not presented in this paper. Early recurrences were assessed at the 30-day follow-up through a physical abdominal wall examination by a surgeon. In cases of uncertainty concerning recurrence, CT was used for confirmation, as it better maintains allocation concealment compared with ultrasonography. The secondary outcomes include SSOs (defined as seromas (clear fluid accumulations), haematomas (blood accumulations), and wound infections (surgical-site infections necessitating any enactment)) and pain intensity at 30 days post-surgery. Findings regarding chronic pain, assessed via the Ventral Hernia Pain Questionnaire (VHPQ) at the 1-year follow-up visit, are not presented in this paper. SSOs were evaluated at the 30-day postoperative follow-up visit and were classified using the Clavien–Dindo grading system , with a grade greater than or equal to I indicating the presence of a postoperative complication. Any bedside wound opening was classified as Clavien–Dindo grade IIIa in this trial, which, according to the Clavien–Dindo classification, includes procedures requiring surgical intervention not performed under general anaesthesia. Participants were instructed to return to their surgical unit for any SSOs occurring before their 30-day visit. These events were documented in the case report forms (CRFs) for the 30-day follow-up. Systematic complications, such as pulmonary, cardiovascular, or urinary tract issues, were also monitored. All SSO events were monitored using the same approach as recurrence assessments. Complications were managed following local medical standards. Pain intensity was measured using the Numerical Rating Scale (NRS) during the 30-day postoperative follow-up, where participants verbally reported their pain level on a scale from zero to ten during their outpatient clinic visit. Statistics The sample size was calculated with a focus on the primary outcome of recurrence at 3 years post-surgery and not for the secondary outcomes of SSOs. Recurrence rates in previous studies have varied widely, ranging from a few percent to as high as 20%. Based on previous reports, recurrence rates of 12% in the suture group and 3% in the mesh group were assumed. The study was powered to detect a risk difference of 9% in recurrence and an OR of approximately 4.4, with 80% power at a significance level of 0.050, with a 95% confidence interval. The sample size was adjusted to address potential patients lost to follow-up of 10%, resulting in a need of 144 trial participants in each arm. All CRFs were collected and managed using REDCap (Research Electronic Data Capture), an online data capture tool hosted by the Karolinska Institutet in Sweden. Baseline data were entered before surgery, with all other data registered prospectively. 
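As an illustration of the sample-size reasoning above, the sketch below reproduces the stated figure with a standard unpooled normal-approximation formula for comparing two proportions; the trial may have used different software or corrections, so this is a plausible reconstruction rather than the protocol calculation.

```python
from math import ceil
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.80, dropout=0.10):
    """Two-sided test of two independent proportions, unpooled
    normal approximation; inflate for expected loss to follow-up."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    n = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    return ceil(n), ceil(n * (1 + dropout))

print(n_per_arm(0.12, 0.03))  # roughly (131, 144)
```

With assumed recurrence rates of 12% and 3%, 80% power, and a two-sided alpha of 0.05, this gives roughly 131 participants per arm, or 144 per arm after the 10% inflation for loss to follow-up, matching the figure quoted above.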
Intraoperative randomization was conducted through REDCap, ensuring complete allocation concealment. Each surgical unit implemented a 1 : 1 randomization in random sequence blocks of four or six, stratified by surgical unit and hernia size (less than 10 mm or 10–20 mm). Decisions to exclude participants from the trial post-randomization were made by surgeons during operations when exclusion criteria were met; reasons were documented in the CRFs and are presented in . Trial participants are to be kept unaware of the allocated surgical method of repair throughout the entire 3-year follow-up interval. No details regarding the allocation were recorded in the hospitals' medical records for the patients. Instead, the allocation was documented separately on paper and securely concealed by the clinical secretary. This prevented trial participants from unblinding their allocation via online access to their hospital records. Also, the follow-up surgeons who assessed the outcomes were not part of the surgery and remained uninformed about the allocation. A statistical analysis plan was written before participant inclusion. The analysis included participants who adhered to the inclusion criteria, were free from exclusionary factors, and completed the 30-day follow-up post-surgery. Details of excluded eligible patients are shown in . Differences in baseline characteristics and post-surgical outcomes between the surgical methods were tested using Fisher's exact test for categorical variables and Wilcoxon's test for continuous variables. Regarding pain intensity, an NRS score of zero indicated no pain and an NRS score of greater than or equal to one indicated pain. Logistic regression analysis yielding an estimated OR, adjusted for strong predictors and stratified factors (BMI and hernia defect size), was performed to compare SSO risk and pain intensity risk. Sensitivity analysis addressed the potential impact of participants lost to follow-up by treating dropouts as either cases (having SSOs) or non-cases. All statistical analyses were performed in RStudio (version 2023.12.1+402).
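The allocation scheme described above (1 : 1 assignment in permuted blocks of four or six, generated separately for each stratum of surgical unit and hernia size) can be sketched as follows. This is a simplified illustration only; the actual sequences were generated and concealed within REDCap, and the unit names below are placeholders.

```python
import itertools
import random

def permuted_block_sequence(n_blocks, rng):
    """Build a 1 : 1 allocation list from permuted blocks of size 4 or 6."""
    sequence = []
    for _ in range(n_blocks):
        size = rng.choice([4, 6])
        block = ["suture"] * (size // 2) + ["onlay mesh"] * (size // 2)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence

rng = random.Random(2020)
units = ["unit_A", "unit_B"]        # placeholder site names
sizes = ["<10 mm", "10-20 mm"]      # stratification by hernia defect size
strata = {(u, s): permuted_block_sequence(10, rng)
          for u, s in itertools.product(units, sizes)}

# the next participant at unit_A with a <10 mm defect receives:
print(strata[("unit_A", "<10 mm")].pop(0))
```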
Between February 2020 and January 2024, 415 of 1040 eligible patients were included in the trial, with 290 participants randomized—146 participants underwent suture repair and 144 participants underwent onlay mesh repair . Follow-up analyses at 30 days included 144 participants for suture repair and 135 participants for onlay mesh repair. Exclusions occurred due to participants being lost to follow-up, defects being larger than 2 cm, an incorrect initial diagnosis, or multiple defects . Reasons for participants being lost to follow-up were being unable to reach three participants and one participant declining further participation for the mesh group and being unable to reach one participant for the suture group. Repairs were performed by 86 surgeons across the six surgical units, with a balanced distribution regarding the level of surgical expertise in both groups . Characteristics of the trial participants are presented in ; both groups had a similar sex ratio, with approximately 70% men and 30% women . The median age was 51 years and the median BMI was 27.5 kg/m2 for suture repair and 28.4 kg/m2 for mesh repair. Nearly 98% of participants had an ASA fitness grade of I–II. There were more smokers and diabetic participants in the mesh repair group. The hernia size was predominately 10–20 mm in both groups (greater than 87%). The median duration of surgery was 32 min for suture repair and 45 min for onlay mesh repair ( P < 0.001). Postoperative surgical-site occurrences Overall SSO (Clavien–Dindo grade greater than or equal to I) rates were 18.1% (26 participants) for suture repair and 23.7% (32 participants) for mesh repair . Clinically relevant SSO (Clavien–Dindo grade greater than or equal to II) rates were 2.8% (4 participants) for suture repair and 1.5% (2 participants) for mesh repair . Of these clinically relevant SSOs, one mesh repair participant required an intervention under local anaesthesia (Clavien–Dindo grade IIIa) and two suture repair participants were treated with antibiotics for potential wound infections. Logistic regression analysis showed no significant increase in the risk of SSOs for mesh repair (OR 1.39 (95% c.i. 0.78 to 2.51)) . This pattern persisted across the different categories of SSOs . Pain intensity—Numerical Rating Scale Measurement of pain intensity using the NRS showed that 82.0% of suture repair participants and 73.0% of mesh repair participants reported no pain, with NRS scores of zero, and that 18.0% of suture repair participants and 27.0% of mesh repair participants had NRS scores of greater than or equal to one . Analysis showed an adjusted, increased, non-significant OR of 1.72 (c.i. 0.98 to 3.07; P = 0.061) for the risk of pain with an NRS score of greater than or equal to one for mesh repair.
Additionally, 95.3% of the suture group and 96.5% of the mesh group had NRS scores less than or equal to three. No pain with an NRS score of greater than or equal to eight was reported. Sensitivity analysis No pattern of imbalance in the baseline characteristics of those lost to follow-up was observed . The sensitivity analysis revealed similar effects as the main analysis , both when treating all lost to follow-up as cases and non-cases of SSOs . This large-scale RCT provides evidence for mesh repair for UHs less than or equal to 2 cm. Although the sample size for this trial was not primarily calculated to investigate SSOs, the early results suggest that onlay mesh repair seems to be comparable to suture repair for UHs less than or equal to 2 cm with regard to SSOs. This highlights the safety of onlay mesh repair for smaller UHs during the early postoperative interval. However, a 3-year follow-up after surgery is necessary to investigate differences in the trial's primary outcome of recurrences. The findings revealed a higher SSO rate in the mesh repair group (23.7%) compared with the suture repair group (18.1%) across all observed SSOs, when SSOs were defined as Clavien–Dindo grade greater than or equal to I. However, for clinically relevant SSOs requiring any enactment (Clavien–Dindo grade greater than or equal to II), the rate was comparably low between the groups. Specifically, SSO rates were lower in the mesh repair group (1.5%) compared with the suture repair group (2.8%). Previous randomized trials have reported higher SSO rates with mesh repair. Kaufmann et al. found SSO rates of 3% for suture repair versus 8% for mesh repair in 300 trial participants, defining SSOs as requiring interventions.
Others used mesh plugs in 68 patients for repairs of hernias greater than 3 cm and standard flat preperitoneal mesh in 32 patients for hernias less than 3 cm in diameter, finding SSO rates of 11% for both suture and mesh repair . Similarly, another study investigated two different mesh repairs against Mayo suture repair for UHs less than 4 cm and found SSO rates of 12% for the 17 patients repaired using the Prolene Hernia System (Ethicon) and 20% for the 15 patients repaired using standard flat onlay mesh repair . The results of the present study point to lower SSO rates when applying the same SSO definition used in these studies, indicating that a small onlay mesh technique with minimal subcutaneous dissection does not increase the risk of complications requiring intervention. A seroma after onlay mesh repair may be the SSO that is most frequently reported. In the present study, 9.6% of mesh repair participants developed seromas, and none required an intervention involving aspiration. Fonseca et al . reported an SSO rate of 33% for onlay mesh repair, mainly due to seromas (23%) and surgical-site infections (10%), linked to larger microporous meshes. Others found a higher infection rate with small-pore meshes, 6.7% for mesh repair and 4.9% for suture repair , yet no mesh removal was necessary, affirming the safety of the macroporous lightweight onlay mesh. In addition, McCombie et al . used a lightweight onlay mesh for UHs and reported an even lower SSO rate than the present study; the seroma rate was 7.8% and all non-infected seromas were treated with aspiration. Larger onlay mesh repairs, involving greater subcutaneous dissection, tend to have a higher seroma rate when compared with other mesh techniques , , , , , . However, using small 4 × 4 cm macroporous lightweight onlay mesh can minimize dissection and reduce the risk of seroma formation. This approach was supported by a retrospective study conducted by some of the authors of this trial, who used the same small 4 × 4 cm onlay mesh technique for UHs less than or equal to 2 cm and reported a seroma rate of 5% . Thus, one explanation for the low rate (1.5%) of clinically relevant SSOs in the mesh repair group in the present study may be attributed to the minimal subcutaneous dissection for the small mesh insertion. The most prevalent complication in this present study was a haematoma, with rates of 16.3% for mesh repair and 12.5% for suture repair, with only one participant (in the suture group) having a Clavien–Dindo grade II complication and requiring some intervention. Though these rates are relatively high compared with previous studies , , , the low number of cases that required intervention (only one) suggest benignity, with expectant management as the primary approach. Moreover, pain management after surgery remains a critical aspect of patient recovery. The present study showed that those participants who underwent suture repair reported less pain on the pain intensity scale using NRS 30 days post-surgery compared with those participants who underwent mesh repair, although the difference was not significant between the groups. Assessment of potential chronic pain occurrence between the groups will be reported 1-year post-surgery. This large, multicentre RCT aims to determine whether using small onlay mesh for repair of UHs less than or equal to 2 cm can reduce recurrence rates compared with suture repair. The early SSO results support the safety of using small onlay mesh for repair of smaller UHs. 
The involvement of 86 participating surgeons across six surgical units can be considered a strength. Efforts such as extensive training and standardized procedural documentation were employed to minimize variability in surgical technique. The multicentre design reduces biases related to individual surgeon experience and ensures the generalizability of the results to routine clinical settings. Consequently, the surgical method of repair is considered straightforward to endorse in clinical practice. Another strength was the focus on only smaller elective primary UHs with a stringent definition, unlike other studies that have included various types of ventral hernias with a broader size range , , , , . Also, the use of online data registration ensured accurate data entry and a totally concealed allocation before randomization. Moreover, the absence of crossover allocation likely resulted from randomization occurring close to treatment allocation, minimizing post-randomization cases. Additional strengths were the high follow-up rates—94% for mesh repair and 99% for suture repair at 30-day follow-up after surgery. Participants lost to follow-up were primarily due to the inability to reach participants (4 of 5), considered to be due to participants lacking interest in returning after a successful minor hernia operation. Furthermore, the double-blind design ensured unbiased follow-up assessments by both participants and surgeons. Additionally, the frequent documentation of reasons for not including eligible patients helped detect potential selection bias, although this documentation was not always fully complete. The trial faced limitations, including challenges in objectively diagnosing SSOs across the six surgical units, as no standard requirements for aspiration or radiology were specified in the study protocol for confirming seromas or haematomas. Consequently, this lack of a systematic approach may have led to variability in SSO classification and management, although likely to be random across both groups and followed by a clinically relevant approach. Additionally, the trial’s definition of a UH was based on the European guidelines from 2009, which differ from the 2020 guidelines. Also, discrepancies may have arisen from defining surgical-site wound infections as complications requiring enactment in the study protocol against the number of wound infections classified as only Clavien–Dindo grade I, with 3.5% of suture repair participants and 5.2% of mesh repair participants having wound infections but not treated with antibiotics. Furthermore, the inclusion interval was extended due to the COVID-19 pandemic. In conclusion, the early results of secondary outcomes show acceptable SSO rates and support the safety of using small onlay mesh repair for smaller hernias. This trial is likely to impact future standards of practice regarding the repair of UHs. Small onlay mesh repair is recommended as an alternative, safe, and easy mesh technique for surgeons to choose to treat smaller UHs. Future follow-up of the trial participants will determine whether this method significantly reduces recurrence rates compared with suture repair. zrae173_Supplementary_Data
Regimens combining radiation and immunotherapy for cancer: latest updates from 2024 ASCO Annual Meeting
ed9195e2-46fd-4fb6-9379-9e0d602e7325
11401258
Internal Medicine[mh]
Consolidation immunotherapy following definitive concurrent chemoradiotherapy (cCRT) has been established as a standard treatment in unresectable locally advanced non-small cell lung cancer (LA-NSCLC) based on the PACIFIC trial . This strategy was further validated in a real-world study with superior survival outcomes at ASCO 2024 . Moreover, research has been conducted to investigate the efficacy of consolidation immunotherapy in limited-stage small-cell lung cancer (LS-SCLC). The phase III ADRIATIC study evaluated durvalumab as a consolidation treatment for LS-SCLC patients who had not experienced disease progression after cCRT (Table ). Results from the study showed that durvalumab consolidation significantly prolonged overall survival [median OS (mOS): 55.9 vs. 33.4 months; HR 0.73; p = 0.0104] and progression-free survival [median PFS (mPFS): 16.6 vs. 9.2 months; HR 0.76; p = 0.0161] compared to placebo (Table ). The incidence rates of grade 3/4 adverse events (AEs) were similar at 24%, with less than 3% experiencing grade 3/4 pneumonitis across both groups . The ADRIATIC study would be a game-changer, and durvalumab would be a new standard of care for LS-SCLC after concurrent chemoradiotherapy. While consolidation immunotherapy is the standard treatment for LA-NSCLC patients, its application in patients with a driver gene mutation is still largely unknown. A global retrospective study revealed consolidation with ALK tyrosine kinase inhibitors (ALK-TKI) significantly improved PFS in ALK-positive unresectable LA-NSCLC compared to durvalumab or observation after cCRT ([not reached [NR]] vs. 11.3 months vs. 7.4 months, p < 0.0001) (Table ) . Meanwhile, the LAURA study in this year’s ASCO meeting demonstrated that consolidation therapy with osimertinib following cCRT in EGFR-mutant locally advanced patients achieved a significantly prolonged median PFS of 39.1 months . In contrast, in the PACIFIC trial, the median PFS of patients with EGFR mutations was 11.2 months. The ongoing BO42777 trial aims to evaluate the safety and efficacy of multiple targeted therapies compared to durvalumab for patients with unresectable stage III NSCLC . The anticipated results will provide valuable insights into the efficacy of consolidation targeted therapies. Researchers have been exploring immunotherapy as a component of a neoadjuvant strategy to improve survival outcomes in locally advanced tumors. Dr. Mai et al. reported the interim findings from the phase III BEACON study, which compared the efficacy of induction tislelizumab versus placebo combined with chemotherapy (gemcitabine-cisplatin), followed by cCRT and adjuvant tislelizumab or placebo in patients with locoregionally advanced nasopharyngeal carcinoma. The tislelizumab arm demonstrated a significant increase in the complete response rate (CRR) compared to the placebo arm (30.5% vs. 16.7%; p = 0.0006), meeting the primary endpoint (Table ) . It is expected that the significant tumor shrinkage effect would be converted into survival benefits. The GASTO-1091 study enrolled unresectable LA-NSCLC patients who received neoadjuvant docetaxel, cisplatin, and nivolumab followed by hypofractionated cCRT (hypo-cCRT). The hypo-cCRT was administered with a dose of 40 Gy in 10 fractions (fx) or 30 Gy in 6fx, followed by a hypo-cCRT boost of 24-30 Gy in 6fx, resulting in a total dose of 60-64 Gy. Patients without progression or grade ≥ 2 pneumonitis after hypo-cCRT were subsequently randomized to receive nivolumab or undergo observation. 
The study's primary endpoint was met, with the nivolumab group demonstrating significantly extended PFS compared to the observation group (NR vs. 12.2 months, p = 0.002) . Notably, the control group in this study had a PFS of 12.2 months, which is substantially longer than the 5.6 months observed in the PACIFIC study's control group. Additionally, both groups exhibited high overall response rates (ORR) with a minor difference (98.8% vs. 97.7%). These findings support further investigation of this innovative modality. A phase II study (SU2C-SARC032) demonstrated a significant enhancement in disease-free survival (DFS) when utilizing neoadjuvant pembrolizumab and radiotherapy (50 Gy/25fx), followed by surgery and adjuvant pembrolizumab (EXP), compared to standard radiotherapy and surgery (SOC) in undifferentiated pleomorphic sarcoma (UPS) and liposarcoma (LPS) (HR 0.57, 90%CI: 0.35-0.91; p = 0.023) (Table ). The estimated 2-year DFS rate was higher in the EXP arm (70% vs. 53%) . The superior DFS found in this study would potentially revolutionize the treatment paradigm for these specific cancer types in the future. However, in terms of AEs, the incidence of grade ≥ 3 events was significantly higher in the EXP arm compared to the SOC arm in this study (52% vs. 26%, p = 0.002) (Table ). Meanwhile, the PACIFIC-2 study evaluated the efficacy and safety of durvalumab in combination with cCRT, followed by consolidation durvalumab, as compared to placebo in patients with LA-NSCLC. The study did not demonstrate a statistically significant improvement in PFS (mPFS: 13.8 vs. 9.4 months, HR 0.85; 95% CI: 0.65-1.12; p = 0.247) or OS (mOS: 36.4 vs. 29.5 months, HR 1.03; 95% CI: 0.78-1.39; p = 0.823). Moreover, a notably higher percentage of patients experienced adverse events leading to the discontinuation of durvalumab compared to those receiving placebo (25.6% vs. 12.0%) , underscoring the importance of vigilant monitoring for potential adverse effects when concurrently combining immunotherapy with radiotherapy. Local radiotherapy has shown promise in managing metastatic malignancies. Chen et al. reported on the efficacy and safety of sequential thoracic radiotherapy following immunotherapy plus EP/EC (etoposide and cisplatin or carboplatin) as first-line treatment for extensive-stage small-cell lung cancer (ES-SCLC). Among patients who responded to treatment, radiotherapy was administered at a dose of ≥ 30 Gy/10fx or 50 Gy/25fx. The median OS and PFS were 21.4 months (95%CI: 17.2-NR) and 10.1 months (95%CI: 6.9-15.5), respectively, after a median follow-up of 17.7 months. The one-year and two-year OS rates were found to be 74.1% and 39.7%, respectively, while grade ≥ 3 treatment-related AEs occurred in 58.2% of patients (Tables and ) . Furthermore, a randomized phase III study on thoracic radiotherapy for ES-SCLC is ongoing, and its results are eagerly anticipated . In conclusion, these results highlight the importance of combining radiotherapy and immunotherapy in solid tumors. Consolidation immunotherapy following cCRT has demonstrated survival benefits in both LA-NSCLC and LS-SCLC. Meanwhile, neoadjuvant immunotherapy combined with chemotherapy/radiotherapy has shown significant tumor shrinkage effects that may translate into survival benefits. Additionally, incorporating radiotherapy into first-line treatment for advanced tumors has yielded promising results.
However, concurrent immunotherapy and radiotherapy are currently being tested for extremity sarcoma and might not be recommended off-trial for chest tumors due to the negative results observed in the PACIFIC-2 trial. Further study is needed to elucidate the potential underlying mechanism for proper cooperation of radiotherapy and immunotherapy in the future.
Proteomic analysis reveals the difference between the spermatozoa of young and old Sus scrofa
b83d99d5-a924-44ef-9c01-2cbe1937fc26
11718062
Biochemistry[mh]
Age is a crucial determinant of fertility in male animals. As age advances, various aspects, such as semen quality, embryo viability, sperm DNA integrity, conception rate, adverse pregnancy outcomes, and offspring health, are subject to varying degrees of impairment – . In contrast to human studies, animal breeding relies on the semen of male animals for artificial insemination, so studies in animal husbandry focus mainly on semen. With advancing research, the mechanism underlying the influence of age on spermatozoa has become better understood. Sperm quality and motility decline, and the proportion of abnormal spermatozoa increases in older animals – . The sperm morphology and activity of aged animals are influenced by various factors, including diminished antioxidant capacity and dysfunction in the epididymis and gonads, which can result in abnormal sperm motility and morphology as well as reduced fertility , , . In livestock production, the use of high-value semen is crucial for improving the genetic progress of species and the quality of offspring, and it is necessary to keep the semen-providing individuals within a certain age group . Owing to the quiescent state of transcription and translation in spermatozoa, proteins are considered the main players in sperm functionality . Sperm proteins and their interactions are closely related to the processes ranging from in vitro capacitation to oocyte fertilization. Proteomic analysis of spermatozoa has been widely used to identify potential breeding markers . Early studies on porcine semen proteomics were carried out based on 2-DE technology. Kwon et al. detected semen proteins before and after capacitation and found significant differences in proteins related to energy metabolism and reactive oxygen species, such as SUCLA2 and PRDX5, and used these as biomarkers to predict the quality of boar semen . Due to the technical limitations of 2-DE, they detected only ten differentially expressed proteins among 224 proteins. With the development of mass spectrometry, proteomics research has moved from the 2-DE era to the high-throughput era . An iTRAQ-based proteomic analysis of different regions of pig spermatozoa identified a total of 1723 proteins, of which 32 differed between regions . Zhang et al. performed proteomic analysis of extracellular vesicles associated with differing sperm motility and identified a total of 2576 proteins, of which 51 proteins were differentially expressed. Combined WGCNA analysis of these proteins and metabolites revealed significant correlations between sperm proteins and metabolites . With the development of proteomic analysis techniques, more and more proteins in semen have been discovered, but there are still few studies on the proteome of semen from aged boars. The Wannan black pig (WBP), a disease-resistant local variety, is predominantly found in the southern mountainous region of the Anhui Province, with a cultivation history spanning over 100 years . It thrives in dark and humid environments where reproductive performance and adaptability play crucial roles. This breed has high fertility, superior meat quality, maternal stability, and excellent tolerance to roughage. The WBP has gained immense popularity among the residents of the Yangtze River Delta region and serves as the prime ingredient for renowned Huizhou cuisine delicacies such as "Huizhou ham" and "braised meat." By 1982, there were 1702 purebred females within the WBP population .
The semen quality of WBP plays a crucial role in the development of the local pig industry and the enhancement of pig breeds, directly affecting both the quantity and quality of offspring, as well as the economic benefits derived from pig farming . The quality of boar semen exhibits significant variation, with a generally limited utilization time. The objective of this study was to use proteomic methods to determine protein expression levels in the semen of boars of different ages, as previous research has primarily focused on the motility and morphological indicators of spermatozoa. A tandem mass tag (TMT) approach was employed for the quantification and identification of differentially expressed proteins in WBPs across various age groups, with the aim of enhancing our understanding of the impact of age on semen at the molecular level. Overall, 4050 proteins belonging to 29,634 spectra were identified using LC-MS/MS. The pie chart depicts the coverage of the detected protein sequences, with proteins primarily concentrated between 0 and 10%, encompassing approximately 1940 proteins in this range, representing a total coverage of 47.9% (Fig. A). The distribution map of peptide lengths illustrates the length distribution of all identified peptides, revealing that most peptides fell within the range of 5–15 amino acids, with approximately 17,000 peptides falling into this category (Fig. B). As shown in Fig. C, spermatozoa from different samples were well separated in the PCA score plot, indicating significant differences between them. Protein-unique peptides indicate the number of distinct peptides for each identified protein. Notably, there was a relatively high number (approximately 2700) of proteins containing 1–5 unique peptides (Fig. D). Functional enrichment analysis of these proteins was conducted using the GO and KEGG databases. GO analysis revealed that the sperm proteins were primarily involved in the biological processes of organelle organization, translation, small-molecule metabolic processes, and amide and peptide biosynthesis. In terms of cellular components, there was a significant enrichment in intracellular anatomical structures, organelles, intracellular organelles, and membrane-bound organelles. Molecular functions were predominantly enriched in catalytic activity, hydrolase activity, and small-molecule binding (Fig. A). The KEGG pathway enrichment analysis revealed that sperm proteins were primarily associated with the lysosome, the ribosome, oxidative phosphorylation, the tricarboxylic acid cycle, and other pathways involved in energy metabolism (Fig. B). We conducted differential expression analysis on these 4050 proteins, considering a fold change > 1.2 or < 0.83 and a p-value < 0.05. Of these, 130 exhibited significantly different expression levels. Compared to aged boars, 33 proteins were upregulated and 97 proteins were downregulated in young boars. The expression patterns and abundance of these proteins are shown (Fig. A). We subsequently conducted functional enrichment analysis of the differentially expressed proteins. Unlike the total protein set, these differentially expressed proteins were enriched in reproductive processes, such as reproduction, fertilization, sexual reproduction, and spermatogenesis. This indicates a direct correlation between protein expression and reproductive function (Fig. B). The differentially expressed proteins (DEPs) were classified and annotated according to the KEGG database.
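The differential-expression filter described above (fold change > 1.2 or < 0.83 with p < 0.05) can be illustrated with a short sketch. The article does not state which statistical test was applied to the TMT intensities, so the use of Welch's t-test, the column layout, and the simulated data below are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

def call_deps(young, old, fc_up=1.2, fc_down=0.83, p_cut=0.05):
    """young, old: DataFrames of protein x replicate intensities.
    Returns per-protein fold change (young/old), p-value, and DEP call."""
    fc = young.mean(axis=1) / old.mean(axis=1)
    pvals = ttest_ind(young, old, axis=1, equal_var=False).pvalue
    res = pd.DataFrame({"fold_change": fc, "p_value": pvals})
    res["regulation"] = np.select(
        [(fc > fc_up) & (pvals < p_cut), (fc < fc_down) & (pvals < p_cut)],
        ["up_in_young", "down_in_young"], default="not_significant")
    return res

# hypothetical TMT reporter intensities, 3 replicates per age group
rng = np.random.default_rng(0)
proteins = [f"P{i}" for i in range(5)]
young = pd.DataFrame(rng.lognormal(10, 0.2, (5, 3)), index=proteins)
old = pd.DataFrame(rng.lognormal(10, 0.2, (5, 3)), index=proteins)
print(call_deps(young, old))
```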
The DEPs were primarily involved in 20 pathways, including the one-carbon pool by folate and steroid hormone biosynthesis pathways (Fig. C). All DEPs are listed in (Table ). To further explore the functions of the DEPs, functional enrichment analysis was performed separately for the upregulated and downregulated DEPs. GO biological process enrichment showed that ketone body metabolic processes, spermatid development and differentiation, and sulfide oxidation were uniquely enriched among the upregulated DEPs, whereas fertilization, reproduction, reproductive processes, sexual reproduction, and single fertilization were enriched among the downregulated DEPs (Fig. A, B). In the cellular component category, the downregulated DEPs were mainly enriched in the acrosomal membrane and extracellular region, whereas the upregulated DEPs were mainly enriched in intracellular regions (Fig. C, D). Molecular function enrichment also revealed differences between the up- and downregulated DEPs: the upregulated proteins were more related to the activity of energy metabolism enzymes, whereas the downregulated proteins were related to histone- and DNA topoisomerase-associated activities (Fig. E, F). The up- and downregulated proteins were then subjected to KEGG and subcellular localization annotation. The results showed that 15% of the upregulated proteins were localized in the mitochondria (Fig. A) and were enriched in the tricarboxylic acid cycle (Fig. B). The proportion of downregulated proteins in the mitochondria was only 4.05%; however, 5.41% of the downregulated proteins were located in the Golgi apparatus (Fig. C) and were enriched in the ribosome pathway (Fig. D). A PPI network was constructed using STRING and Cytoscape to visualize the interplay among these differential proteins, which fell into distinct interaction networks. Ribosomal proteins (RPS5, RPLP0, and RPL7) and RNA polymerase subunits (POLR2B and POLR2G) were centrally positioned and exhibited robust interconnections with other proteins (Fig. ). The differential expression of proteins between young and old boars was verified by quantitative PRM analysis, in which nine proteins were selected for verification, including seminal plasma sperm motility inhibitor (SPMI). Other proteins related to energy metabolism, such as aconitate hydratase (ACO2), acetyl-coenzyme A synthetase (ACSS), and succinyl-CoA:3-ketoacid coenzyme A transferase 1 (OXCT1), were also included. The remaining five proteins chosen for PRM validation were sulfide:quinone oxidoreductase (SQOR), galectin-3-binding protein (LGALS3BP), MYC-binding protein (MYCBP), complement component 1 Q subcomponent-binding protein (C1QBP), and the CUB domain-containing protein AWN. The relative expression differences of the peptides corresponding to each target protein were assessed using t-tests on their quantitative values (p < 0.05). The normalized relative protein expression results are presented in Fig. and Table S2 and are consistent with the proteomic analysis. Semen largely reflects the reproductive capacity of boars, and variation in reproductive performance across age cohorts may be attributable to differences in sperm quality, semen vitality, health status, and oxidative stress. Proteomic analysis of semen from older human males was recently reported, revealing a positive correlation between age and the DNA fragmentation index. Furthermore, the DEPs identified here were significantly enriched in pathways related to energy metabolism.
To discern disparities in protein levels among boars of different ages, we conducted LC-MS-based proteomic analysis of young (1-year-old) and aged (7-year-old) boars. A total of 4050 proteins were identified, including 130 that were differentially expressed between the two groups. FOLR2, an isoform of the folate receptor (FR), is expressed in the placenta, hematopoietic cells, and macrophages and carries a glycosylphosphatidylinositol (GPI) anchor. Immunostaining has shown that spermatozoa in semen exhibit a stronger staining reaction than spermatozoa in testicular tissue. The presence of FOLR on spermatozoa enables the formation of folate complexes, thereby safeguarding the folate content of the sperm microenvironment and facilitating normal DNA replication after fertilization through the transfer of folate carriers into the spermatozoa. FOLR2 expression may therefore serve as a predictive marker of male fertility. The role of energy metabolism in sperm flagellar motility has been extensively studied. Mammalian spermatozoa require sustained motility from ejaculation to fertilization, necessitating the generation of sufficient energy to meet their locomotion demands. The tricarboxylic acid cycle is a crucial pathway for ATP production in sperm mitochondria. Aconitate hydratase (ACO2), an enzyme that regulates the tricarboxylic acid cycle, translocates from the cytoplasm to the nucleus during somatic cell reprogramming, thereby influencing cellular totipotency. It also affects ATP-dependent sperm motility. The expression level of ACO2 is significantly lower in males with asthenospermia than in those with normal fertility, and the addition of isocitrate significantly increases sperm motility, which could be attributed to enhanced ATP production. In our study, the expression level of ACO2 was higher in young boars than in old boars, suggesting greater semen motility in young boars, which is consistent with previously reported findings. Beyond ACO2, a larger proportion of the upregulated proteins (15%) was localized to the mitochondria, indicating that the semen of young boars has a stronger energy supply, which can enhance sperm motility. Carboxypeptidase O (CPO) was identified as the most significant DEP, exhibiting over 11-fold higher expression in the Y group than in the O group. This protein belongs to the M14 family of metallocarboxypeptidases and preferentially cleaves C-terminal acidic amino acids. In insects, seminal fluid proteins are synthesized in the male reproductive tract and transferred to females during mating, along with spermatozoa, inducing physiological and behavioral changes in the females; CPB, another member of the carboxypeptidase family, possesses this functionality. Because CPO shares the same domain, we hypothesized that it may exhibit similar functions. Sperm mitochondria-associated cysteine-rich protein (SMCP) is a rapidly evolving protein that localizes mainly to the outer membrane of sperm mitochondria and enhances sperm motility. In vitro fertilization experiments with SMCP-knockout mice showed a fertilization success rate three-fold lower than that of wild-type mice, indicating reduced sperm motility and oocyte penetration. The knockout experiments of Karim et al. further support this view.
Our results demonstrated significantly higher expression of SMCP in the Y group, indicating enhanced motility in the semen of young boars. Ribosomal protein genes are among the most highly expressed genes in virtually all cells, and their products play a pivotal role in ribosome biogenesis, thereby influencing protein synthesis and folding. Among these proteins, 40S ribosomal protein S5 (RPS5) is a key RNA-binding component that contributes significantly to translation. Our findings highlight the central position of RPS5 in the PPI network and its pronounced expression in Group Y. In a study by Sandeep et al., transcriptome sequencing was performed on 60 males, comprising men with known fertility (n = 20), idiopathic infertility, and asthenospermia, and revealed high expression of RPS5 in the normal group and low expression in the asthenospermia group. Other members of the ribosomal protein family, such as RPLP0 and RPL7, likewise interacted with RPS5, underscoring the crucial role played by ribosomal family members in sperm function, a finding consistent with Laxman's transcriptome data analysis. The downregulated proteins in our study were partially localized to the Golgi apparatus and were enriched in the ribosome pathway, indicating that protein translation, processing, and folding are more prominent in the semen of old boars. The AWN protein, a member of the sperm adhesin family, was initially discovered in boar seminal plasma and is found in the seminal plasma of most mammalian species. Electron microscopy indicated that AWN localizes predominantly on the surface of sperm cells and exhibits a range of ligand-binding capabilities, interacting with receptor molecules in the anterior region of ejaculated spermatozoa and thereby facilitating sperm capacitation. This is consistent with the subcellular localization results of our proteomic analysis. Sajjad et al. conducted single-cell transcriptome sequencing of mouse spermatogonia and mesenchymal stem cells and, based on centrality analysis, identified the histone acetyltransferase KAT2A as the predominant central regulator. KAT2A, an enzyme responsible for regulating various acylation modifications, potentially modulates cellular function by influencing post-translational modifications of both histone and non-histone proteins. Histone modifications exert epigenetic effects on spermatogenesis by activating or repressing gene transcription. Acetylation of histone H3K14 is associated with testosterone production and spermatogenesis. Sodium arsenite exposure increases the level of H3K14ac in rat testes by promoting the expression of KAT2A, which subsequently leads to the repression of steroidogenesis-related genes, resulting in reduced testosterone production and impaired spermatogenesis. Lower expression of KAT2A was observed in Group Y, suggesting that young boars may have lower acetylation levels, consistent with normal spermatogenesis. Protein modifications are closely associated with spermatogenesis. Protein acetylation protects spermatozoa against spontaneous acrosome reactions, thereby enhancing fertilization rates; it also plays a crucial role in regulating sperm capacitation and influences semen quality after thawing. Histone crotonylation is a type of acylation that links metabolism and gene expression. It plays a pivotal role in promoter activation and male germ cell differentiation and exerts a significant influence during the late stages of spermatogenesis.
In addition to affecting gene expression in the testis, CDYL-mediated reduction of histone crotonylation impairs fertility in mice, as evidenced by a decrease in the number of spermatozoa in the epididymis and reduced motility of spermatids. In conclusion, the reproductive performance of breeding boars in practical production is influenced by age, and the variation in reproductive ability among boars of different ages may be associated with the protein composition of their semen. Comparative proteomic analysis of semen proteins from 1- and 7-year-old breeding boars identified 130 differentially expressed proteins, including 33 upregulated and 97 downregulated proteins. Functional enrichment analysis revealed that these differentially expressed proteins were associated with energy metabolism, reproduction, and fertilization. Furthermore, protein post-translational modification may affect the quality of boar semen at different ages. This study provides insights into the age-related differences in boar semen at the protein level. In future studies, we will further explore the epigenetic effects of post-translational modifications on semen. Animals and semen collection The animal study protocol was approved by the Animal Ethics Committee of Anhui Agricultural University (approval no. AHAU20210826). The experimental animals were selected from the Wannan Black Pig National Breeder Farm of Guangde Sanxi Ecological Farming Co., Ltd. Twenty Wannan black boars, aged 1 year (n = 10) or 7 years (n = 10), with similar body weights and no physiological diseases, were chosen. Fresh ejaculates were collected continuously using an artificial vagina, with a sow used for stimulation. The anterior and caudal fractions of each ejaculate were discarded, and the middle fraction was collected into the semen collection cup. Semen was collected from each boar twice a week. Semen samples that were free of foreign matter and odor, had a milk-white appearance, and showed a sperm survival rate exceeding 90% were selected for subsequent testing. Spermatozoa were purified from the semen by centrifugation through a Percoll solution. Animal experiments in this study were performed in full accordance with the ARRIVE reporting guidelines, and all methods were carried out in accordance with the relevant guidelines and regulations. Sample preparation Samples were suspended on ice in 200 µL lysis buffer (4% SDS, 100 mM DTT, 150 mM Tris-HCl, pH 8.0). The cells were disrupted by agitation using a homogenizer and boiled for 5 min. The samples were then ultrasonicated and boiled for a further 5 min. Undissolved cellular debris was removed by centrifugation at 16,000 rpm for 15 min. The supernatant was collected and quantified using a BCA protein assay kit (Bio-Rad, USA). Protein digestion Protein digestion (200 µg per sample) was performed according to the FASP procedure described by Wisniewski, Zougman et al. Briefly, the detergent, DTT, and other low-molecular-weight components were removed using 200 µL UA buffer (8 M urea, 150 mM Tris-HCl, pH 8.0) via repeated ultrafiltration (Microcon units, 30 kDa) facilitated by centrifugation. Then, 100 µL of 0.05 M iodoacetamide in UA buffer was added to block reduced cysteine residues, and the samples were incubated for 20 min in the dark. The filter was washed three times with 100 µL UA buffer and then twice with 100 µL 25 mM NH4HCO3.
Finally, the protein suspension was digested with 4 µg trypsin (Promega) in 40 µL 25 mM NH4HCO3 overnight at 37 °C, and the resulting peptides were collected as the filtrate. The peptide concentration was determined at OD280 using a Nanodrop device. TMT labeling of peptides The peptides were labeled with TMT reagent according to the manufacturer's instructions (Thermo Fisher Scientific). Each aliquot (100 µg of peptide) was reacted with one tube of TMT reagent. The sample was dissolved in 100 µL of 0.05 M TEAB solution (pH 8.5), and the TMT reagent was dissolved in 41 µL of anhydrous acetonitrile. The mixture was then incubated at room temperature for 1 h. Then, 8 µL of 5% hydroxylamine was added to the sample and incubated for 15 min to quench the reaction. The multiplex-labeled samples were pooled and lyophilized. High pH reverse phase fractionation (HPRP) The TMT-labeled peptide mixture was fractionated using a Waters XBridge BEH130 column (C18, 3.5 μm, 2.1 × 150 mm) on an Agilent 1290 HPLC operating at 0.3 mL/min. Buffer A consisted of 10 mM ammonium formate, and buffer B consisted of 10 mM ammonium formate with 90% acetonitrile; both buffers were adjusted to pH 10 with ammonium hydroxide. A total of 30 fractions were collected from each peptide mixture and concatenated to 15 by pooling equal-interval RPLC fractions. The fractions were dried for nano-LC-MS/MS analysis. LC-MS analysis (TMT10plex) LC-MS analysis was performed using a Q Exactive mass spectrometer coupled to an Easy nLC system (Thermo Fisher Scientific). Peptides from each fraction were loaded onto a C18 reversed-phase column (12 cm long, 75 μm ID, 3 μm) in buffer A (2% acetonitrile and 0.1% formic acid) and separated with a linear gradient of buffer B (90% acetonitrile and 0.1% formic acid) at a flow rate of 300 nL/min over 90 min. The gradient was set as follows: 0–2 min, linear gradient from 2 to 5% buffer B; 2–62 min, linear gradient from 5 to 20% buffer B; 62–80 min, linear gradient from 20 to 35% buffer B; 80–83 min, linear gradient from 35 to 90% buffer B; and 83–90 min, buffer B maintained at 90%. MS data were acquired using a data-dependent top-15 method, dynamically choosing the most abundant precursor ions from the survey scan (300–1800 m/z) for HCD fragmentation. The target value was determined based on predictive automatic gain control (pAGC). An AGC target value of 1e6 and a maximum injection time of 50 ms were used for full MS scans, and an AGC target value of 1e5 and a maximum injection time of 100 ms were used for MS2 scans. The dynamic exclusion duration was 30 s. Survey scans were acquired at a resolution of 70,000 at m/z 200, and the resolution of the HCD spectra was set to 35,000 at m/z 200. The normalized collision energy was 30. The instrument was run in peptide recognition mode. The mass spectrometry proteomics data were deposited in the ProteomeXchange Consortium ( https://proteomecentral.proteomexchange.org ) via the iProX partner repository, with the dataset identifier PXD050879. Database searching and analysis The resulting LC-MS/MS raw files were imported into Proteome Discoverer 2.4 software (version 1.6.0.16) for data interpretation and protein identification against the Uniprot-Sus scrofa (Pig) [9823]-122175-220104.fasta database. The initial search was performed using a precursor mass window of 10 ppm. The search followed the enzymatic cleavage rule of trypsin/P and allowed a maximum of two missed cleavage sites and a mass tolerance of 20 ppm for fragment ions.
The modification set was as follows: fixed modifications, carbamidomethyl (C), TMT10plex (K), and TMT10plex (N-term); variable modifications, oxidation (M) and acetyl (protein N-term). A minimum peptide length of six amino acids and at least one unique peptide per protein were required. For peptide and protein identification, the false discovery rate (FDR) was set to 1%. The TMT reporter ion intensity was used for quantification. Bioinformatic analysis Bioinformatic analyses were performed using Perseus software, Microsoft Excel, and the R statistical computing software. Differentially expressed proteins were screened with a cutoff of a fold-change ratio of > 1.20 or < 0.83 and a nominal p-value of < 0.05. Expression data were grouped via hierarchical clustering according to protein level. To annotate the sequences, information was extracted from UniProtKB/Swiss-Prot, the Kyoto Encyclopedia of Genes and Genomes (KEGG), and Gene Ontology (GO). GO and KEGG enrichment analyses were performed using Fisher's exact test, with FDR correction for multiple testing. GO terms were grouped into three categories: biological process (BP), molecular function (MF), and cellular component (CC). Enriched GO terms and KEGG pathways were considered statistically significant at a nominal p < 0.05. Protein–protein interaction (PPI) networks were constructed using the STRING database and Cytoscape software. Parallel reaction monitoring (PRM) analysis To verify the protein expression levels obtained by TMT analysis, the expression levels of selected proteins were quantified using LC-PRM/MS [1]. Briefly, peptides were prepared according to the TMT protocol. Tryptic peptides were loaded onto C18 stage tips for desalting before reverse-phase chromatography on an Easy nLC-1200 system (Thermo Scientific). One-hour liquid chromatography gradients with acetonitrile ranging from 5 to 35% over 45 min were used. PRM analysis was performed using a Q Exactive Plus mass spectrometer (Thermo Scientific). Methods optimized for the collision energy, charge state, and retention times of the most significantly regulated peptides were generated experimentally using unique peptides of high intensity and confidence for each target protein. The mass spectrometer was operated in positive ion mode with the following parameters: the full MS1 scan was acquired at a resolution of 70,000 (at 200 m/z), with an AGC target value of 3.0 × 10^6 and a maximum ion injection time of 250 ms. Full MS scans were followed by 20 PRM scans at a resolution of 35,000 (at m/z 200), with an AGC target of 3.0 × 10^6 and a maximum injection time of 200 ms. The targeted peptides were isolated using a 2 Th window and fragmented at a normalized collision energy of 27 in a higher-energy collisional dissociation (HCD) cell. Raw data were analyzed using Skyline (MacCoss Lab, University of Washington) [2] to obtain the signal intensities of the individual peptide sequences. For the PRM MS data, the average base peak intensity of each sample was extracted from the full-scan acquisition using RawMeat (version 2.1, VAST Scientific, www.vastscientific.com ). The normalization factor for sample N was calculated as fN = (average base peak intensity of sample N)/(median of the average base peak intensities of all samples). The area under the curve (AUC) for each transition from sample N was multiplied by this factor. After normalization, the AUCs of the individual transitions were summed to obtain peptide-level AUCs.
The relative protein abundance was defined as the intensity of a specific peptide.
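The normalization arithmetic just described can be written out explicitly. The following minimal Python sketch is purely illustrative (the sample names, peptide, and transition labels are hypothetical): it computes the per-sample factor fN from the average base peak intensities, scales each transition AUC, and sums the scaled transition AUCs to peptide-level values, as stated above.

from statistics import median

def prm_normalize(base_peak, transition_auc):
    # base_peak: {sample: average base peak intensity extracted from the full scans}
    # transition_auc: {sample: {(peptide, transition): AUC}}
    med = median(base_peak.values())
    factors = {s: bp / med for s, bp in base_peak.items()}  # fN as defined in the text
    peptide_auc = {}
    for sample, transitions in transition_auc.items():
        f = factors[sample]
        totals = {}
        for (peptide, _transition), auc in transitions.items():
            totals[peptide] = totals.get(peptide, 0.0) + auc * f  # scale each transition, then sum per peptide
        peptide_auc[sample] = totals
    return factors, peptide_auc

# Example with two hypothetical samples and one SPMI peptide measured by two transitions:
# factors, peptides = prm_normalize(
#     {"Y1": 2.0e9, "O1": 1.6e9},
#     {"Y1": {("SPMI_pep1", "y7"): 1.2e6, ("SPMI_pep1", "y5"): 0.8e6},
#      "O1": {("SPMI_pep1", "y7"): 0.9e6, ("SPMI_pep1", "y5"): 0.6e6}})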
Paradigm Shift Toward Digital Neuropsychology and High-Dimensional Neuropsychological Assessments: Review
Clinical neuropsychologists have traditionally developed and validated parsimonious assessment tools using basic technologies (ie, pencil-and-paper protocols and the general linear model). Throughout the history of the profession, advances have occurred predominantly in expanded normative standards. Although these low-dimensional tools are well-validated assessments of basic cognitive constructs, they have limited presentation capabilities (static 2D stimuli) and logging capabilities (responses must be recorded manually). Moreover, low-dimensional approaches limit statistical modeling (typically linear) to combinations of features weighted to predict the value of criterion variables. Some neuropsychologists may argue that the parsimony offered by low-dimensional tools adequately reflects what is in reality a much higher-dimensional deficit. However, low-dimensional tools may offer diminished interpretations of complex phenomena. The preference for low-dimensional tools is apparent in surveys of assessments used by neuropsychologists. This conservatism has resulted in neuropsychological assessments that have hardly changed since the original scales were established in the early 1900s. Low-dimensional neuropsychological assessment tools place the neuropsychologist in a position much like that described in Flatland, a 19th-century literary work on the nature of perception and dimensionality. Its narrator, A Square, resides in Flatland among residents (Flatlanders) whose perception is limited to 2 dimensions. After a discussion with a Stranger (a sphere), A Square comes to understand how complex and high dimensional the world is. Unfortunately, A Square is jailed for holding and communicating heretical beliefs. For neuropsychologists, low-dimensional technologies have encouraged the search for simplified explanations of complex phenomena, which limits our ability to develop, validate, interpret, and communicate useful models of human neuropsychology. Recently, cognitive psychologists have called this the Flatland fallacy. They contend that the Flatland fallacy can be surmounted by formalizing psychological theories as computational models that have the capacity to make precise predictions about cognition and/or behavior. These authors explain that from the limited perspective of the Flatlanders (bottom of the figure), a 3D object (a sphere) appears to be a shape of fluctuating magnitude (an expanding and contracting circle); in reality (top of the figure), the object is simply passing through a lower-dimensional plane. The low-dimensional perspective leads to a false understanding of reality. Similarly, neuropsychologists may incorrectly conclude that a low level of dimensionality adequately describes neuropsychological or psychological phenomena when, in fact, they may be missing the complexity and high dimensionality of those phenomena. Cognitive psychologists also contend that unitary cognitive constructs such as attention are limited and prevent psychologists from deepening the understanding of complex, or high-dimensional, phenomena. First, theories of unitary cognitive constructs are based on circular reasoning: complex phenomena such as the conception of attention are explained by presumptive attentional systems. Instead, psychologists should model the parallel, reciprocal, and iterative interactions between context and neural or functional processes, which would enhance the characterization of physically executed actions.
Similarly, the analytical approach to psychology is problematic because (1) an exhaustive definition is proposed (eg, attention), (2) assumed subfunctions are identified (eg, selective, sustained, or divided attention) with separable functional and neuronal processes (or dedicated systems), and (3) research concentrates on specific tasks that purport to measure the theoretical subfunctions rather than the underlying processes required to execute efficient behavior in a particular situation. Commonalities between subfunctions and other constructs (eg, working memory) often make them empirically indistinguishable and by no means imply that the underlying functional and neural processes are different or separable. These authors propose that rather than adhering rigidly to prior cognitive conceptual frameworks, psychologists should model mechanisms and processes (sensory, motor, and cognitive) that are found in several complex behaviors. These behaviors may run in parallel or interact across stimulus properties, time, and goals to achieve an outcome. How do neuropsychologists move from low-dimensional to high-dimensional neuropsychology? The National Institutes of Health (NIH) offers initiatives for neuropsychologists interested in higher-dimensional tools, including (1) integrating neuroscience into the behavioral and social sciences, (2) transformative advances in measurement science, (3) digital intervention platforms, and (4) large-scale population cohorts and data integration. Similarly, the NIH Brain Research through Advancing Innovative Neurotechnologies (BRAIN) initiative seeks high-dimensional approaches to understanding brain disorders (eg, Alzheimer disease, Parkinson disease, depression, and traumatic brain injury) and to accelerating technologies for high-dimensional modeling of how the brain records, processes, uses, stores, and retrieves vast quantities of information. Neuropsychologists can enhance work conducted in NIH initiatives by offering interpretations of neuroscience findings based on clinical expertise. After a brief consideration of the historical progression of neuropsychological assessment technologies, current NIH initiatives for the behavioral and social sciences are outlined, along with evaluations of current neuropsychological assessment technologies. Neuropsychology has experienced a number of advances as it has developed from a primarily qualitative practice to a more objective and evidence-based approach, with expanded normative standards, performance validity testing, and cross-cultural considerations. Although these improvements have aided the investigation of neurocognitive functions, there are increasing discussions of the need to enhance the dimensionality of neuropsychological assessments and to incorporate computational modeling. The technological and theoretical development of neuropsychological assessment can be understood in terms of dimensional waves of technological adoption. In Neuropsychology 1.0, neuropsychological assessment emphasized the development of low-dimensional, construct-driven paper-and-pencil measures (ie, simple presentations of static stimuli to test abstract constructs like working memory). In Neuropsychology 2.0, there is a technological move to automated administration, scoring, and, in some instances, interpretation of low-dimensional stimulus presentations using computerized approaches (eg, the NIH Toolbox and video teleconferencing).
Concurrently, technological developments in neuroimaging have changed the role of neuropsychological assessments, from lesion localization to predictions about a patient's ability to perform activities of daily living. Finally, Neuropsychology 3.0 reflects contemporary advances in high-dimensional tools (dynamic and ecologically valid simulation, logging, and modeling of everyday activities). Some neuropsychologists are hesitant to move from low-dimensional to high-dimensional tools because computerized assessments may introduce errors and/or decrease the reliability of the assessment process by means of automation. Although there have been improvements in computational power and security, developers of high-dimensional technologies need to take appropriate actions to ensure proper implementation. Furthermore, normative efforts for high-dimensional assessments are ongoing, and continued validation of advanced platforms and novel data analytic approaches is needed. Three decades ago, clinical psychologists were urged to adopt progressively available advanced technologies. Concurrently, in the 1980s, neuropsychologists started discussing the potential of computerized neuropsychological assessments. It was subsequently pointed out that, compared with the progress of our everyday technologies, psychological assessments had barely progressed throughout the 20th century (eg, Wechsler scales). The technologies found in neuropsychological testing can be compared with now obsolete black-and-white televisions, vinyl records, rotary-dial telephones, and the first commercial computer made in the United States (ie, the Universal Automatic Computer I). Assessment technologies need to progress in ideas, not just in new measures. In the late 1990s, it was observed that neuropsychology lagged behind the other clinical neurosciences, both in absolute terms and by comparison. Clinical neuropsychologists continued to use many of the same tools that had been developed decades earlier, and the new tests that were appearing were not conceptually or substantively better than those from decades earlier (eg, Wechsler scales). Dodrill pointed out that in the 1970s there was little difference in the technological progress of neurology and neuropsychology. This changed with the advent of computed tomographic scanning, after which neuropsychologists were no longer consulted for lesion localization. Neuroimaging advances allowed neurologists to better understand and treat neurologic pathophysiology. Dodrill suggested that if technological developments in neurology had been as slow as those in neuropsychology, neurologists would still be limited to pneumoencephalograms and radioisotope scans, procedures deemed primeval by current neuroradiological standards. To get an idea of where neuropsychology stands today, basic searches were performed on July 29, 2020, to tally the number of technology publications per discipline. The first was a PubMed search with the terms "computer" AND ("neuropsychology" OR "neurology" OR "neuroscience") from 1990 to 2019. Second and third searches using the terms "technology" and "neuroimaging" instead of "computer" revealed similar findings. The number of publications per year resulting from each of these 3 broad literature searches was tallied, and the findings suggest that high-dimensional technologies have a vastly greater representation in neurology and the neurosciences.
The use of such technologies in neuropsychology has begun to increase only very recently and is clearly not keeping pace with the other neurosciences. Similarly, a survey of neuropsychologists' use of computerized instruments revealed that only 6% of the 693 neuropsychological assessments surveyed were computerized. The average respondent reported rarely using computerized tests, although early-career neuropsychologists were more likely to use computerized assessments. Integrating Neuroscience Advances Into Clinical Neuropsychology High-dimensional technologies such as functional neuroimaging provide real-time observations of brain function that challenge the validity of some low-dimensional paper-and-pencil technologies. Impairments following brain injury rarely involve a single type of processing, and there is no one-to-one relationship between neuropsychological functions and brain structures or systems. Similar symptoms can arise from various injury types, and the same underlying injury can result in a variety of different symptoms. Although the integration of neuroimaging and neuropsychological methods has improved our understanding of brain functions, specific neuropsychological functions are typically associated with activation in multiple brain regions (distributed processing). Advances in methods and high-dimensional technologies offer promise for redefining previous understandings of cognitive functions (eg, elucidation of multiple types of processing speed) and for introducing novel (complex and dynamic) cognitive functions. Neuropsychologists are increasingly arguing for neuropsychological models established in terms of patients' reciprocal relations with the environments in which they carry out activities of daily living. Understanding the complex and dynamic interactions of persons involves the study of the brain's processes within environmental and social systems. The increasing emphasis of clinical neuropsychology on ecological validity and integration with social neuroscience is limited by current low-dimensional paper-and-pencil assessments. There is growing attention to the development of high-dimensional tools for assessing and modeling brain functions that include dynamic presentations of environmentally relevant stimuli. Moving beyond the static or low-dimensional stimuli found in most traditional neuropsychological tests requires neuropsychologists to find ways to update their technologies to reflect high-dimensional assessment approaches (eg, deep learning, mobile platforms, wearables, extended reality [XR], and the Internet of Things [IoT]). Neuropsychologists have looked to factor analytic studies of neuropsychological test results to enhance understanding of the functional capacities of patients. However, examining the relations among responses to low-dimensional tasks that use static or 2D stimuli can constrain task performance and neural activity to abstract constructs (eg, working memory). Low-dimensional assessments bind mean neural population dynamics to a low-dimensional subspace and may limit the neuropsychologist's assessment of the patient's ability to perform everyday activities. Furthermore, the observation of low-dimensional neural signals may be an artifact of simple cognitive tasks, as standard paper-and-pencil (low-dimensional) tasks often involve basic responses to static or low-dimensional stimuli. Computational neuroscience offers high-dimensional models of cognition via neural network–motivated connectionist models.
This approach integrates neuroscience findings into high-dimensional models of the ways in which our brains execute cognitive functions. Leabra is a programming framework that has been used to integrate connectionist models of cognitive function. The result is a holistic architecture adept at producing precise predictions of a broad array of cognitive processes. Computational models based on neuroscience findings allow for assessing a model's sensitivity for capturing a neuropsychological construct and the specificity of a given construct relative to other neuropsychological states and processes. Finally, computational models are shareable and extensible by other neuroscientists who want to extend previous work via iterative construct validation. Adoption of Advances in Measurement Science to Neuropsychological Assessment The NIH Office of Behavioral and Social Sciences Research (OBSSR) also emphasizes advances in measurement science and the move from low-dimensional data analytic approaches (typically linear combinations of features relative to a set of weights for criterion value prediction) to higher-dimensional data analytic approaches for evaluating change over time (eg, deep learning neural networks, machine learning). Clinical scientists are starting to adopt developments in deep learning and other computational modeling approaches. Machine learning and deep learning have been applied successfully in various areas of artificial intelligence research: natural language processing, speech recognition, self-driving cars, and computer vision. For example, natural language processing–oriented computerized neuropsychological assessments have been developed to extract key features of linguistic complexity changes associated with progression in the Alzheimer disease spectrum. High-dimensional data analytics can be applied to computerized adaptive testing (CAT) and to computational models derived from large collaborative databases. High-dimensional measurement protocols offer clinical scientists increased precision and granularity of data. Technologically enhanced neuropsychological assessments (including high-dimensional virtual environments [VEs] with graphical models) surpass simple automations (computerized neuropsychological assessments) of low-dimensional paper-and-pencil tasks. Moreover, they allow neuropsychologists to present scenarios that require patients to actively choose among multiple subtasks. From higher-dimensional tasks, context-dependent computational models can be established that include latent context variables extracted using nonlinear modeling. A framework has been proposed that aims to elucidate probabilistic computations using graphical and statistical models of naturalistic behaviors. The probability distribution for high-dimensional tasks (ecologically valid simulations of everyday activities) is complex; as a result, the brain likely simplifies the high-dimensional stimuli by centering on significant interactions. Neuropsychologists can develop probabilistic graphical models for efficient descriptions of complex statistical distributions that relate several interactions and/or conditional dependencies among neuropsychological variables.
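To make the notion of a probabilistic graphical model of conditional dependencies concrete, the toy Python sketch below factorizes a small joint distribution over binary variables as a product of conditional probabilities and queries it by enumeration. The variables (fatigue, attentional lapses, task errors) and all probabilities are invented purely for illustration and do not represent a validated clinical model.

from itertools import product

# Toy directed graphical model: Fatigue -> Lapses -> TaskError, so that
# p(F, L, E) = p(F) * p(L | F) * p(E | L). All numbers are hypothetical.
p_f = {True: 0.3, False: 0.7}
p_l_given_f = {True: {True: 0.6, False: 0.4}, False: {True: 0.2, False: 0.8}}  # keyed as [f][l]
p_e_given_l = {True: {True: 0.5, False: 0.5}, False: {True: 0.1, False: 0.9}}  # keyed as [l][e]

def joint(f, l, e):
    return p_f[f] * p_l_given_f[f][l] * p_e_given_l[l][e]

def prob_error_given_fatigue(fatigued):
    # p(E = True | F = fatigued), obtained by summing the joint over the hidden lapse state.
    num = sum(joint(fatigued, l, True) for l in (True, False))
    den = sum(joint(fatigued, l, e) for l, e in product((True, False), repeat=2))
    return num / den

# prob_error_given_fatigue(True) -> 0.34, versus 0.18 when not fatigued.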
Neuropsychologists can use deep learning algorithms that simulate the hierarchical structure of a person's brain (healthy and damaged). Deep learning is a form of machine learning (ie, algorithms that learn from data to automatically perform tasks such as classification and prediction that can be nonlinear in nature) that processes data from lower dimensionality to increasingly higher dimensions. It is increasingly used to develop novel technologies, big data, and artificial intelligence. Neuropsychologists can use deep learning to analyze studies with both traditional (low-dimensional paper and pencil) and high-dimensional simulation technologies (eg, virtual reality–based neuropsychological assessments, mixed reality, augmented reality). With deep learning, neuropsychologists could process the lower-dimensional data (paper-and-pencil tests).
Next, they could move to increasingly higher-dimensional data (eg, from simulation technologies) and develop increasingly complex data-driven semantic concepts that are likely more representative of brain functioning than historical, theoretically based cognitive constructs (eg, working memory). Probabilistic models and generative neural networks can be used to develop a unified framework for modeling neuropsychological functioning (nonclinical and clinical). Connectionist models such as these are understood to be part of the more general framework of probabilistic graphical models. Neuropsychological performances have been modeled as Bayesian computations (brain function expresses perceptions and actions as inferential processes); in this approach, neuropsychological deficits are false inferences arising from aberrant prior beliefs. Bayesian approaches can be used for computational phenotyping, in which graphical models are implemented as stochastic processes (randomly determined sequences of observations, each treated as a sample from a probability distribution) via generative neural networks. Visual object recognition (eg, facial processing) can be used as an example: selective lesions can be applied to computational models of visual object recognition to assess the impact of damage to various cortical regions (eg, early visual processing, extrastriate areas, anterior associative areas). New high-dimensional measures could be developed to assess visual agnosia and examine the appearance of category-specific deficits. Deep learning architectures can also be used to model specific connection pathways in selective impairment. Stochastic decay (stochastic reduction of weight values that decreases responsivity to afferent signals) can be applied to synaptic strengths to examine cognitive decline. Both global degradation of all network synapses and local degradation of inhibitory synapses from a given processing layer have been investigated. The findings revealed that although older participants accurately performed arithmetical tasks, they had impaired numerosity discrimination on trials requiring the inhibition of incongruent information, and these results were related to poor inhibitory processes measured by standard neuropsychological assessments. The specific degradation of inhibitory synapses in the model resulted in a pattern closely resembling the older participants' performance. The addition of computational modeling for the development, validation, and application of neuropsychological assessments represents a high-dimensional approach for neuropsychologists. The NIH Toolbox is a battery of computerized neuropsychological assessments that uses item response theory (IRT) and CAT. With IRT, the NIH Toolbox offers an alternative to classical test theory that moves beyond group-specific norms. In IRT, the probability of an item response is modeled according to the respondent's position on the underlying construct of interest. This approach can be useful for establishing the item-level properties of each NIH Toolbox measure across the full range of each construct. Although neuropsychological measures tend to meet the reliability and validity requirements of classical test theory, the equivalence of item properties (eg, item difficulty and item discriminatory power) is often assumed across items.
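As a concrete (and deliberately simplified) illustration of how IRT makes item difficulty and discrimination explicit parameters rather than assumed constants, the Python sketch below evaluates a standard two-parameter logistic (2PL) item response function and the corresponding item information; the two items and their parameter values are invented for illustration and are not NIH Toolbox items.

import math

def p_correct_2pl(theta, a, b):
    # 2PL item response function: probability of a correct response for a respondent
    # with ability theta, given item discrimination a and item difficulty b.
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    # Fisher information of the item at ability theta; adaptive testing typically
    # administers the item that is most informative at the current ability estimate.
    p = p_correct_2pl(theta, a, b)
    return a ** 2 * p * (1.0 - p)

# Two hypothetical items: an easy, weakly discriminating item and a harder, sharper one.
easy = {"a": 0.8, "b": -1.0}
hard = {"a": 2.0, "b": 1.0}
for theta in (-1.0, 0.0, 1.0):
    print(theta, round(p_correct_2pl(theta, **easy), 2), round(p_correct_2pl(theta, **hard), 2))

Because item information peaks near theta = b, an adaptive test can shorten sessions by presenting only the items that are maximally informative at the examinee's provisional ability estimate, which is the logic behind the CAT efficiency gains discussed next.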
Consideration of item difficulty tends to be subsumed under independent variable manipulation (eg, cognitive load) to modify the marginal likelihood of correct responses in item subgroups. A limitation of this approach is that it does not match well with current item-level analyses found in neuroimaging assessments of brain activations following stimulus probes. For neuropsychological assessments to comport well with brain activation probes, item difficulty needs to be considered to avoid ceiling and floor effects in patient performances across clinical cohorts. IRT models offer the neuropsychologist both individual patient parameters and individual item characteristics that can be scaled along a shared latent dimension. Neuropsychological assessments would benefit from greater adoption of developments in IRT that emphasize the accuracy of individual items. Various IRT approaches have been applied as signal detection theory models that connect corresponding but discrete methods . Combining IRT and signal detection delivers the measurement accuracy needed for robust modeling of item difficulty and examinee ability. The NIH Toolbox CAT approach shortens testing time (by about half as long as low-dimensional paper-and-pencil measures). Through avoidance of floor or ceiling effects and concise item pools, CAT delivers equal (or greater) ability–level assessments . Moreover, CAT offers enhanced efficiency, flexibility, and precision assessment of multiple domains of interest without adversely affecting participant burden. The application of IRT to CAT provides neuropsychologists with real-time assessment of item-level performance. Neuropsychologists are increasingly interested in developing assessments that assess the patients’ real-world functions in a manner that generalizes to functional performance in everyday activities . A function-led approach to neuropsychological assessments involves starting with directly observable everyday behaviors and proceeding backward to observe how a sequence of actions leads to a given behavior. Furthermore, a function-led approach examines how that behavior is disrupted. For example, a patient may have difficulty multitasking while using a global positioning system to navigate a simulated neighborhood in a driving simulator. High-dimensional technologies can be used to present dynamic and interactive stimuli in a 3D environment that includes automatic logging and computational modeling (eg, head movements, eye tracking, response latencies, patterns, etc) of a patient’s performance in everyday activities. High-dimensional neuropsychology tools are being developed and validated to simulate everyday functions (rather than abstract cognitive constructs) . Given the drawbacks to experiments conducted in real-life settings (time consuming, require transportation, involve consent from local businesses, costly to use or build physical mock-ups, and difficult to replicate or standardize across settings) and difficulty in maintaining systematic control of real-world stimulus challenges, high-dimensional and function-led XR environments are being used by neuropsychologists. Low-dimensional (paper-and-pencil and computer automated) neuropsychological tools only indirectly assess the patient’s ability to perform everyday activities . 
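A minimal sketch of the two-parameter logistic (2PL) IRT model and a greedy CAT step, selecting the item with maximum Fisher information at the current ability estimate, is shown below; the item bank and ability value are hypothetical.

```python
# Minimal sketch (assumed 2PL model and toy item bank, not the NIH Toolbox code):
# item response probability under 2PL IRT and a greedy CAT item-selection step.
import numpy as np

def p_correct(theta, a, b):
    """2PL IRT: probability of a correct response for ability theta,
    discrimination a, and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

# Hypothetical item bank: (discrimination, difficulty) pairs
bank = np.array([(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.5)])
theta_hat = 0.3  # current ability estimate

info = [item_information(theta_hat, a, b) for a, b in bank]
best = int(np.argmax(info))
print("next item:", best, "P(correct) =", round(p_correct(theta_hat, *bank[best]), 3))
```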
VEs offer potential aids in enhancing the dimensionality and ecological validity of neuropsychological assessments through enhanced computational capacities for administration efficiency, stimulus presentation, automated logging of responses, and data analytic processing. Given the precise stimulus presentation and control of dynamic or high-dimensional perceptual stimuli, VEs offer neuropsychological assessments with enhanced ecological validity . High-dimensional immersive VEs move beyond low-dimensional paper-and-pencil tests with static stimulus presentations in sterile testing rooms to simulated environments that replicate the distractions, stressors, and/or demands found in the real world. Using passive data monitoring from everyday technologies (eg, smartphones, IoT), clinical scientists can collect real-time cognitive performance throughout the course of a day . Each patient has a digital footprint that transpires from consistent use of everyday technologies. Coupling technologies with developments in measurement science allows for novel approaches to capture cognitions, affects, and behaviors . Rapid progress in sensor technologies has led to objective and effective measures of behavioral performance, psychophysiology, and environmental contexts . For example, machine learning has been employed to extract features from passive monitoring of mobile phone use. When comparing these features with performance on the psychomotor vigilance task, it was found that alertness deviations as small as 11% could be detected . Another example of enhanced data monitoring can be found in the increased granularity in performance assessments and digital logging tools used in the Framingham Heart Study . New developments in digital logging of verbal responses to cognitive stimuli allow for automated algorithms that can extract new language features (eg, speaker turn taking, speaking rate, hesitations, pitch, number of words, vocabulary). These features offer promise for predicting incident cognitive impairment . Furthermore, low-dimensional pencils and pens can be upgraded with high-dimensional digital pens with associated software designed to measure pen positioning 75 times per second. Digital pens have a spatial resolution of ±0.002 inches. For example, digital pens are being used by neuropsychologists for assessing clock drawing performance . Minute drawing elements such as pen strokes (eg, clock face, hand, digit) can be logged with greater than 84% accuracy . The sensitivity of these high-dimensional technologies to minute drawing elements, decision-making latencies, and graphomotor characteristics may offer promise to greatly enhance lower-dimensional hand scoring of the Boston Process Approach. A review of the Boston Process Approach and neuropsychology-based technologies has been available recently . Another transformative opportunity from the NIH OBSSR is the application of high-dimensional technologies to interventions . Progress in neurocognitive rehabilitation has been enhanced by neuroimaging of plasticity of the brain. Similarly, a notable increase can be found in the use of noninvasive brain stimulation approaches that leverage neural plasticity for rehabilitation . Neuropsychologists interested in rehabilitation emphasize the promotion of brain plasticity by increasing a patient’s capacity for performing everyday activities. 
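As an illustration of the digital pen logging described above, the following sketch derives two simple graphomotor features from timestamped pen samples; the data layout, the 75 Hz sampling, and the pause threshold are assumptions for illustration only.

```python
# Minimal sketch (hypothetical data layout): extract simple graphomotor features
# (stroke velocity and pause count) from timestamped digital-pen samples of the
# kind produced at roughly 75 samples per second. Thresholds are illustrative.
import numpy as np

def pen_features(t, x, y, pause_speed=0.05):
    """t in seconds, x/y in inches; returns mean speed and number of pauses."""
    dt = np.diff(t)
    speed = np.hypot(np.diff(x), np.diff(y)) / dt        # inches per second
    pauses = int(np.sum(speed < pause_speed))            # near-stationary samples
    return {"mean_speed": float(speed.mean()), "pauses": pauses}

# Synthetic clock-drawing fragment sampled at 75 Hz
t = np.arange(0, 2, 1 / 75)
x = np.cos(2 * np.pi * t / 2)
y = np.sin(2 * np.pi * t / 2)
print(pen_features(t, x, y))
```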
The resource and labor intensiveness of interventions and the resulting limitations (reach, scalability, and duration) found in real-world assessment environments require interventions to be personalized at the start, adapted throughout treatment, and operationalized into coded databases for fidelity . Smart Environment Technologies Smart environments integrate and incorporate several high-dimensional capabilities (eg, function-led evaluation, passive data monitoring, deep learning, etc) to provide both assessment and intervention. Using smart environments, neuropsychologists can discreetly monitor a patient's everyday activities for changes in clinical status (eg, mobility patterns can predict neurocognitive status). Moreover, automatic interventions can be provided in real-world settings . Smart environments use machine learning algorithms (eg, naïve Bayes, Markov, conditional random fields, and dynamic Bayes networks) to model, recognize, and monitor large amounts of labeled training data . Activity aware prompting is used to assist in the elevation of independent living. Results from studies using prompting technologies reveal growth in independent activity engagement by patients with neurocognitive impairment . VE Technologies Smart virtual reality environments simulate real-world scenarios and offer refined stimulus delivery for interventions . Using VEs, neuropsychologists can present and control stimuli across various sensory modalities (eg, visual, auditory, olfactory, haptic, and kinesthetic). There is an increasing number of validated VEs that can be used for assessment and intervention: virtual apartments , grocery stores , libraries , classrooms , driving , cities , and military environments . In addition to the use of novel measurement science for more efficient assessments using behavioral performances, real-time psychophysiological data (eg, eye gaze) can also be used to adapt assessment and intervention environments for a more individualized approach using factors such as emotional reactivity and ongoing skill development . Smartphones and Other Digital Technologies Current NIH initiatives for the behavioral and social sciences contend that intervention technologies need to move from short-term assessment and rehabilitation interventions (low-dimensional assessments and treatments that may limit maintenance of behavioral response and change) to high-dimensional approaches that use novel technologies (eg, smartphones) to extend treatment duration to improve behavioral maintenance . Mobile technologies offer neuropsychologists higher-dimensional interventions that extend into patients' everyday activities by logging, monitoring, prompting, and skill building between treatment sessions. One version of this involves ecological momentary assessments and interventions, as patients perform activities of daily living . Ecological momentary assessments and interventions using digital devices offer large streams of continuous data . Advances in computational modeling offer distinctive prospects for real-time behavioral interventions in ecological contexts . As with any new tool, neuropsychologists need to develop and validate measures and interventions. 
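The smart-environment passage above lists naïve Bayes among the algorithm families used for activity recognition; the following is a minimal sketch on synthetic sensor features, with feature names and activity labels invented for illustration.

```python
# Minimal sketch (synthetic sensor data, not a deployed system): a Gaussian naive
# Bayes classifier that maps simple in-home sensor features to activity labels.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)
# Hypothetical features per time window: [motion events, stove-use minutes, door openings]
cooking  = rng.normal([8, 5, 1], 1.0, size=(100, 3))
sleeping = rng.normal([1, 0, 0], 0.5, size=(100, 3))
leaving  = rng.normal([3, 0, 2], 0.8, size=(100, 3))

X = np.vstack([cooking, sleeping, leaving])
y = np.array(["cooking"] * 100 + ["sleeping"] * 100 + ["leaving"] * 100)

model = GaussianNB().fit(X, y)
print(model.predict([[7.5, 4.0, 1.0], [1.2, 0.1, 0.0]]))  # expected: cooking, sleeping
```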
The NIH OBSSR strategic plan is also interested in big data, data analytics, and data integration techniques for developing collaborative knowledge bases . Integrating neuropsychological data into large collaborative knowledge bases will allow neuropsychologists to either formalize cognitive ontologies or abandon them in favor of phylogenetically refined functional and neuronal processes that underlie complex behaviors or, more simply, traditional neuropsychological tasks . Formal designations of distinct sensory, motor, and cognitive entities can be established in terms of parallel, reciprocal, hierarchical, and/or spatiotemporal relations . Consistent with critiques from cognitive psychology , a limitation of neuropsychological data integration is that low-dimensional neuropsychological assessments are made up of hypothetical interdimensional constructs inferred from research findings . 
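One way to make this limitation concrete is to quantify test specificity directly; the sketch below computes median within-domain and between-domain correlations for a simulated two-domain battery (the simulated scores and domain labels are illustrative, not the values reported in the literature).

```python
# Minimal sketch (simulated scores, not published data): median within-domain vs
# between-domain correlations for a small battery, as a specificity check.
import numpy as np

rng = np.random.default_rng(3)
n = 300
g = rng.normal(size=n)                       # shared general factor
domain = {"memory": 2, "attention": 2}       # two hypothetical tests per domain
scores, labels = [], []
for name, k in domain.items():
    d = rng.normal(size=n)                   # domain-specific factor
    for _ in range(k):
        scores.append(0.6 * g + 0.4 * d + 0.5 * rng.normal(size=n))
        labels.append(name)

R = np.corrcoef(np.array(scores))
within  = [R[i, j] for i in range(len(labels)) for j in range(i + 1, len(labels)) if labels[i] == labels[j]]
between = [R[i, j] for i in range(len(labels)) for j in range(i + 1, len(labels)) if labels[i] != labels[j]]
print("median within:", round(float(np.median(within)), 2),
      "median between:", round(float(np.median(between)), 2))
```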
Evidence for poor test specificity is apparent in median correlations for common neuropsychological tests. It has been found that although the median correlation within domain groupings on a neuropsychological test was 0.52, the median correlation between groupings was 0.44 . Therefore, the tests are not unambiguously domain specific. The median correlations should be notably higher within groupings and lower between groupings. A recent meta-analysis of relationships between the Wisconsin Card Sorting Test (WCST) and the Wechsler Adult Intelligence Scale (WAIS) found a robust relationship between WCST performance and WAIS indices . This is interesting because the WAIS was recently found to be the test most often administered by neuropsychologists and the WCST was the fifth most often administered . Interestingly, the meta-analysis found that WCST scores were associated, with comparable strength, with both verbal and nonverbal domains from Wechsler Adult Intelligence Scale tests. Another issue is that there is considerable variation in some neuropsychological tests of the same domain (eg, various measures of go or no-go performance) . The shared variance of tests of supposedly differing domains and the lack of consistency in tests of the same domain may decrease the capacity for accurate data integration. Compounding this issue is the fact that current diagnostic frameworks found in the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM) and the World Health Organization's International Classification of Diseases (ICD) are dependent on presenting signs and symptoms. Moreover, they do not align with findings from genetics and clinical neuroscience . Ontologies are formal specifications of entities found in a domain and their relations. An ontology contains designations of separate entities along with a specification of ontological relations among entities with representations via spatiotemporal (eg, preceded-by or contained-within ) or hierarchical relations (eg, is-a or part-of ). This provision of an objective, concise, common, and controlled vocabulary facilitates communication among domains. Neuropsychological assessment lags behind other clinical sciences in the development of formal ontologies . To develop such ontologies, neuropsychologists must move beyond the diagnostic taxonomies found in the DSM and ICD. These diagnostic taxonomies are not sufficient for biomarker research because they do not reflect relevant neurocognitive and behavioral systems. Instead, neuropsychologists interested in developing a common vocabulary for ontologies and collaborative knowledge bases should adopt the US National Institute of Mental Health's Research Domain Criteria (RDoC) project. The RDoC aims to establish a classification system for mental disorders based on neuroscience and behavioral research . Neuropsychologists interested in high-dimensional technologies have embraced the following NIH initiatives to advance scientific developments: (1) integration of neuroscience into behavioral and social sciences, (2) transformative advances in measurement science, (3) digital intervention platforms, and (4) large-scale population cohorts and data integration. Evidence that progress is occurring in neuropsychology exists; however, more work needs to be done. Much of this work involves adoption, development, and validation of novel technologies. 
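A minimal sketch of the hierarchical relations just described (is-a and part-of), using a toy vocabulary rather than any established ontology, is shown below.

```python
# Minimal sketch (toy vocabulary, not an established ontology): representing
# "is-a" and "part-of" relations and querying ancestors transitively.
relations = {
    ("episodic memory", "is-a"): "declarative memory",
    ("declarative memory", "is-a"): "memory",
    ("hippocampus", "part-of"): "medial temporal lobe",
    ("medial temporal lobe", "part-of"): "temporal lobe",
}

def ancestors(entity, relation):
    """Follow a relation transitively (e.g., all 'is-a' ancestors of an entity)."""
    chain = []
    while (entity, relation) in relations:
        entity = relations[(entity, relation)]
        chain.append(entity)
    return chain

print(ancestors("episodic memory", "is-a"))   # ['declarative memory', 'memory']
print(ancestors("hippocampus", "part-of"))    # ['medial temporal lobe', 'temporal lobe']
```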
Similarly, there is a need for a classification system (based on neuroscience and psychology research) that moves beyond low-dimensional emphases on unitary cognitive constructs specific to a purported functional or neuronal system. A high-dimensional classification instead embraces testable hypotheses of how an observed phenomenon is produced from fundamental underlying mechanisms or processes, the dynamics of those processes (eg, reciprocal, hierarchal, iterative), and the multiple functional or neuronal systems involved in several complex behaviors . In more basic terms, neuropsychologists should theorize with verbs instead of nouns to serve scientific progress. Only then can neuropsychologists integrate data to develop meaningful ontologies and collaborative knowledge bases of high-dimensional neuropsychological phenomena. Computational modeling has great promise for achieving this endeavor. High-dimensional neuropsychology requires substantial reform in the way the profession conducts training. High-dimensional training should be added to current trainings that emphasize primarily (in some programs it may be solely) low-dimensional neuropsychological tests (eg, paper-and-pencil tests) and methods (limited introduction to general linear modeling). Increased emphasis should be placed on technical skill development with high-dimensional technologies and data-driven inferential reasoning. Curricula in neuropsychology programs should be expanded to adapt to the recent technological advances that have led to exponential growth in the other sciences. This would require reimagining training in clinical psychology programs. If neuropsychologists of the future are to work with large collaborative knowledge bases and perform complicated computational modeling of big data, then they need at least basic training in areas traditionally associated with computer science (eg, computer programing) and informatics (algorithms and databases). As such, their basic statistical training would need to be enhanced to include data manipulation, predictive model generation, machine learning, natural language processing, graph theory, and visualization. Increased emphasis on training basic technical and computational skills will improve the ability of future neuropsychologists to participate in science. A final note is the need for training in neuroethics. Neuroethics has been distinguished into 2 branches: (1) ethics of neuroscience—neuroethics as applied ethical reflection on the practices and technologies found in the neurosciences—and (2) neuroscience of ethics—what neuroscience can reveal about the nature of morality and morally relevant topics . Neuroethics are important for the NIH BRAIN initiative. The NIH BRAIN project aims to examine the ways in which dynamic patterns of neural activity are transformed into cognition, emotion, perception, and action in health and disease . The BRAIN initiative promotes the use of powerful new tools and technologies: (1) technologies for monitoring neural circuit activity and (2) technologies that enable the modulation of neural circuits . As expected, the ethical concerns related to the medical and nonmedical use of neurotechnologies by neuropsychologists are profound. Neuroethics for neurotechnologies include a combination of principlist, deontological, and consequential ethical approaches to answer ethical quandaries . Training in neuroethics and the ethical use of high-dimensional technologies will allow neuropsychologists to provide enhanced care for their patients.
Parenting Styles, Food Parenting Practices, Family Meals, and Weight Status of African American Families
038b641d-0639-489e-a9dd-8117d4ac9aab
9864142
Family Medicine[mh]
Obesity is a major public health issue and a growing concern universally. The obesity prevalence is around 21% among older adolescents aged 12–19 years, which is higher than that of younger children and adolescents . The National Center for Health Statistics reported that the obesity prevalence is higher among African American youth in comparison to White youth . Moreover, overweight and obese children and adolescents are more likely to be overweight and develop a chronic disease in adulthood . Environmental, behavioral, and personal factors exert an influence on eating habits and behaviors as an element involved in developing or preventing obesity in children and adolescents . Lifestyle choices, psychological factors, family factors, and socioeconomic factors are the most remarkable etiologies for childhood obesity . Parenting (or caregiver) styles (PSs), food parenting practices (FPPs), and frequency of family meals are major factors among different environmental factors that impact children and adolescents as the first and most influential community that they join. PSs refer to the engagement and responsiveness level of parents in different situations with their child. FPPs are postulated to impact children’s eating behaviors . FPPs were also identified as a predictor of children and adolescents’ health outcomes in adulthood . A high frequency of family meals provides several benefits for families, which include improving weight status and promoting healthy eating habits . An absence of family meals is associated with unhealthy eating patterns and poor diet quality . Furthermore, there is a negative association between the frequency of family meals and obesity development . However, the question of how eating meals together as a family is related to other aspects of the family environment, such as different PSs and FPPs, and whether this relationship is associated with obesity status among African American minority groups is unresolved. African American parents were shown to use an authoritarian PS, which is characterized by high restriction and monitoring of children’s food consumption . Many African American children and adolescents do not meet the recommended dietary intake of fruits, vegetables, and whole grains due to low socioeconomic status, which can lead to a higher risk of obesity . Family system theory (FST) was used as the theoretical framework for this study. This theory emphasizes the importance of the family as a system to understand and explain individual behaviors in the context of family interactions . FST suggests that any change in family structure or the role of family members can have an impact on the behavior of the entire family over time . Previous studies revealed that a warm and supportive PS correlates with the number of desirable healthy behaviors practiced. This can impact adolescent weight status and dietary patterns . In contrast, restrictive FPPs, such as pressure to eat, restrictions on youth’s access to foods, and parental concerns about adolescents’ weight status, are associated with poorer diet quality . There are very few studies addressing the impact of the family environment on the obesity status of both parents and adolescents, especially among minorities. The importance of this work is to study the effect of three influential factors of family environment, including PSs, FPPs, and family meal frequency together on obesity status among African American families. 
This study becomes even more important considering that obesity is a major problem among adolescents of minority groups, especially African Americans. The goal of this study is to help elucidate which family environmental factors have a positive impact on controlling the weight status of African American families and communities, and to determine which PSs and/or FPPs may lead to a higher family meal frequency. The results indicate that a higher family meal frequency is positively correlated with a healthier weight status among African American adolescents, and that an authoritative PS and the monitoring, reasoning, and modeling FPPs are associated with a higher frequency of family meals. 2.1. Research Design, Participants, and Procedure The protocol of the current study was approved by the institutional review board (IRB) of the University of the District of Columbia. A total of 211 African American parent–adolescent dyads participated in this cross-sectional study. The dyads were recruited by Qualtrics from November to December 2021 to complete the survey. The inclusion criteria included the following: parents or caregivers willing to participate in the study with their 10–17-year-old adolescents; access to the Internet; being comfortable reading and writing in English; being responsible for providing food for the adolescent. All participants signed a parental consent or adolescent assent form before participating as a prerequisite for the survey. 2.2. Parents' Survey The parents completed a 20–25 min online survey. The survey used questions from the 85-item Comprehensive General Parenting Questionnaire (CGPQ), which facilitates research exploring how parenting impacts a child's weight-related behaviors, as well as items used in a Monroe-Lord et al. (2021) study on African American families . The demographic characteristics of both the adolescents and the parents, household food security, household acculturation, and participation in federal food assistance programs were evaluated. The parents self-reported anthropometric measurements including height and weight for themselves and their children. Body mass index (BMI) for parents and BMI percentile for adolescents were used in this study. For parents, BMI was calculated as weight divided by the square of height. As BMI increases with age during childhood and adolescence and differs between males and females, BMI-for-age percentiles based on CDC growth charts were used to determine obesity status for adolescents. BMI was categorized into three groups: normal weight if BMI was between 18.5 and 24.9, overweight if BMI was 25.0–29.9, and obesity if BMI was 30.0 and above. BMI percentile was also categorized into three groups: normal weight if the BMI percentile was equal to or greater than the 5th percentile and less than the 85th percentile, overweight if the BMI percentile was at or above the 85th percentile but less than the 95th percentile, and obesity if the BMI percentile was at or above the 95th percentile for specific age, gender, and height . The survey also included the following question: "During the past 7 days, how many times did all, or most, of your family in your house eat a meal together?" The answer included six options, from "Never" to "More than 7 times." For this study, frequency of family meals was categorized into three groups after combining the answer choices, namely, two times or less, three to six times, and seven times or more . 2.3. Statistical Analysis Two exploratory factor analyses were run to identify the PSs and FPPs. 
Once the factors were identified, average factor scores for each parent were calculated. Spearman's rank correlation (when weight status was considered as a continuous variable) and the Wilcoxon rank sum test (when weight status was considered as a categorical variable) were used to test the relationships between weight status (BMI percentile for the adolescents and BMI for the parents) and the PSs and FPPs. The Wilcoxon rank sum test was used to examine the relationships between family meal frequency and weight status, as well as PSs and FPPs. Spearman's correlation was used to test the relationship between adolescent BMI percentile and parental BMI. SAS 9.4 (SAS Institute, Cary, NC, USA) was used for statistical analysis in this study. The results are considered significant at p < 0.05. 
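A minimal sketch of the analyses described above, using hypothetical values rather than the study data, is shown below: the CDC-style BMI-percentile categories, Spearman's rank correlation, and a Wilcoxon rank-sum comparison computed with SciPy.

```python
# Minimal sketch (hypothetical values, not the study data): BMI-percentile
# categories as defined in the methods, plus Spearman and Wilcoxon tests.
import numpy as np
from scipy.stats import spearmanr, ranksums

def bmi_percentile_category(pct):
    """Categorize an adolescent BMI-for-age percentile as in the methods."""
    if pct >= 95:
        return "obese"
    if pct >= 85:
        return "overweight"
    return "normal weight"

print(bmi_percentile_category(62.45), bmi_percentile_category(87.06))

rng = np.random.default_rng(4)
parent_bmi = rng.normal(29, 5, size=60)                         # synthetic parent BMI
adol_pct = np.clip(70 + 1.5 * (parent_bmi - 29) + rng.normal(0, 15, 60), 1, 99)

rho, p = spearmanr(parent_bmi, adol_pct)                        # continuous association
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

median_split = adol_pct >= np.median(adol_pct)                  # two-group comparison
stat, p = ranksums(parent_bmi[median_split], parent_bmi[~median_split])
print(f"Wilcoxon rank-sum p = {p:.3f}")
```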
3.1. Demographic Analysis The details of the sample characteristics are presented in . The adolescent sample was composed of 41% male and 59% female individuals with a mean age of 14.28 years. The mean BMI percentile was 71.35. The obesity rate of the adolescents was 19.6% (the national estimate for African American youth is 22%). Approximately 82% of the caregiver participants in this study were the parents of the adolescents (the term parents is used for all caregivers hereafter). Most of the parents were female (70%), and 57% of the parents were overweight or obese. Approximately 56% of the parents had a college education or above. Furthermore, approximately 52% of the adolescents lived in single-parent households, and more than half of the families had family meals three to six times per week. 3.2. Parenting Styles and Food Parenting Practices Two exploratory factor analyses were run for sets of 35 (for PSs) and 33 (for FPPs) items. The first factor analysis for the identification of PSs produced four factors, which were named authoritative, authoritarian, setting rules/expectations, and neglecting. One item was excluded because it did not load on any of the four factors. The items for each PS are listed in . A second factor analysis was run for the identification of FPPs, at which point five items were excluded from the final factors. The analysis produced four factors, which were named monitoring, reasoning, copying, and role modeling. Monitoring is defined as parents keeping track of what and how much their children eat. Reasoning or teaching is defined as parents reasoning with the child about the benefits of healthy food and teaching them healthy eating habits. Copying is defined as when parents intentionally or unintentionally encourage the child to copy their eating behaviors. Role modeling is defined as parents exhibiting healthy eating behaviors to encourage similar behaviors in their children. The items for each FPP factor are listed in . The factor loadings and the details of the factor analyses for each PS and FPP are shown in . The internal reliability of each factor was good (Cronbach's alpha > 0.8) or acceptable (Cronbach's alpha > 0.7) for all PSs and FPPs. The parents received a score on all eight factors. The highest median scores were for the authoritative and setting rules PSs. In order of decreasing prevalence, the PSs applied by the African American parents were setting rules, authoritative, neglecting, and authoritarian, while the FPPs used, again in order of decreasing prevalence, were role modeling, copying, reasoning, and monitoring. 
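The internal-reliability index reported above can be computed as in the following sketch, which simulates item responses for a single hypothetical factor rather than using the survey data.

```python
# Minimal sketch (simulated item responses, not the survey data): Cronbach's
# alpha for one factor's items.
import numpy as np

def cronbach_alpha(items):
    """items: array of shape (n_respondents, n_items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(5)
trait = rng.normal(size=200)                       # latent parenting-style score
items = np.column_stack([trait + 0.6 * rng.normal(size=200) for _ in range(5)])
print(f"alpha = {cronbach_alpha(items):.2f}")
```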
3.3. Relationship of Different Demographic Data with Weight Status of Both Adolescents and Parents The relationship between the weight status of both parents and adolescents and different demographic variables was examined and reported in . Based on the results, both adolescent and parent sex were meaningfully related to the adolescents' BMI percentile ( p = 0.012 and p = 0.0485, respectively). Fewer male adolescents (40.6%) were in the normal weight group compared with female adolescents (63.5%), and adolescents whose main caregiver was female were in a better weight status category compared with those whose caregiver was male. Male parents had significantly higher BMI compared to female parents ( p = 0.0081). In addition, there was a significant trend toward lower BMI among the younger parents ( p = 0.0286). Interestingly, we could not find any relationship between socioeconomic factors, including parental education and household income, and the obesity status of either the parents or the adolescents in these African American families. 3.4. Relationship of Parent Weight Status with Adolescent Weight Status The relationship between parent BMI and adolescent BMI percentiles was examined. A highly significant correlation between parent BMI and adolescent BMI percentile was found (r = 0.42, p < 0.0001). 3.5. Relationship of PSs, FPPs, and Family Meal Frequency with Adolescents' Weight Status BMI percentile was considered a categorical variable (normal weight, overweight, and obese) to evaluate whether the PSs and FPPs are correlated with the obesity status of the African American adolescents. No meaningful relationship was found between the categorized adolescents' BMI percentiles and PSs and FPPs . In addition, no correlation was found between the parents' weight status and PSs, FPPs, and family meal frequency. Family meal frequency was associated with the adolescents' BMI percentile ( p = 0.03). The median BMI percentile score was 87.06, which indicates overweight, for those adolescents with two or fewer family meals, while it was 62.45, which indicates a normal weight, for those adolescents with seven or more family meals per week. Although no significant correlation was found between parental BMI and family meal frequency ( p = 0.33), there was a positive trend, with a decrease in parental BMI when having more family meals . 3.6. Relationship of Family Meal Frequency with PSs and FPPs Among the different PSs studied, only the authoritative PS was positively related to family meal frequency ( p = 0.0004). The authoritative score was one point higher in those families with seven or more family meals compared to those with two or fewer family meals. However, among the four different FPPs, three of them (monitoring, reasoning, and modeling) were correlated with the frequency of family meals ( p = 0.0002, p = 0.0017, and p = 0.0008, respectively) . 
In this study, the relationship between different parental influences (i.e., PSs, FPPs, and family meals) and African American families' obesity status was evaluated. Notably, the existing literature examining the influence of PSs and FPPs on obesity status among both parents and adolescents in a minority population, specifically African American, is sparse. The findings of this study reveal that African American families rely on setting rules and expectations more than on other PSs, and that authoritarian was the least prevalent PS. A previous study revealed that a PS characterized by rigidity, restriction, and high control, classed as an authoritarian style, is more prevalent among African Americans . This type of PS evokes a sense of safety and nurturance among adolescents . Setting a large number of rules and expectations is a form of behavioral control by parents, which can be perceived as an authoritarian style by adolescents. Thus, it can play a negative role in health behaviors among adolescents, instead of resulting in improvements in their health status. It is important to have a supportive and alternative plan for adolescents while establishing rules and expectations. Moreover, role modeling was the dominant FPP among the African American families in this study, while monitoring was the least prevalent. This finding is consistent with a previous study with a small sample size that reported a higher score for role modeling compared to other FPPs among African American families . This study could not find any relationship between the BMI percentiles/BMI of the adolescents/parents and PSs and FPPs. This finding is consistent with two recent studies and one older study that reported no specific correlation between FPPs and overweight or obesity in children and adolescents . In addition, two other studies confirmed that maternal weight status is independent of FPPs . However, a previous study showed that greater parental responsiveness, which is characteristic of an authoritative PS, is significantly correlated with a lower BMI percentile . These different findings regarding the correlation between authoritative PSs and obesity status could result from the different questionnaire designs used in the two studies. It is important to note that although we could not find any statistically meaningful correlation, the association between authoritative PSs and the obesity status of the adolescents was negative. It is important to consider the impact of PSs or FPPs on obesity status in adulthood. 
Their impact may only become apparent later in the adolescents' lives. Family meal frequency was associated with the weight status of the adolescents. The adolescents who had more family meals per week had a lower BMI percentile. Although no significant correlation was found between parental BMI and family meal frequency, a favorable trend in the parents' obesity status was observed; likewise, adolescents with three or more family meals per week were, on average, in the normal weight status group, whereas those with two or fewer family meals were overweight. Previous findings also showed that family meal frequency can help control the development of obesity among children and adolescents . In addition, family meals during adolescence not only maintain adolescents' normal weight status, but also help to protect them from becoming overweight or obese in young adulthood . This is due to learning how to choose healthier and more nutrient-dense foods during family meals, which impacts their dietary habits and helps prevent the consumption of unhealthy snacks and foods. The authoritative style was the only PS positively associated with the frequency of family meals. Previously, a study with a small sample size of overweight and obese African American adolescents demonstrated that an authoritative style contributes to improving the family meal frequency . Three out of four FPPs (i.e., monitoring, reasoning, and modeling) were significantly associated with the frequency of family meals. Paying attention to FPPs may therefore be more effective than focusing on PSs alone in establishing healthier behaviors such as family meals. FPPs may not only promote healthier diets among adolescents, but may also help to promote better psychological health for family members . A strength of this study is its focus on African American individuals, a minority group that is among the populations most vulnerable to obesity. Future studies can use other methods, such as bio-impedance, waist circumference, and dual X-ray absorptiometry, to examine the correlation between obesity status and parental influences. The majority of the previous studies on PSs, FPPs, family meals, and adolescents' weight status focused on one of the parent or caregiver variables, especially maternal influences, in minority groups such as African American families. Future studies can focus on fathers' styles and practices in terms of the weight status and dietary habits of adolescents. In addition, intervention studies can also help us to understand which parental influences (i.e., PSs, FPPs, and family meal frequency) are most useful, and how, in maintaining a normal weight status while considering the cultural values of minority groups. This study focused on different parental influences, including PSs, FPPs, and family meal frequency, and their relationships with the weight status of African American families. We also examined how each PS and FPP can impact the family meal frequency. The results indicate that family meal frequency plays a more important role in ensuring a healthy weight status among African American adolescents in comparison to PSs and FPPs. An authoritative PS was the only style correlated with a higher family meal frequency, while monitoring, reasoning, and modeling practices were correlated with a higher frequency of family meals.
Review of patients' home medicine cabinets by the community pharmacies of Álava
f4f60bd1-1db9-4164-8b67-06a2cd6ce318
11739903
Patient Education as Topic[mh]
The General Council of Official Colleges of Pharmacists defined the strategy of the pharmacy profession around three axes: "we provide care, we are social, and we are digital" . This strategy helps accelerate the transformation and challenges of the "2030 Agenda", which includes the 17 "Sustainable Development Goals" aimed at achieving a fairer, healthier and more environmentally respectful future. At the Official College of Pharmacists (COF) of Álava we designed the Medicine Cabinet Review project, with which we sought, in addition to promoting the delivery of a professional pharmaceutical care service (SPFA) in the community pharmacies of Álava, to highlight the pharmacist's role as an active agent capable of fostering health and environmental habits in pursuit of these goals. Regarding our care-related work, the aim of the project was to improve the use of medicines by checking patients' knowledge of them, ensuring safety and optimizing their use to increase patients' wellbeing and health and thus avoid negative outcomes associated with medication (RNM). With respect to the social dimension, the aim was to promote appropriate management of medicine waste, educating patients on this matter so that they dispose of it at the SIGRE collection points in community pharmacies. The specific objectives of this project were: To check whether medicines were stored correctly, detecting those that were expired, unnecessary or in poor condition and disposing of them at the SIGRE point. To provide training on the proper storage of medicines and the disposal of their packaging and leftovers at the SIGRE points of the community pharmacy. To check patients' knowledge of each and every medicine in their cabinet, advising them on its use and providing personalized medicine information (IPM). To detect medicines, medical devices and/or self-care products that could pose a risk to the patient's health (safety). To detect incidents: drug-related problems (PRM) and RNM, and to carry out the corresponding pharmaceutical interventions. To give guidance on organizing the medicine cabinet. The medicine cabinet review took place during June and July 2022. All pharmacies in Álava were invited to take part in the campaign "Pon al día tu botiquín" ("Bring your medicine cabinet up to date"). The SPFA and SIGRE Committee delivered a training course explaining the objectives of the campaign, the procedure, data recording and the medicine recycling process at the SIGRE point. Offering the service The service was offered to patients who requested it or whom the pharmacist identified as potentially benefiting from it, following the inclusion and exclusion criteria indicated below. The inclusion criteria were: polymedicated patients; patients with problems managing their medication; patients whose medication had changed in recent months; patients taking medicines considered high risk; patients for whom the electronic prescription record showed that prescriptions had not been collected (suspected lack of adherence). The following were excluded: minors; patients whose cognitive or language abilities did not allow them to understand the purpose of the study; patients whose medication had changed in recent months; patients living in residential care facilities. 
Scheduling the patient Once the patient agreed to take part in the service, a day and time were arranged for the review, and the patient was asked to come with all actively prescribed medicines (MTA) together with their treatment sheet or report, as well as the medicines not included in the active treatment, medical devices and/or self-care products (MPSA) contained in their medicine cabinet. If they had thermolabile medicines, they were asked to bring a written list of them. Medicine cabinet review The interview was carried out in the pharmacy's personalized care area. The MTA were separated from the rest. Review of MTA The patient was asked to identify the medicines one by one, and it was checked whether any medicine included in the treatment sheet or report had not been brought. The corresponding questions were asked and, if an incident was detected, the corresponding pharmaceutical intervention was carried out and recorded (table 1). Review of MPSA The patient was asked to identify the MPSA one by one, the corresponding questions from the form were asked and, if an incident was detected, the corresponding pharmaceutical intervention was carried out and recorded (table 1). Final recap Before ending the interview, a final recap was carried out to ensure that the patient had understood everything correctly and that the data collection had been adequate. Finally, advice was given on the use and upkeep of the medicine cabinet, accompanied by the patient infographic (figure 1) and the infographic on waste disposal at the SIGRE point (figure 2). Electronic form Once the interview was over and the patient had left the pharmacy, the form was completed electronically with all the information collected for data processing. Ten community pharmacies took part in the campaign and performed 49 medicine cabinet reviews. The Medicine Cabinet Review Service was provided to 49 patients, of whom 67.3% were women. By age range, 73.5% of the patients were over 70 years old and 26.5% were between 30 and 70 years old. There were no patients under 30. A total of 710 medicines and medicine cabinet products were analyzed, of which 383 (53.9%) were MTA and 326 (45.9%) MPSA. 67.3% of the patients stored their medicines in an unsuitable place (kitchen and/or bathroom), compared with 32.7% who stored them correctly. Results of the MTA review Of the 383 MTA reviewed, 101 (26.4%) presented incidents (graph 1), most notably: inadequate storage (medicine without its leaflet and/or packaging, inadequate temperature, excessive time since opening, etc.) (27.9%), lack of knowledge about the use of the medicine (26.5%), lack of adherence (20.4%), and inappropriate dose, regimen and/or duration (12.2%). As shown in table 2, according to the chi-square test, among the 49 people whose medicine cabinets were reviewed, neither gender nor age influenced the number of people with MTA with or without incidents (p=0.21 for gender and p=0.99 for age). Likewise, no target population group with MTA incidents can be established as a function of age and sex (p=0.72), always bearing in mind that no one under 30 took part in the study. 
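A minimal sketch of the chi-square analysis reported above, using illustrative counts rather than the study's table 2, is shown below.

```python
# Minimal sketch (illustrative counts, not the study's table 2): chi-square test
# of independence between sex and having at least one MTA incident.
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = women / men, columns = with / without incidents
table = np.array([[20, 13],
                  [ 7,  9]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```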
However, of the 101 incidents detected among the 383 MTA, there is a trend for women over 70 to be the people accumulating the highest number of incidents in relation to this type of medicine (p=0.08). The pharmaceutical interventions carried out (graph 2) to resolve these incidents were mainly: providing IPM (42.0%), giving guidance on organizing the medicine cabinet (20.0%), and attempting to address the lack of adherence (18.0%). Results of the MPSA review Of the 326 MPSA reviewed, 157 (48.1%) presented incidents (graph 3), most notably: expired medicine (48.0%), unnecessary medicine (20.7%), and lack of knowledge about the use of the medicine (12.1%). As shown in table 3, applying the chi-square test, there are significant differences, with women accumulating more MPSA with incidents (p=0.03), while no differences are observed with respect to age (p=0.17). Nor are there differences as a function of age and sex combined (p=0.07), so no target population group with MPSA incidents can be established, always bearing in mind that no one under 30 took part in the study. Of the 157 incidents detected among the MPSA, there are significant differences, with women over 70 accumulating the most incidents in relation to the MPSA (p=0.001). The pharmaceutical interventions carried out (graph 4) to resolve these incidents were mostly: disposal at the SIGRE point (75.0%) and providing IPM (13.7%). Disposal at the SIGRE point The pharmacists disposed of a total of 137 medicines and medical devices and/or self-care products at the SIGRE point of their pharmacies, representing 19.3% of all those reviewed; most of the items disposed of were MPSA: 125 packages (91.2%), compared with 12 MTA packages (8.8%). Among the MPSA disposed of, the main groups were: analgesics and nonsteroidal anti-inflammatory drugs (NSAIDs) (20.6%), antibiotics (19.8%), antiseptics and wound care materials (14.3%), and treatments for winter ailments (8.7%). The main reason for disposal at the SIGRE point, for both MTA and MPSA, was expired medicine, 33.3% and 84.2% respectively. Accumulating medicines at home is a very widespread practice in households and is not always free of risk. Storage under inadequate conditions and lack of knowledge, among other reasons, can make the home a source of PRM, with negative consequences for patients' health and increased healthcare costs. As pharmacists, we must develop informational and educational measures that help our patients detect and prevent these problems. SPFA, especially those related to Pharmaceutical Care, that is, to medicines and medical devices, can be a useful strategy for reducing problems in medication use. One example is the medicine cabinet review service, classed by FORO AF FC among the services oriented to the process of use . The data obtained in this study show that 67.3% of the people interviewed kept their medicines in unsuitable places such as the kitchen or bathroom, figures that agree with those found by Hernández et al. and Matos et al. . These are places subject to large variations in humidity and temperature, which can affect the stability of medicines. 
In light of these data, we observe that patients need more information on the appropriate storage conditions for their medicine cabinets. The people whose medicine cabinets were reviewed were mostly women (67.3%), as they are generally the ones who take greater responsibility for pharmacological treatments and medical care in the home. As regards age, most of the interviewees were over 70 years old (73.5%). This may be because people of this age tend to be patients with multiple conditions and multiple medications and therefore visit the community pharmacy more frequently, figures that are consistent with those found by Nuñez et al. and Flores et al. . It is striking that there were no patients under 30. The reason may be that, in this age group, it is very uncommon to find people who meet the inclusion criteria. Given that women over 70 are associated with the largest number of incidents in MTA and MPSA, it could be worthwhile to carry out future studies in this population group with the aim of reducing that number. Incidents were detected in 26.4% of the MTA, the main ones being inadequate storage (medicines without a package leaflet and/or without original packaging, etc.) and lack of knowledge about the use of the medicine (what it is for, dosing regimen, etc.). These figures are consistent with the studies carried out by the Muy Ilustre Colegio Oficial de Farmacéuticos de Valencia . Incidents were detected in 48.3% of the MPSA, the main one being expired medicines, as also reported in the study by Berenguer et al. . Disposal at the SIGRE point was the most important intervention, with a total of 137 medicines disposed of, which amounted to between 2 and 3 medicines per person, as also reported in the study by Oñatibia et al. . Disposal of MPSA (91.2%) was far higher than that of MTA (8.8%). Possible causes would be self-medication, unfinished treatments and treatment changes in which the deprescribed medicine was kept. Among the MPSA disposed of at the SIGRE point, analgesics and NSAIDs, antibiotics, and antiseptics and dressing material stand out, results similar to those of the study by Echave et al. . The pharmacist intervened by informing and educating on the need to avoid overconsumption of medicines and their accumulation in the home, as well as on the need to properly dispose of expired and unused medicines at the pharmacy's SIGRE point . For all these reasons, the results of this campaign show that the community pharmacist is a key healthcare agent in promoting health information and education activities that raise patient awareness of the rational, safe and effective use of medicines, the negative aspects of irresponsible self-medication, the improvement of aspects of the medicine cabinet such as its organisation, location and periodic review, and the proper disposal of medicine waste at SIGRE points; we therefore conclude that the Medicine Cabinet Review Service should be integrated into the SPFA portfolio. We thank SIGRE for its availability and collaboration in this project, Jonatan Miranda for the statistical processing of the data, and all the pharmacies in Álava that took part in this project.
In alphabetical order: Cristina Dávila, García-Agundez, González Casi, Ibarra, Imanol Monteagudo, López de Ocáriz, Mosteiro, Mozas, Pérez Zubiaur, Villacorta.
Cardiac tamponades related to interventional electrophysiology procedures are associated with higher risk of short-term hospitalization for pericarditis but favourable long-term outcome
9646d361-5a22-4fca-9c9b-11826f1d13eb
10259250
Physiology[mh]
Catheter ablation has been established as an effective treatment for a variety of cardiac arrhythmias. However, periprocedural complications may occur. Cardiac tamponade is a dreaded complication with an overall rate around 1.0%, bearing a rare but potentially fatal outcome. It may occur during all types of invasive electrophysiology procedure (EP) but has a generally higher risk during catheter ablation of ventricular tachycardia (VT) as well as atrial fibrillation. For the acute management of cardiac tamponade including treatment via pericardiocentesis in unstable patients, there is comprehensive data available. However, the potential long-term impact of EP-related cardiac tamponade on relevant clinical outcomes other than arrhythmia recurrence has been scarcely investigated so far. In only one study, it has been shown that iatrogenic cardiac tamponade in patients undergoing invasive EPs was associated with a higher risk for cerebrovascular events as well as hospitalization for pericarditis during the first 2 weeks and first months, respectively, after the index procedure but with no increased risk for mortality or other serious cardiovascular events. Although this has been the largest patient cohort and longest follow-up regarding EP-related cardiac tamponades so far, this study was only performed at a single electrophysiology centre and results may not be applicable to other centres. Therefore, the purpose of this study was to investigate the association of iatrogenic cardiac tamponades and mortality as well as serious cardiovascular events during long-term follow-up in a nationwide multicentre cohort of patients undergoing invasive EPs. Study protocol and setting The Swedish Catheter Ablation Registry has been previously described. Since 2004, data on catheter ablations performed in Sweden are prospectively included in the registry. Since 2006, all centres performing catheter ablation of cardiac arrhythmias in Sweden [11 ablation centres (7 university institutions, 3 community hospitals, and 1 private institution)] report to this registry. Baseline characteristics together with procedural characteristics, as well as data on adverse events including any cardiac tamponade or pericardial effusion requiring pericardiocentesis, are provided. Patient consent was obtained by information of entry and allowance to opt out. The completeness of key variables [personal identification number, age, gender, date of ablation, type of arrhythmia, procedural time, and energy delivery (radiofrequency or cryo energy)] is high. Coverage and register and data completeness are exceeding 98% throughout the study period (see , for details). The National Patient Registry, the Cause of Death Registry, and Dispensed Drug Registry, administered by the Swedish Board of Health and Welfare, provided information on date and cause of death, further baseline comorbidities, prescribed drugs dispensed at pharmacies, and the cause-specific hospitalization as outcome defined according to the ICD-10 codes. The personal identification number that all permanent Swedish citizens own served as unique identifier for each patient allowing merging of different registries. Establishment of the Swedish Catheter Ablation Registry and this analysis with linking of the above registries was approved by the Swedish Ethical Review Authority and was conducted in accordance with the Declaration of Helsinki. For this study, individual patient consent was not required. 
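The registry linkage described above hinges on the personal identification number acting as a common key across the ablation registry and the national registries. As a purely illustrative Python/pandas sketch of such a key join (file names, column names, and the flat-file layout are assumptions for illustration, not the registries' actual schemas or governance process):

```python
# Hypothetical sketch of linking an ablation cohort to outcome data on a common
# (pseudonymised) personal identifier. File and column names are illustrative only.
import pandas as pd

ablations = pd.read_csv("ablation_registry.csv", parse_dates=["ablation_date"])   # one row per index procedure
outcomes = pd.read_csv("patient_registry.csv", parse_dates=["event_date"])        # hospitalizations with ICD-10 codes

# Join on the pseudonymised personal identification number.
cohort = ablations.merge(outcomes, on="person_id", how="left")

# Keep only outcome events occurring on or after the index procedure date.
cohort = cohort[cohort["event_date"].isna() | (cohort["event_date"] >= cohort["ablation_date"])]
```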
Study population and post-ablation follow-up For the current study, consecutive patients (≥18 years old at the time of index procedure) undergoing invasive EP at one of the catheter ablation centres in Sweden between 1 January 2005 and 31 December 2019 were enrolled. Electrophysiology procedures were performed according to conventional and local standards as described previously. For the tamponade group, all patients suffering from a cardiac tamponade or pericardial effusion related to invasive EP requiring pericardiocentesis were included. Cardiac tamponades had to be detected up to 30 days from invasive EP. A control group not experiencing any kind of cardiac effusion was generated from the same catheter ablation cohort and matched with cardiac tamponade patients in a ratio of 1:2 based on age, gender, treated arrhythmia type at index procedure, and timepoint of index procedure (range of 5 years for the first and last criteria). All patients were followed up from the time of their index procedure until the date of death, emigration, or the end of the study (31 December 2020). Study outcomes The primary endpoint was defined as a composite of death from any cause, acute myocardial infarction, transitory ischaemic attack (TIA) or stroke, and heart failure that led to an unplanned overnight hospitalization. Transitory ischaemic attack or stroke was diagnosed during an ambulatory or inpatient visit. Secondary endpoints were the single components of the primary endpoint as well as cardiovascular death as previously defined and unplanned overnight hospitalization for pericarditis. Hospitalization for pericarditis was defined as new admission to hospital due to pericarditis. Statistical analysis The matching was performed in MATLAB (‘MATrix LABoratory’) version 2020b (MathWorks, Natick, Massachusetts, USA). All continuous variables are presented as mean +/− standard deviation or median with interquartile range (IQR), where appropriate, and were compared by using Student’s t-test or Mann–Whitney U test, respectively, according to their distributions. Categorical variables are expressed as frequencies/percentages and were compared by χ2 tests. Clinical outcomes were examined as a time-to-first-event analysis within 5 years, which was deemed a meaningful period to reflect long-term follow-up and during which time more than half of patients were still at risk except for hospitalization for pericarditis. For endpoints that do not include mortality, patients were censored at death. Hazard ratios with 95% confidence intervals and P values from Cox regression analyses, stratifying for matched pairs, are provided. In case of zero events in one group, P values from log-rank analysis are provided. For hospitalization for pericarditis, a binary logistic regression analysis at 6 months was performed in addition to Cox regression analysis, and odds ratios (ORs) with 95% confidence intervals as well as P values are provided. As all available tamponade patients were included, no sample size calculation was performed. All statistical tests and confidence intervals were two-sided, with a significance level of 0.05. Definitions of the variables used in the current study are provided in the Supplementary material. Statistical analyses were performed using SPSS software, version 27 (IBM Corp., Armonk, New York).
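The analysis described above combines 1:2 exact matching with Cox models stratified on the matched sets. As a hedged illustration of the stratified Cox step only (the authors used MATLAB for matching and SPSS for the models; the lifelines-based layout and column names below are assumptions, not their code):

```python
# Hedged sketch of a Cox regression stratified by matched set, analogous to the
# analysis described above. Data layout and column names are illustrative assumptions.
import pandas as pd
from lifelines import CoxPHFitter

# Assumed layout: one row per patient.
#   pair_id   : identifier of the matched set (1 tamponade : 2 controls)
#   tamponade : 1 = periprocedural tamponade, 0 = matched control
#   time      : years from index procedure to first event or censoring (capped at 5)
#   event     : 1 = composite primary endpoint reached, 0 = censored
df = pd.read_csv("matched_cohort.csv")

cph = CoxPHFitter()
cph.fit(df[["pair_id", "tamponade", "time", "event"]],
        duration_col="time", event_col="event", strata=["pair_id"])
cph.print_summary()  # hazard ratio for 'tamponade' with 95% CI and p-value
```

For the 6-month pericarditis endpoint, the same data layout could feed a logistic model (for example, statsmodels' Logit) to obtain an odds ratio, mirroring the binary logistic regression described above.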
Baseline characteristics of patients with and without periprocedural cardiac tamponade Between 1 January 2005 and 31 December 2019, a total of 58 770 invasive EPs in 44 497 patients were performed. In this cohort, a total of 200 patients with periprocedural cardiac tamponades/pericardial effusions requiring pericardiocentesis were identified and included in the study. The overall procedural risk of cardiac tamponade/pericardial effusion was 0.34%. Among patients undergoing catheter ablation for atrial fibrillation and VT, the procedural risk of tamponade was 0.61% (133 tamponades among 21 629 atrial fibrillation catheter ablations) and 1.1% (20 tamponades among 1787 VT catheter ablations), respectively. The control group comprised 400 patients without cardiac tamponade/pericardial effusion requiring pericardiocentesis during the index procedure. In addition to the matched variables, the remaining clinical baseline characteristics of both groups were similar without any statistically significant difference ( Table ). The mean duration of follow-up was 6.7 +/− 3.9 years in the tamponade and 7.2 +/− 3.9 years in the control group ( P = 0.145). Index procedure–related characteristics in patients with and without periprocedural cardiac tamponade All timepoints (year) of index electrophysiology procedures in tamponade and control patients are provided in the Supplementary material. Procedure time was significantly longer in the tamponade as compared with that in the control group ( P = 0.015). The remainder of index procedure–related characteristics was not different between the groups as presented in Table . Primary endpoint After a follow-up of 5 years, the composite primary endpoint—death from any cause, acute myocardial infarction, TIA/stroke, and hospitalization for heart failure—occurred in 38 tamponade patients (19.0%) vs. 58 control patients (14.5%) resulting in no statistically significant association with cardiac tamponade (hazard ratio [HR] 1.22 [95% CI, 0.79–1.88]) ( Table ). The Kaplan–Meier curve of the primary endpoint is provided in Figure .
Secondary endpoints All single components of the primary endpoint as well as cardiovascular death revealed no statistically significant association after a follow-up of 5 years ( Table ). The Kaplan–Meier curves of the single components of the primary endpoint are presented in Figure . Hospitalization for pericarditis occurred in more patients in the tamponade than in the control group, resulting in a significantly higher risk with periprocedural cardiac tamponade [15.5% vs. 0.8%; HR 20.67 (95% CI, 6.32–67.60)] ( Table ). The Kaplan–Meier curve of this endpoint is provided in Figure . Hospitalizations for pericarditis occurred during the first 6 months of follow-up in both groups (median 14 days, range 1–184 days), except for one control patient in whom hospitalization for pericarditis occurred 487 days post-index EP. Mean duration of hospital stay due to pericarditis was 5.2 +/− 4.3 days (range 1–17 days). Consistency analysis To investigate whether the observed events are index procedure related, we performed a consistency analysis considering only the first 30 days and the first year of follow-up. The results of the primary endpoint and its subcomponents from this analysis were consistent with the results of the primary analysis, showing no statistically significant association with cardiac tamponade [HR 1.18 (95% CI, 0.39–3.63) and HR 1.67 (95% CI, 0.81–3.43), respectively]. All results for the primary and secondary endpoints after 30 days and 1 year of follow-up are provided in the Supplementary material. To investigate whether the observed events are also found in the largest arrhythmia subgroup, we performed a consistency analysis only considering patients treated for atrial fibrillation. From this analysis, the results of the primary endpoint and its subcomponents after a 5 year follow-up were similar to the results of the primary analysis, showing no statistically significant association with cardiac tamponade [HR 1.20 (95% CI, 0.68–2.11)]. All results for the primary and secondary endpoints in patients treated for atrial fibrillation at index EP after a 5 year follow-up are provided in the Supplementary material. As almost all hospitalizations for pericarditis occurred during early follow-up, a binary logistic regression analysis at 6 months was performed in addition to Cox regression analysis. The results of this analysis [OR 36.50 (95% CI, 8.64–154.25)] were consistent with the results of the primary analysis.
In this nationwide study analysing patients undergoing invasive EPs, iatrogenic cardiac tamponade was associated with an increased risk of hospitalization for pericarditis during the first months after index procedure. In the long-term, however, it revealed no significant association with mortality or other serious cardiovascular events.
Results were consistent during shorter follow-up as well as in the largest arrhythmia subgroup, i.e. in patients with atrial fibrillation. Unlike recent single-centre analysis investigating invasive EPs only from Karolinska University Hospital’s database, this study applied the Swedish Catheter Ablation Registry which includes all centres performing catheter ablation of cardiac arrhythmias in Sweden. To the best of our knowledge, this is the largest cohort and first multicentre study analysing the clinical long-term outcome of patients with EP-related cardiac tamponades. Patient characteristics were well-balanced between the study groups except for procedure time which was significantly longer in the tamponade group probably due to the more complex course. Consistent with previous data, the largest arrhythmia subgroup associated with cardiac tamponade was atrial fibrillation with 66.5%. The procedural risk of cardiac tamponade/pericardial effusion requiring pericardiocentesis was overall 0.34%, among atrial fibrillation patients 0.61%, and VT patients 1.1%. These rates were comparable or even lower with previous data with 0.6%, 0.8%, and 1.1%, respectively. , , Different from recent single-centre analysis, the present multicentre study revealed no statistically significant association for the composite primary endpoint with cardiac tamponade. In the previous study, this was mainly driven by a statistically significant increased risk of TIA/stroke especially during the first 2 weeks after index procedure. In the current larger multicentre study, the rate of TIA/stroke in the tamponade group was generally lower compared with the previous study [11 patients (5.5%) vs. 5 patients (8.3%)], and this secondary endpoint did not reveal a statistically significant association with cardiac tamponade. Several factors might have contributed to the previously increased TIA/stroke risk as discussed in the former study. One factor was insufficient anticoagulation in patients ablated for VT and atrioventricular node re-entry tachycardia (AVNRT) with concomitant atrial fibrillation during ‘the early years’ but only protected by acetylsalicylic acid (ASA) which was in line with clinical routine at that time. Tamponade patients of the former study were also included in the present study but not those from ‘the early years’ since the present study started from January 2005 onwards which might be an explanation for the lower TIA/stroke risk in the current study. Further presumably contributing factors to the overall lower TIA/stroke risk especially during the first 30 days in the current study [two patients (1.0%) vs. four patients (6.7%)] might be the more frequently used uninterrupted anticoagulation strategy pre-ablation as well as the larger proportion of novel oral anticoagulants (NOACs). While in the present study we do not have detailed data on periprocedural or reversal of anticoagulation, blood reinfusion, or cardiac surgery as in the former study which was derived from the electronic medical record and not from national registries as in the current study, a thorough adjustment of anticoagulation especially in the initial phase after cardiac tamponade requiring pericardiocentesis is generally advisable. In the current study, iatrogenic cardiac tamponade revealed no significant association with all-cause or cardiovascular (CV) mortality, hospitalization for heart failure, or acute myocardial infarction which is in line with the former single-centre study. 
Also, none of the tamponade patients died during hospital stay. Iatrogenic cardiac tamponade was associated with a strongly increased risk of hospitalization for pericarditis during the first 6 months after the index procedure, as described previously. Hence, being aware of the high occurrence of hospitalization for post-traumatic pericarditis in the tamponade group, routine application of non-steroidal anti-inflammatory drugs (NSAIDs), colchicine, or oral/intrapericardial administration of steroids after pericardiocentesis might be beneficial to prevent, or at least attenuate, the intrapericardial inflammatory reactions to the trauma and bleeding trigger. Inflammatory processes in the pericardium may lead to fibrinous scarring and the development of pericardial constriction with symptoms of heart failure. However, in this study iatrogenic cardiac tamponade was not significantly associated with hospitalization for heart failure after long-term follow-up. This is a registry-based cohort study. From the catheter ablation registry database, in-hospital and out-of-hospital cardiac tamponades/pericardial effusions cannot be distinguished. Most of the reported complications in the database occur during the in-hospital period, but out-of-hospital cardiac tamponades might also have been included in the study. We performed exact matching rather than propensity score matching, as only a subset of variables was available in the initial ablation database. Despite this limitation, our matching resulted in a well-matched cohort. In some of the secondary analyses, e.g. acute myocardial infarction, there were low event rates potentially impeding proper statistical analysis, and hence these results should be considered with caution. Underreporting may occur in registry studies; however, for serious complications such as cardiac tamponade, it is assumed to be rare. In this nationwide cohort of patients undergoing invasive EPs, iatrogenic cardiac tamponade was associated with an increased risk of hospitalization for pericarditis during the first months after the index procedure. In the long-term, however, cardiac tamponade revealed no significant association with mortality or other serious cardiovascular events.
Bridging gaps: a qualitative inquiry on improving paediatric rheumatology care among healthcare workers in Kenya
bd4c31bc-64ca-44b0-949b-64ba8de4bf1f
10717234
Internal Medicine[mh]
Paediatric rheumatic diseases such as Juvenile Idiopathic Arthritis (JIA) are associated with significant mortality, morbidity and reduced quality of life . It is estimated that about 2 million of these children live in Africa . However, given the paucity of paediatric rheumatologists (0.08 per million Africans versus 3–4 per million North Americans) and diagnostic resources, this could be an underestimate . There have been various initiatives to help bridge the gap in the rheumatology workforce. These include the UWEZO collaboration between Kenyan, United Kingdom (UK) and Swedish rheumatologists that trained 500 healthcare workers across 11 sites in Kenya, and the EPAREP project (Enhancement of Paediatric and Adult Rheumatology Education and Practice) in Zambia . Building a health workforce capable of offering clinical care to paediatric rheumatology patients, and improving its performance for this task, is a critical need globally . It is therefore important to obtain the views of the majority of healthcare workers treating paediatric rheumatology patients on how to improve the clinical care offered. The United Nations' Children's Fund (UNICEF) global ‘call to action’ (in 2021) emphasised the need to strengthen health systems through a focus on equitable access and integrated, community-based care . Exploring healthcare workers' understanding, attitudes and perceptions towards paediatric rheumatic diseases is key before offering solutions to improve paediatric rheumatology clinical care . Interventions to improve health worker performance should consider the contextual factors that would affect their local effectiveness for patients and families . In order to implement these interventions effectively, policy makers need to understand and address the contextual factors which can contribute to differences in local effects . Researchers therefore must recognise the importance of reporting how context may modify clinical service delivery . Implementation of the “knowledge to action cycle” promoted by the Canadian Institute of Health Research highlights the importance of identifying challenges that are likely to impact on the effectiveness of an intervention, and of considering the potential strategies for achieving change. These two fundamental principles not only inform the choice of intervention, but also allow the context to be modified, and the intervention “tailored”, as part of the implementation process . Interventions to improve the performance of existing health workers have the potential to impact very positively on patient morbidity and mortality in an underserved sub-Saharan context. In light of the above, we initially interviewed a cohort of healthcare workers across the Republic of Kenya to understand the challenges that they face in offering care to paediatric rheumatology patients in Kenya . In the same focus group discussions, participants were interviewed to ascertain plausible solutions to help mitigate the challenges encountered. We thus aimed to identify interventions to improve the clinical care offered to paediatric rheumatology patients in the Republic of Kenya, as perceived by non-specialist healthcare workers. Our study incorporated the COnsolidated criteria for REporting Qualitative studies (COREQ; Additional file: Appendix 1), a reporting guide developed to promote explicit and comprehensive reporting of interviews and focus groups.
This was an exploratory qualitative study involving 12 focus group discussions conducted between September and November 2021 with 68 healthcare workers (HCWs) including clinical officers (physician assistants), nurses, general practitioners, and paediatricians. Participants were recruited from across the Kenyan Republic through the six regional branches of the Kenya Paediatric Association (KPA), namely Nairobi, North Rift, South Rift, Central, Coast and the Lake Region. In brief, participants were recruited through ‘snowball sampling’ and each cadre was then divided into 3 different Focus Group Discussions (FGDs) that were conducted virtually by AM (primary investigator, a paediatric rheumatologist) and PM (co-investigator, qualitative researcher) through Zoom Communications (copyright 2021) using a standard interview guide that focussed on plausible interventions and implementation strategies. Data was recorded and transcribed verbatim. Notes of the focus group proceedings were used to cross-check for consistency. All forms, recordings and transcripts were managed according to ethical guidelines. Data analysis A reflective thematic analysis was conducted using MAXQDA 2022.2 software . Data familiarization was done by going through each quote by participants to deduce the key message. Initial coding, where themes and sub-themes were grouped into categories, was done by AM and RMR (co-investigator, qualitative researcher). This was followed by a series of interactive meetings among members of the research team where linkages in themes were identified. Codes were merged into categories and themes. We completed a ‘member checking’ process to check the accuracy of our findings. Ethical considerations Ethical approval was obtained from the Aga Khan University, East Africa Institutional Research Ethics Committee (Ref 2021/IERC-50(v2)) and a research permit was obtained from the National Commission For Science, Technology & Innovation (NACOSTI/P/21/11789). Informed consent was obtained from all participants before data collection. Participants Among the 68 participants, 78% (53/68) were female and the mean age was 36 years (inter-quartile range 31–40 years). Fifty percent of the respondents (34/68) worked in the public sector. Among those invited, one paediatrician and three general practitioners declined to participate due to lack of time and limited interaction with paediatric patients respectively. Table below illustrates the biodemographic characteristics of the participants. Supplementary Fig. shows the geographical distribution of study participants. Below are the interventions proposed by participants.
These include patient-centered interventions on both an individual and community level, health worker interventions, and health systems interventions. Patient and caregiver psychosocial support and advocacy Individual interventions Patient education One of the major challenges faced by healthcare workers regarding paediatric rheumatic patients is lack of understanding of the disease. Participants proposed that there should be a comprehensive education program for patients and their guardians to explain paediatric rheumatic diseases, their natural history, complications, therapeutic interventions and adverse effects. This will help guardians and parents improve how they manage and care for their children. “I think, the emphasis should be on the parents. Once parents understand this condition, this child is always being cared for well. But unless the parent understands and accepts that condition, it’s usually very difficult for these parents to care for this child.” 55 year old Female Nurse Patient psychosocial support and advocacy Participants proposed raising awareness about paediatric rheumatology and offering psychosocial support to families and patients as they battle with stress, stigma and confusion. The protracted duration of uncertainty and misdiagnosis creates doubt among guardians and hence they seek alternative medicine. “And then sometimes, some beliefs like this patient has been bewitched especially given the chronicity of paediatric rheumatology, you might find that sometimes there is a conflict in terms of what you are going to do as a clinician versus what they want to go and try at home.” 39 year old Female Paediatrician “On my side, before I forget is that the parents or the caregivers they present so late because when they were at home they were trying to use the herbal medicine or anything topical so by the time they are coming to the hospital, they come when it’s late. ……"how is it that a baby has rheumatoid arthritis at this age" because for them they understand it’s for the older age….” 30 year old Female Nurse Key counselling components proposed for children, caregivers and families by the participants include: explaining the natural history of disease, complications, treatment options, toxicity and highlight the multi-disciplinary nature of clinical care. Counselling should be culturally acceptable, explained in a language they can understand, and context specific to suit the child. “I think we should also involve patients in their care in that we inform them of their condition. Despite the fact that they are children, we should explain to them in a more understandable way in their context, what is expected, pain management, how to manage at home, how to identify symptoms early before they get worse.” 59 year old Female Nurse “Then the stigma, when the children start getting injections, it interferes with their self-esteem. It interferes with their education. And you find these children, it’s like now they are, it’s like special children…… it has interfered with their life…… I've seen children suffer a lot, especially their self-esteem and their education. And it affects them more when they are adolescents.” 46 year old Female Clinical Officer Paricipants also proposed psychosocial support and advocacy through group therapy. “Parents of children with these rheumatological illnesses, should also have a meeting together .. and share experiences and they capacity build each other. 
They may form something like an association where they have a forum where they can share this information and experiences.” 52 year old Female Nurse
Raising paediatric rheumatology awareness Participants highlighted a paucity of knowledge and awareness of paediatric rheumatology in the community. “You know like the way you have fever, joint pain and you say I think I need to get a malaria test, I think I need to see a doctor, so for rheumatological conditions, people in the community don't have any knowledge about it.” 35 year old Female Nurse Participants proposed the need for community outreach initiatives to not only create awareness of the disease but also help find unidentified cases, refer promptly and follow up defaulters. This can be done through churches, community health volunteers (CHV) and local leaders through chief’s barazas (local community gatherings), schools and home visits. This awareness helps motivate community members to mobilize resources to help subsidise the costs of diagnostics and treatment. It was proposed that school teachers should be empowered through educational approaches to help them identify and support children once diagnosed. “Where previously I was working, I was in a community where we used to serve patients who used to live in the slum area. And we used to have the advantage of using the area chief, the sub-chief. Now those are the local administrators. Then there was the community health volunteers who were useful in mobilizing parents to bring children to the clinic. And there is also now, the other important contributors would be the churches….” 36 year old Male Clinical Officer Follow-up care Participants stated that continuity of care and follow up of patients is often suboptimal, which worsens clinical outcomes. “I feel another challenge is follow up. Being that this is a rare condition, not much has been put in it. So, when we do the follow up, if this parent fails to bring this child to the clinic, usually you get most of the time nobody is bothered the way we do when we are dealing with sickle cell, HIV” 55 year old Female Nurse “The challenge is patients get lost to follow up and after sometime they come back probably even sicker than they were and they need an admission.” 24 year old Male Clinical Officer It was recommended that a comprehensive follow up framework be established to help minimize loss to follow up among patients, together with regular communication between referring health workers to share feedback on the clinical progress of patients.
“I feel like I want to follow up with the patient while they are nurtured in the ward or just go through the patient records of the same and then maybe if I meet the rheumatologist, then we would have a chit-chat about the prognosis and what happened and what her thoughts are....” 33 year old Male General Practitioner Patient financial support interventions Respondents were concerned that lack of finances for diagnostics and treatment at the family level poses a great impediment to offering the appropriate clinical care to paediatric rheumatology patients. Financial constraint is partially accelerated by misdiagnosis and several trial and error tests and therapies by health providers. “….most of these patients they will have gone through so many facilities that you find that they are already frustrated, they have used so many resources such that by the time they are getting to you, they are financialy drained, they don't even have money for the most basic tests or even a repeat test or something like that. So lack of resources is one of the major challenges that's there… “ 24 year old Male Clinical Officer Some proposed solutions include government subsidies, public–private partnerships to source for funding, expanding existing insurance coverage, and establishing welfare funds. It was highlighted that facilitating quick diagnoses helps to reduce costs associated with mismanagement. “ I would say if there is one thing I would want general reduction in the cost of management. Management here I mean right from investigation, treatment and probably the follow-up of these patients because where I practice I do work in a private facility so I've been able to see these tests being ordered and for sure as somebody has said before, they are damn expensive. ” 37 year old Male General Practitioner “The other thing would be insurance. …most of the medications are not covered by the insurance company especially, the NHIF (National Health Insurance Fund-Kenya’s public Health Insurance Scheme). I think it would be helpful if we have maybe individuals who would be able to kind of influence the decision makers under the NHIF insurance company to probably cover some of these medications to make it easier for the patient to afford care.” 27 year old Female General Practitioner “From where I'm working the place is pretty expensive so what I would recommend is if they have a welfare kitty, they offer it to rheumatology patients so that rheumatology patients will be more, and if the patients are more, we are able to learn more while the consultant is taking care of the rheumatology patients.” 29 year old Female Nurse Health worker interventions Clinical interventions Diagnostic interventions One of the major challenges facing healthcare providers is accurate diagnosis of paediatric rheumatic diseases. They expressed their frustrations associated with misdiagnosis that increases medical costs to patients and leads to mistrust of the healthcare providers. “Someone has been treated for let me say just malaria, respiratory tract infection, sepsis yeah, when they come here, once you do all those investigations, then you find the diagnosis is too late. So it’s also important to make sure that this information reaches to the other centers, the facilities that will enable medical practitioners to make a concrete diagnosis and help manage these cases”. 36 year old Male Clinical Officer As a result, participants proposed various intervention strategies that can be adopted to improve diagnosis. 
These included simplified, easy-to-access guidelines aimed at facilitating early and accurate diagnosis whilst mitigating against mismanagement. These algorithms should cater to both well-resourced and resource-limited facilities, and help them rule out mimics of rheumatic disease (e.g. infection, malignancy). “We need to arm people with the knowledge about these cases so that at least everyone would have an index of suspicion and make the right diagnosis with the meager resources that we have and also in whatever setting that we are seeing our patients.” 37 year old Male General Practitioner “But again we also need to define which tests to order and what is priority and in terms of treatment put it also within the guideline so that we don’t over investigate or undertreat or over treat patients. I think some protocols and guidelines on management of these patients will be a welcome idea for us..” 39 year old Male Paediatrician Participants gave various proposals on the design and content of the diagnostic algorithms and protocols: they should be simple, concise, easy to access, updated regularly and evidence based. “Mine would be very simple. At least someone should take a proper history then if there is a work aid that would give you a scope that would tell that now you need to do lab work.” 36 year old Male Clinical Officer
Participants proposed having standardized management guidelines to help harmonize the treatment options offered to paediatric rheumatology patients to improve their confidence and efficiency.
“Just having harmonized guidelines, the way we have like HIV guidelines or the diabetes guidelines or cardiovascular diseases guidelines, so I think it would help having a Kenyan guideline or protocol such that…, so from history and physical examination, the diagnosis, medications and follow up that these patients require.” 27 year old Female General Practitioner Referral interventions Lack of a clear referral pathway in the health system was also identified as a major challenge. As a result, participants requested referral guidelines and prompt feedback to help expedite diagnosis and improve patient outcomes. “…to give the best care for patients, we should be able to have an appropriate referral protocol…...to be able to get a system where we could easily get that patient to a rheumatologist and be able to give the best care even if we don't really have all that we need.” 27 year old Female General Practitioner “Feedback upon referral because I have a good idea, I've referred but really there's not much feedback so that I know, this is what I was dealing with and I'll be able to go read around it and be better equipped for the next time I encounter such patients.” 41 year old Female Paediatrician Follow up interventions Participants suggested that patient follow-up guidelines be made available to help minimize disease and iatrogenic complications. Structured follow-up is expected to support peer mentorship and networking, build clinicians’ confidence and promote continuity of care. “And then even for follow-up, those are some of the challenges, a patient you might see them once and then the next time you give them another clinic appointment, they don't show up ….and then patients hopping from one hospital to another. You know, they are seeking help so they'll hop from one hospital to another…… so we really need to figure out how do we improve so that we can also be able to retain these patients for follow-up.“ 46 year old Female Paediatrician Research interventions Participants expressed interest in being updated with the latest research and emerging knowledge on paediatric rheumatology diagnosis, treatment and care. They also highlighted the importance of data in making policy decisions. “The other thing is even after diagnosis in terms of treatment, rheumatology is one area that keeps growing and diversifying and there's new studies being done. A lot of new treatment modules are coming up, newer molecules especially like immune modulators, things that were not there when we were back in school but there are newer things that are coming out.” 46 year old Female Paediatrician “Data has a voice. Because number one, they'll ask you, how many have you seen? And the problem we have is now we haven't had the cases one either because we have not been able to make a diagnosis of children with rheumatology. But…they'll ask about the data. "How many?" "Have there been like complications?" which will make me now influence. But without data, I may not have a voice.” 55 year old Female Clinical Officer Educational interventions Participants proposed training to increase knowledge. Training can be delivered through different formats, such as: fellowship training schemes; departmental meetings and local/regional events; ‘trainer of trainers’ schemes to cascade knowledge; and hybrid seminars to facilitate access both remotely and in person. “Then, the second thing is, also long term, just have a fellowship for rheumatology within the country.
I'm not sure whether we have any in the country so just empowering more healthcare professionals, like paediatricians or family medicine physicians to take up these training so that we have access to subspecialists.” 27 year old Female General Practitioner Educational content Symptom identification Participants requested an intervention to increase their knowledge of paediatric rheumatology symptoms and basic musculoskeletal examination skills in order to raise their index of suspicion when they encounter these patients. Furthermore, education among guardians, teachers and patients will help raise awareness and improve their healthcare-seeking behavior. “I would say ….lack of knowledge is the reason things are the way they are so creating awareness among the staff just to understand more about...because the reason we are able to quickly handle respiratory diseases is because we know so much about them, we've handled them so they don't scare us at all. So, if the same awareness was created on rheumatological diseases that would be a first step.” 29 year old Female General Practitioner Disease management strategies Participants highlighted that they would like sessions on the protocols and guidelines of disease management, which they believed would minimize harmful alternative medical practices. Participants would like to know exactly which questions to ask when taking a paediatric rheumatology history. “It’s like even as you are asking us those questions, it’s an eye opener. So, I'd invite somebody with knowledge. To teach us more about rheumatology. The approach, the management and if possible give us a guideline. Just like we have a paediatric protocol such that we have no issues with pneumonias, bronchial asthma, all that. So, if we get somebody to educate us first and then to be given a guideline and maybe a mentor would help.” 55 year old Female Clinical Officer Communication skills Participants were of the opinion that it would be important to develop effective and inter-cultural communication skills to explain the prognosis to patients and explain diseases in patient-friendly language. In addition, this skill will be key to raising awareness about paediatric rheumatic diseases among policy stakeholders. “Just to add on to what R11 said, I think we should also involve patients in their care..... despite the fact that they are paediatrics, we explain to them in a more understandable way in their context, what is expected, pain management, how to manage at home, how to identify symptoms early before they get worse and also involving their parents, the clients themselves and the paediatricians in initiatives towards informing the community about the condition and making people aware of it.” 59 year old Female Nurse Education delivery strategies CMEs (Continuing Medical Education) Among the proposed modes of knowledge delivery was continuing medical education, held in person or virtually, given its potential to spur interest in paediatric rheumatology among healthcare workers and its vast reach. It was recommended that these be stratified according to the level of care offered in the respective facilities. “The best way to create awareness in a facility and where I work is through CMEs because when you hold a CME in a public facility it is attended by everyone like my colleague the one who just spoke before me said. You hold a CME, it would be attended by the nurses, the lab people and all that.
So if you have scheduled topics maybe monthly or by every two weeks you cover different topics, then you will empower everyone to know about it from the COs (clinical officers, community physician assistants) who cover OPD (outpatient department) to the lab people…….” 30 year old Female General Practitioner Case based discussions Another suggestion was to have case-based discussions, as these were felt to provide real-time learning from context-specific cases. “I would say learn, learn, learn, like read, read, read, maybe get someone who has experience like Dr. Angela there to just go through cases and be able to know how to write and rightly do things and how to manage patients or even sometimes to work under her and see how she manages patients so you can be able to improve that.” 27 year old Female General Practitioner Conferences Conferences were also proposed as a platform for educational exchange to learn from multiple stakeholders. “I've been practicing for four years, I have never been called for a rheumatology conference. In Kenya, how many times have you heard of an asthma conference or the malaria ones are every month in the public sector but rheumatology, it’s barely spoken about. I wish I had more knowledge on it.” 30 year old Female General Practitioner Online interactive applications and programs Participants proposed having trustworthy online interactive applications and programs, as these provide information “on demand” as they work and offer a “one stop shop” for all their needs in patient care. “I would prefer UpToDate (https://www.uptodate.com). I would say I find it easy to read, not in terms of the volume of words but think the explanation and in terms of laying out the management for each condition, I think is more favorable for me and also as my colleague has said, like most of the data, or most of the information that is given in UpToDate are up to date.” 27 year old Female General Practitioner Virtual consults Due to the limited number of paediatric rheumatologists and geographical barriers, participants proposed implementation of virtual consults. “One thing that really helped with the management of one of the patients we had was really a virtual access to a paediatric rheumatologist and we would do a case presentation and discuss focused on the patient management and keep giving updates on how the patient is doing and what is the next step because most of these conditions are treated for quite some time.” 36 year old Female General Practitioner Online courses Participants proposed online training sessions and tutorials to allow virtual exchange of knowledge, especially in instances where access poses a challenge. “Mine would be start ECHO (Extension for Community Healthcare Outcomes) teaching in the module so that people share. The ones who are already experts in it, they can share their knowledge with the others so that it becomes easier to diagnose the patients early enough and put patients on treatment. On the other side, other than ECHO, an online course where at the end of it, someone earns a certificate, those would help them actually acquire CPD (continuing professional development) points.…..So it can be renewable, maybe once you get the certificate, you can renew it maybe after one year or two years or three years. That would have a big impact.” 36 year old Male Clinical Officer Health systems interventions Participants recommended strengthening health systems to improve access to care and patient outcomes.
Recommendations proposed include: Improving diagnosis Participants proposed making simple diagnostic tools available at various health facilities to help with prompt diagnosis against the backdrop of understaffing, busy clinics and lack of time for detailed clinical assessment. These tools include, for example, diagnostic and management algorithms. “And I think the best way, like she said, for the specialist to create sort of a simple tool which you can use even in the deepest of mashinani (remote place) to help you think these investigations you can do, how to prioritize your investigations since they are expensive. Something like that.” 35 year old Female Paediatrician Improving access to clinical care Participants highlighted that improving access to clinical care is equally important in improving patient outcomes. It was proposed that this can be done by improving access to medications and improving the health provider-to-patient ratio through strategies such as having more clinicians at the outpatient level and having social workers take histories and provide counseling. This can be further facilitated by making services affordable and reaching out to organizations willing to support paediatric rheumatology patients through donations as part of their corporate social responsibility. Participants also proposed that paediatric rheumatologists be trained to provide holistic, integrated multi-disciplinary care where all pertinent health workers, e.g. rheumatologists, ophthalmologists, physiotherapists and occupational therapists, are available to offer a “one stop shop” service for patients. “The other challenge is cost of treatment. There is a patient who wants to be given maybe (Methotrexate) or IVIG (Intravenous Immunoglobulin) and the insurance is not willing to commit to pay and the patient is forced to commit that they will pay, all of us know IVIG is pretty expensive so this delays treatment and the more we delay treatment, the more we get more organs involved in the condition. So maybe the government would intervene and this drug be made readily available at a better cost. I think that would also help.” 35 year old Female Nurse “Number one, transport because they would only get the injection at Thika level 5 hospital (county regional hospital). Number two, in public facilities, you’d find there are stock outs of the medication so they’d have to go buy them in the pharmacies. If a diagnosis is made, the closest facility where they are, that is where they should be getting the injection from instead of a child wasting a whole day coming to get an injection then misses school. It would end up affecting their life in the long run.” 36 year old Male Clinical Officer “Some organizations where I am, they usually chip in to cater for some of these investigations and to pay the bills for these children when they have been managed” 36 year old Male Clinical Officer Integrated clinical care Participants proposed that, by adopting a multi-sectoral approach, for example by engaging stakeholders involved in infrastructure development, medical supplies and healthcare education, hospitals can implement integrated multi-disciplinary clinical care so that patients have a one stop shop for their clinical care, given that these conditions often present with multi-systemic manifestations. “Yes, it’s true.
Like they say from the personnel, once you get the knowledge we have to involve different personnel from different departments because if I make the diagnosis without the help of the lab investigation, I wouldn’t be doing much and most of the time I’ll just refer the baby or the child to another facility where they can get those services. So if I’m able to involve the lab technician and also if those ones are not available I can be able to expedite that to either the superiors or the management to provide the services or even allow to create more awareness in the rest of the facility.” 28 year old Female General Practitioner Table below summarizes our study findings. Figure below captures the proposed interventions into a conceptual framework.
Our study identified three groups of interventions: patient and caregiver psychosocial support, health worker interventions and health system interventions. The patient and caregiver interventions proposed were education, raising awareness in the community, follow-up care and patient financial support. The health worker interventions suggested included clinical interventions supported by guidelines, and educational initiatives focused on symptom identification, management strategies and communication skills, delivered through various virtual and in-person platforms. The health system intervention recommended was an integrated multi-disciplinary care model, delivered through a multi-sectoral approach that involves the various stakeholders in health. Patient and caregiver psychosocial support and advocacy Optimizing chronic disease care requires involvement of patients and their families as partners in the doctor–patient relationship. Patient empowerment can be defined as having knowledge and being motivated to influence one’s health. Our study participants emphasized that it was necessary to empower patients by providing the necessary knowledge, skills, and understanding of their disease to allow effective self-management. Increasing patient empowerment can help promote efficient use of the health care system by reducing trial-and-error health visits and interventions. Therapeutic patient education delivered by healthcare professionals can empower patients to engage in informed participation in clinical decision-making and effectively manage their conditions.
Several patient education interventions have been shown, in clinical trials, to improve medical outcomes, self-efficacy and satisfaction with treatment among patients with chronic conditions. Nonetheless, patient education alone is often insufficient for generating and maintaining behaviour change, as health decisions are typically driven by multiple factors, not merely lack of knowledge. It is important to combine psychological theory, research evidence, and patient/provider perspectives to promote appropriate health-seeking behaviour and improve health outcomes. Our study revealed that a key pillar in psychosocial support and advocacy was patient education and patient empowerment. Similarly, the majority of trial interventions to improve chronic disease outcomes in adults (reviewed recently) include one or more of the following: self-monitoring, disease management training, exercise programs, and enhancing communication skills. These concepts were desirable outcomes of the patient-centered interventions proposed by our participants. Health worker interventions The quality of medical care varies across facilities. Addressing the substantial variation in care quality produced by existing health workers due to their unique context may be a more feasible immediate-term solution. Robust evidence about context-specific interventions to improve health worker performance is key for success. In a systematic review of the impact of contextual factors on interventions to improve health worker performance in sub-Saharan Africa, Blacklock and colleagues identified three staff-related themes, namely: absolute shortages of staff, the inadequacy of existing knowledge and skills, and erosion of personal motivation to effect change. Our participants reiterated the above themes by highlighting the paucity of health workers, as evidenced by the long queues and wait times experienced by patients. They further highlighted that lack of knowledge in paediatric rheumatology makes them lack confidence in the management of these patients. This demoralizes them and results in patients being ‘pushed on’ through the health system without getting any significant help. This is exacerbated by high staff turnover, inadequate initial basic education, and lack of ongoing training. Consequently, participants in our study proposed educational platforms, diagnostic aids, and management, referral and follow-up guidelines for the paediatric rheumatology patients they encounter. Similarly, work by Blacklock and colleagues emphasized that training to impart knowledge to health workers should be accompanied by support and reinforcement infrastructure for meaningful change in clinical practice. Participants proposed a variety of formats that are useful to improve access to education, diagnostic aids and management approaches. These included online platforms targeting a broad reach of healthcare workers, many of whom may have geographical, funding or time constraints. Many resources are already available; some are free (e.g. the PAFLAR webinar series https://paflar.org/activities/ and the Paediatric Musculoskeletal Matters portfolio www.pmmonline.org which includes the pGALS basic MSK clinical examination https://www.pmmonline.org/doctor/clinical-assessment/examination/ ) while others are at a cost (e.g. UpToDate). The Project ECHO platform (https://echo.unm.edu/) creates a community of learning and facilitates network development.
The challenge is to raise awareness about their existence and value to the target audiences. Our study highlighted how expensive diagnostics and therapeutic interventions pose a challenge in achieving optimal clinical care in resource-poor settings. Alhassan et al. emphasized that patient and community factors such as poverty, while important, cannot be easily reformed, and highlighted the need to address other community issues such as language, cultural expectations, and transport issues . Similarly, our study revealed the importance of engaging community leaders and stakeholders such as village elders and community health workers to promote health behavior change for sustainable impact. In addition, concurrent programs implemented by other agencies such as corporate entities and non-governmental organizations may have an important impact . This was highlighted by participants citing examples of corporate organizations that offer financial assistance to some patients for diagnostic and therapeutic interventions. It is important to create an ‘enabling’ environmental/organizational culture that supports evidence-based approaches such as collaborative learning, and the tailoring of approaches to suit the cultural, economic, political, and social context . Our participants invoked these themes by suggesting frequent case-based discussions and continuing medical education sessions as forums of knowledge exchange for paediatric rheumatology. Health system interventions Progress in the management of communicable diseases and reproductive, maternal and child health conditions, combined with demographic transition, has caused a shift in the burden of mortality and morbidity to noncommunicable diseases (NCDs) such as paediatric rheumatic diseases . The long-term management of NCDs requires integrated service delivery with the need for periodic reassessment and treatment modification . This can be potentiated using digital technologies to reduce response time by using trained non-physician health workers, providing decision support, minimizing variability in the quality of delivered care, and optimizing monitoring and patient engagement, eventually reducing the cost of care and improving outcomes . Our study participants echoed these points by proposing that diagnostic and management algorithms and ECHO sessions be made available to improve their clinical decision-making. In addition, task-shifting strategies were recommended to improve access to care by partnering with social workers in the community and training them to take clinical histories from paediatric rheumatology patients. Innovations in service delivery should yield positive outcomes related to access, equity, quality, and responsiveness . Well-designed cost-effectiveness studies are important to help policy makers use the limited health budgets to ensure maximum health benefits . A key strategy is to include implementation research into existing and proposed health initiatives to support the generation of evidence for health system strengthening on strategically important outcomes . Though our study has shed much light on possible ways to improve paediatric rheumatology care in Kenya, there remains much to learn about the specifics of interventions, e.g. in terms of optimal duration and/or frequency and how best to measure factors like patient empowerment in health care evaluation . Additional work also needs to be done to understand how community engagement and improved healthcare systems can ultimately reduce inequities .
Providing equitable access, of course, requires that healthcare systems have adequate infrastructure, governance, personnel, and other resources . This includes strong partnerships, and good program management/co-ordination, to ensure success and sustainability . A strong health system requires multi-sectoral engagement at many levels, including a community-based system for accessing local healthcare services . Our study had several potential limitations. Since focus groups were conducted virtually, we were unable to study the full body language of respondents. In some instances, internet connectivity was a challenge and so some participants were not clearly audible; however, the majority of the time, the feedback was clear. Four potential participants were unable to join us due to a busy work load or other factors, but this is a problem in all focus-group studies.
In summary, we were able to identify potential initiatives to improve paediatric rheumatology care in Kenya. Our work gathered opinion from ‘first line’ healthcare workers to generate potential solutions to improve care. Although the absolute number of participants was relatively small, we believe that it is representative of the target population of healthcare workers in Kenya given the geographical distribution of our sample (Table and Fig. ). Proposed interventions featured patient education and psychosocial support, community interventions (outreach/awareness campaigns and mobilising financial support for diagnostic/therapeutic interventions), and health worker interventions (i.e. guidelines, research, and provider education).
In addition, participants highlighted that healthcare systems should be bolstered to improve insurance coverage and access to medicines and to provide integrated multi-disciplinary clinical care. Our findings pave the way to bridge care gaps in paediatric rheumatology services. Additional efforts are underway to design, implement and monitor the impact of some of these potential interventions, while engaging all stakeholders in the health ecosystem. Additional file 1: Appendix 1. Consolidated criteria for reporting qualitative studies (COREQ): 32-item checklist. Additional file 2: Figure 1. Conceptual Framework of Interventions to Improve Paediatric Rheumatology Care in Kenya (Authors’ creation).
Analysis of Cataract Surgery Instrument Identification Performance of Convolutional and Recurrent Neural Network Ensembles Leveraging BigCat
a7a379d3-b6b8-47b6-a41a-f0ca67af8ec8
8976933
Ophthalmology[mh]
Cataract surgery is one of the most commonly performed surgical procedures in the world and is a fundamental part of ophthalmology training. Complication rates for cataract surgery are low and have decreased with improved phacoemulsification technology and training methodology. These improvements have primarily come in the form of improved anterior chamber stability and surgical simulators, respectively. Providing consistent, objective feedback on surgical quality remains a challenge, however. Verbal intraoperative feedback can be difficult given the use of topical anesthesia and limited patient sedation, and providing feedback after surgery can be difficult given the premium placed on time in the operating room and the need to move between surgical cases efficiently. There are also limited options available for validated tools available for cataract surgery evaluation. Moreover, although expert surgeons are able to provide qualitative feedback, quantitative feedback may be of value in improving surgical performance, particularly with regard to steps such as the capsulorrhexis and nucleus disassembly. In addition, there are limitations in comparing trainee performance over the length of the training program and across trainees. To move toward the goal of providing objective feedback on surgical performance, the automated identification of instruments within the surgical field is an essential step. The ordering, duration, and location of surgical instruments at different points throughout a surgery may indicate how well a surgeon performed or whether there were complications during the surgery. By creating a machine learning model that can accurately identify when a surgery tool is being used during a surgery, we take an important step toward creating an accurate surgical assessment tool. Although detecting surgical instruments has been attempted before, no previous attempts have had the ability to train on the large amount of annotated cataract surgery data we have gathered . Where previous studies have reported only area under the receiver operator characteristic curve (AUROC) values as their primary performance metric, , we report accuracy, precision, recall, and F1 scores. These metrics are expected to be more indicative of model performance, particularly in the setting of class imbalance. , Additionally, we report the number of frames used in our training, testing, and validation datasets, giving a more specific quantification of the amount of data used. We also report the number of parameters and the inference times of our top performing models, making clearer the tradeoff between model complexity and speed. Recent approaches in the CATARACTS challenge use a combination of convolutional neural networks (CNNs) and post-prediction smoothing techniques to identify instrument presence in videos of cataract surgery. These methods combine, in some cases, up to four different CNNs followed by post-processing smoothing techniques in order to attain state-of-the-art performance. Although such methods achieve top-tier results, the architectures used were exceptionally large, and investigations did not consider time, space, and expense tradeoffs. A method reported by al Hajj et al. 3 involved a novel CNN that processes sequences of images instead of processing each image individually, analogous to smoothing techniques that process multiple images at a time. This approach, however, did not achieve the top-tier results seen in more recent works. 
It is unclear whether this was due to limitations of the architecture itself or the training dataset used. To create a real-time instrument detection model for incorporation into a surgical assessment system, limitations on architecture complexity and size must be considered. In this article, we show that to attain top performance, it is not necessary to create a complex system of neural networks. By training on a large, annotated dataset and using a single CNN architecture, we create a model that is a fraction of the size of many previous architectures while achieving state-of-the-art results. Data Collection Video recordings of cataract surgeries performed by attending surgeons at University of Michigan's Kellogg Eye Center were collected between 2020 and 2021. Institutional review board approval was obtained for the study (HUM00160950), and it was determined that informed consent was not required because of its retrospective nature and the anonymized data used in this study. The study was carried out in accordance with the tenets of the Declaration of Helsinki. All surgical videos were recorded using Zeiss high definition one-chip imaging sensors integrated into ceiling-mounted Zeiss Lumera 700 operating microscopes (Carl Zeiss Meditec AG, Jena, Germany). The imaging sensor received light split from the optical pathway of the primary surgeon's scope head and the signal was recorded to a Karl Storz AIDA recording device in full high-definition (1920 × 1080) resolution (Karl Storz SE & Co. KG, Tuttlingen, Germany). All surgeries were performed using the Alcon Centurion phacoemulsification machine (Alcon AG, Fort Worth, TX, USA). Femtosecond laser cataract surgeries and complex cataract surgeries (those qualifying for Current Procedural Terminology code 66982) were excluded. Cases with incomplete recordings were also excluded. Segments from before surgery and after surgery were trimmed, but video during surgery was otherwise completely unedited. The source resolution was 1920 × 1080 pixels at a frame rate of 30 frames/sec. Frame by frame instrument annotations were performed by a contracted third-party annotation services provider (Alegion Inc., Austin, TX, USA). Alegion's proprietary workflow was followed, which included (1) training of Alegion labeling technicians by NN, (2) two rounds of instrument labeling validation by NN on videos not included in the final dataset, and (3) final automated checks on received annotations to ensure that each video frame had corresponding instrument annotations. Alegion's proprietary video labeling platform was used by their labeling technicians to perform the annotations, and annotations were provided in JavaScript object notation (JSON) format. We have written software that converts the Alegion-structured JSON–formatted annotations to the open and well-documented COCO format, which is widely supported by open-source image labeling software such as Computer Vision Annotation Tool (CVAT) and labelImg. This enables one to evaluate and build on existing annotations with open-source labeling platforms, ensuring quality and reproducibility for future research. Through use of this third-party annotation services provider, it was ensured that no surgeons involved in the study were involved in the manual annotation of videos included in the dataset. A total of 208 videos were selected for annotation of instrument presence ground truth for every frame. 
One hundred ninety videos passed annotation validation checks to ensure appropriate and complete annotations for all available frames. The resulting dataset of 190 surgical videos and their annotations was termed BigCat. Over the set of surgical videos, 10 distinct instruments (listed in ) were annotated for their presence or absence with a binary designation for each instrument for each frame. provides a comparison of BigCat with other reported cataract surgery video datasets. Data Preprocessing The hydrodissection cannula and the 27-gauge cannula labels were combined into a general “cannula” label, as these instruments at our institution were visually similar. Video frames were resized to 480 × 270 pixels to improve the speed of the training and inference processes. In order to augment the data for training, transformations were randomly applied to input images. This was intended to improve the generalizability of the models studied. The types of transformations applied were rotations, shifts, shears, zooms, horizontal flips, and rescales. Of the 190 videos that passed validation checks 114 videos (2,282,382 frames) were allocated for training, 38 videos (838,005 frames) were allocated for validation, and 38 videos (826,266 frames) were held out for testing. Model Development We sought to evaluate the instrument identification performance of CNNs individually or in an ensemble, with or without a postprocessing technique . This approach was designed to quantify the tradeoffs between model complexity and performance. The problem itself was posed as a multilabel classification problem with the nine aforementioned classes. To speed up model development, we did not use our full dataset when training and validating these models. Instead, we sampled 100 random batches of size 32 without replacement for each epoch, and we subsequently trained for 200 epochs. This amounted to exposure to approximately 28% of our training data and took approximately six hours to run. The first algorithm considered consisted of a CNN, a dense neural network (NN), and a sigmoid function. The CNN was used to draw spatial patterns from the input images, whereas the dense NNs were meant to make predictions on the input images. The sigmoid mapped these predictions into a probability between 0 and 1. The output was a set of probabilities that represent the confidence that each surgery tool was present within a given input frame. Our final predictions were gathered by thresholding the output probabilities such that any output probability over a certain value for a surgery tool will result in the model predicting that tool is in frame. The specific value is a hyperparameter that we tune to optimize performance. The CNNs used were Densenet169 and Inception-ResNet-V2. , Densenet169 was used because of its densely connected network used to mitigate the vanishing gradient problem and promote feature reuse. Inception-ResNet-V2 was used because of its state-of-the-art performance on the ImageNet dataset. To attempt to improve on the results from using each CNN individually, the second approach used was a strategy called ensemble classification. The ensemble classifier took in the input video frames and passed them to the two CNN models, which each made their own predictions for the likelihood that a tool was in frame. This strategy is meant to draw on the strengths of both networks when making predictions. We combined the probabilities of the two CNNs in two different ways. 
Our first approach was a simple arithmetic mean of the probabilities. Our second approach was to use a linear regression model to combine the probabilities. To incorporate time dependencies and address smoothness of the CNN predictions, the third approach involved the addition of recurrent neural networks (RNNs) on top of our CNN models. Two different RNNs were layered on top of the CNN models. The first of the RNNs implemented was a long short-term memory (LSTM) network followed by a dense NN layer and a sigmoid function. The second was a fully connected SimpleRNN followed by a dense NN layer and a sigmoid function. These RNNs were layered on top of the CNN ensemble, as well as over the CNNs individually. We also implemented a recursive averaging algorithm to smooth the predictions from our CNNs. We average the predictions for each tool across a five-frame window consisting of the current frame, the two frames before it, and the two frames after it. For the frames prior, we use the averaged prediction values that were created in the last two iterations of averaging. We take our final output probability as this average across the five frames. This approach uses negligible processing power with respect to our models. Model Evaluation Model performance was evaluated using a wide range of metrics. These included class accuracy, recall, precision, F1 score, and AUROC. For this problem in which each tool is used for only a small fraction of the entire surgery, accuracy and AUROC values may be inflated. , The F1 score, which captures both precision and recall values, offers a broader view of model performance. , We ran a grid search over learning rate and batch size on our model with the best validation metrics to optimize its performance. We tested learning rates of 1e-4, 1e-5, and 1e-6 and batch sizes of 16 and 32. We also ran an exhaustive search over the prediction confidence threshold value. After finding the optimal hyperparameters, we trained our model over our entire dataset. Statistical Analysis Differences in model performance on the validation set were assessed using the Friedman test, followed by post hoc paired Wilcoxon signed-rank tests with Bonferroni correction. Implementation Data pipelines and machine learning models were developed and tested in Python 3.7.7 with TensorFlow 2.2.0 and Keras 2.3.0. All statistical analysis was performed using Python 3.7.7. Testing, including inference time measurements, was performed using a machine with 4 Nvidia RTX 2080 Ti GPUs. For each test run, we used two GPUs to load the model, to load the testing data, and to make inferences on the testing data.
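To make the per-frame classifier and post hoc smoothing described above concrete, the following is a minimal Keras sketch. The nine-label sigmoid head, the 480 × 270 input size, the learning rate of 1e-6, the 0.41 decision threshold, and the five-frame recursive averaging follow the values reported in this study; the ImageNet-pretrained backbone, global average pooling, Adam optimizer, and binary cross-entropy loss are assumptions, as those details are not specified in the text.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_LABELS = 9                  # nine instrument labels after merging the two cannulas
INPUT_SHAPE = (270, 480, 3)     # frames resized to 480 x 270 pixels

def build_frame_classifier():
    # DenseNet169 backbone (ImageNet weights assumed) with its classification head removed
    backbone = tf.keras.applications.DenseNet169(
        include_top=False, weights="imagenet",
        input_shape=INPUT_SHAPE, pooling="avg")
    # One sigmoid output per instrument: multilabel presence probabilities
    outputs = layers.Dense(NUM_LABELS, activation="sigmoid")(backbone.output)
    model = models.Model(inputs=backbone.input, outputs=outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-6),
                  loss="binary_crossentropy")   # optimizer and loss are assumptions
    return model

def recursive_average(frame_probs, threshold=0.41):
    """Smooth per-frame probabilities over a five-frame window (the two
    previously smoothed frames, the current frame, and the next two frames),
    then threshold to binary instrument-presence predictions."""
    probs = np.asarray(frame_probs, dtype=float)   # shape: (n_frames, n_labels)
    smoothed = probs.copy()
    n = len(probs)
    for t in range(n):
        lo, hi = max(0, t - 2), min(n, t + 3)
        window = np.vstack([smoothed[lo:t], probs[t:hi]])  # past frames use smoothed values
        smoothed[t] = window.mean(axis=0)
    return (smoothed >= threshold).astype(int)
```

In this sketch the reported batch size of 32 would be supplied to `model.fit`; the training loop, the augmentation pipeline, and the ensembling and RNN variants are omitted.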
Dataset Characteristics A final dataset consisting of annotated video recordings of 190 cataract surgeries performed by nine attending surgeons at University of Michigan's Kellogg Eye Center was gathered. The source resolution was 1920 × 1080 pixels at a frame rate of 30 frames/sec with an average duration of 692 seconds and standard deviation of 161 seconds ( a). The average time with the paracentesis blade visible was nine seconds (1% of overall procedure video length) and was consistently at the beginning of the procedure ( b). In contrast, the phacoemulsification handpiece was visible on average 241 seconds (35% of video length) and the irrigation/aspiration handpiece was visible on average 137 seconds (20% of video length) ( c). Model Performance The validation performance for each model is presented in . The Inception-ResNet-V2 and DenseNet169 models performed at the highest level while remaining low cost with respect to other architectures . The Inception-ResNet-V2 achieved a validation F1 score of 0.9189 and validation AUROC of 0.9860 and contained 90,121,961 parameters. The DenseNet169 achieved a validation F1 score of 0.9273 and validation AUROC of 0.9905 and contained 63,763,529 parameters. A Friedman test for differences in the F1 scores among the models studied yielded a test statistic of 207.36 and a P value of 4.68e-39, indicating a difference among the models. Post hoc paired Wilcoxon signed-rank tests with Bonferroni correction demonstrated that the DenseNet169 model had no statistical difference in performance in comparison to the Inception-ResNet-V2 model, but performed statistically better than the two CNN-only ensembles (see for P values). These CNN ensembles were outperformed by DenseNet169 despite using more than 2.4 times the number of model parameters (153,885,490 vs. 63,763,529). DenseNet169 with recursive averaging performed statistically significantly better than all other models studied, including the models using RNNs (see for P values). Our top performing model, DenseNet169 with recursive averaging, achieved a validation F1 score of 0.9322 and a validation AUROC of 0.9913.
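The statistical comparison reported above (an omnibus Friedman test followed by post hoc paired Wilcoxon signed-rank tests with Bonferroni correction) can be sketched with SciPy roughly as follows. Grouping the F1 scores by validation video is an assumption, as the unit of analysis is not stated here.

```python
from itertools import combinations
import numpy as np
from scipy import stats

def compare_models(per_video_f1, alpha=0.05):
    """per_video_f1: dict mapping model name -> array of F1 scores,
    one per validation video, in the same order for every model (assumed)."""
    names = list(per_video_f1)
    samples = [np.asarray(per_video_f1[n]) for n in names]

    # Omnibus Friedman test for any difference among the related samples
    stat, p = stats.friedmanchisquare(*samples)
    print(f"Friedman chi-square = {stat:.2f}, p = {p:.3g}")

    # Post hoc pairwise Wilcoxon signed-rank tests with Bonferroni correction
    pairs = list(combinations(range(len(names)), 2))
    for i, j in pairs:
        _, p_raw = stats.wilcoxon(samples[i], samples[j])
        p_adj = min(1.0, p_raw * len(pairs))      # Bonferroni adjustment
        verdict = "significant" if p_adj < alpha else "not significant"
        print(f"{names[i]} vs {names[j]}: adjusted p = {p_adj:.3g} ({verdict})")
```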
The additional resources needed for recursive averaging are nearly negligible with respect to the amount of processing time and memory usage. We then ran a grid search across batch size and learning rate to optimize the performance of the DenseNet169 model. We found that a batch size of 32 and a learning rate of 1e-6 optimized performance for the DenseNet169 model. We also conducted an exhaustive search across our prediction threshold to optimize F1 score, and we found a threshold of 0.41 to optimize our performance. With these optimal hyperparameters, we then trained the DenseNet169 model on the full dataset for 6 epochs (approximately 30 hours of training time per epoch) and analyzed this final model on our testing data. This allowed us to achieve a testing F1 score of 0.9528 and a testing AUROC of 0.9985. Performance of our final model on the testing set is summarized in . We also analyzed our model qualitatively. As can be seen in , which depicts the final model's instrument time course predictions and ground truth for a representative case, the predictions from our model appear similar to the actual instrument presence. Inference Time We compared the average inference times of our top performing models. We used two GPUs to load each model and the testing data. The DenseNet169 model was fastest with an inference time of about 0.00598 seconds per frame (∼167 frames/sec). The ResNet was slightly slower with an inference time of about 0.00721 seconds per frame (∼143 frames/sec). The Ensemble classifier consisting of DenseNet169 and Inception-ResNet-V2 was slower with an inference time of about 0.0128 sec/frame (∼77 frames/sec).
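The exhaustive threshold search described above, which selected a cutoff of 0.41, can be approximated by sweeping candidate thresholds and keeping the one that maximizes the F1 score on held-out predictions. The step size and the micro-averaging over labels in this sketch are assumptions; the paper does not state which averaging was used.

```python
import numpy as np
from sklearn.metrics import f1_score

def best_threshold(y_true, y_prob, grid=np.linspace(0.05, 0.95, 91)):
    """y_true: (n_frames, n_labels) binary ground truth;
    y_prob: (n_frames, n_labels) predicted probabilities.
    Returns the threshold and F1 score of the best cutoff on the grid."""
    scores = [f1_score(y_true, (y_prob >= t).astype(int),
                       average="micro", zero_division=0) for t in grid]
    best = int(np.argmax(scores))
    return float(grid[best]), float(scores[best])
```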
Intuitively, the tools that are in use during a surgery are important in the outcome of the surgery. It then follows that the ability to identify which tools are currently in use is an important first step in building a video-based surgery assessment tool. Our intention with this study was to develop an efficient model with state-of-the-art performance in instrument identification to enable downstream processing for more complex recognition and assessment tasks. Our 190 video dataset, BigCat, was gathered with a full 30 frame/sec frame rate and full 1920 × 1080 frame resolution. This equates to 3,946,653 full resolution video frames. Compared to other recently gathered cataract video datasets, the dataset we present here, BigCat, is orders of magnitude larger. Many recent approaches downsample the frame rate of the videos considerably when training and testing on the data. , With BigCat, every frame is annotated with instrument presence data, allowing for use of the full 30 frames per second when training and testing our models. See for a comparison of BigCat with other reported datasets. We found the DenseNet169 architecture with recursive averaging to be the best performing model among those tested. Although DenseNet169, Inception-ResNet-V2, and the CNN ensemble architectures all achieve similar performances, the DenseNet169 performed slightly better with 30% fewer parameters than Inception-ResNet-V2 and 59% fewer parameters than the CNN ensemble, making it a more efficient choice.
When trained on the full BigCat dataset, our final DenseNet169 model with recursive averaging achieved an overall test F1 score of 0.9528 and an overall test AUROC of 0.9985. Compared to the DResSys and Multi Image Fusion models, which achieved AUROC of 0.9971 and 0.977 respectively, these are state-of-the-art results. , DResSys performs at a similar level to our model with respect to AUROC; however, it uses a combination of an Inception-V4, a ResNet-50, and two NasNet-A models, making this architecture around four times larger than our DenseNet169 model with recursive averaging. , As mentioned above, the F1 scores for the DResSys and Multi Image Fusion models were not reported, precluding comparison of this metric. The F1 score is of particular importance for classification problems with significant class imbalance, such as instrument identification. In cataract surgery, each surgical instrument is used for only a small fraction of the surgery, causing a large disparity in the number of negative and positive examples for each instrument. The scarcity of positive examples may cause accuracy and AUROC values to be inflated. For example, a model that predicts the paracentesis blade is never in frame may still achieve a high accuracy and AUROC score because it will be correct for most frames. Additionally, previous studies have found the AUROC to be unreliable when discriminating among multiple high-performing models. Because the F1 score is not affected by the number of true-negative predictions, it is better able to avoid inflation caused by the imbalance between positive and negative examples inherent in the problem of cataract surgery instrument identification. , The DenseNet169 model achieved an average inference time of 0.00598 sec/frame on the standard hardware described, which is equivalent to approximately 167 frames/sec. It is important that this prediction can be performed substantially faster than real time to enable additional downstream processing in the future. The use of a five-frame sliding window for averaging of predictions does increase latency by the time required to acquire the two frames beyond the frame of interest, which is a consideration for an intraoperative application. However, this recursive averaging could be removed if decreased latency were desired while still maintaining excellent classification performance. In either scenario, the lightweight nature and the speed of inference of our selected model will allow for the implementation of more complex analyses on top of our current approach. We experimented with many complex models; however, our simpler architecture consisting of only a DenseNet169 model and a dense NN layer performed best out of all those considered. Although we investigated many different architectures, the space of CNNs is very large, and it is possible that an alternative CNN network could yield greater performance. One CNN architecture we considered, but did not implement, was NasNet. This architecture was too large for the 2-GPU setup we used, which we felt to be a reasonable reflection of standard hardware. It is possible that NasNet could improve performance by helping to optimize the wiring of our CNN as opposed to using a generic DenseNet model. This model warrants further investigation but highlights the tradeoffs of space and time described above. The results also suggest that using RNNs for smoothing predictions does not yield significant improvements when training on the BigCat dataset.
This is backed by our data, as the validation F1 scores do not significantly increase or decrease when using the LSTM on top of the DenseNet169. This could be because we trained our model on all frames in our training dataset, where many previous attempts sampled frames at a lower rate (e.g., 6 frames per second). By sampling at a lower frame rate, it is possible that the loss of data requires a smoothing technique to ensure predictions are not erratic. One reason that our simple model may have outperformed more complex architectures in previous works is the use of our dataset, BigCat. Our findings suggest that the large amount of annotated data in our BigCat dataset allows us to achieve exceptional performance with respect to identifying surgery tools in cataract surgery videos. Our initial models used for validation were all trained on 640,000 video frames, which amounts to around 28% of our dataset. We saw improvements over these models in our validation and testing performance when instead training on the full dataset. Additionally, our final model was trained on 2,282,382 video frames. DResSys was trained on only around 82,000 video frames. The ability of our lightweight model to outperform DResSys is thus likely related to the size and quality of the BigCat dataset used to train our model. Limitations of this study include the use of a testing dataset gathered from the same institution as the training and validation datasets. The absence of a comparable public dataset for external testing is a limitation of the current study. The publicly available Cataract 101 dataset, for example, does not contain instrument presence annotations, and has only surgical phase annotations. While the data augmentation performed on BigCat should allow for some invariance to scale and orientation, it will be valuable to assess performance on external datasets in the future. As instruments can have very different appearances (such as an irrigation-aspiration handpiece with polymer tip vs. silicone tip), true generalizability requires examples of all potential representations of a given instrument type, which will pose an ongoing data collection challenge moving forward. Future work will involve the development of models to assess the actions of the instruments identified by the models presented here. In addition, we will look to expand BigCat to include complex cataract surgeries. This will enable future models to identify more rare surgical tools such as iris expansion devices, capsular hooks, and capsular tension rings and further improve their generalizability. Supplement 1
Impact of hyponatremia in patients hospitalized in Internal Medicine units: Hyponatremia in Internal Medicine units
680d5cc2-8dc6-4e19-9e16-e4595ff4d5e9
11124689
Internal Medicine[mh]
Hyponatremia is the most prevalent electrolyte imbalance at the hospital level and its high social and health impact is well known. Its reported prevalence, ranging from 10% to 20%, depends on the type of study, the population studied, and the severity of hyponatremia considered. In most cases, hyponatremia reflects a disturbance of water homeostasis: the disorder results from a net gain of water rather than from a true change in total body sodium. It is usually due to increased circulating vasopressin, that is, antidiuretic hormone (ADH), or to increased renal sensitivity to ADH, which prevents the excretion of free water and results in dilutional hyponatremia. It is widely known that this disorder is associated with poorer clinical outcomes, prolonged hospital stays, higher economic costs, as well as higher morbidity and mortality. Hyponatremia has been mainly studied in patients with specific pathologies, such as chronic heart failure (CHF), kidney failure, chronic liver disease, respiratory pathologies, or neoplasms. However, little data exist on the prevalence, management, and prognostic impact of hyponatremia in larger and more heterogeneous populations, such as patients admitted to Internal Medicine (IM) units. Moreover, studies that have analyzed hyponatremia based on volume status are scarce, while the management and prognosis of each subgroup are known to differ. Also, there is limited research on long-term follow-up of volume status. For all these reasons, we have conducted this study to explore the prevalence and main epidemiological, clinical, and prognostic characteristics of hyponatremia in IM, as well as the diagnostic and therapeutic approach applied to this condition. 2.1. Design A prospective observational multicenter study with a 12-month follow-up was designed. Recruitment took place from March 15, 2015 to October 11, 2017. All patients with hyponatremia (plasma sodium concentration <135 mmol/L) hospitalized in the IM areas of 5 hospitals of the autonomous region of Andalusia (Spain) were included (Table S1, Supplemental Digital Content, http://links.lww.com/MD/M655 ). 2.2. Selection criteria: inclusion and exclusion All patients admitted to the IM services of the participating hospitals, aged ≥18 years, who presented with hypotonic hyponatremia in the laboratory tests carried out during the episode that led to admission, and who agreed to participate by signing the informed consent form, were included. Patients who were moribund (actively dying) at the time of inclusion were excluded. 2.3. Sample size calculation An expected frequency of hyponatremia of 40% ( p = 0.4, q = 0.6), a 95% confidence level ( z = 1.96), an accepted error (precision) of 5%, and a loss rate of 15% were assumed. The sample size was thus set at 423 patients. 2.4. Inclusion procedure, variables, and follow-up During the study period, point prevalence studies were carried out every 2 weeks (except in the summer months). For those patients who met the inclusion criteria, blood osmolality was calculated as 2 × sodium (mmol/L) + glucose (mg/dL)/18 + urea (mg/dL)/2.8. If this was <275 mOsm/kg, demographic, clinical, and laboratory data were collected, and the different classifications of hyponatremia were considered. Demographic data included age and sex.
Clinical data included comorbidities, diagnoses on admission, hyponatremia-inducing drugs, sodium levels on admission, laboratory parameters (glucose, urea, creatinine, blood and urine osmolality, urine sodium, and cortisol), mortality during admission, and diagnostic and therapeutic management. Therapeutic management could be direct (aimed at correcting sodium) or indirect (treatment of the underlying disease). In addition, the classification of hyponatremia based on volume status, symptomatology, sodium levels, and chronology was performed. Follow-up of all patients was conducted for 12 months. Persistence of hyponatremia, number of admissions, and mortality were recorded during this period. 2.5. Definitions Hyponatremia is defined as a plasma sodium concentration <135 mmol/L ; hypotonic hyponatremia is defined as plasma osmolality <275 mOsm/kg ; hypervolemic hyponatremia shows increased extracellular fluid volume; hypovolemic hyponatremia is defined by decreased extracellular fluid volume; euvolemic hyponatremia is related to normal extracellular fluid volume (the determinants for this distinction were the medical history, physical examination, analytical values, and sometimes the response to treatment). In the present study, mild symptoms included nausea without vomiting, headache, and confusion. Severe symptoms included vomiting, cardiorespiratory distress, seizures, deep somnolence, and coma (Glasgow Scale <8). Chronic hyponatremia was defined as hyponatremia lasting >48 hours or of unclassifiable duration; acute hyponatremia, as lasting <48 hours. Mild hyponatremia was defined as sodium between 130 and 135 mmol/L; moderate hyponatremia, between 125 and 129 mmol/L; and profound hyponatremia, <125 mmol/L. 2.6. Ethical aspects All patients or their legal representatives agreed to the use of their anonymized clinical data for clinical research purposes by signing a written informed consent on receipt of the patient information sheet. The study was approved by the Research Ethics Committee of the Virgen Macarena Hospital of Seville, on February 13, 2015 for all participating centers. For this prospective project, all data were collected, processed, and analyzed anonymously and only for the intended purposes. All data were protected by the World Medical Association Declaration of Helsinki and the Regulation (EU) 2016/679 of the European Parliament and of the Council of April 27, 2016 on the protection of natural persons in the processing of personal data. All authors declared no conflicts of interest concerning this work. 2.7. Statistical analysis A descriptive analysis was carried out using absolute values and percentages for qualitative variables, as well as central values and measures of dispersion for quantitative variables. The distribution of all variables was assessed using the Kolmogorov–Smirnov test to determine the normality of the data distribution. Subsequently, after confirming a normal distribution, bivariate inferential analysis of possible clinical and healthcare differences, as well as of factors associated with mortality at the time of the episode and at 12 months, was performed using Chi-squared, Student t test, and analysis of variance. Finally, a multivariate analysis of factors associated with mortality was performed. The strength of the associations was quantified by calculating the relative risk (RR) with a 95% CI. The statistical analysis was performed using SPSS 26.0.
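As a worked illustration of the screening rule and the sample-size assumption described in the Methods, the sketch below applies the calculated-osmolality formula and reproduces the target recruitment figure approximately. The exact rounding and loss-adjustment conventions used by the authors are not reported, so those details are assumptions.

```python
def calculated_osmolality(na_mmol_l, glucose_mg_dl, urea_mg_dl):
    """Calculated plasma osmolality (mOsm/kg): 2 x Na + glucose/18 + urea/2.8.
    Values below 275 mOsm/kg flagged the hyponatremia as hypotonic."""
    return 2 * na_mmol_l + glucose_mg_dl / 18 + urea_mg_dl / 2.8

def target_sample_size(p=0.40, z=1.96, error=0.05, loss_rate=0.15):
    """Sample size for an expected frequency of 40%, 5% error and 15% losses.
    Yields roughly 424, close to the 423 patients stated (rounding assumed)."""
    n = (z ** 2) * p * (1 - p) / error ** 2       # ~368.8 before losses
    return round(n * (1 + loss_rate))

# Example: Na 125 mmol/L, glucose 110 mg/dL, urea 40 mg/dL
# -> about 270.4 mOsm/kg, below the 275 mOsm/kg hypotonicity cutoff
osm = calculated_osmolality(125, 110, 40)
```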
A total of 8169 patients with and without hyponatremia were evaluated in 69 point-prevalence studies carried out every 2 weeks during the study period. A total of 1192 (14.59%) hyponatremias were identified among the total number of patients analyzed, of which 678 were hypotonic (8.3%) and 514 were non-hypotonic (6.3%). Of the participants with detected hypotonic hyponatremia, 501 agreed to participate in the study. The study flow diagram is detailed in Figure S1, Supplemental Digital Content, http://links.lww.com/MD/M652 . The different classifications and the distribution according to volume status are shown in Table . Hypervolemic hyponatremia was the most common (35.92%), followed by hypovolemic (32.7%) and euvolemic (31.33%). The clinical characteristics of all patients are shown in Table . The analyzed population was characterized by advanced age and a high number of comorbidities. Heart failure was the most frequently recorded diagnosis among patients and is also one of the main etiologies of hyponatremia in this sample. The most frequent admission diagnoses are represented in Figure S2, Supplemental Digital Content, http://links.lww.com/MD/M653 . The most common etiologies of hyponatremia, taking into account the volume status, are illustrated in Figure S3, Supplemental Digital Content, http://links.lww.com/MD/M654 . The results on the diagnostic and therapeutic management of hyponatremia are detailed in Tables and , where it is emphasized that the vast majority of subjects were not subjected to tests to assess this disorder, and only a limited number of patients received direct treatment to raise sodium levels. Finally, Figure shows the evolution of sodium from admission to discharge or death. Mortality rate during admission amounted to 15.6% (76 patients). The multivariate analysis (Table ) showed that euvolemic hyponatremia (RR = 0.81 [95% CI = 0.83–0.069] P = .037) and hyponatremia present on admission (RR = 0.39 [95% CI = 0.19–0.79] P = .009) were protective factors (Table S4, Supplemental Digital Content, http://links.lww.com/MD/M658 ). In contrast, CHF (RR = 2.17 [95% CI = 1.17–4.02] P = .013) and active neoplasia (RR = 2.94 [95% CI = 1.49–5.80] P = .002) were identified as risk factors.
During the 12-month follow-up, 132 patients (30.9%) had died, 214 individuals had been readmitted (52.7%), and 124 (29.1%) had persistent hyponatremia, with no significant differences between the different types of hyponatremia according to volume status ( P > .05). Despite various studies linking this disorder to worse clinical outcomes, as well as higher economic costs, the results of this study suggest that internists are currently not convinced of the importance of this alteration. An example of this is the low number of diagnostic tests requested to clarify its origin: in only 12.6% of cases was at least 1 diagnostic test requested, and approximately 30% of subjects did not receive treatment to raise sodium levels. Furthermore, more than 40% of patients were discharged still hyponatremic. In Spain, no prospective observational studies have been carried out in IM units that have comprehensively analyzed this condition, so this is the first study in the field. The prevalence of hypotonic hyponatremia in IM units was 8.3% ± 7.2%. This figure is below the published prevalence (10%–20%), and this may be mainly because plasma osmolality was obtained from formulae that included urea levels. On the other hand, the prospective nature of this study allowed us to identify non-hypotonic etiologies of hyponatremia, such as hyperglycemia or hyperproteinemia. Finally, interrupting the data collection phase during the summer months may also have contributed to this discrepancy, as several studies have reported a higher incidence of hyponatremia during this period. The higher prevalence of hyponatremia during these months is attributed to several predisposing factors, such as increased hypotonic fluid consumption, sweating, an increased risk of hypovolemic situations, or even increased environmental temperature as a stimulus for vasopressin release. The mean hospital stay in the present study was relatively long (14 days) compared with that described in other studies carried out in IM units. This finding may be explained by the population analyzed, as many studies have shown that patients with this disorder generally stay longer in hospital and have a higher number of complications. The study by Lu et al is an example of this: patients with hyponatremia had a mean length of stay of 13.4 ± 0.2 days versus 10.7 ± 0.2 days in normonatremic patients ( P < .001). Second, the included patients were characterized by high clinical complexity. The mean number of comorbidities was high compared with the national mean for IM units reported in other studies, and was similar to that of a complex chronic patient population (4.3 comorbidities/patient). The most common type detected in this study was hypervolemic hyponatremia (35.9%), followed by hypovolemic and euvolemic hyponatremia (32.7% and 31.1%, respectively). Although this distribution is not consistent with other studies carried out on hospital samples, it is explained by the fact that CHF is the most prevalent pathology in IM units. Therefore, hyponatremia in these units is more likely to be a pathophysiological consequence of another disease, rather than the sole and main reason for admission, as may occur in the case of euvolemic hyponatremia. Notably, patients with euvolemic hyponatremia formed the group with significantly more symptoms ( P < .05), probably because this type is associated with biochemically more severe hyponatremia ( P < .05) and is often the sole reason for admission.
In the study at hand, the majority of hyponatremias were defined as chronic (84.2%), based on the definitions provided in the guideline. However, based on symptom severity, patients with severe symptoms should arguably have been classified as having acute hyponatremia. Such symptoms usually reflect the presence of brain edema, which indicates that the brain has not yet had time to adapt to hypotonicity (first 48 hours). Thus, symptomatic severity could be another supportive marker for defining chronology. Despite several studies linking this condition to poorer clinical outcomes, as well as to higher economic costs, the results of this study show that IM specialists are currently not convinced of the significance of this disorder. An example of this was the low number of additional diagnostic tests requested: in only 12.6% of cases was at least 1 diagnostic test requested, and approximately 30% of subjects did not receive any treatment whatsoever. The actual proportion of untreated patients is probably even higher, since furosemide was counted as specific therapy in this analysis, so any patient with CHF was considered treated; the same applied to serotherapy. The percentages of additional diagnostic tests and treatment varied according to volume status, with the euvolemic group undergoing the most additional diagnostic tests and receiving the least specific treatment. In addition, more than 40% of patients were discharged while still hyponatremic. These data are similar to those of other studies that have analyzed the management of hyponatremia in different settings. Several barriers may explain this inadequate practice. First, it is a complex electrolyte imbalance that requires an understanding of the pathophysiology of the disease and a deep knowledge of the multiple etiologies, as well as of the different classifications, as each condition requires a different approach. On top of this, the diagnostic tools used are neither very robust nor very accurate. In addition, no single specialty is responsible for the management of hyponatremia, which also contributes to poor case management. The results of the study by Garrahy et al demonstrated the benefits of structured input from hyponatremia specialists: higher sodium levels at discharge, shorter hospital stays, and reduced mortality. Therefore, the role of professionals with expertise in this condition should be promoted. Another likely barrier to better management is the lack of hard evidence that treating hyponatremia improves mortality. At the therapeutic level, the first limitation relates to the lack of evidence concerning many therapeutic aspects, such as the most suitable treatment according to the patient’s profile, most notably in the scenario of chronic hyponatremia. To date, further research is still needed to justify the benefits of reversing mild or moderate hyponatremia. Second, due to the lack of evidence, treatment is largely based on expert opinion, which explains why different guidelines and protocols differ in some of their most basic therapeutic recommendations. Third, fear of overcorrection, as well as lack of experience with specific treatments, such as tolvaptan or urea, may be limiting their use.
Finally, to successfully correct hyponatremia, laboratory tests are necessary; however, as shown above, these patients are often insufficiently tested, so the condition cannot be managed with any guarantee of success. The analyzed data suggest that patients with euvolemic hyponatremia have a significantly lower risk of death than those with hypervolemic and hypovolemic hyponatremia ( P = .037). In this sense, Cuesta et al were the first to demonstrate higher mortality rates in the hypervolemic and hypovolemic hyponatremia groups compared to euvolemic hyponatremias (syndrome of inappropriate antidiuresis). This finding may be explained by several reasons. In the first place, some of the severe hyponatremias, often euvolemic, are usually induced by reversible etiologies, for example, drugs or respiratory infections. Second, underlying medical conditions associated with hypervolemic hyponatremia (CHF, liver disease, etc) are generally associated with increased mortality. Thus, it is likely that the underlying medical pathologies that have caused the alteration of water homeostasis play an important role in mortality rates. Third, biochemically more severe hyponatremia is more often considered worthy of further study by clinicians and, therefore, more optimal management is applied. This was demonstrated in the present study, where patients with profound hyponatremia and severe symptoms had more additional diagnostic tests requested and a higher treatment rate (Table S2, Supplemental Digital Content, http://links.lww.com/MD/M656 , and Table S3, Supplemental Digital Content, http://links.lww.com/MD/M657 ). Hence, the classically proposed linear relationship between mortality and the degree of hyponatremia is called into question here. Finally, this study has shown that patients who developed hyponatremia in hospital were more likely to die ( P < .005). Therefore, hyponatremia in this group of patients may be a marker of severity. Likewise, these individuals had longer hospital stays (Table S4, Supplemental Digital Content, http://links.lww.com/MD/M658 ), making it likely that they suffered a greater number of complications associated with the hospitalization period (nosocomial infections, confusional syndrome, etc), which in turn contributed to mortality. This study has some limitations that must be considered when interpreting the results. First, the observational design prevents any firm conclusions from being drawn from the suggested findings. It is also noteworthy that the selection of diagnostic tests and the prescription of treatments were entrusted to the clinicians responsible for the patients, which limited the ability to accurately confirm the types of hyponatremia by volume status. In addition, the formula used in this study corresponds to blood osmolality and not to effective osmolality or tonicity, as it took plasma urea into account. Consequently, this approach may have missed a number of hypotonic hyponatremias. Another possible limitation is the non-correction of sodium for total protein, which can substantially mask hyponatremia or its biochemical severity. Lastly, the small sample size resulting from the prospective observational design, and the fact that the participating hospitals were all located in Andalusia, may raise the question of whether the results can be extrapolated to clinical practice in other areas of Spain or the world.
To conclude, hyponatremia was found to be common in patients admitted to IM units, and the affected population was characterized by remarkable clinical complexity. Hypervolemic hyponatremia was the most frequently observed type. Deficiencies in the diagnostic and therapeutic management of this condition were observed. Finally, euvolemic hyponatremia and hyponatremia present on admission (versus hyponatremia developed during admission) were associated with lower mortality rates. Conceptualization: Jara Eloísa Ternero-Vega, Carlos Jiménez-de-Juan, Javier Castilla-Yelamo, Vanesa Cantón-Habas, Elena Sánchez-Ruiz-Granados, Miguel Ángel Barón-Ramos, Guillermo Ropero-Luis, Juan Gómez-Salgado, Máximo Bernabeu-Wittel. Data curation: Jara Eloísa Ternero-Vega, Carlos Jiménez-de-Juan, Javier Castilla-Yelamo, Vanesa Cantón-Habas, Elena Sánchez-Ruiz-Granados, Miguel Ángel Barón-Ramos, Guillermo Ropero-Luis, Juan Gómez-Salgado, Máximo Bernabeu-Wittel. Formal analysis: Jara Eloísa Ternero-Vega, Carlos Jiménez-de-Juan, Javier Castilla-Yelamo, Vanesa Cantón-Habas, Elena Sánchez-Ruiz-Granados, Miguel Ángel Barón-Ramos, Guillermo Ropero-Luis, Juan Gómez-Salgado, Máximo Bernabeu-Wittel. Investigation: Jara Eloísa Ternero-Vega, Carlos Jiménez-de-Juan, Javier Castilla-Yelamo, Vanesa Cantón-Habas, Elena Sánchez-Ruiz-Granados, Miguel Ángel Barón-Ramos, Guillermo Ropero-Luis, Juan Gómez-Salgado, Máximo Bernabeu-Wittel. Methodology: Jara Eloísa Ternero-Vega, Carlos Jiménez-de-Juan, Javier Castilla-Yelamo, Vanesa Cantón-Habas, Elena Sánchez-Ruiz-Granados, Miguel Ángel Barón-Ramos, Guillermo Ropero-Luis, Juan Gómez-Salgado, Máximo Bernabeu-Wittel. Project administration: Jara Eloísa Ternero-Vega, Carlos Jiménez-de-Juan, Máximo Bernabeu-Wittel. Resources: Jara Eloísa Ternero-Vega, Carlos Jiménez-de-Juan, Javier Castilla-Yelamo, Vanesa Cantón-Habas, Elena Sánchez-Ruiz-Granados, Miguel Ángel Barón-Ramos, Guillermo Ropero-Luis, Juan Gómez-Salgado, Máximo Bernabeu-Wittel. Software: Jara Eloísa Ternero-Vega, Carlos Jiménez-de-Juan, Javier Castilla-Yelamo, Vanesa Cantón-Habas, Elena Sánchez-Ruiz-Granados, Miguel Ángel Barón-Ramos, Guillermo Ropero-Luis, Juan Gómez-Salgado, Máximo Bernabeu-Wittel. Supervision: Jara Eloísa Ternero-Vega, Carlos Jiménez-de-Juan, Javier Castilla-Yelamo, Vanesa Cantón-Habas, Elena Sánchez-Ruiz-Granados, Miguel Ángel Barón-Ramos, Guillermo Ropero-Luis, Juan Gómez-Salgado, Máximo Bernabeu-Wittel. Validation: Jara Eloísa Ternero-Vega, Carlos Jiménez-de-Juan, Javier Castilla-Yelamo, Vanesa Cantón-Habas, Elena Sánchez-Ruiz-Granados, Miguel Ángel Barón-Ramos, Guillermo Ropero-Luis, Juan Gómez-Salgado, Máximo Bernabeu-Wittel. Visualization: Jara Eloísa Ternero-Vega, Carlos Jiménez-de-Juan, Javier Castilla-Yelamo, Vanesa Cantón-Habas, Elena Sánchez-Ruiz-Granados, Miguel Ángel Barón-Ramos, Guillermo Ropero-Luis, Juan Gómez-Salgado, Máximo Bernabeu-Wittel. Writing—original draft: Jara Eloísa Ternero-Vega, Carlos Jiménez-de-Juan, Javier Castilla-Yelamo, Vanesa Cantón-Habas, Elena Sánchez-Ruiz-Granados, Miguel Ángel Barón-Ramos, Guillermo Ropero-Luis, Juan Gómez-Salgado, Máximo Bernabeu-Wittel. Writing—review & editing: Jara Eloísa Ternero-Vega, Carlos Jiménez-de-Juan, Javier Castilla-Yelamo, Vanesa Cantón-Habas, Elena Sánchez-Ruiz-Granados, Miguel Ángel Barón-Ramos, Guillermo Ropero-Luis, Juan Gómez-Salgado, Máximo Bernabeu-Wittel.
An efficient enzyme-triggered controlled release system for colon-targeted oral delivery to combat dextran sodium sulfate (DSS)-induced colitis in mice
ab01ec34-d52e-43ca-b80a-5ee60d13c590
8205034
Pharmacology[mh]
Ulcerative colitis (UC) is an idiopathic chronic inflammatory disease of the colonic mucosa, with adverse public health effects (Kobayashi et al., ). Oral drug delivery systems (ODDSs) are highly desirable for the treatment of UC as they improve therapeutic efficiency and reduce systemic toxicity (Han et al., ). Oral colon-targeted drug delivery is of great interest for UC therapy. However, due to physiological, biochemical, and environmental barriers, it is difficult to target oral drugs to the colon (Wang et al., ). Thus far, several natural, synthetic, and semi-synthetic polymers have been used to overcome the strong physiological variations in the upper gastrointestinal tract (GIT) and focus drug release in the colon (Bansal et al., ; Arévalo-Pérez et al., ). Among them, chitosan–alginate nanoparticles (CANPs) have attracted increasing interest as colon-targeted ODDSs. Chitosan is a cationic polymer derived from chitin (Rivera et al., ), while sodium alginate is a linear hetero-polyuronic acid polymer extracted from seaweed (Tønnesen & Karlsen, ; Li et al., ). Due to their biodegradability, biocompatibility, adhesion, safety, and gel properties, both chitosan and alginate are widely used in drug delivery (Azevedo et al., ; Maity et al., ; Sorasitthiyanukarn et al., ). Moreover, the strong electrostatic interaction between the carboxyl group of alginates and the amino group of chitosan leads to shrinkage and gel formation at low pH (Thai et al., ), which protects the payload from the low gastric pH and provides release under intestinal conditions (Mukhopadhyay et al., ). Recently, advanced stimuli-responsive controlled release systems have received increasing attention for drug delivery (Descalzo et al., ); these can regulate the release of entrapped drug molecules in response to specific external stimuli such as light (Schroeder et al., ), temperature (Ruiz et al., ), and magnetic fields (Tasciotti, ), or to internal enzymes (Zhang et al., ). Gut microbiota are microorganisms that coexist peacefully with the host in the gut and secrete several carbohydrate-active and reductive metabolizing enzymes (Peng et al., ). Intestinal enzymes are unique stimuli that are increasingly being used as triggers in controlled release systems because of their high substrate specificity (Mura et al., ). Cyclodextrins (CDs) are natural cyclic oligosaccharides linked by α-1,4-glucoside bonds with a cone-like conformation, presenting an external hydrophilic surface and a relatively hydrophobic inner cavity. The CD ring can be opened by colonic microbial fermentation and enzymatic hydrolysis, whereby the glucosidic bonds are hydrolyzed and the drug is released in the colon (Park et al., ). Curcumin (Cur) is a natural yellow polyphenol occurring in turmeric roots, with proven pharmacological activities, including anti-inflammatory, anti-bacterial, anti-viral, anti-tumor, anti-ulcer, immunomodulatory, and neuroprotective effects (Liu et al., ; Nelson et al., ; Di Meo et al., ). Current studies have shown that Cur is therapeutically effective against UC (Grammatikopoulou et al., ; Sadeghi et al., ). However, the poor water solubility and instability of Cur lead to poor oral absorption and low bioavailability in the GIT, which limits its application in oral therapy (Manju & Sreenivasan, ). In this study, we developed a novel enzyme-triggered controlled release system for colon-targeted drug delivery.
β-CD was selected as the carrier of curcumin (Cur), and Cur was utilized as a model drug for convenient detection. The core–shell NPs were successfully prepared using the CD–Cur inclusion complex as core and CANPs as shell. The formed CD–Cur–CANPs showed a narrow particle-size distribution and a compact structure. In vitro drug release determination indicated that CD–Cur–CANPs showed pH-sensitive and α-amylase-responsive release characteristics. Furthermore, in vivo experiments demonstrated that oral administration of CD–Cur–CANPs had an efficient therapeutic effect, strong colonic retention, promoted colonic epithelial barrier integrity, and reshaped the gut microbiota in mice with dextran sodium sulfate (DSS)-induced colitis. Materials and supplies The chitosan ( M w : 140–190 kDa, degree of deacetylation: >85%, viscosity: 200–400 mPa s) was purchased from Aladdin (Shanghai, China). High-viscosity sodium alginate ( M w : 20–50 kDa) was purchased from Bright Moon Seaweed Group (Qingdao, China). α-Amylase from Aspergillus oryzae was purchased from Aladdin (Shanghai, China). Dextran sodium sulfate (DSS) was supplied by MP Biomedicals (Irvine, CA). Cur and β-CD were purchased from Solarbio (Beijing, China). Male C57BL/6 mice (8-week old, 18–22 g) were provided by Pengyue Laboratory Animal Technology Co., Ltd. (Jinan, China) and kept in an aseptic environment. Mouse interleukin-6 (IL-6), tumor necrosis factor-α (TNF-α), and interleukin-1β (IL-1β) ELISA kits were purchased from Nanjing Jiancheng Bioengineering Institute (Nanjing, China). Rhodamine B isothiocyanate (RBITC) and fluorescein isothiocyanate (FITC) were purchased from Macklin (Shanghai, China). Preparation and analysis of low molecular weight chitosan and unsaturated alginate The purchased high molecular weight chitosan was dissolved in 1 M aqueous acetic acid (HAc) to a concentration of 1% (w/v), and chitosanase CsnM (20 U/mL) was added to the colloidal chitosan at 20 °C for 10 min with shaking. Low molecular weight unsaturated alginate was prepared with the oligoalginate lyase OalC6 in our lab (Li et al., ). OalC6 (1 U) was added to 1 mL of high-viscosity sodium alginate polymer (5 mg/mL in 20 mM phosphate buffer, pH 7.0) and incubated at 40 °C for 60 min with shaking. Then, these degradation products were heat-treated and dialyzed through a 1000 Da dialysis bag to remove smaller chitosan oligosaccharides (COS) or alginate oligosaccharides (AOS). The average molecular weights of the prepared low molecular weight chitosan (8.76 kDa) and unsaturated alginate (7.73 kDa) were analyzed using viscosity methods (Li et al., ). Cyclodextrin–curcumin (CD–Cur) preparation The molar ratio of CD to Cur was 1:1. Briefly, 647 mg of CD was dissolved in 120 mL of water, and 147 mg of curcumin was dissolved in 2 mL of acetone. The oil phase was added to the aqueous solution and centrifuged at 400 rpm. The acetone was volatilized by stirring, uncovered, for 24 h in the dark, and the mixture was then centrifuged at 1000 rpm for 5 min. The supernatant was collected and lyophilized to recover water-soluble CD–Cur. Preparation and determination of nanoparticles The CD–Cur–CANPs were prepared by the ionic gelation method. Briefly, 0.0315% (w/v) sodium alginate solution and 0.07% (w/v) chitosan were dissolved in water and 1% acetic acid solution, respectively, and stirred overnight.
Then, 1 mL of water-soluble CD–Cur and 23.5 mL of the prepared low molecular weight unsaturated alginate solution were mixed and evenly dispersed, followed by the drop-wise addition of 1.5 mL of 0.2% (w/v) calcium chloride solution at a speed of 0.1 mL/min, and the mixture was stirred for 30 min to form a pre-gel. Finally, 2 mL of low molecular weight chitosan solution was added at a rate of 0.1 mL/min and the mixture was left to stand for 30 min to form an evenly dispersed solution. After centrifugation at 12,000 rpm for 30 min, the supernatant was discarded and the pellet was washed three times with ultra-pure water. The powder was obtained by vacuum freeze-drying. The particle size and the zeta potential of the synthetic CD–Cur–CANPs were measured via dynamic light scattering (DLS) with a Zetasizer Nano ZS (Malvern Instruments, Malvern, UK). Morphological analysis was performed using transmission electron microscopy (TEM) in open field mode. Fourier transform infrared (FTIR) spectroscopy was used to analyze the samples with an FTIR spectrometer in the range of 4000–400 cm⁻¹. Before the measurement, the samples were dried under vacuum until a constant weight was reached. The dried samples were ground into powder, mixed with KBr powder, and then pressed into pellets for FTIR measurement. Encapsulation efficiency (EE) and drug release The drug EE and loading efficiency (LE) were determined using a direct method by dispersing 5 mg of synthetic CD–Cur–CANPs in 10 mL of anhydrous ethanol and shaking at a constant temperature of 37 °C for 24 h to obtain completely swollen NPs. Then, the solution was sonicated for 30 min and centrifuged at 12,000 rpm for 30 min. The concentration of Cur in the supernatant was measured using an ultraviolet spectrophotometer at 425 nm according to a pre-determined standard curve. Finally, the EE and LE were calculated using the following formulas: EE (%) = (weight of the drug in NPs (mg) / weight of the total drug (mg)) × 100%; LE (%) = (weight of the drug in NPs (mg) / weight of the total NPs (mg)) × 100%. A dialysis bag was used to determine the pH-sensitive and enzyme-responsive release characteristics of the formed nanoparticles. First, 5 mg of the formed Cur–CD–CANPs were dispersed in 10 mL of water and transferred to a dialysis bag. The pH sensitivity of the formed Cur–CD–CANPs was assessed in 100 mL of different buffer solutions (pH 1.2, 6.8, and 7.4), shaken (100 rpm) at 37 °C for 12 h. In vitro release profiles of Cur in the presence and absence of α-amylase (10 IU/mL) were obtained over 12 h in 100 mL of PBS (pH 7.4) to determine the enzyme-responsive release characteristics. Simultaneously, samples were removed at different times and the concentration of free Cur was measured by fluorescence spectroscopy (425 nm). A basket rotation method was used to simulate the passage of oral drugs through the human gastrointestinal tract. First, the NPs were exposed to a simulated gastric acid environment (pH 1.2) for 2 h, then transferred to a pH 6.8 buffer system to simulate the small intestine environment for another 2 h. Finally, they were transferred to pH 7.4 to simulate the colon environment for 8 h. PBS (pH 7.4) with α-amylase (10 IU/mL) was tested under the same conditions. Meanwhile, 2 mL aliquots of the sample solution were removed at different times to detect the free Cur content and replaced with an equal amount of fresh release medium.
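For clarity, here is a minimal numerical sketch of the encapsulation-efficiency (EE) and loading-efficiency (LE) formulas given above, together with the cumulative-release ratio defined at the start of the next paragraph; the masses are hypothetical and were merely chosen to land near the values reported in the Results, so this is an illustration rather than the study's actual calculation.

```python
def encapsulation_and_loading(drug_in_nps_mg, total_drug_mg, total_nps_mg):
    """EE% = entrapped drug / total drug added; LE% = entrapped drug / total NP mass."""
    ee = drug_in_nps_mg / total_drug_mg * 100
    le = drug_in_nps_mg / total_nps_mg * 100
    return ee, le

def cumulative_release_percent(released_mg, loaded_mg):
    """Cumulative drug release (%) at a given sampling time."""
    return released_mg / loaded_mg * 100

# Hypothetical example: 4.4 mg of Cur recovered from NPs prepared with 5.0 mg of Cur
# and weighing 126 mg in total -> EE ~ 88%, LE ~ 3.5% (close to the reported 88.89% and 3.49%).
print(encapsulation_and_loading(4.4, 5.0, 126.0))

# Hypothetical release point: 1.2 mg released from 4.4 mg loaded -> ~27%.
print(cumulative_release_percent(1.2, 4.4))
```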
The cumulative drug release rate of the composite particles at time t was calculated as follows: cumulative drug release (%) = (cumulative amount of drug released (mg) / total amount of drug loaded in the particles (mg)) × 100%. Fluorescence labeling of chitosan The synthesis of low molecular weight chitosan labeled with RBITC or FITC was based on the reaction between the isothiocyanate group of RBITC/FITC and the primary amino group of chitosan (Cheng et al., ). Briefly, 0.5 g of the prepared chitosan was dissolved in 100 mL of 2% (v/v) acetic acid solution by electromagnetic stirring for 30 min. Then, 1 mol/L NaOH was used to adjust the pH to 7.5. Under stirring, 1 mL of DMSO solution containing RBITC or FITC (1 mg) was added to the chitosan solution. After reacting in the dark at 40 °C for 1 h, the reaction was continued at room temperature overnight. The solution was then centrifuged at 4000 rpm for 10 min and washed repeatedly with distilled water to remove free RBITC or FITC, until a clear supernatant was obtained. The RBITC-labeled or FITC-labeled chitosan was used to prepare fluorescence-labeled CD–Cur–CANPs according to the above protocol. Cellular uptake of nanoparticles by RAW264.7 macrophages Mouse monocyte macrophages (RAW264.7) were purchased from the China Center for Type Culture Collection (Wuhan, China) and maintained in accordance with the supplier's instructions. RAW264.7 cells were seeded onto glass coverslips in 24-well plates (4 × 10⁴ cells per well) and incubated at 37 °C in 5% CO₂ for 4 h. Then, the medium was replaced with fresh medium containing 100 µg/mL FITC-labeled CD–Cur–CANPs. After treatment for the indicated times, the medium was removed and the cells were washed twice with PBS. Cells were fixed in 4% paraformaldehyde for 20 min and washed three times with PBS. Then, cells were incubated in 4′,6-diamidino-2-phenylindole dilactate (DAPI) labeling solution (0.5 µg/mL in PBS) for 5 min at room temperature in the dark. Finally, the images were acquired using a DS-Mv digital camera (Nikon, Tokyo, Japan) mounted on a BX53 microscope (Olympus, Tokyo, Japan). The light intensity and exposure time were set to the same conditions for each dye. All experiments were done in triplicate. Animal model and treatments Male C57BL/6 mice (8 weeks old, 20–22 g) were maintained in a temperature- and humidity-controlled facility (25 ± 2 °C, 50 ± 5% relative humidity) under a 12 h light/dark cycle with unrestricted access to a chow diet and water. All the animal experiments were approved by the Ethics Committee of the Medical College of Qingdao University. After acclimation for 1 week, mice were randomly divided into five groups ( n = 6 per group) as follows: (1) Ctrl: negative control group, in which mice received regular drinking water; (2) DSS model group; (3) CD–Cur-treated DSS group; (4) CD–Cur–CANPs-treated DSS group; and (5) RBITC-labeled CD–Cur–CANPs-treated DSS group. The mice in the drug-treated groups received an equal dose of Cur (50 mg/kg) in the form of a suspension by oral gavage for 7 days. Mice were given 3% DSS (molecular weight 36–50 kDa) instead of drinking water for 7 consecutive days to induce colitis. The state of the mice in groups 1–4 was carefully recorded and monitored daily for signs of disease (weight loss, stool consistency, and rectal bleeding). The disease activity index (DAI) was calculated according to the percentage of daily weight loss, stool consistency, and the average score of fecal blood/occult blood, as shown in Table S1 .
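The DAI combines daily weight loss, stool consistency, and fecal blood scores; the exact sub-score definitions are in Table S1, which is not reproduced here, so the sketch below simply averages three already-assigned sub-scores. Averaging is a common convention for this index but is an assumption as far as this study's exact scoring is concerned.

```python
def disease_activity_index(weight_loss_score: int, stool_score: int, bleeding_score: int) -> float:
    """DAI as the mean of the three sub-scores (weight loss, stool consistency,
    fecal blood/occult blood). Sub-score scales follow Table S1, not shown here."""
    return (weight_loss_score + stool_score + bleeding_score) / 3

# Hypothetical mouse on day 6: moderate weight loss (2), loose stools (2), occult blood (2).
print(disease_activity_index(2, 2, 2))  # -> 2.0
```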
At the end of the experiment, feces were collected from groups 1–4 and stored at −80 °C for further analysis. Colon and spleen tissues were quickly stripped and weighed. The tissues were photographed and then also stored at −80 °C for further analysis. In vivo biodistribution of nanoparticles The mice in group 5 above were used to determine the tissue-targeting effect of the synthetic NPs in colitis. RBITC-labeled CD–Cur–CANPs (100 μL) were administered on day 7 after the colitis model was established. To avoid background interference due to food in the GIT, the mice were fasted for 6 h before imaging. After 6 h and 12 h, the organs and whole bodies of the mice were imaged using a small-animal in vivo 3D imaging system (PerkinElmer Life Sciences, Boston, MA). Furthermore, the fluorescent colon tissue collected at 6 h was selected to prepare frozen slices, which were microscopically magnified to depict the targeted uptake and distribution of the RBITC-labeled CD–Cur–CANPs. Hematoxylin and eosin (H&E) staining and ELISA assay H&E staining was performed by Sevicepio (Wuhan, China). The colon segment was fixed in 4% paraformaldehyde solution for 24 h, dehydrated, infiltrated, and embedded in paraffin. The 4-μm-thick sections were cut with a microtome and placed on slides. Sections were stained with H&E. Images were obtained using a microscope at 200× and 400× magnification. The colon tissues were homogenized in ice-cold PBS (pH 7.4). The homogenates were centrifuged at 14,000 rpm for 5 min at 4 °C, and the supernatants were collected. Concentrations of IL-1β, IL-6, and TNF-α were measured according to the ELISA kit instructions. The results were expressed as pg of cytokine per mg of total protein in the colon homogenate. Real-time quantitative PCR Total RNA was isolated from tissues using a mirVana miRNA Isolation Kit (Ambion, Carlsbad, CA) in accordance with the manufacturer's instructions. The study primers were synthesized by Sangong Biotech (Shanghai, China) ( Table S2 ). Gene expression levels were normalized to GAPDH, followed by the relative quantification of gene expression using the 2^−ΔΔCt method. All real-time PCR reactions were performed in triplicate, as previously described (Song et al., ; Li et al., ). Western blot Total protein in the colon tissue was extracted using RIPA tissue lysate, and the protein concentration was determined using a bicinchoninic acid (BCA) protein quantification kit. Western blotting analysis was performed as previously described (Song et al., ; Li et al., ). Fecal microbial species analysis qPCR was used to analyze the microbial composition of each sample. Total DNA of fecal microbial samples was extracted using a TIANamp Stool DNA Kit (Tiangen Biotechnology Co., Ltd., Beijing, China). The concentration of the DNA samples was measured using a NanoDrop spectrophotometer (ThermoFisher, Pittsburgh, PA). The microbial groups were quantified via RT-PCR (Applied Biosystems 7500 Real-Time PCR System, ABI Co., Ltd., Foster City, CA) according to the instructions of the Power SYBR® Green PCR Master Mix Kit (ABI Co., Ltd., Foster City, CA). The reaction was initiated by activation at 95 °C for 5 min, followed by 40 amplification cycles (denaturation at 95 °C for 15 s and extension at 60 °C for 35 s). Standard DNA templates were used to quantify target DNA copy numbers. Briefly, a series of 10-fold gradient dilutions of the standard product were used, and at least six non-zero standard concentrations were applied for each assay.
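Two quantification schemes are mentioned above: relative expression of colonic genes by the 2^−ΔΔCt method (normalized to GAPDH) and absolute quantification of fecal bacteria from a standard curve expressed as log10 copy numbers. The sketch below illustrates both calculations; the Ct values are invented and the standard-curve slope and intercept are assumed parameters, standing in for the dilution series described above.

```python
from statistics import mean

def fold_change_ddct(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by 2^-ddCt, normalized to a reference gene (e.g. GAPDH)
    and expressed relative to the control group."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2 ** (-ddct)

# Invented triplicate Ct values for one colitis sample versus the control mean.
ct_il6, ct_gapdh = mean([27.8, 27.9, 28.0]), mean([18.1, 18.0, 18.2])
ct_il6_ctrl, ct_gapdh_ctrl = mean([30.2, 30.0, 30.1]), mean([18.0, 18.1, 18.0])
print(fold_change_ddct(ct_il6, ct_gapdh, ct_il6_ctrl, ct_gapdh_ctrl))  # ~4.8-fold higher

def log10_copies_from_ct(ct, slope=-3.32, intercept=38.0):
    """Absolute quantification from a standard curve Ct = slope*log10(copies) + intercept.
    Slope and intercept are assumed values; in practice they come from the >= 6-point
    10-fold dilution series of the standard DNA template."""
    return (ct - intercept) / slope

print(log10_copies_from_ct(24.7))  # ~4.0 -> about 10^4 copies
```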
The primers used are listed in Table S3 . The concentration was expressed as log10 copy number. Statistical analysis We performed statistical analysis using GraphPad Prism software (GraphPad Software, La Jolla, CA). The results were expressed as mean ± SEM. To determine the statistical significance between two groups, we performed Student’s t test to calculate the associated p values. Statistical significance among multiple groups was calculated by one-way analysis of variance (ANOVA).
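The group comparisons described above (Student's t test for two groups, one-way ANOVA for several, results as mean ± SEM) can also be sketched with standard scientific Python libraries; the measurements below are randomly generated stand-ins, not study data, and GraphPad Prism was the software actually used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Made-up colon-length measurements (cm) for four illustrative groups of n = 6 mice.
ctrl   = rng.normal(7.8, 0.4, 6)
dss    = rng.normal(5.9, 0.5, 6)
cd_cur = rng.normal(6.6, 0.5, 6)
nps    = rng.normal(7.3, 0.4, 6)

# Two-group comparison: Student's t test.
t_stat, p_two_groups = stats.ttest_ind(dss, nps)

# Multi-group comparison: one-way ANOVA.
f_stat, p_anova = stats.f_oneway(ctrl, dss, cd_cur, nps)

print(f"mean ± SEM (DSS): {dss.mean():.2f} ± {stats.sem(dss):.2f}")
print(f"t test p = {p_two_groups:.4f}; ANOVA p = {p_anova:.4f}")
```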
Preparation and characterization of nanoparticles The CD–Cur–CANPs were prepared by the ionic gelation method using low molecular weight chitosan (8.76 kDa) and unsaturated alginate (7.73 kDa) as the polymeric shell. The water-soluble CD–Cur inclusion complex was then encapsulated as the core within the CANPs shell .
The strong electrostatic interaction between the carboxyl group of alginates and the amino group of chitosan leads to shrinkage and gel formation at low pH (Thai et al., ), which increases the pH sensitivity of the system and protects the drug from the aggressive gastric environment of the upper GIT (Mukhopadhyay et al., ). As presented in , the CD–Cur complex tended to self-aggregate in the water solution, forming simple micelles with a particle size of about 30–50 nm. The size of the formed CD–Cur–CANPs was about 100–200 nm, which facilitates the adhesion and mobility of the particles in vivo . Dynamic light scattering (DLS) showed that the average particle size of the formed CD–Cur was 84.04 nm , and that of the CD–Cur–CANPs was 462.1 nm . It should be noted that the particle size measured by TEM was smaller than that measured by DLS, probably because the particles measured in solution (DLS) are swollen compared with the particles dried for TEM sample preparation. The polydispersity index (PDI) was 0.227 ± 0.17, which indicated a narrow, near-monodisperse size distribution. The zeta potential of the synthetic CD–Cur–CANPs was −19 mV, suggesting good stability of the formed NPs. FTIR spectroscopy was used to evaluate the formation of the CD–Cur–CANPs . The solid samples, including low molecular weight chitosan and unsaturated alginate, the CD–Cur complex, synthetic CANPs, and CD–Cur–CANPs, were prepared for FTIR analysis. Compared with the corresponding peaks in the FTIR spectra of chitosan and alginate, the peaks corresponding to the NH2, C–O, and OH groups in the FTIR spectra of the NPs shifted significantly, which indicated a strong interaction between chitosan and alginate. By comparing the spectra of the drug-free CANPs and the drug-loaded CD–Cur–CANPs, it was observed that most of the characteristic absorption bands of CD–Cur showed a certain degree of change. These changes indicate that CD–Cur was successfully encapsulated in the CANPs. EE, drug loading, and release capacity The EE and drug loading (LC) of Cur–CD–CANPs were calculated indirectly by measuring the residual drug in the solution. The EE and LC of the generated Cur–CD–CANPs were 88.89% and 3.49%, respectively. The dialysis bag method was used to determine the permeation rate of Cur in three different pH media (pH 1.2, 6.8, and 7.4). As shown in , Cur release was highest at pH 7.4 and decreased as the pH decreased. Because the strong electrostatic interaction between the carboxyl group of alginates and the amino group of chitosan leads to shrinkage and gel formation at low pH, the Cur–CD–CANPs released only 15% of the Cur at pH 1.2 within 12 h. These properties protect Cur from the aggressive gastric environment of the upper GIT. To further confirm the protection afforded by the CANPs in the GIT, Cur–CD–CANPs were incubated in a medium with gradually changing pH to simulate the passage of the drug through the stomach (pH 1.2), small intestine (pH 6.8), and colon (pH 7.4). Only 12% and 27% of the Cur was released in the first 4 h at pH 1.2 and 6.8 , which represent the pH values of the stomach and the upper part of the small intestine, respectively. The in vitro profiles of Cur release from Cur–CD–CANPs were studied in the presence and absence of α-amylase in the simulated ileum environment (pH 7.4). As shown in , the release profiles indicated that the release of Cur was significantly higher in the presence of α-amylase than in its absence.
Cur was rapidly released from Cur–CD–CANPs in the presence of 10 IU of α-amylase, reaching 90% within 4 h. This drastic release in the presence of α-amylase was owing to the enzymatic degradation of β-CD, which causes chain scission in the CD and release of the drug. These results indicate that the formed NPs have pH-sensitive and enzyme-responsive release characteristics. In vivo biodistribution and accumulation of nanoparticles in colitis mice To investigate the in vivo biodistribution of CD–Cur–CANPs, we gavaged RBITC-labeled CD–Cur–CANPs into DSS-induced colitis mice and analyzed the time-dependent passage and the in vivo targeting efficacy of this drug formulation using a small-animal in vivo 3D imaging system. After oral administration of RBITC–CD–Cur–CANPs, the fluorescence intensities of whole mice were quantitatively analyzed at 6 and 12 h . The fluorescence signals after oral administration of the synthetic NPs were primarily observed in the GIT, with faint signals in other internal organs. Furthermore, the percentage of the RBITC dose in the major GIT segments (i.e. stomach, small intestine, and colon including cecum) was assayed 6 and 12 h post-administration of the NPs. Ex vivo RBITC imaging revealed that the signals accumulated throughout the digestive tract within 6 h. At 12 h after nanoparticle administration, the fluorescence intensities in the stomach and small intestine of the UC mice had decreased. Concurrently, strong signals were primarily observed in the colon at 12 h, indicating that the synthetic CD–Cur–CANPs exhibit a strong colon-targeting biodistribution . Furthermore, the fluorescent colonic tissue slices were microscopically magnified to depict the targeted uptake and accumulation of the RBITC-labeled NPs. As shown in , the colonic tissues of the colitis mice administered CD–Cur–CANPs showed strong fluorescence signals (red), which indicated that CD–Cur–CANPs accumulated strongly in the colonic tissues. Meanwhile, at 6 h post-administration, CD–Cur–CANPs were able to pass through the intestinal barrier and be taken up by cells, indicating that the synthesized CD–Cur–CANPs exhibit strong colonic retention ability in colitis mice. To determine whether the formed CD–Cur–CANPs could be absorbed at the cellular level, CANPs were labeled with FITC dye (green fluorescence) and incubated with RAW264.7 cells in vitro . As shown in , CANPs aggregated on the membrane of RAW264.7 cells at 0 h. After incubation for 6 h, FITC-labeled CANPs began to be incorporated into the RAW264.7 macrophages, and a small amount of green fluorescent material could be observed in the cytoplasm. After incubation for 12 h and 24 h, almost all CANPs had entered the RAW264.7 cells. These results indicate that CANPs are readily taken up by RAW264.7 cells through a rapid and effective intrinsic process. In vivo therapeutic efficacy of CD–Cur–CANPs in DSS-induced colitis The DSS-induced colitis mouse model is easy to develop and reproduce, and its symptoms are similar to those of human UC (e.g. weight loss, colon ulcers, and bloody stools); thus, it has been widely used to evaluate new drugs or carriers for the treatment of UC (Perše & Cerar, ). In this study, DSS-induced acute colitis was established to evaluate the therapeutic effect of CD–Cur–CANPs on colitis. The colitis induction and treatment protocol is shown in . As shown in , mice treated with DSS showed significant loss of body weight from day 4, whereas CD–Cur–CANPs significantly prevented the weight loss from day 5 ( p < .05).
In vivo therapeutic efficacy of CD–Cur–CANPs in DSS-induced colitis

The DSS-induced colitis mouse model is easy to establish and reproduce, and its symptoms resemble those of human UC (e.g., weight loss, colonic ulcers, and bloody stools); it has therefore been widely used to evaluate new drugs and carriers for the treatment of UC (Perše & Cerar). In this study, DSS-induced acute colitis was established to evaluate the therapeutic effect of CD–Cur–CANPs on colitis. Mice treated with DSS showed significant loss of body weight from day 4, whereas CD–Cur–CANPs treatment significantly attenuated this weight loss from day 5 onward (p < .05). Disease severity was also scored with the DAI. The DAI score of the colitis model group was significantly higher than that of the CD–Cur–CANPs treatment group from day 6 (p < .05), and on day 7 the DAI scores of both the CD–Cur–CANPs and CD–Cur groups had decreased significantly (p < .01). DSS administration induced severe inflammation, significantly reducing colon length (p < .001), whereas CD–Cur–CANPs treatment significantly reversed this reduction (p < .01). DSS induction also significantly increased spleen weight (p < .05), and CD–Cur–CANPs treatment significantly reversed this increase (p < .05). These results suggest that CD–Cur–CANPs alleviated the symptoms of DSS-induced colitis and have potential therapeutic application in colitis mice.

To further explore the protective effect of CD–Cur–CANPs against histological damage of the colon in the DSS-induced colitis model, the severity of colonic damage and inflammation was analyzed by H&E staining. As expected, DSS administration induced serious damage and inflammation in the colon tissue, consistent with the DAI scores and colon morphology. Compared with the control group, the DSS group showed severe erosion of the colonic mucosa, destruction of almost all crypts, marked loss of goblet cells, inflammatory cell infiltration of the lamina propria, glandular disorganization, and severe ulceration. CD–Cur treatment clearly reduced these signs of colitis, but its therapeutic effect did not match that of CD–Cur–CANPs. The CD–Cur–CANPs group showed no apparent erosion of the colonic tissue, relatively intact crypts, neatly organized glands, intact goblet cells, and colon tissue morphology similar to that of the control group. These results indicate that treatment with CD–Cur–CANPs decreased the inflammatory response of the colon tissue in the DSS-induced colitis mouse model.

Effect of CD–Cur–CANPs on intestinal homeostasis and gut microbiota

Intestinal mucosal integrity can be significantly compromised during active colitis, aggravating colonic inflammation. To validate the anti-inflammatory effects of CD–Cur–CANPs, the protein concentrations and gene expression levels of three important pro-inflammatory cytokines associated with UC pathogenesis (IL-1β, IL-6, and TNF-α) were measured. The levels of IL-1β, IL-6, and TNF-α in the colon samples increased in response to colitis induction and were significantly reduced (p < .01) after CD–Cur–CANPs treatment. These results suggest that CD–Cur–CANPs can effectively reduce inflammation in the colon tissue of mice with DSS-induced colitis. To further elucidate the role of CD–Cur–CANPs in modulating intestinal tight junctions, two major tight junction proteins (ZO-1 and occludin) were analyzed in colon tissues by western blot. DSS treatment significantly decreased the protein expression of ZO-1 (p < .01) and occludin (p < .001), and this reduction was significantly reversed by CD–Cur–CANPs treatment (ZO-1, p < .01; occludin, p < .01). Free CD–Cur partially, but not significantly, improved ZO-1 and occludin expression in DSS-induced colitis. These results suggest that CD–Cur–CANPs treatment improved the integrity of the intestinal barrier in mice with DSS-induced colitis, indicating that CD–Cur–CANPs reduced the inflammatory response and improved intestinal integrity.
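Western blot results such as those for ZO-1 and occludin are usually quantified by normalizing each band to a loading control and expressing the result as a fold change relative to the untreated control group. The sketch below shows that fold-of-control calculation; the choice of β-actin as the loading control and all intensity values are assumptions made purely for illustration, as the text does not specify them.

```python
# Hypothetical densitometry readings (arbitrary units): (target band, loading-control band)
# per animal in two groups. All numbers are illustrative placeholders.
control = [(1250, 1000), (1180, 950), (1320, 1020)]  # (ZO-1, beta-actin) per control mouse
dss     = [(610, 1010), (560, 980), (680, 1050)]     # (ZO-1, beta-actin) per DSS mouse

def normalize(group):
    """Divide each target intensity by its loading-control intensity."""
    return [target / loading for target, loading in group]

control_norm = normalize(control)
dss_norm = normalize(dss)

# Express every sample as a fold of the mean normalized control value
control_mean = sum(control_norm) / len(control_norm)
print("control fold-of-control:", [round(x / control_mean, 2) for x in control_norm])
print("DSS fold-of-control:    ", [round(x / control_mean, 2) for x in dss_norm])
```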
The protective effects of CD–Cur–CANPs against intestinal dysbiosis were assessed by quantifying changes in gut microbial composition in the collected fecal pellets. The total level of intestinal bacteria and the levels of Bacteroides, Bifidobacterium, Lactobacillus, the Clostridium leptum group, and C. coccoides were quantified by qPCR. There was no significant difference in the total level of intestinal bacteria among the four groups. However, beneficial bacteria such as Bifidobacterium (p < .01) and Lactobacillus (p < .001) were significantly reduced in mice treated with DSS. Treatment with CD–Cur–CANPs significantly reversed the DSS-induced decrease in Bifidobacterium (p < .05) and Lactobacillus (p < .001) levels, whereas the effect of free CD–Cur treatment was less pronounced than that of CD–Cur–CANPs. No significant differences were observed in the levels of Bacteroides, C. leptum, and C. coccoides among the four groups. These results indicate that CD–Cur–CANPs modified the relative abundance of several gut microbial taxa, some of which may play a central role in the alleviation of DSS-induced colitis.
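The qPCR comparison of bacterial groups can be summarized with a relative-quantification step. The sketch below uses a ΔCt scheme in which each group-specific signal is expressed relative to total bacteria (universal 16S rRNA gene primers); both this scheme and the Ct values are assumptions for illustration only, since the text does not state whether standard curves or a ΔCt approach was used.

```python
import math

# Hypothetical Ct values for one fecal DNA sample: universal (total bacteria) primers
# and two group-specific primer sets. All numbers are illustrative placeholders.
ct_total_bacteria = 12.0
ct_groups = {"Bifidobacterium": 22.5, "Lactobacillus": 20.1}

# Relative abundance of each group versus total bacteria, assuming ~100% primer efficiency:
#   relative = 2 ** -(Ct_group - Ct_total)
relative_abundance = {
    group: 2.0 ** -(ct - ct_total_bacteria) for group, ct in ct_groups.items()
}

for group, rel in relative_abundance.items():
    print(f"{group}: relative abundance = {rel:.5f} (log10 = {math.log10(rel):.2f})")
```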
Colon-targeted drug delivery offers an efficacious treatment approach for UC that avoids systemic absorption and its potential side effects (Arévalo-Pérez et al.). However, ODDS targeting the colonic region present certain challenges. Natural polymer-based nanocarriers are biocompatible and biodegradable, which makes them suitable for physiological drug delivery approaches (Pushpamalar et al.). In this study, core–shell NPs were successfully prepared using CD–Cur as the core and CANPs as the shell. The strong electrostatic interaction within the shell leads to shrinkage and gel formation at low pH, thereby protecting the core from the aggressive gastric environment during gastrointestinal transit.
The in vitro release experiments showed that only 12% and 27% of the Cur was released at the pH values of the stomach (pH 1.2) and the upper part of the small intestine (pH 6.8), respectively. These pH-dependent characteristics enable the formed CD–Cur–CANPs to avoid an initial burst of drug release in the upper GIT and ensure subsequent sustained drug release at colonic pH. Recently, advanced stimuli-responsive controlled release systems have received increasing attention for drug delivery. These systems can regulate the release of entrapped drug molecules in response to specific external stimuli such as light, temperature, and magnetic fields, or to internal stimuli such as enzymes (Ruiz-Hernández et al.). New synthetic pH-dependent colon delivery systems combined with timed-release, ion-sensitive, or ROS-responsive properties are being developed (Dasgupta et al.; Sacks et al.). Enzymes, as a unique class of stimuli, are increasingly used as triggers in controlled release systems (Xiong et al.). α-Amylase is a key enzyme in carbohydrate digestion (Dhital et al.). The synthetic CD–Cur–CANPs described here are not only pH-dependent but also show markedly accelerated release in the presence of α-amylase. The in vitro release curves showed that Cur release from CD–Cur–CANPs was significantly higher in the presence of α-amylase (>90% within 4 h) than in its absence (36.7% at 24 h). The triggered release of Cur in the presence of α-amylase is due to enzymatic degradation of β-CD, which results in ring opening and chain scission. These results indicate that α-amylase significantly promoted the release of Cur in vitro; however, the actual effect of enzyme-induced release in vivo requires further verification.

Cur is recognized as a safe food additive by the US Food and Drug Administration (FDA) and has been studied as an anti-inflammatory drug for the treatment of UC. Nonetheless, its use is limited by its low water solubility and bioavailability, poor intestinal absorption, and rapid systemic elimination (Anand et al.). In particular, when Cur was administered orally in clinical trials, only weak therapeutic effects were observed because of inefficient delivery to the inflamed colonic tissues. The hydrophobic cavity of CDs enables them to form inclusion complexes, via host–guest interactions, with a wide variety of poorly water-soluble, size-matched guest molecules. Taken together, the results of in vitro macrophage uptake, in vivo biodistribution in the GIT, and accumulation in colonic tissue indicate that our synthetic CD–Cur–CANPs have considerable therapeutic potential for the treatment of colitis in mice.

The gut microbiota is a complex and dynamic ecosystem that regulates the permeability of the gastrointestinal mucosa and the host immune system, and it is closely associated with UC (Ahlawat et al.). However, current ODDSs for the treatment of UC rarely consider effects on intestinal homeostasis and the gut microbiota. Our previous studies showed that chitosan oligosaccharides (COS) effectively modulate the gut microbiota (He et al.). We also showed that unsaturated alginate oligosaccharides (UAOS) bearing a C4–C5 double bond, enzymatic degradation products of alginate, strongly modulate the gut microbiota (Li et al.). In this study, low molecular weight chitosan and enzymatically prepared unsaturated alginate were produced in our laboratory and selected as the polymer shell materials.
Current studies have shown that Cur is therapeutically effective against UC and has a regulatory effect on the gut microbiota (Grammatikopoulou et al.; Sadeghi et al.; Shabbir et al.). Quantitative analysis indicated that CD–Cur–CANPs significantly reversed the decrease in Bifidobacterium and Lactobacillus induced by DSS treatment. These results suggest that the prepared CD–Cur–CANPs may offer a promising, synergistic, gut microbiota-targeting approach to UC treatment.

In this study, a pH-sensitive and α-amylase-triggered controlled release system was successfully prepared using CD–Cur as the core and CANPs as the shell. Oral administration of CD–Cur–CANPs showed efficient therapeutic efficacy, strong colonic biodistribution and accumulation, and rapid macrophage uptake; it promoted colonic epithelial barrier integrity, modulated the production of inflammatory cytokines, and reshaped the gut microbiota in mice with DSS-induced colitis. Overall, the colon-targeted oral delivery system developed in the current study has great potential for application in UC therapy.
Managing life during the pandemic: communication strategies, mental health, and the ultimate toll of the COVID-19
98535bd9-1d8f-43ab-9d12-bf4d9322418b
8771019
Health Communication[mh]
A pandemic is generally defined as an epidemic that occurs worldwide and affects a large number of people at the same time [1]. The coronavirus (COVID-19) outbreak, first detected in Wuhan, China, in December 2019 (World Health Organization (2020). Pneumonia of Unknown Cause – China [online]. Website https://www.who.int/csr/don/05-january-2020-pneumonia-of-unkown-cause-china/en/ [accessed 26 May 2021]), has continued as a pandemic affecting the whole world (World Health Organization (2021). WHO Coronavirus (COVID-19) Dashboard [online]. Website https://covid19.who.int/ [accessed 12 Jun 2021]). According to the World Health Organization, 174,918,667 people had been infected with this disease and more than 3.7 million had died of coronavirus globally as of 12 June 2021. In addition, studies have shown that, in some cases, people who have recovered from coronavirus may experience deterioration in their physical health and psychological well-being [2]. Considering the fear, stress, and anxiety created by the COVID-19 pandemic [3,4], as well as the global and local precautions taken to protect public health (e.g., lockdowns and mask wearing), people's mental health has also been negatively affected during this stressful period. Therefore, it is important to examine the consequences of the pandemic and people's psychological experiences in order to propose timely local and global measures. In this article, the difficulties experienced by individuals during the pandemic are discussed first, followed by risk communication strategies. Finally, mental health problems associated with the COVID-19 outbreak, COVID-19-related posttraumatic growth, and protective factors are addressed together with possible intervention methods.

The high visibility of deaths and infected people during the COVID-19 pandemic, combined with intolerance to uncertainty, has increased people's anxiety and stress levels [5,6]. However, these are not the only factors negatively affecting mental health. With the global and local precautions taken in this phase, people have experienced many extraordinary changes and faced various obstacles in daily life. In addition to witnessing deaths and new infections every day, people around the world have suffered psychological, social, and economic consequences. Since the virus is transmitted very quickly from person to person, WHO and national governments have frequently emphasized the importance of keeping physical distance from other people (World Health Organization (2020). Coronavirus Disease (COVID-19) Advice For The Public [online]. Website https://www.who.int/emergencies/diseases/novel-coronavirus-2019/advice-for-public [accessed 29 May 2021]). For this reason, some countries have imposed nationwide lockdowns of varying durations. Even in regions where lockdowns were not legally declared, people began to go out less to protect themselves and their acquaintances from the disease [7]. Being in lockdown, whether legally imposed or voluntary, has had psychological and economic consequences [8]. Many people had to work from home during this period, which may produce labor market inequalities favoring older individuals, males, and individuals with higher education [9]. Moreover, studies found that the gender gap in job satisfaction and productivity widened while working from home.
That is, women reported lower job satisfaction and productivity than men [10]. This gap is consistent with societal expectations of the different genders: while men can devote time to their work at home much as before, women are expected to fulfill responsibilities related to housework and children in addition to continuing their jobs. Working from home, economic difficulties, the gender gap between partners, and spending much more time at home with family members are new and unusual situations that have emerged because of the COVID-19 pandemic. Relationships between family members may therefore be negatively affected by these new situations. For example, studies indicate that domestic violence has increased during the pandemic, implying a higher risk for children and their mothers (End Violence Against Children (2020). Protecting Children During COVID-19 [online]. Website https://www.end-violence.org/protecting-children-during-covid-19-outbreak [accessed 29 May 2021]) [11,12]. Experiencing COVID-19-related difficulties may provoke dyadic hostility and withdrawal while decreasing supportive responses between partners. This can negatively influence couples' relationship quality by placing additional stress on preexisting problems (e.g., extended family issues, social status) and individual vulnerabilities (e.g., insecure attachment style) [13].

Education from kindergarten to university has undergone major transformations during the COVID-19 pandemic. Some countries closed all educational institutions and have been delivering education online as a precaution to contain the spread of the infection (UNESCO (2021). Education: From Disruption To Recovery [online]. Website https://en.unesco.org/covid19/educationresponse#durationschoolclosures [accessed 30 May 2021]). These education-related closures have affected more than 90% of the student population worldwide (United Nations (2020). Policy Brief: Education During COVID-19 and Beyond [online]. Website https://www.un.org/development/desa/dspd/wp-content/uploads/sites/22/2020/08/sg_policy_brief_covid-19_and_education_august_2020.pdf [accessed 29 May 2021]). Difficulty accessing technological tools (e.g., tablets, internet) during remote education, the lack of face-to-face interaction with instructors, and the absence of educational and social stimulation have made students more vulnerable to the negative consequences of the pandemic and increased the risk of school drop-out (OECD Policy Responses to Coronavirus (COVID-19) (2020). The Impact of COVID-19 on Student Equity and Inclusion: Supporting Vulnerable Students during School Closures and School Re-openings [online]. Website https://www.oecd.org/coronavirus/policy-responses/the-impact-of-covid-19-on-student-equity-and-inclusion-supporting-vulnerable-students-during-school-closures-and-school-re-openings-d593b5c8/ [accessed 06 Jun 2021]).

The healthcare sector is another area in which employees have been exhausted both psychologically and physically (World Health Organization (2020). Protecting Health Workers from COVID-19 [online]. Website https://www.who.int/westernpacific/news/feature-stories/detail/protecting-health-workers-from-covid-19 [accessed 04 Jun 2021]). According to a WHO statement in May 2021, it is estimated that more than 150,000 healthcare workers have been infected with the virus during this period [14].
Healthcare workers have also struggled with psychological problems such as depression, anxiety, and stress [15], along with the fear of infecting their loved ones [16]. In addition, people with conditions such as cancer, diabetes, or hearing loss have had difficulty accessing health care [17–19], and some patients delayed their treatment out of fear of the COVID-19 virus [20,21]. This fear caused many patients to deliberately downplay their symptoms; as a result, they experienced delayed diagnosis and treatment, a higher risk of complications, and, in some cases, death [22]. Even when patients do go to the hospital, diagnostic procedures and access to technical facilities may be delayed because of special precautions for COVID-19 patients [23]. In summary, many precautions emphasizing the importance of social distancing have been taken worldwide to prevent the spread of COVID-19, but these have inevitably had negative economic and psychological consequences. Many people have had to spend most of their time at home, either voluntarily or compulsorily, to comply with these measures. This situation, combined with economic problems, has negatively affected individuals' physical and mental health and resulted in social isolation. In this extraordinary period, obtaining accurate information about COVID-19 has become even more important for ensuring that society adheres to these measures. This has drawn attention to the communication strategies used by experts when sharing information about the COVID-19 disease to promote public health.

Ensuring that the community complies with the measures recommended by health authorities is of great importance for preventing the spread of the COVID-19 pandemic, and how this information is communicated to the community is crucial for achieving that compliance. This type of communication refers to "the exchange of real-time information, advice and opinions between experts and people facing threats to their health, economic or social well-being" (World Health Organization (n.d.). Risk Communication [online]. Website https://www.who.int/emergencies/risk-communications [accessed 05 Jun 2021]). Given the emotional, cognitive, and behavioral dimensions of individuals' responses to situations that threaten their health or safety [24,25], risk communication during COVID-19 is not just about providing statistical and scientific information to the public. More than a one-way transfer of information, it is a two-way interaction in which experts and the public guide each other [26]. In other words, the behavior of the public may differ according to how the information is provided by experts. Authorities need to observe public reactions to risk communication processes and update their communication strategies accordingly in order to promote public health. Cultural and contextual differences should also be considered when using these communication strategies [27]. Effective risk communication is especially important as a preventive factor for developing countries, because these countries may struggle to cope with increased hospitalization and intensive care demands [28]. Therefore, public implementation of the measures in these countries, guided by experts' explanations, might reduce the spread of COVID-19 and decrease the burden on the health sector.
The dimension emphasized by experts during risk communication influences the public's decision about whether to apply the warnings in their lives. Change can be motivated with factual information such as statistical data and scientific research, or by emphasizing vivid, tangible, and personal elements that appeal to the emotions [29]. Studies have shown that emotional emphasis is more effective than providing factual information when warning people about risky situations [30]. For example, greater fear of the virus predicted positive behavior change, such as social distancing and increased hand washing, during the COVID-19 pandemic [31]. Attention should be paid to the intensity of the emotional framing of COVID-19 warnings; otherwise, they may excessively increase stress and anxiety in the public. It has been observed that people with high fear and anxiety during COVID-19 engage in extreme behaviors such as hoarding toilet paper, medicines, or face masks [32,33]. Thus, experts need to arouse feelings about the risks on the one hand while, at the same time, giving the public a sense of trust [34], which is difficult to balance in a crisis situation. In this case, it is necessary to use an appropriate emotional tone as well as factual information so that the public can trust the experts and policy makers and comply with the suggested precautions.

In addition to how a message is framed during COVID-19, the characteristics of the audience must also be considered. An important aspect of the ability to cope with potentially dangerous events is the desire to obtain information about the threatening situation. Researchers divide people into two categories according to how they approach danger-related information: monitors tend to seek out information and focus on the threatening aspects of a situation, whereas blunters tend to avoid such information [35]. People with a tendency to seek information and attend to threats (monitors) and those who do not seek detailed information about risks (blunters) may react differently to risk communication strategies. One study showed that monitors tended to perceive more risk than blunters when the emotional dimensions of risk were emphasized, but when factual information was given there was no difference in risk perception between the two groups [30]. It can therefore be speculated that emotionally weighted messages combined with factual information would be more effective for certain segments of society during the COVID-19 pandemic. Personality traits are another factor that plays a role in the interpretation of experts' warnings about COVID-19. For example, one study found that people who scored high on the agreeableness and conscientiousness dimensions paid more attention to health messages related to COVID-19 and approved of measures such as social distancing [36]. Considering that individuals pay more attention to persuasive messages in line with their own views [37], it can be expected that people will heed health-related warnings that match their personality traits during a crisis such as the COVID-19 pandemic. For this reason, it is necessary to consider individual differences when preparing such warnings and determining risk communication strategies.
Considering the different personality traits, cognitive skills, and emotional sensitivities present in society, developing varied risk communication methods instead of standard, monotonous messages would increase compliance with the recommended COVID-19 measures. One of the biggest challenges during the COVID-19 pandemic has been trying to control rumors and misinformation about the disease, which increase the perception of uncertainty, pseudoscientific ideas, and conspiracy theories [38]. Rumors and misinformation about the causes, consequences, and treatment of the disease spread very quickly among people, especially via social media [39]. For example, despite the lack of any valid evidence, many people around the world believed that the virus was spread by 5G technologies [40] or that the aim of vaccination was to implant chips in people. In addition, misinformation claiming that certain "miraculous" foods (such as garlic, minerals, or drinkable silver) protect people from the COVID-19 virus was shared on many social media platforms (BBC-Reality Check (2020). Koronavirüs (Covid-19): Virüs ve Hastalık Hakkında Dikkate Almamanız Gereken Hurafeler [online]. Website https://www.bbc.com/turkce/haberler-dunya-51815676 [accessed 06 Jun 2021]). Such misinformation about the disease, treatment options, and vaccines may reduce health-protective behaviors and increase inaccurate practices among the public. Thus, refuting misinformation and rumors is crucial for promoting public health globally. In this way, experts convey integrity to the public and encourage protective attitudes toward the COVID-19 pandemic by increasing positive emotions and perceived efficacy [41]. To disprove rumors, experts must first have the latest research findings and accurate information. Cooperation between the mass media and health institutions is crucial for conveying this information to the public accurately and effectively [42]. In addition, rapid detection of nonscientific online content across social media platforms will facilitate intervention.

The COVID-19 outbreak and related outcomes, such as fear and anxiety about getting infected, social isolation, and other restrictive, life-changing precautions, have negatively affected the well-being of individuals. The World Happiness Report [43] documented an increase in different forms of "negative affect" (stress, worry, and sadness) across a global sample of countries. In the context of the COVID-19 outbreak and the deterioration in well-being, mental health problems have become one of the ultimate tolls of the pandemic. Psychological responses to the COVID-19 outbreak can be thought of as lying on a spectrum ranging from indifference to extreme anxiety. Initial normal reactions to this abnormal, life-changing situation may show themselves as heightened stress, fear and anxiety, increased depressive symptoms and substance use, sleep problems, loss of control, and feelings of hopelessness, helplessness, and loneliness [44]. Data from around the world indicated that the initial reaction to expected lockdowns was an increase in psychological distress and mental illness [45,46]. However, the trajectory of mental health effects has been changing as the course of the COVID-19 outbreak and the associated restrictions change. Supportively, longitudinal data collected during lockdown showed a slow recovery in depression and anxiety symptoms a few weeks after the lockdown [47].
Thus, a moderate level of fear or anxiety may help individuals cope with threats in the context of COVID-19 [48] without clinically endangering mental health. In contrast, clinically significant mental health problems, that is, severe and persistent emotional responses, can emerge depending on personal vulnerabilities and other circumstances surrounding the COVID-19 outbreak. Depression, anxiety, stress, sleep problems, obsessive-compulsive disorder (OCD)-related conditions, health anxiety, and posttraumatic stress disorder (PTSD) have been commonly researched mental health problems during the current pandemic. In the following sections, we first review recent data on clinically significant levels of depression and anxiety, followed by data on OCD-related conditions and health anxiety. Finally, we review the available data on PTSD, which is a particular risk for patients who have recovered from intensive care and for healthcare workers during the COVID-19 outbreak. We present systematic review and meta-analysis findings where possible.

4.1. Depression, anxiety, and sleep problems

The meta-analysis of Bueno-Navitol et al. [49] indicated that the rate of clinically significant depression in the general population increased considerably during the outbreak, with even higher rates than in other pandemics in history. The comprehensive meta-analysis of Cénat et al. [50], including data from 68 independent samples, demonstrated that women and men affected by the COVID-19 outbreak had higher rates of depression, anxiety, panic disorder, and insomnia than before the outbreak. The global cross-sectional study of Varma et al. [51], including data from more than 60 countries, showed that even though the number of cases varied across countries, high rates of stress, state anxiety, depression, and poor sleep were consistent across the globe. In this global cross-sectional study, sleep quality and loneliness were important vulnerability factors for poorer mental health. The meta-analysis of Chang et al. [52] examined studies on depression and anxiety among college student samples from different countries: depressive and anxiety symptom prevalence was highest for American students, anxiety symptom prevalence was lowest for Chinese students, and depression symptom prevalence was lowest for Turkish students. These varying prevalence rates among countries may be partially explained by government precautions. Supportively, the meta-analysis of Lee et al. [53] reported that fast and strict government precautions to control the spread of COVID-19 reduced mental health problems. This is meaningful in the sense that physical and psychological well-being cannot be separated: the reduced uncertainty and increased feelings of safety produced by government precautions may support mental health during COVID-19. These meta-analytic findings generally indicate that individuals globally report higher rates of clinically significant depression and anxiety, although the rates may show some differences across countries.

In terms of depression, anxiety, and sleep problems, individuals who have survived severe COVID-19 infection, healthcare workers, and pregnant women deserve special attention. The meta-analysis of Liu et al. [54] documented that depression, anxiety, and insomnia symptoms were highly prevalent in COVID-19 patients, even more so than in the general population affected by the outbreak.
In this meta-analysis, insomnia symptoms and female gender came to the forefront. In an umbrella review of meta-analyses on healthcare workers by Sahebi et al. [55], the summary of findings highlighted the vulnerability of medical staff to depression and anxiety due to workload, burnout, and the fear of contracting the disease, as well as stigmatization and isolation from loved ones. The rapid review and meta-analysis by Thomfohr-Madsen et al. [56] documented that pregnant women experienced elevated levels of depression and anxiety during COVID-19. Specifically, anxiety levels in pregnant women tended to increase as the pandemic went on, probably because of chronic stress and ongoing uncertainty. Pregnant women in Europe and North America were more vulnerable to pregnancy anxiety than women in Asia [56]. Thus, the risk of clinically significant depression and anxiety, as well as sleep problems, seems to be heightened in different segments of society during COVID-19. Currently, people are dealing with fear surrounding the disease, lockdowns, and fluctuations in the numbers of infected people and deaths. Under these uncontrollable conditions, society feels a lack of control and faces financial difficulties and uncertainty about the future, all of which may explain the increase in depression and anxiety rates during the new coronavirus outbreak.

4.2. OCD and health anxiety

The COVID-19 pandemic may be specifically threatening for individuals with a predisposition to OCD and health anxiety. Psychologically, the uncertainty and unpredictability of the disease and related conditions must be tolerated alongside behavioral precautions such as frequent hand washing and use of disinfectants. The uncertainty about the disease and the behavioral precautions during the outbreak may exacerbate pre-existing OCD and health anxiety or contribute to the development of these mental health problems. To a certain degree, healthy individuals feel fear and anxiety about getting infected and transmitting the infection to loved ones. In this context, however, individuals with a predisposition to OCD and health anxiety may experience extreme stress because of the nature of their mental health problems [57,58]. Taylor et al. [59] developed the COVID-19 stress scale, which measures pandemic-related stress in different domains: (a) danger and contamination fears, (b) fears about economic consequences, (c) xenophobia, (d) compulsive checking and reassurance seeking, and (e) traumatic stress symptoms related to COVID-19. In the initial scale development study, OCD symptoms and health anxiety were associated with COVID-19-related stress. More recent examinations also corroborated that OCD symptoms, health anxiety, and fear of COVID-19 were all correlated [60]. Although results have been mixed, subsequent studies have mostly supported the initial formulations of Taylor et al. [59]. The study by Acenowr and Coles [61] found that greater severity of OCD symptoms and OC-related intrusive thoughts was associated with greater severity of, and distress from, COVID-19-related intrusive thoughts. The study by Fontenelle et al. [62] further showed that being female, the number of COVID-19-related stressful events, and pre-existing fear of harm and symmetry symptoms before the outbreak predicted OCD symptoms during COVID-19, with the exception of fear of contamination and washing.
Also, the rates of clinically significant OCD, hoarding disorder (HD), and skin picking disorder (SPD) escalated, and already existing OCD, HD, SPD, and hair pulling worsened after the onset of the pandemic. Another study, by Khosravani et al. [63], documented increases in all OCD dimensions, namely contamination, responsibility for harm, unacceptable thoughts, and symmetry. In contrast, Wheaton, Ward, et al. [64] found that contamination and responsibility-for-harm symptoms worsened more than taboo thoughts or symmetry symptoms after COVID-19. However, the authors also reported considerable variability in responses to COVID-19 among individuals with self-identified OCD: the majority reported a slight worsening, while fewer reported substantial worsening or no change, reflecting heterogeneity in reactions to COVID-19. Mask wearing, hand washing, and physical distancing, which decrease the risk of COVID-19 infection, may explain the development and exacerbation of contamination obsessions and phobias and of OCD symptom severity during the COVID-19 outbreak [65]. Thus, the current literature implies that pre-existing OCD and health anxiety features, especially danger and contamination threat, may heighten the COVID-19 stress syndrome, that is, the fear of the spread of disease. COVID-19 may also increase the rates of OCD-related phenomena, meaning that people may develop new OCD symptoms. Pre-existing OCD symptoms may show slight worsening, but studies are inconsistent as to whether all or only particular OCD dimensions tend to worsen, since individual reactions have varied across the data.

4.3. Posttraumatic stress disorder (PTSD)

The novel coronavirus, SARS-CoV-2, causes COVID-19, a highly infectious disease that has caused deaths and continues to pose a threat of death. COVID-19 can therefore be considered a traumatic event, probably more so for coronavirus survivors and healthcare workers. A meta-analysis found that three in ten coronavirus survivors, two in ten healthcare workers, and one in ten individuals in the general population reported a PTSD diagnosis or PTSD-related symptoms [66]. This meta-analysis, which included studies on SARS, MERS, and COVID-19, provided data on the increase in PTSD prevalence during outbreaks. The increase in PTSD-related conditions during outbreaks has also been replicated by research specific to COVID-19 [50]. During the recent coronavirus pandemic, females, people with poor sleep quality and/or living in widely affected regions, people who became infected, and/or people working with COVID-19 patients were more vulnerable to PTSD [67]. Thus, PTSD may be one of the mental health tolls of the COVID-19 outbreak, one that may also increase substance use problems and suicide risk.

4.4. Who is vulnerable to mental health problems when facing the COVID-19?

4.4.1. Age, gender, and pre-existing mental health problems

In general, younger age (compared with middle or older age) and being female in the Middle East and the West constituted vulnerability factors for mental health problems across global studies [51,68–72]. These results might be explained by increased feelings of loneliness in the context of COVID-19, social isolation, and financial difficulties among poorer young people [51], as well as by demanding cultural gender roles for women in the Middle East and some regions of the West.
Clinical and empirical studies during COVID-19 commonly support the view that individuals with pre-existing mental health problems are more prone to the negative mental health effects of COVID-19 [57,62,73]. Owing to pre-existing cognitive vulnerabilities such as overestimation of threat, an inflated sense of responsibility, intolerance of uncertainty, and over-importance of thoughts and their control, COVID-19 was more likely to result in negative reactions to the pandemic and in the exacerbation or acquisition of anxiety-related disorders [57]. In the high-disease-risk context of COVID-19, pre-existing OCD-related problems worsened, with patterns of greater disability, affective problems, and reduced quality of life [62]. Individuals with a pre-existing mental health problem before the pandemic showed a trajectory of more severe depression and anxiety symptoms during the pandemic [73]. In addition to fear of the COVID-19 disease, anxiety, and isolation, limited access to mental health services may explain the exacerbation of mental health problems among individuals with pre-existing conditions. Thus, these individuals are more vulnerable to extreme negative pandemic reactions, exacerbation of symptoms, and relapse in the COVID-19 context.

4.4.2. Personality and intolerance to uncertainty

To the authors' knowledge, studies examining the effect of personality on clinically significant mental health problems during COVID-19 have been limited. Available research has addressed the role of neuroticism, psychoticism, and detachment in the mental health problems associated with the COVID-19 outbreak. In the earlier study of Mazza et al. [74], negative affectivity and detachment were associated with internalizing symptoms assessed during the COVID-19 lockdown. In the study of Nikčević et al. [75], neuroticism was found to be related to COVID-19 anxiety, health anxiety, and generalized anxiety and depression symptoms. One caveat of that study is that COVID-19 anxiety was itself associated with anxiety and depression symptoms, independently of neurotic personality features. In the study of Somma et al. [76], neuroticism and psychoticism, assessed at the beginning of the lockdown, significantly predicted internalizing symptoms at the end of the lockdown. Although some personality features seem to be related to mental health problems, they do not appear to fully explain the variance in those problems. Future research should continue to examine personality factors as the course of COVID-19 changes.

Intolerance to uncertainty has commonly been assessed as a vulnerability factor for mental health problems during COVID-19. This dispositional incapacity is an aversive response triggered by the absence of sufficient information and sustained by an inability to endure the perception of uncertainty [77]. Different studies have reported that intolerance to uncertainty was associated with the COVID-19 stress syndrome [59], reduced general well-being [78], and internalizing symptoms [6]. Wheaton, Messner, et al. [60] further indicated that intolerance to uncertainty during COVID-19 explained the associations between OCD symptoms, health anxiety, and fear of the disease. Along with intolerance to uncertainty, other cognitive vulnerabilities such as overestimation of threat and perceptions of low ability to cope were related to mental ill-being during a real COVID-19 disease threat [61]. Furthermore, Han et al.
[79] showed an association between risk perception and emotional distress, and Schmidt et al. [80] an association between cognitive anxiety sensitivity (racing thoughts may be interpreted as losing one's mind) and pandemic-related stress. Thus, maladaptive cognitive-emotional styles seem to be particularly important for mental health during COVID-19.
Mask wearing, hand washing, and physical distancing, which decrease the risk of COVID-19 infection, may explain the development and exacerbation of contamination obsessions and phobias and of OCD symptom severity during the COVID-19 outbreak [65]. Thus, the current literature implies that pre-existing OCD and health anxiety features, especially danger and contamination threat, may heighten the COVID-19 stress syndrome, that is, the fear of the spread of disease. The COVID-19 disease may also result in an increase in the rates of OCD-related phenomena, meaning that people may develop new OCD symptoms. Pre-existing OCD symptoms may show a slight worsening, but studies are inconsistent as to whether all or only particular OCD dimensions tend to worsen, since individual reactions have varied across the available data. The novel coronavirus, SARS-CoV-2, causes the COVID-19 disease, which is highly infectious, has caused deaths, and continues to pose a threat to life. Therefore, the COVID-19 disease can be considered a traumatic event, probably more so for coronavirus survivors and healthcare workers. A meta-analysis found that three in ten coronavirus survivors, two in ten healthcare workers, and one in ten individuals in the general population reported a PTSD diagnosis or PTSD-related symptoms [66]. This meta-analysis, which included studies on SARS, MERS and the COVID-19, provided data on the increase in the prevalence rates of PTSD during outbreaks. The increase in PTSD-related conditions during outbreaks has also been replicated by research specific to the COVID-19 [50]. During the recent coronavirus pandemic, females, people with poor sleep quality and/or living in widely affected regions, people who got infected, and/or people who work with COVID-19 patients were more vulnerable to PTSD [67]. Thus, PTSD may be one of the mental health tolls of the COVID-19 outbreak, which may in turn increase substance use problems and suicide risk.
4.4.1. Age, gender, and pre-existing mental health problems
In general, younger age compared to middle or older age, and being female in the Middle East and the West, constituted vulnerability factors for mental health problems across global studies [51,68–72]. These results might be explained by the increased feelings of loneliness in the context of the COVID-19, social isolation, and financial difficulties among poorer young people [51], and by demanding cultural gender roles for women in the Middle East and some regions of the West. Clinical and empirical studies during the COVID-19 commonly support that individuals with pre-existing mental health problems were more prone to the negative mental health effects of the COVID-19 [57,62,73]. Due to pre-existing cognitive vulnerabilities of overestimation of threat, inflated sense of responsibility, intolerance of uncertainty, and over-importance of thoughts and their control, the COVID-19 was more likely to result in negative reactions to the pandemic and in the exacerbation or acquisition of anxiety-related disorders [57]. In the high-disease-risk context of the COVID-19, pre-existing OCD-related problems worsened, with patterns of more disability, affective problems, and reduced quality of life [62]. Having a pre-existing mental health problem before the pandemic predicted a trajectory of more severe depression and anxiety symptoms during the pandemic [73].
As well as COVID-19 disease fear, anxiety, and isolation, the limited access to mental health services may explain the exacerbation of mental health problems among individuals with pre-existing mental health problems. Thus, those individuals are more vulnerable to extreme negative pandemic reactions, exacerbation of symptoms, and relapse in the COVID-19 context.
4.4.2. Personality and intolerance to uncertainty
To the authors’ knowledge, studies examining the effect of personality on clinically significant levels of mental health problems during the COVID-19 have been limited. Available research has addressed the role of neuroticism, psychoticism and detachment in the mental health problems associated with the COVID-19 outbreak. In the earlier study of Mazza et al. [74], negative affectivity and detachment were associated with internalizing symptoms assessed during the COVID-19 lockdown. In the study of Nikčević et al. [75], neuroticism was found to be related to COVID-19 anxiety, health anxiety, and generalized anxiety and depression symptoms. One caveat of that study is that COVID-19 anxiety was itself associated with anxiety and depression symptoms, independently of neurotic personality features. In the study of Somma et al. [76], neuroticism and psychoticism, assessed at the beginning of the lockdown, significantly predicted internalizing symptoms at the end of the lockdown. Although some personality features seem to be related to mental health problems, they do not appear to fully explain the variance in those problems. Future research should examine personality factors as the course of the COVID-19 continues to change. Intolerance of uncertainty has commonly been assessed as a vulnerability factor for mental health problems during the COVID-19. This dispositional incapacity is an aversive response triggered by the absence of sufficient information and sustained by an inability to endure the perception of uncertainty [77]. Different studies reported that intolerance of uncertainty was associated with the COVID-19 stress syndrome [59], reduced general well-being [78] and internalizing symptoms [6]. Wheaton, Messner et al. [60] further indicated that intolerance of uncertainty during the COVID-19 explained the associations between OCD symptoms, health anxiety, and fear of the disease. Along with intolerance of uncertainty, other cognitive vulnerabilities such as overestimation of threat and perceived low ability to cope were related to mental ill-being during a real COVID-19 disease threat [61]. Furthermore, Han et al. [79] showed the association between risk perception and emotional distress, and Schmidt et al. [80] between cognitive anxiety sensitivity (racing thoughts may be interpreted as losing one’s mind) and pandemic-related stress. Thus, maladaptive cognitive-emotional styles seem to be particularly important for mental health during the COVID-19 pandemic.
Given that the COVID-19 has been considered a traumatic event, studies conducted by mental health researchers have focused on the effects of this mass trauma on people. In addition to the emergence of negative individual consequences after traumatic events, another possibility is posttraumatic development or growth. The literature supports that traumatic life events not only result in psychological problems such as PTSD, depression and anxiety, but also give people an opportunity to develop in a positive way, for example by improving the quality of their interpersonal relationships. As the negative impacts of the COVID-19 were clarified in the previous section, the following part focuses on the positive effects, defined mostly as “posttraumatic growth” (PTG). PTG is a positive psychological transformation that occurs as a result of struggling with traumatic life events [81]. After a traumatic event that shatters previous perceptions of the world, such as the feeling of being secure and safe, people try to adapt to the “new normal” by building deeper relationships, gaining personal strength, looking for new opportunities, strengthening a sense of spirituality and appreciating life [81]. During the COVID-19, PTG was measured at different time points. PTG data were also collected from different samples, such as students [82], COVID-19 patients, the general population [83] and healthcare workers [84], the last of which was further divided into subgroups based on their closeness to COVID-19 patients. According to the literature, in a study conducted in October 2020 with students who had graduated from high school, about 13% of the students reported PTG, alongside depression and anxiety [82]. Another study, which examined PTG among confirmed COVID-19 patients using an interview method, revealed that the patients reported PTG. Three main growth areas appeared: 1) re-evaluating priorities, such as reviewing their aims and values and being grateful to be alive, 2) developing deeper relationships within their social circle, and 3) personal change, such as gaining maturity and greater awareness of the significance of health [85]. Further research on the general population (40% of whom were healthcare workers), conducted in April 2020, showed that participants as a whole reported moderate to low PTG and that about 28% of the general population exhibited PTSD symptoms. Since all participants showed some PTG, a further analysis was conducted in that study to compare healthcare workers and non-healthcare workers on their PTG scores. In the personal strength dimension, a component of PTG defined as feeling stronger compared to the past, healthcare workers were found to experience significantly more growth than non-healthcare workers. Another comprehensive study investigated PTG among nurses [84]. In parallel with the previous studies, about 40% of participants reported a PTG experience during the COVID-19 pandemic. At the same time, a high percentage of the sample (39.3%) reported trauma, and the respondents also reported a moderate level of emotional exhaustion.
In addition, nurses who worked at hospitals designated to care for patients with the COVID-19, who worked in a critical care unit, or who cared for patients with the COVID-19 expressed significantly higher levels of total PTG than nurses who did not work in these units. In the light of the aforementioned studies, healthcare workers are more inclined to experience PTG alongside more mental health problems, since they have to cope both with COVID-19-related stressors, such as the threat of being infected, and with the fear of spreading the infection to their loved ones [83]. Furthermore, healthcare workers who have worked in isolated hospitals designated for patients with the COVID-19, or who have cared for these patients, encounter vicarious trauma [86], since some of their patients received traumatic treatments and some of them died [83]. In addition to these adversities, health personnel exhibited positive psychological change; however, the relationship with PTG appears to be curvilinear. Therefore, both health organizations and the government should take more advanced precautions for healthcare workers to mitigate the physical and psychological impacts of the COVID-19 pandemic. The ongoing COVID-19 outbreak has had detrimental effects not only on physical but also on psychological health. A substantial body of research highlighted that the pandemic and its restrictions were related to greater levels of depression, anxiety, stress, posttraumatic stress disorder, sleep disturbances, burnout, suicidal ideation, and loneliness. On the other hand, a large-scale study underscored that not everyone was evenly influenced by the COVID-19 outbreak [87]. According to the results of this research, being a woman, being single, being younger, having more children, having received a lower level of education, and residing in a rural area were risk factors during the pandemic. These groups were affected more severely by COVID-19-related stressors and reported greater levels of stress. In addition to the risk factors, protective factors were also identified and listed. Resilience, active coping skills, exercise, social support, self-efficacy, and a stable income are regarded as a protective shield against the effects of the COVID-19 [88-91]. Once the risk and protective factors for the COVID-19 had been determined, various interventions were introduced and/or implemented. Some interventions targeted one specific mental health consequence of the COVID-19, such as loneliness; however, most of them focused on more general mental health problems. For instance, LOVE was a brief intervention designed to improve social connectedness in order to overcome the loneliness widely reported during the COVID-19 [92]. It consists of four stages: list everyone in one’s life, organize them according to their availability/helpfulness, verify the significant others, and engage with these significant others via self-disclosure (the first letters of these four stages form the name of the program: LOVE). Similarly, cognitive behavior therapy for insomnia was tailored to the COVID-19 to mitigate sleep disturbances and improve sleep quality [93]. In addition to these interventions focused on specific elements of the outbreak, mindfulness, yoga, music interventions [94] and cognitive behavior therapy [95] were applied to overall mental health problems such as anxiety and depression. Furthermore, other interventions highlight the importance of enhancing protective factors, since these can buffer adverse psychological distress.
Several seminal studies underscored the concept of resilience, defined as the ability to remain healthy in the presence of stress and adversity [96]. The literature suggests that a higher level of resilience is associated with better mental health [97], higher well-being [88], and greater protection against developing mental health disorders [98]. Thus, research conducted during the outbreak has sought to show how to increase resilience skills in the population [88,99,100]. These studies offer recommendations at the personal (e.g., mindfulness, implementing active coping skills), organizational (e.g., supplying protective equipment and relaxation activities), and environmental levels (e.g., room settings). The interventions also pointed to emotion regulation strategies [101], improving self-compassion and gratitude [102,103], and stress-related coping mechanisms [95]. Because of the restrictions during the pandemic, most psychological interventions have taken place on the internet, so researchers have also shifted their attention towards online therapy processes. By its nature, the pandemic has affected everyone around the world, yet sources of psychological support are scarce worldwide. In this regard, online brief therapies, even single-session interventions, have gained importance in the field of psychology. Researchers studying online therapies found that single-session intervention programs [103], as well as other brief therapies/interventions [104], alleviated the psychological impact of the pandemic. These preliminary results are very promising. Since December 2019, unfortunately, the COVID-19 infection has spread throughout the world and has had detrimental consequences for mental health through different pathways [43]. As a first pathway to mental health problems, individuals experience health-related anxieties, that is, the fear and anxiety of the COVID-19 disease, which can result in debilitating physical symptoms, hospitalization, and threats of losing loved ones after contagion. The second pathway may be financial worries and stress, such as losing jobs and reductions in payments, to which disadvantaged socio-economic classes of society may be more vulnerable. The third pathway may be changes in living conditions at home, such as meeting the demands of working and parenting at the same time. The fourth pathway is related to the mental health effects of lockdown, due to the loss of, or limited access to, fulfilling activities. This extraordinary period has made it more crucial to communicate accurate information about the COVID-19 globally, in a way that is sensitive to public mental health. Uncertainty and unpredictability, personal distress, fear and anxiety, and health-related worries have increased during the COVID-19 outbreak. These circumstances constitute a vulnerability context for mental health problems. The summary of implications from the present narrative review is presented in the Figure. Immense empirical research on the mental health outcomes of the COVID-19 has proliferated. Research should continue to examine mental health outcomes with regard to longitudinal and persistent effects. The majority of research used various self-report measures to determine clinical risk and to arrive at conclusions without diagnostic assessments, and some of the studies included in systematic reviews and meta-analyses lacked adequate quality. The reviewed findings should be interpreted with these limitations in mind.
Mental health research based on rigorous methods should be one of the future aims of COVID-19 research. Nevertheless, there is enough research evidence that national and international community health agendas should put mental health issues at the center. Clinically significant mental health problems in the context of the COVID-19 need intervention and professional psychological support. Besides the detrimental effects of traumatic events, positive mental health outcomes, such as improved relationships, can also be observed. This phenomenon is known as PTG, and it can also be regarded as a protective factor. Thus, PTG can be included in brief intervention programs. The COVID-19 pandemic has affected many people in different ways. Hence, tailored interventions should be applied after the risk and protective factors have been determined in detail. In the psychology literature, resilience is accepted as a skill that can be developed through interventions [105]. Therefore, building and maintaining this skill in the general population is highly recommended during these days. Furthermore, online brief or single-session therapies and interventions have been gaining popularity in these times of mass trauma. Research investigating single-session interventions on mental health has supported various psychological effects. For instance, a study examining the psychological effects of online single-session interventions (such as psycho-education and anxiety management) showed that participants who received the intervention had a decrease in anxiety symptoms and negative affect, while there was no difference in positive affect and well-being [104]. Another single-session intervention study, which was designed to use behavioral activation, gratitude, and cognitive restructuring techniques, revealed an increase in the well-being of the participants [103]. In the light of the mentioned studies, participants improved in different areas according to the method targeted in the intervention programs. In future studies, brief interventions may address the negative consequences of the COVID-19 in the first part of the session(s) and the positive effects in the second part, or vice versa. This is a review article; therefore, the study protocol did not receive institutional review board approval. Since no investigation was conducted with humans, no informed consent was required.
Primary Immunodeficiencies in Russia: Data From the National Registry
Primary immunodeficiencies (PID)—also referred to as “inborn errors of immunity”—are rare disorders characterized by susceptibility to infection and a preponderance of autoimmunity, allergy, autoinflammation, and malignancies. According to the latest update of the International Union of Immunological Societies Experts Committee (IUIS) classification, germline mutations in 430 genes cause 404 distinct phenotypes of immunological diseases, divided into 10 groups according to the type of immunological defect. Wide introduction of the molecular genetic techniques, including next-generation sequencing (NGS) , has led to the description of novel PID genes. This allows for a more precise assessment of clinical prognosis and for the choice of targeted therapy—or even gene therapy—as well as for family counseling . Generally, PID are described as rare diseases. Yet their reported prevalence varies greatly in different countries, depending on many factors: from data collection methodology to objective epidemiological features. In European countries, the estimated prevalence of PID ranges from 2.7/100,000 in Germany, to 4.16-5.9/100,000 in Switzerland and the United Kingdom (UK), to 8/100,000 in France . These numbers are in the range of the "orphan diseases" category. Yet recent findings, in patients with mendelian susceptibility to mycobacterial diseases (MSMD) , suggest that the actual prevalence is much higher. National PID registries , along with registries combining data for geographical regions , have proven to be an important tool for assessing the clinical and epidemiological features of PID—as well as an instrument for facilitating PID collaboration and research, both within and between countries. Several PID cohort study reports from Russia have been published recently, yet little has been known about the overall epidemiological features of PID in the heterogeneous Russian population. The aim of this study is to describe PID epidemiology in Russia, using a national registry. Registry Structure The Russian PID registry was established in 2017, as an initiative of the National Association of Experts in PID (NAEPID)—a non-profit organization facilitating collaboration amongst leading specialists in the field of primary immunodeficiencies in Russia. The registry is a secure on-line database, developed, and designed with the aim of collecting epidemiological, clinical, and genetic data of PID in Russia. It includes demographic data, clinical and laboratory details, molecular diagnosis, and treatment aspects of PID patients of all ages. Regular information updates allow for the collection of prospective data. The data is entered via an online registry form only; no paper-based documentation is needed. A group of trained managers at federal centers and doctors at regional hospitals enter the data in the database. This article analyzes the data input into the registry from its inception until February 1, 2020. At the time of the data analysis, PID variants were grouped according to the IUIS 2015–2017 classification and did not include the newly added category of bone marrow failure . The database structure includes the following obligatory fields: demographic data, family history, diagnosis, genetic testing results, and ages of disease onset and diagnosis. The extended universal fields—including detailed clinical description and treatment data—are not mandatory at the time of the first registry of a patient, but are eventually requested. 
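To make the split between obligatory and extended fields more concrete, below is a minimal sketch of what a single registry entry could look like as a data structure. The class name PIDRecord, the field names, and their types are illustrative assumptions based on the description above; they are not the actual NAEPID database schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class PIDRecord:
    """Illustrative sketch of one registry entry (not the actual NAEPID schema)."""
    # Obligatory fields, as described above
    patient_id: str
    sex: str                        # demographic data
    date_of_birth: date
    family_history: str
    diagnosis: str                  # PID diagnosis per ESID criteria
    genetic_result: Optional[str]   # confirmed molecular defect, if any
    date_of_onset: date
    date_of_diagnosis: date
    # Extended fields, not mandatory at first registration
    clinical_details: dict = field(default_factory=dict)
    treatment: dict = field(default_factory=dict)
    last_update: Optional[date] = None
    status: str = "alive"           # "alive", "deceased", or "lost to follow-up"
```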
New entries are reviewed automatically, and no duplicate entries can be created. Human-factor errors are prevented by built-in quality assurance measures. Patients can only be registered if the documenting center is part of the registry's collaborative team. Written informed consent is given by all registered patients or their legal guardians. Regularly updated reports on PID epidemiological data are published on the NAEPID Registry website http://naepid-reg.ru . Registry Platform The software platform used in the study was developed by Rosmed.info, using the PHP programming language. For database management, the Maria DB relational system (offshoot of the MySQL system) was utilized. Server Version: the 10.1.40-Maria DB Server and replication mechanism were used for back-up and improved performance; the server's contour and physical protection were compliant with Russian law regarding personal information protection. Centers Russia is divided into 85 regions, which are grouped geographically into eight federal districts. Data on the PID patients residing in 83 of the federal regions has been accumulated in the registry, with the input of regional and tertiary centers. No patients residing in the other two regions (Chukotka and Tuva) were registered in the database. At the time of analysis, 69 regional medical centers and 5 university clinics—located in all 8 federal districts—have contributed to the collaborative work. Three tertiary immunology centers located in Moscow serve as the main reference centers. The diagnosis of the majority of the patients (2,488/2,728, 91%) has been confirmed in at least one of the tertiary centers. Patients PID diagnosis was made according to the ESID diagnostic criteria . Patients with secondary immune defects were excluded. Although the registry collects data on all PID, 233 patients with selective IgA deficiency, and 106 patients with PFAPA (periodic fever, aphthous stomatitis, pharyngitis, adenitis) were not included in the current analysis. The entire cohort of patients (2,728) was included in the epidemiological analysis—while, for the treatment description, we used only the updated information available for the 1,851 alive patients. Genetic testing has been performed using the main molecular techniques, including Sanger sequencing, targeted next-generation sequencing (NGS), whole-exome and whole-genome sequencing, fluorescent in situ hybridization (FISH), multiplex ligation-dependent probe amplification (MLPA), and chromosomal microarray analysis (CMA), according to standard protocols. Data Verification All data entered into the registry undergoes automatic verification for typing errors and is regularly checked by the database monitor for consistency and completeness. Terminology and Definitions The actual age distribution was calculated only for the patients with updated information; the age of each patient was determined as the difference between their date of birth and the date of the last update. Patients without any contact within the last 2 years were marked as “lost to follow-up.” The diagnostic delay was estimated for all registered patients, in the nine most common PID categories, as the difference between the date of disease onset and the date of clinical diagnosis of PID. Prevalence was estimated as the number of all registered PID cases, divided by the population of Russia or of each federal district; information was obtained from open resources . 
Incidence was estimated as the number of new PID cases diagnosed during each year, divided by the number of live births during that year in Russia; information was obtained from open resources. Prevalence and incidence were expressed as the number of cases per 100,000 people. Mortality rate, expressed in percentage, was estimated as the number of deceased patients divided by the number of all updated PID cases; lost-to-follow-up patients were excluded. The category of “fully recovered” was not available at the time of analysis. Patients from birth to 17 years, 11 months, and 29 days were counted as children. The rest were considered adults. Statistical Analysis Demographic and epidemiological characteristics were described as average for the categorical variables, and median and range for the quantitative variables. To compare the prevalence of the diseases, the chi-squared test was used and a p -value of <0.05 was considered statistically significant. The average immunoglobulin (IG) dose was expressed as mean ± standard deviation. Statistical analysis was performed using XLSTAT Software (Addinsoft).
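As a hedged illustration of how the definitions in this section translate into calculations, the sketch below computes diagnostic delay, prevalence, incidence, mortality rate, and a chi-squared comparison of prevalence between two districts. The function names and all numeric inputs are assumptions for demonstration only, not the registry's actual data, and the chi-squared call assumes scipy is available (the authors report using XLSTAT).

```python
from datetime import date
from statistics import median
from scipy.stats import chi2_contingency  # assumed available; the study used XLSTAT

def diagnostic_delay_years(onset: date, diagnosis: date) -> float:
    """Time from disease onset to clinical PID diagnosis, in years."""
    return (diagnosis - onset).days / 365.25

def prevalence_per_100k(registered_cases: int, population: int) -> float:
    """Registered PID cases divided by the population, per 100,000 people."""
    return registered_cases / population * 100_000

def incidence_per_100k_births(new_cases: int, live_births: int) -> float:
    """New PID diagnoses in a given year per 100,000 live births that year."""
    return new_cases / live_births * 100_000

def mortality_rate_percent(deceased: int, alive: int) -> float:
    """Deceased patients over all updated cases (lost to follow-up excluded), in %."""
    return deceased / (deceased + alive) * 100

# Purely illustrative inputs, not the registry's figures:
delays = [diagnostic_delay_years(date(2015, 3, 1), date(2017, 6, 1)),
          diagnostic_delay_years(date(2018, 1, 1), date(2018, 5, 1))]
print(median(delays))                             # median diagnostic delay, years
print(prevalence_per_100k(2_000, 150_000_000))    # ~1.3 per 100,000 people
print(incidence_per_100k_births(90, 1_500_000))   # ~6 per 100,000 live births
print(mortality_rate_percent(100, 900))           # 10.0%

# Chi-squared comparison of prevalence between two hypothetical districts
# (cases vs. non-cases in each district):
table = [[410, 14_600_000 - 410],
         [120, 13_000_000 - 120]]
chi2, p_value, dof, _ = chi2_contingency(table)
print(chi2, p_value)
```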
Demographics and PID Distribution Information on 2,728 PID patients was available for analysis. Of these patients, 1,851 (68%) were marked as alive and 200 (7%) as dead. The remaining 677 (25%) were not updated during the last year or were lost to follow-up. The male-to-female ratio was 1.5:1, with 1,657 male patients (60%) and 1,071 female (40%). Of the 1,851 living patients, 1,426 (77%) were children, and 425 (23%) were adults. The majority of the children (913 of 1,426, 64%) were under 10 years old. The male-to-female ratio varied from 2:1 in children, to 1:1 in the group of adults under the age of 30 and 0.4:1 in the older patients . PID was diagnosed before the age of 18 years (in childhood) in 2,192 patients (88%), predominantly in the first 5 years of life (1,356, 54%; ). The distribution of patients among the main PID groups varied greatly between children and adults. All forms of PID were observed in children and in young adults (under the age of 25 years). Yet the majority of older patients belonged to just two categories—common variable immunodeficiency (CVID) and hereditary angioedema (HAE). Overall, primary antibody deficiencies (PAD; 699; 26%) and syndromic PID (591; 22%) were the most common disorders in Russia. These were followed by five PID groups, in similar proportions: complement deficiencies (342; 12%), phagocytic defects (262; 10%), combined T and B cell defects (368; 13%), autoinflammatory disorders (221; 8%), and immune dysregulation (196; 7%; ). Somatic phenocopies (6; <1%) and defects of innate immunity (43; 1.5%) were very rare. The most frequent PID categories in Russia, which cumulatively accounted for 53% of all registered patients, were: HAE type 1 and 2 ( n = 341), CVID ( n = 317), Wiskott–Aldrich syndrome (WAS; n = 154), X-linked agammaglobulinemia (XLA; n = 155), Chronic granulomatous disease (CGD; n = 135; of them 92 patients with X-linked CGD (X-CGD), Severe combined immunodeficiency (SCID; n = 137; of them 47 patients with X-linked SCID (X-SCID), DiGeorge syndrome (DGS; n = 130), Ataxia-telangiectasia (AT; n = 127) and Nijmegen breakage syndrome (NBS; n = 88; ). To assess mortality, we analyzed the cohort of 2,051 patients whose status was known (including 1,851 alive and 200 deceased patients). The overall mortality rate was estimated at 9.7%. The precise date of death was known for 136 of the 200 deceased patients: 127 (93%) children and 9 (7%) adults . The mortality rate ranged from 2 to 42% in different age groups; the highest rate was found in children in their first 2 years of life . The majority of infant deaths occurred in SCID patients (39 of 48, 81%; ). In the next age group (2–5 years), mortality was highest in the following four PID groups, in almost equal proportions: T and B cell defects (12/38, 32%) and syndromic PID (11/38, 29%), followed by phagocytic defects (7/38, 18%), and immune dysregulation (7/38, 18%). In total, 63% (86/136) of all PID-related deaths occurred in patients within the first 5 years of life.
In older children , mortality was associated predominantly with syndromic PID (55%), immune dysregulation (9%), and PAD (13%)—whereas, in adults, it was associated only with PAD (78%) and HAE (22%; ). Diagnostic Delay Substantial PID diagnostic delay has been noted in Russia—with a median of 2 years for the whole group, but over a broad age range (0–68 years). No difference in diagnostic delay was observed, between patients diagnosed during the last 5 years ( M = 2; 0–63, 997 patients) and before 2015 ( M = 2 years; 0–68, 1,400 patients). Among the most common PID, the shortest diagnostic delay was observed in SCID ( M = 4 months, 0–68), followed by the WAS ( M = 8 months, 0–144), DGS ( M = 10 months, 0–144), and CGD ( M = 1, 0–17 years; ). In X-linked agammaglobulinemia (XLA) patients, time to diagnosis varied greatly—from 0 to 141 months, with a median of 28 months. The DNA repair disorders NBS and AT were diagnosed with a median of 2.5 years (0–23) and 3.0 years (0–14), respectively . The longest diagnostic delay was observed in CVID ( M = 6 years, 0–52) and HAE ( M = 11 years, 0–68; ). Just a few PID patients were diagnosed before the clinical onset of the disease, due to their family history; genetic testing was carried out for each of them. These included seven children with mutations in SERPING1 , two with BTK , one with WAS , and one with JAK3 defects. Genetic diagnosis led to an early start on IVIG therapy in the XLA patients, and to successful HSCT in the WAS and SCID patients. Family History The registry contained 310/2,728 (11%) familial PID cases, originating from 150 families , with the most frequent familial PIDs being HAE, WAS, and XLA. Consanguinity, as reported by the parents, was documented in 45 families. A family history of at least one death suspected to be due to PID was documented for 275 patients. These included infection-related deaths, in 185 cases, and malignancy-related deaths in 49 cases. Epidemiology The minimum overall PID prevalence in the Russian population was estimated at 1.3:100,000 people, with drastic variations among the federal districts (from 0.9 to 2.8 per 100,000; ). The average annual PID incidence was estimated to be 5.7 ± 0.6 in 100,000 live births. This ranged from 4.4 to 7.1:100,000, over the period from 2000 to 2019. During this period, the average number of newly diagnosed PID cases per year increased from 201 to 331 . Prevalence was estimated only for those PIDs frequently found in the adult group and with a low number of deaths registered in the database—CVID and HAE, with 0.22 and 0.23 per 100,000 people, respectively. This represents population frequency rates of 1 case per 430,000–450,000 people. Genetic Defects Genetic testing has been performed for 1,740 patients, with genetic defects confirmed in 1,344 (77%). PID diagnosis has been genetically confirmed in 86% of the children, yet in only 12% of the adults. Disease-causing genetic defects were detected by the following genetic methods: by direct Sanger sequencing in 903 patients (67%) and by next-generation sequencing (NGS) methods in 323 (24%) patients [including targeted panels, in 278; whole exome sequencing (WES), in 30; Clinical exome, in 13; and whole genome sequencing (WGS), in 2]. In the remaining 118 (9%) patients, cytogenetic methods and MLPA were used. Deletion of 22q.11 was confirmed via the FISH method in 80 patients, and by CMA in 26. In 6 cases, various chromosomal abnormalities resulting in syndromic forms of PID were confirmed by CMA. 
Mutations were found in 98 PID genes and in three genes that are not currently included in the PID classification ( NTRK1, SCN9A, XRCC4 ) . As expected, the highest number of genetic defects were found in genes underlying the most frequent “classical” PID: mutations in SERPING1 were found in 178 of 341 HAE cases (52.2%), WAS in 154 (100%) of WAS patients , BTK in 114 of 155 X-LA (73.5%) , CYBB in 98 (73%) of CGD 135 cases , NBN in 75/88 (85%) of NBS patients and ATM in 55/127 (43%) of AT patients. 106/130 DGS patients had del22q.11 confirmed. At least 20 patients (for each disease) had mutations in the following genes: MEFV, MVK, NLRP3, ELANE, SBDS, FAS, STAT3 LOF, IL2RG , and CD40LG . Rare defects, with 4–20 patients for each gene, affected predominantly recently described genes: PSTPIP1, TNFRSF1A, CXCR4, STAT1, CYBA, STXBP2, FOXP3, CTLA4, AIRE, XIAP, SH2D1A, SMARCAL1, RMRP, SPINK5, KMT2D, NFKB1, PIK3CD, PIK3R1, TNFRSF13B, RAG1, RAG2, ADA, ARTEMIS, JAK3, LIG4 , and KRAS . The remaining 57 genes had mutations recorded for single patients . The proportion of patients with genetically confirmed diagnoses was highest among those with syndromic PIDs, reaching 77% (457/591) . Within the phagocytic defect and innate immunity defect groups, 71% (185/262) and 63% (27/43) of the patients, respectively, had a genetic diagnosis. PID genetic confirmation showed about half of all patients in the groups to have immune dysregulation (56%; 109/196), autoinflammatory disorders (49%; 109/221), and complement deficiencies (52%; 179/342)—the last of these due mainly to HAE. The proportion of patients with genetic diagnoses showing T- and B-cell defects was 33% (123/368). The lowest number of patients with verified mutations, at 21% (144/699), was observed in the PAD group ; BTK abnormalities prevailed among them (114/155; 73.5%). Somatic mutations in KRAS and NRAS were confirmed in six patients. The segregation of genetic defects by mode of inheritance was nearly equal: 469 patients (38.4%) with an X-linked (XL) diseases had mutations in 10 genes, 383 (31.4%) patients with autosomal dominant (AD) diseases had mutations in 29 genes, and 369 (30.2%) patients with autosomal recessive (AR) diseases had mutations in 58 genes. In the group of AR PID patients, 218 (59%) had compound heterozygous mutations and 151 (41%) had homozygous mutations; the majority (74; 49%), as expected, were NBS patients with the “Slavic” mutation in the NBN gene . Homozygous mutations were also found in the genes with the known “hot-spots”: MEFV (11; 7%) and AIRE (5; 3%). Another “Slavic” mutation— RAG1 c.256_257delAA p.K86fs, in a compound heterozygous or homozygous state—was reported in 7/16 patients with RAG1 defects, putting this allele frequency at 25%. Testing for prenatal PID diagnosis (PND) was performed in 40 pregnancies among 37 families with previously known PID-causing genetic defects. Embryonic/fetal material was obtained by chorionic villi sampling at 10–12 weeks of gestation in 37 cases; by amniocentesis in the second trimester, in two cases; and by cordocentesis, in one. No serious complications were noted, during or after the procedures. 30/40 embryos were mutation-free. In six cases, a PID diagnosis was given; all families chose to terminate the pregnancies. Four embryos were heterozygous carriers of recessive PID mutations—all these pregnancies were carried to term. Two more sibling heterozygous carriers were born after preimplantation diagnosis. 
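As a small arithmetic aside on the RAG1 figure above: the jump from 7/16 patients to a 25% allele frequency is easiest to see by counting alleles rather than patients. The genotype breakdown used below (one homozygote plus six compound heterozygotes) is an assumption chosen purely to illustrate how a 25% figure can arise; the registry's actual breakdown is not stated here.

```python
# Allele frequency = mutant alleles / total alleles (2 alleles per patient).
# Illustrative breakdown only: 1 homozygote (2 mutant alleles) and
# 6 compound heterozygotes (1 mutant allele each) among 16 RAG1-deficient patients.
patients = 16
homozygous_carriers = 1
heterozygous_carriers = 6

mutant_alleles = 2 * homozygous_carriers + 1 * heterozygous_carriers  # 8
total_alleles = 2 * patients                                          # 32
print(mutant_alleles / total_alleles)  # 0.25 -> the reported 25%
```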
Symptomatic Treatment Treatment of PID symptoms, as documented in the registry, has been divided into three categories: immunoglobulin (IG) substitution, biologicals, and “other.” There was updated information for 1,622 patients, regarding prescribed or on-going therapy. Half of the patients (843/1,622, 52%) received IG substitution. Of these, only 32 patients (4%) have ever had an experience with subcutaneous IG (SCIG); all others received intravenous IG (IVIG), with an average dose of 0.46 ± 0.09 g/kg per month. Regular IG substitution therapy was recorded in 279/369 patients (76%) with syndromic PID, in 296/433 (68%) PAD patients and in 173/270 patients (64%) with combined PID. At least single (but not regular) IG use was recorded for 15/29 patients (52%) with defects of innate immunity, 61/124 patients (49%) with immune dysregulation, 49/172 patients (28%) with phagocytosis defects, and 25/171 patients (15%) with autoinflammatory disorders. 414/1,622 (25%) patients were treated with various biological drugs. Updated information was available for 91 HAE patients, of whom 70/91 (77%) received either a C1 inhibitor or a selective antagonist of bradykinin receptors during attacks, including 51 patients who had experience with both drugs. In other PIDs, the rate of biological treatment was highest in the group of patients with autoinflammatory disorders: 86/186 (46%). This was followed by the group of immune dysregulation, with 48/134 (36%); and of combined PID, with and without syndromic features: 63/405 (16%) and 27/242 (18%), respectively. Patients with disorders of innate immunity and PAD were treated with biologicals only, in 3/32 (9%) and in 43/453 (6%) cases, respectively. Curative Therapies Three patients in the cohort underwent gene therapy for WAS; all are currently alive. Information was available for 342/2,728 (16%) patients who underwent HSCT. Of these, 60 were deceased, 228 alive and 54 had not been updated during the prior 2 years . All transplanted patients were diagnosed with PID as children. Yet, in 5/342, HSCT was performed after 18 years of age. HSCT has been performed in 106/591 (18%) patients with PIDs with syndromic features (18% of all syndromic PIDs), including 92/106 (88%) with WAS and 25/88(28%) with NBS; in 111 patients with combined T- and B-cell defects (30% of all CID), including 79/137 SCID (58%); in 66/262(25%) patients with phagocytic defects, including 47/135 CGD (35%) and 14/107 SCN (13%); in 41/196 (21%) patients with immune dysregulation; in 5/699 (0.7%) patients with PAD [four with activated PI3K syndrome (APDS) and 1 with XLA]; in 6/221(3%) patients with autoinflammatory disorders; and in 7/43 (16%) patients with defects of innate immunity.
The distribution of patients among the main PID groups varied greatly between children and adults. All forms of PID were observed in children and in young adults (under the age of 25 years). Yet the majority of older patients belonged to just two categories—common variable immunodeficiency (CVID) and hereditary angioedema (HAE). Overall, primary antibody deficiencies (PAD; 699; 26%) and syndromic PID (591; 22%) were the most common disorders in Russia. These were followed by five PID groups, in similar proportions: complement deficiencies (342; 12%), phagocytic defects (262; 10%), combined T and B cell defects (368; 13%), autoinflammatory disorders (221; 8%), and immune dysregulation (196; 7%; ). Somatic phenocopies (6; <1%) and defects of innate immunity (43; 1.5%) were very rare. The most frequent PID categories in Russia, which cumulatively accounted for 53% of all registered patients, were: HAE type 1 and 2 ( n = 341), CVID ( n = 317), Wiskott–Aldrich syndrome (WAS; n = 154), X-linked agammaglobulinemia (XLA; n = 155), Chronic granulomatous disease (CGD; n = 135; of them 92 patients with X-linked CGD (X-CGD), Severe combined immunodeficiency (SCID; n = 137; of them 47 patients with X-linked SCID (X-SCID), DiGeorge syndrome (DGS; n = 130), Ataxia-telangiectasia (AT; n = 127) and Nijmegen breakage syndrome (NBS; n = 88; ). To assess mortality, we analyzed the cohort of 2,051 patients whose status was known (including 1,851 alive and 200 deceased patients). The overall mortality rate was estimated at 9.7%. The precise date of death was known for 136 of the 200 deceased patients: 127 (93%) children and 9 (7%) adults . The mortality rate ranged from 2 to 42% in different age groups; the highest rate was found in children in their first 2 years of life . The majority of infant deaths occurred in SCID patients (39 of 48, 81%; ). In the next age group (2–5 years), mortality was highest in the following four PID groups, in almost equal proportions: T and B cell defects (12/38, 32%) and syndromic PID (11/38, 29%), followed by phagocytic defects (7/38, 18%), and immune dysregulation (7/38, 18%). In total, 63% (86/136) of all PID-related deaths occurred in patients within the first 5 years of life. In older children , mortality was associated predominantly with syndromic PID (55%), immune dysregulation (9%), and PAD (13%)—whereas, in adults, it was associated only with PAD (78%) and HAE (22%; ). Substantial PID diagnostic delay has been noted in Russia—with a median of 2 years for the whole group, but over a broad age range (0–68 years). No difference in diagnostic delay was observed, between patients diagnosed during the last 5 years ( M = 2; 0–63, 997 patients) and before 2015 ( M = 2 years; 0–68, 1,400 patients). Among the most common PID, the shortest diagnostic delay was observed in SCID ( M = 4 months, 0–68), followed by the WAS ( M = 8 months, 0–144), DGS ( M = 10 months, 0–144), and CGD ( M = 1, 0–17 years; ). In X-linked agammaglobulinemia (XLA) patients, time to diagnosis varied greatly—from 0 to 141 months, with a median of 28 months. The DNA repair disorders NBS and AT were diagnosed with a median of 2.5 years (0–23) and 3.0 years (0–14), respectively . The longest diagnostic delay was observed in CVID ( M = 6 years, 0–52) and HAE ( M = 11 years, 0–68; ). Just a few PID patients were diagnosed before the clinical onset of the disease, due to their family history; genetic testing was carried out for each of them. 
These included seven children with mutations in SERPING1 , two with BTK , one with WAS , and one with JAK3 defects. Genetic diagnosis led to an early start on IVIG therapy in the XLA patients, and to successful HSCT in the WAS and SCID patients. The registry contained 310/2,728 (11%) familial PID cases, originating from 150 families , with the most frequent familial PIDs being HAE, WAS, and XLA. Consanguinity, as reported by the parents, was documented in 45 families. A family history of at least one death suspected to be due to PID was documented for 275 patients. These included infection-related deaths, in 185 cases, and malignancy-related deaths in 49 cases. The minimum overall PID prevalence in the Russian population was estimated at 1.3:100,000 people, with drastic variations among the federal districts (from 0.9 to 2.8 per 100,000; ). The average annual PID incidence was estimated to be 5.7 ± 0.6 in 100,000 live births. This ranged from 4.4 to 7.1:100,000, over the period from 2000 to 2019. During this period, the average number of newly diagnosed PID cases per year increased from 201 to 331 . Prevalence was estimated only for those PIDs frequently found in the adult group and with a low number of deaths registered in the database—CVID and HAE, with 0.22 and 0.23 per 100,000 people, respectively. This represents population frequency rates of 1 case per 430,000–450,000 people. Genetic testing has been performed for 1,740 patients, with genetic defects confirmed in 1,344 (77%). PID diagnosis has been genetically confirmed in 86% of the children, yet in only 12% of the adults. Disease-causing genetic defects were detected by the following genetic methods: by direct Sanger sequencing in 903 patients (67%) and by next-generation sequencing (NGS) methods in 323 (24%) patients [including targeted panels, in 278; whole exome sequencing (WES), in 30; Clinical exome, in 13; and whole genome sequencing (WGS), in 2]. In the remaining 118 (9%) patients, cytogenetic methods and MLPA were used. Deletion of 22q.11 was confirmed via the FISH method in 80 patients, and by CMA in 26. In 6 cases, various chromosomal abnormalities resulting in syndromic forms of PID were confirmed by CMA. Mutations were found in 98 PID genes and in three genes that are not currently included in the PID classification ( NTRK1, SCN9A, XRCC4 ) . As expected, the highest number of genetic defects were found in genes underlying the most frequent “classical” PID: mutations in SERPING1 were found in 178 of 341 HAE cases (52.2%), WAS in 154 (100%) of WAS patients , BTK in 114 of 155 X-LA (73.5%) , CYBB in 98 (73%) of CGD 135 cases , NBN in 75/88 (85%) of NBS patients and ATM in 55/127 (43%) of AT patients. 106/130 DGS patients had del22q.11 confirmed. At least 20 patients (for each disease) had mutations in the following genes: MEFV, MVK, NLRP3, ELANE, SBDS, FAS, STAT3 LOF, IL2RG , and CD40LG . Rare defects, with 4–20 patients for each gene, affected predominantly recently described genes: PSTPIP1, TNFRSF1A, CXCR4, STAT1, CYBA, STXBP2, FOXP3, CTLA4, AIRE, XIAP, SH2D1A, SMARCAL1, RMRP, SPINK5, KMT2D, NFKB1, PIK3CD, PIK3R1, TNFRSF13B, RAG1, RAG2, ADA, ARTEMIS, JAK3, LIG4 , and KRAS . The remaining 57 genes had mutations recorded for single patients . The proportion of patients with genetically confirmed diagnoses was highest among those with syndromic PIDs, reaching 77% (457/591) . Within the phagocytic defect and innate immunity defect groups, 71% (185/262) and 63% (27/43) of the patients, respectively, had a genetic diagnosis. 
In the immune dysregulation (56%; 109/196), autoinflammatory disorder (49%; 109/221), and complement deficiency (52%; 179/342) groups, genetic confirmation was achieved in about half of the patients; in the last of these groups this was due mainly to HAE. In the group of combined T- and B-cell defects, 33% (123/368) of patients had a genetic diagnosis. The lowest proportion of patients with verified mutations, 21% (144/699), was observed in the PAD group; BTK abnormalities prevailed among them (114/155; 73.5%). Somatic mutations in KRAS and NRAS were confirmed in six patients.

The segregation of genetic defects by mode of inheritance was nearly equal: 469 patients (38.4%) with X-linked (XL) diseases had mutations in 10 genes, 383 (31.4%) with autosomal dominant (AD) diseases had mutations in 29 genes, and 369 (30.2%) with autosomal recessive (AR) diseases had mutations in 58 genes. Among the AR PID patients, 218 (59%) had compound heterozygous mutations and 151 (41%) had homozygous mutations; of the latter, the largest subgroup (74; 49%), as expected, were NBS patients with the "Slavic" mutation in the NBN gene. Homozygous mutations were also found in genes with known "hot-spots": MEFV (11; 7%) and AIRE (5; 3%). Another "Slavic" mutation, RAG1 c.256_257delAA p.K86fs, in a compound heterozygous or homozygous state, was reported in 7/16 patients with RAG1 defects, putting this allele frequency at 25%.

Testing for prenatal PID diagnosis (PND) was performed in 40 pregnancies among 37 families with previously known PID-causing genetic defects. Embryonic/fetal material was obtained by chorionic villus sampling at 10–12 weeks of gestation in 37 cases, by amniocentesis in the second trimester in two cases, and by cordocentesis in one. No serious complications were noted during or after the procedures. Thirty of the 40 embryos were mutation-free. In six cases, a PID diagnosis was given; all families chose to terminate the pregnancies. Four embryos were heterozygous carriers of recessive PID mutations, and all of these pregnancies were carried to term. Two more sibling heterozygous carriers were born after preimplantation diagnosis.

Treatment of PID symptoms, as documented in the registry, was divided into three categories: immunoglobulin (IG) substitution, biologicals, and "other." Updated information regarding prescribed or ongoing therapy was available for 1,622 patients. Half of the patients (843/1,622, 52%) received IG substitution. Of these, only 32 patients (4%) had ever received subcutaneous IG (SCIG); all others received intravenous IG (IVIG), with an average dose of 0.46 ± 0.09 g/kg per month. Regular IG substitution therapy was recorded in 279/369 patients (76%) with syndromic PID, in 296/433 (68%) PAD patients, and in 173/270 patients (64%) with combined PID. At least a single (but not regular) IG administration was recorded for 15/29 patients (52%) with defects of innate immunity, 61/124 patients (49%) with immune dysregulation, 49/172 patients (28%) with phagocytosis defects, and 25/171 patients (15%) with autoinflammatory disorders. A total of 414/1,622 (25%) patients were treated with various biological drugs. Updated information was available for 91 HAE patients, of whom 70/91 (77%) received either a C1 inhibitor or a selective antagonist of bradykinin receptors during attacks, including 51 patients who had experience with both drugs. In other PIDs, the rate of biological treatment was highest in the group of patients with autoinflammatory disorders: 86/186 (46%).
This was followed by the group of immune dysregulation, with 48/134 (36%), and by combined PID with and without syndromic features, with 63/405 (16%) and 27/242 (18%), respectively. Patients with disorders of innate immunity and PAD were treated with biologicals in only 3/32 (9%) and 43/453 (6%) of cases, respectively. Three patients in the cohort underwent gene therapy for WAS; all are currently alive.

Information was available for 342/2,728 (16%) patients who underwent HSCT. Of these, 60 were deceased, 228 were alive, and 54 had not been updated during the prior 2 years. All transplanted patients were diagnosed with PID as children, yet in 5/342 HSCT was performed after 18 years of age. HSCT has been performed in 106/591 (18%) patients with PIDs with syndromic features, including 92/106 (88%) with WAS and 25/88 (28%) with NBS; in 111 patients with combined T- and B-cell defects (30% of all CID), including 79/137 SCID (58%); in 66/262 (25%) patients with phagocytic defects, including 47/135 CGD (35%) and 14/107 SCN (13%); in 41/196 (21%) patients with immune dysregulation; in 5/699 (0.7%) patients with PAD [four with activated PI3K syndrome (APDS) and one with XLA]; in 6/221 (3%) patients with autoinflammatory disorders; and in 7/43 (16%) patients with defects of innate immunity.

The current study represents the first attempt to systematically assess clinical and epidemiological data on patients with PID in Russia using the online registry. At the time of analysis, 2,728 PID patients were registered, representing all districts of the country, thus making this study a valid assessment of the PID cohort in Russia. Since reporting patients in the registry was not mandatory for the treating physicians, we expect underreporting of about 15–20% and are therefore able to discuss only the minimum epidemiological characteristics of PID in Russia. Though PID prevalence in the central part of Russia (2.8 per 100,000 people) is comparable to that of most European countries (2–8 per 100,000 people), the overall prevalence of 1.3 per 100,000 is quite low. This reflects significant under-diagnosis, especially in regions with low population density and economic status. The male-to-female ratio in our various age groups does not differ greatly from previous observations, with males predominating amongst children and females in adulthood.

Our study demonstrates a high mortality rate in the Russian PID cohort (as high as 9.8%) compared with the most recently published German and Swiss registries. Yet it is comparable to the 8.6% (641/7,430) in the previously published ESID registry and the 8% (2,232/27,550) provided by the online ESID reporting website. Significantly, more than half of reported PID deaths occurred within the first 5 years of life. This stresses the importance of early PID diagnosis and quick referral to transplant centers, as SCID and other CIDs account for the majority of early PID deaths. In light of these statistics, unrecognized infant PID mortality may contribute significantly to the low prevalence of PID in Russia, as patients die before they are diagnosed with PID. Thus, future introduction of neonatal PID screening utilizing TREC/KREC detection may substantially improve PID verification.

Children represent the majority (77%) of PID patients in the registry. Comparing these data to other registries, where patients over 18 years old represent up to 55% of all PID, we can conclude that adults with PID are the most under-diagnosed category in Russia.
This statement is confirmed by the low proportion of PAD in the Russian registry (21% vs. 56% in the ESID registry). This, in turn, reflects low numbers of patients with CVID, the main PID affecting adults worldwide. The estimated prevalence of CVID in Russia is 0.2 per 100,000 people, whereas in other registries CVID prevalence reaches 1.3 per 100,000 people. In recent years, Russia has developed a relatively good network of pediatric immunologists, yet adult immunologists are scarce. NAEPID and the registry team have an educational and organizational plan aimed at improving adult PID diagnosis and care. The registry will be a good tool to assess the success of this project over the next 5 years.

Combined immunodeficiencies with syndromic features constitute the most prominent PID group in the registry (22%), presumably due to their well-defined phenotypes and the high awareness of these disorders among various medical specialists. Patients with WAS and DGS have the shortest diagnostic delay and the highest proportion of genetic confirmation. Overall, the majority of genetic defects were confirmed in the clinically or analytically well-defined and well-described PID (HAE, WAS, XLA, CGD, and NBS). Most studies likewise report the highest genetic confirmation rates in the group of combined PID, though an Iranian study describes a predominance of genetic defects in the dysregulation group.

The distribution of patients amongst PID groups differs from that of most published registries in other aspects as well. Though PAD are underrepresented, we have relatively large groups of autoinflammatory disorders (AID) and complement defects (predominantly HAE). This is because the Russian PID database collects data on all IUIS-classified PIDs, in contrast with some other countries, where AID cases are followed and reported predominantly by rheumatologists and HAE cases predominantly by allergists. In our registry, HAE patients contribute 12% of all PID cases and have a high rate of genetic confirmation, though diagnostic delay in these cases is still quite high.

Overall, diagnostic delay amongst the predominant forms of PID varied from a median of 4 months in SCID, which is similar to data reported by others, to as long as 141 months in individual XLA patients. Obviously, such long diagnostic delays lead to a number of unrecognized PAD deaths and contribute to the low proportion of humoral deficiencies in the registry. Diagnostic delay amongst NBS patients was shorter (median 2.5 years) than that reported previously in a smaller cohort of Russian NBS patients (median 5.0 years). Yet the range of diagnostic delay is substantial: some patients were diagnosed as teenagers, only after the onset of a malignancy, in spite of continuous follow-up by neurologists. Sadly, despite the increase in PID diagnoses over the last 5 years, there has been no improvement in diagnostic delay. This, yet again, raises the question of neonatal screening. Wider availability of next-generation sequencing methods, which were routinely introduced in Russia only in 2017, may also change this dynamic. Unsurprisingly, 67% of the genetic defects in our cohort were detected via Sanger sequencing, in the most frequent and well-defined PID. A significant proportion of the mutations in recently described genes were confirmed only with the advent of NGS techniques. NGS has allowed us to detect mutations in as many as 80 PID genes, sometimes with only one or a few patients per gene.
The application of NGS to PID diagnosis has revolutionized the field by identifying novel disease-causing genes and allowing for the quick and relatively inexpensive detection of defects therein. Adult PIDs show a substantially lower rate of genetic confirmation than that seen in children. This is partially because genetic defects are often not found in CVID, even using NGS methods. Yet it also reflects the fact that adults are less likely to pay for genetic testing since, in Russia, it is not covered by the state or by medical insurance. As described by others, the majority of genetic defects were found in males, because many of the "old" PID have X-linked inheritance.

In highly consanguineous populations, AR PIDs represent 70–90% of cases. Interestingly, though the Russian population is very heterogeneous, with low numbers of consanguineous marriages (45 families, 1.9%), AR genetic defects comprised 30% of all defects described in the cohort, with 40% of these being homozygous for the respective mutations. This is due to the "founder effect" known to affect several PID genes in the Slavic population. The majority of NBS patients (74; 98.7%) were homozygous for the "Slavic" mutation. A high frequency of the RAG1 c.256_257delAA p.K86fs mutation is also typical for Slavic populations, as previously noted. Other homozygous mutations were reported in patients with defects in the MEFV and AIRE genes, which are known for hot-spot mutations.

Our cohort included a group of patients with large aberrations involving at least one PID gene. Therefore, we conclude that patients with complex phenotypes require not just Sanger sequencing and/or NGS methods, which can only indirectly point to a large aberration, but also cytogenetic methods, including CMA. Moreover, even well-described PIDs like HAE often require a combination of genetic methods, including MLPA, to detect the large deletions frequent in this disease.

Our first analysis of the Russian PID population demonstrates substantial genetic diversity and a high rate of genetic diagnosis confirmation: 49% of all registered patients. This is comparable to the 36–43% genetic confirmation rates in patients from the French and German registries. The importance of genetic defect verification cannot be overestimated, as it influences the overall treatment approach (HSCT vs. conservative treatment) and targeted therapy validation. It is also crucial for prenatal/preimplantation testing, which, if implemented, allows families to have healthy children. This is especially important for families with currently incurable PIDs, like AT and some others.

As previously published, the main treatment strategy for most PID patients (52% in the current study) is regular IG replacement. Additionally, in contrast to European data, the vast majority of patients in Russia are treated with IVIG, with only 4% of patients having experience with subcutaneous IG replacement. Hence, IG substitution in Russia requires systemic modifications, i.e., wider availability of SCIG and home IVIG infusions, neither of which is available at this time. To our knowledge, the Russian PID registry is the first to analyze the use of monoclonal antibodies and other biologics in the treatment of PID symptoms. The proportion of patients treated with this kind of therapy in this cohort is rather high, reaching 25%. Finally, 12% of patients underwent curative treatment, predominantly HSCT, a number comparable to the German registry.
The proportions of transplanted patients with phagocytic disorders and with immune dysregulation were also similar in both registries. Yet, in comparison with the German registry, where one third of all HSCT was performed in CID patients, the predominant HSCT group in Russia consisted of patients with syndromic PIDs (18%). This reflects the significant prevalence of NBS patients, for whom HSCT has been shown to be a successful and safe treatment strategy. In conclusion, the current study has summarized the epidemiological features of PID patients in Russia and highlighted the main challenges for the diagnosis and treatment of patients with PID. As with all other rare disease registries, the Russian PID registry is a powerful tool, not just for data collection but also to help improve PID patient care, especially in the setting of a large country with highly diverse regional features.

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. This study was approved by the ethics committee of the Dmitry Rogachev National Center of Pediatric Hematology, Oncology, and Immunology (approval No 2∋/2-20). All patients or their legal guardians gave written informed consent for participation in the registry. All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Measuring dengue illness intensity: Development and content validity of the dengue virus daily diary (DENV-DD)
Dengue is a mosquito-borne viral illness endemic in more than 100 countries with cyclical epidemics in the Americas, South-East Asia, and Western Pacific regions . In 2019, the World Health Organization (WHO) reported 5.2 million cases of symptomatic dengue globally based on country and regional level passive surveillance systems . However, it is recognized that dengue prevalence is significantly under-reported using passive surveillance methods. The actual number of symptomatic cases is estimated to affect 50–60 million individuals globally per year . Field studies and human infection models typically characterize dengue illness by fever, headache, musculoskeletal pain, fatigue, rash and nausea/vomiting, in line with the WHO definition . There is no specific antiviral treatment for dengue illness and most treatments aim to only alleviate symptoms. Most signs/symptoms of dengue typically resolve within 5–14 days after onset, with the majority of cases being treated in outpatient settings. However, clinically severe illness can develop resulting in hypotension and organ dysfunction from plasma leakage and/or internal bleeding. Failure to manage fluid replacement can lead to shock, multi-organ failure, fluid overload and death . Despite its prevalence, few resources exist for understanding and capturing the patient experience of dengue (signs/symptoms and impacts on health-related quality of life [HRQoL]) throughout the course of illness, i.e. disease burden. Most existing instruments are based on healthcare provider assessment, which can be highly variable between providers, lacking standardization in timepoints and symptom assessment . Standardized patient-completed instruments are needed to adequately characterize disease burden. Recent surveys have sought to establish detailed information regarding symptoms, including intensity and duration. However, a more concise measure prioritizing concepts important to patients is needed for use in large scale or multinational clinical and cohort studies to alleviate completion burden. The Dengue Illness Index Report Card (DII-RC), a 16-item daily diary developed to assess patients’ or caregivers' subjective experience of dengue illness, was identified. However, the DII-RC captures only the presence or absence of commonly observed signs/symptoms (versus illness intensity). The developers acknowledged further adaption should be undertaken to optimize use in clinical studies by following regulatory guidance for clinical outcome assessment (COA) development . Therefore, using the DII-RC as a foundation, a new COA instrument (the Dengue Virus Daily Diary [DENV-DD], consisting of patient- and observer-reported outcome (PRO/ObsRO) instruments) was developed, in line with regulatory guidance to evaluate the trajectory and intensity of dengue-associated signs/symptoms over the course of a patient’s illness, from the patient or caregiver perspective. This study aimed to refine and evaluate the content validity and feasibility of the DENV-DD (PRO and ObsRO instruments) as a first step in its validation to assess dengue illness symptom intensity and burden in infants to adults, for clinical research and real-world use. An overview of the study design is summarized in Fig. . At key stages throughout the research, a scientific committee composed of clinical experts in dengue (AL, BC, KR, ST, TE, [See acknowledgements]) provided input and guidance. 
Initial instrument development The DII-RC underwent a face-validity assessment by COA development experts, including a review of WHO dengue guidelines, to form the draft DENV-DD. An item-refinement meeting was held with the scientific committee to gain consensus on instruction/item wording. Two US English versions were developed, with age bands guided by International Society for Pharmacoeconomics and Outcomes Research recommendations: a PRO for self-completion by adults (aged 18+ years) and older children/adolescents (aged 8–17 years) with a caregiver present (if needed); and an ObsRO for completion by caregivers of young children (aged 1–7 years) with symptomatic dengue. Key differences included: the PRO contained additional pain-related items (e.g. muscle and bone) which caregivers are unlikely to be able to report on; the ObsRO contained items assessing 'sleeping more' and 'feeling grumpy' as observable indicators of fatigue and general illness in children, as well as an 'I don't know' response option for items that cannot be directly observed in very young children. While the DII-RC only captured the presence or absence of symptoms via a dichotomous response scale, for the DENV-DD the response scale was updated to include multiple response options (Fig. ). These are defined according to verbal descriptors and pictorial faces and are designed to capture gradation in symptom intensity and facilitate evaluation of changes in symptom intensity over time (to be assessed during later planned psychometric validation work, in line with best practice guidance). Two global items, assessing dengue illness intensity and daily impact of dengue illness respectively, were also adapted from the DII-RC to capture illness burden on daily life, and an item to record the patient's temperature (assessed by self/caregiver administration using a thermometer) was added. Items were reworded to ensure use of patient-friendly language, and a 24-h recall period was employed. This resulted in a draft 22-item PRO containing 19 symptom items, 2 global items, and 1 temperature item, and a 21-item ObsRO containing 18 symptom items, 2 global items, and 1 temperature item. Both versions were subjected to a translatability assessment (covering Thai, Malay, Indonesian, Vietnamese, Filipino/Tagalog, Traditional Chinese, Korean, and Japanese) to confirm the appropriateness of the wording and the suitability of the concepts for future translations in countries where dengue is endemic. The DENV-DD underwent forward–backward translation into Spanish, using country-neutral words but ensuring cultural relevance for the sample in Peru and Ecuador, and was migrated onto an electronic platform for the interviews. In accordance with best practice for linguistic validation, the draft DENV-DD was discussed with healthy individuals in Iquitos, Peru (prior to qualitative interviews) to ensure terms and concepts aligned with local idioms and the general reading level. Qualitative interviews A non-interventional, cross-sectional, qualitative study was conducted. This involved interviews in two dengue endemic regions (Iquitos, Peru and Machala, Ecuador) with individuals and caregivers of younger children who recently experienced laboratory-confirmed dengue. The regions were selected as they are recognized as key areas of dengue research.
The interviews comprised combined concept elicitation (CE) and cognitive debriefing (CD) activities: CE explored the patient experience of dengue and informed development of a conceptual model and assessment of the conceptual comprehensiveness of the DENV-DD, and CD assessed whether the DENV-DD is understood, relevant and captures all concepts important to patients. Interviews were conducted in two rounds, allowing for modifications and testing of the updated instrument between rounds. All participant-facing study documents and interview guides were translated by certified translators and reviewed by local site investigators to ensure they were culturally sensitive and appropriately adapted to the local dialect and comprehension level. Sample and recruitment Participant recruitment took place between 2018 and 2021, with a 13-month interruption due to the COVID-19 pandemic. Participants were identified through partnership with scientific experts conducting dengue research in Peru and Ecuador. To be eligible, patients had to be 1–65 years of age (inclusive), have a laboratory-confirmed diagnosis of dengue within the last 30 days, been asymptomatic for at least 1 day, and only observed/treated for dengue as an outpatient. Participants were recruited across three age-groups: adults aged 18 years or older, adolescents/older children aged 8–17 years accompanied by their caregiver, and caregivers of younger children aged 1–7 years who had experienced dengue illness. A target of 20 interviews in each age group was expected to be sufficient for achieving ‘concept saturation’ (a point at which no new concepts are likely to emerge with further interviews) . Demographic and clinical information were also collected. All participants were compensated for participation as regionally appropriate. Ethics The study was approved and overseen by the U.S. Naval Medical Research Unit No. 6 Institutional Review Board (IRB) in Peru (NAMRU6.2014.0028) and by central SUNY Upstate IRB (417710) and Luis Vernaza Hospital (HLV-CEISH-2020-005) in Ecuador, in compliance with all applicable federal regulations governing the protection of human subjects. All participants provided written informed consent and/or assent (for children aged 8 years and above only) prior to study-related activities. Interview procedure Interviews were 60-min and conducted face-to-face or via telephone in the participant’s native language by site investigators trained in qualitative interviewing, using a semi-structured interview guide. Older children and adolescents (aged 8–17 years) were interviewed with their caregiver; however, interviewers were encouraged to engage primarily with the child/adolescent. The CE section of the interviews used broad, open-ended questions to facilitate spontaneous, unbiased elicitation of concepts regarding the patient experience of dengue. Focused questions were used if concepts of interest had not emerged or been fully explored. For the CD section, participants were asked to complete the DENV-DD either on paper or electronically on a device using a ‘think aloud’ approach and asked to share their thoughts as they read each instruction/item and selected each response. During both round one and round two interviews, the PRO was debriefed with adult patients aged 18–65 and patients who were older children/adolescents aged 8–17 (with their caregiver). The ObsRO was debriefed with caregivers of patients aged 1–7. 
Participants were asked detailed questions about their interpretation and understanding of instructions/item wording and the recall period, the relevance of concepts, and the appropriateness of response options. Qualitative analysis All interviews were audio-recorded, transcribed verbatim, and translated into English, with identifiable information redacted. The CE section of the transcripts was thematically analyzed using Atlas.ti software. Participant quotes pertaining to signs/symptoms and impacts of dengue were assigned corresponding concept codes in accordance with an agreed coding list. Concept saturation analysis was conducted at the total-sample and age-group levels, examining symptom concepts only, given the focus of the study was on symptom assessment. Transcripts were chronologically grouped into equal sets, with findings from each set iteratively compared. Saturation was deemed to have been achieved if no new spontaneously reported concepts emerged in the final set. For the CD section of the transcripts, dichotomous codes were assigned to each item, instruction, response option(s), and recall period to indicate whether it was understood, relevant, and/or appropriate, and why. Further codes captured any suggested changes. In some instances, caregivers of younger children were incorrectly debriefed using the PRO rather than the ObsRO. To mitigate this, understanding and relevance were extrapolated from PRO items that were similarly worded and conceptually equivalent to ObsRO items (e.g. PRO: "I felt tired", ObsRO: "My child felt tired"); relevance was further informed by responses given during CE. Quantitative assessment An independent, small-scale quantitative assessment was conducted in the context of a dengue human infection model (DHIM; NCT04298138) to generate preliminary evidence of the feasibility of DENV-DD completion throughout illness, instrument performance, and early insight into the trajectory of signs/symptoms over the course of illness. The study was funded by the United States Army and sponsored and executed by the State University of New York, Upstate Medical University (ethical approval obtained from a centralized US IRB, WCG IRB: 20193154). Quantitative assessment study procedures Participants were recruited from non-dengue endemic areas in the Northeast US between August and December 2022 as part of the DHIM. Healthy volunteers were experimentally infected with a single dose of dengue virus serotype 3 (DENV-3): 0.5 ml of 1.4 × 10³ pfu/ml. Study investigators monitored the development of dengue-associated signs/symptoms and tested participants for dengue via polymerase chain reaction (PCR) tests, which were conducted at study visits scheduled throughout the 28-day study period. On day one of experimental infection, participants were provided with printed paper copies of the US-English paper version of the DENV-DD PRO (modified following round one qualitative interviews) and instructed to complete one every day for 28 days. Participant-completed DENV-DD daily entries were reviewed by the study investigators at each study visit. Quantitative analysis Data from the DENV-DD paper diary entries were electronically scanned and then manually entered into a Microsoft Excel database. To ensure accuracy of data entry, 20% of the data was entered by two individuals and then compared; any discrepancies were checked against the scanned version of the diary. Descriptive analyses were conducted on the raw item-level diary ratings to provide early insight into item performance.
Compliance with completing the diary every day for 28 days (throughout an episode of dengue) was assessed by looking at the level of missing data. The item-level response distributions were used to assess the appropriateness of the response options. Alongside the qualitative interview findings, this data was used to support instrument modifications. The data also provided additional insight into the progression of symptoms over the course of illness, including: the timing, duration, and intensity of symptoms. All analyses were pre-specified and detailed in the analysis plan prior to data collection.
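As an illustration of how such compliance and completeness metrics can be derived from daily diary data, the sketch below computes a form-level compliance rate and an item-level completion rate from a long-format table of entries. The column names and toy values are assumptions made for illustration; this is not the study's actual analysis code.

```python
# Illustrative computation of form-level compliance and item-level completion
# from daily diary data. Column names and values are hypothetical.
import pandas as pd

# One row per participant-day-item; 'response' is NaN when the item was skipped,
# and a participant-day is absent from the table when no form was returned.
diary = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2],
    "day":         [1, 1, 2, 2, 1, 1],
    "item":        ["fever", "headache", "fever", "headache", "fever", "headache"],
    "response":    [2, 3, 1, None, 0, 1],
})

n_participants, n_days = 2, 2

# Form-level compliance: proportion of expected participant-days with any entry.
returned_days = diary.groupby(["participant", "day"]).size().shape[0]
form_compliance = returned_days / (n_participants * n_days)

# Item-level completion: proportion of non-missing responses on returned forms.
item_completion = diary["response"].notna().mean()

print(f"form compliance: {form_compliance:.1%}")   # 75.0% in this toy example
print(f"item completion: {item_completion:.1%}")   # 83.3% in this toy example
```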
Qualitative study sample demographic and clinical characteristics A total of 48 participants were interviewed: 20 adults (round 1: n = 13, round 2: n = 7), 20 adolescents/older children with their caregiver (round 1: n = 12, round 2: n = 8), and eight caregivers of children (round 1: n = 6, round 2: n = 2). Participants were recruited from Peru (n = 39/48, 81%) and Ecuador (n = 9/48, 19%). Patients ranged in age from 1 to 53 years and were generally evenly split between female (n = 25/48, 52%) and male (n = 23/48, 48%), although there was a larger proportion of male patients from Ecuador (n = 7/9, 78%) compared to Peru (n = 16/39, 41%) (Table ). Within each age group, there was representation of participants across different school grades/levels of education. From those reported, most patients had DENV-1 (n = 30/33, 91%), and one patient reported having had dengue previously. Only patients from Peru were debriefed on the electronic device. Caregivers ranged in age from 22 to 52 years, and most were the child's mother (n = 25/28, 89%) (Table ). Concept elicitation results Participant experience of dengue The findings from both rounds of CE interviews are summarized in Fig. . Participants reported a total of 59 signs/symptoms, which broadly extended across 10 categories: feverish, gastrointestinal, pain, fatigue, skin, mouth/nose/throat, eye/vision, neurological, bleeding, and other (see Table for example quotes; Spanish translations: Additional file : Table S1). At the symptom-level, fever (n = 48/48, 100%), headache (n = 43/48, 90%), body ache/pain (n = 39/48, 81%), loss of appetite (n = 34/48, 71%), and body weakness (n = 34/48, 71%) were most frequently reported (see Fig. for a breakdown of signs/symptoms reported by participant type and Fig. for a breakdown of signs/symptoms reported either spontaneously or when probed). Total symptom duration ranged from 3 to 15 days, with 5 days (n = 7/35, 20%) and 7 days (n = 8/35, 23%) most frequently reported. Fever (n = 16/37, 43%) and headache (n = 14/37, 38%) were reported to be the most bothersome symptoms. Findings indicated symptom occurrence was transient across an individual's illness, and frequency and intensity of individual symptoms was highly variable across individuals. All participants reported how dengue impacted HRQoL. Impacts to activities of daily living (n = 43/48, 89%), physical functioning (n = 41/48, 85%), sleep (n = 38/48, 79%), emotional wellbeing (n = 37/48, 77%), work/school (n = 22/48, 46%), finances (n = 18/48, 38%), and social functioning (n = 14/48, 29%) were mentioned. Almost all participants (n = 47/48, 98%) discussed using supportive treatments to alleviate dengue symptoms, with paracetamol and increased fluid intake (n = 33/47, 70%, each) most frequently mentioned. Concept saturation Concept saturation was achieved for the total sample, with no relevant concepts emerging in the final set of interviews (grouped into four sets of 12 interviews). At the age-group level, concept saturation was achieved for both the adult and older children/adolescent samples (both independently grouped into four sets of five interviews).
For caregivers of younger children (grouped into four sets of two interviews), three symptom concepts emerged in the final set of interviews, including muscle and back pain, which are core symptoms of dengue , suggesting further interviews may have elicited additional concepts in this age group. Cognitive debriefing of the DENV-DD DENV-DD PRO In round one, adults (n = 13) and older children/adolescents (n = 12) completed and debriefed the 22-item DENV-DD PRO. All 22 items were understood by ≥ 80% participants and most items (18/22) were considered relevant to ≥ 50% participants. Items assessing vomiting, diarrhea, bruising, and taking temperature were considered relevant to < 50% of participants. Most instructions were understood; some participants had difficulty understanding the recall period (n = 7/23, 30%). All participants demonstrated an understanding of the response options (n = 25/25, 100%). Based on round one findings and input from the SC, modifications were made to the DENV-DD PRO item wording and instructions to enhance understanding and relevance. Five items were added to the PRO to assess concepts reported during the interviews not captured by the DENV-DD (weak body, back pain, bad taste, bleeding, dizziness), and one item assessing any treatments taken was included. In round two, adults (n = 7) and older children/adolescents (n = 8) completed and debriefed the updated 28-item DENV-DD PRO. Findings indicated most items (27/28) were understood by ≥ 80% participants. Most items (24/28) were also considered relevant to ≥ 50% participants. Diarrhea, bruising, bleeding, and taking temperature were considered relevant to < 50% of participants. Most instructions were understood; some still had difficulty understanding the re-worded recall period (n = 2/6, 33%). Nearly all participants demonstrated understanding of the response options (n = 14/15, 93%). Additional modifications were made to the DENV-DD PRO based on round two findings and input from the SC, including further updates to the instructions to clarify the recall period. Two items were added to assess concepts reported during the interviews: sore throat and leg pain. DENV-DD ObsRO In round one, six caregivers of younger children were debriefed on the DENV-DD. Four were debriefed on the 21-item ObsRO and two debriefed on the PRO. Most items (16/21) were well-understood by ≥ 80% participants and most (17/21) were considered relevant to ≥ 50% participants. Items assessing diarrhea, scratching, bruising, and red eyes were considered relevant to < 50% of participants. The instructions were broadly understood by all participants asked; most had difficulty understanding the recall period (n = 4/6, 67%). All participants asked demonstrated an understanding of the response options (n = 5/5, 100%). Following round one, modifications to item and instruction wording were made to the DENV-DD ObsRO, mostly to align with changes made to the DENV-DD PRO. Six items were added based on concepts reported in CE: weak body, bad taste, bleeding, dizziness, fever, sore throat, and one item was added to assess any supportive treatments taken. In round two, two caregivers of younger children were interviewed, both of whom were debriefed on the PRO. Most items (25/28) were understood by both participants; the remaining three items were not debriefed with either participant due to interview time constraints/no comparable item being included within the PRO. Most items (18/28) were considered relevant to at least one participant. 
Nausea, diarrhea, bad taste, bruising, bleeding, dizziness, and red eyes were not relevant to participants who were asked. The instructions were broadly understood by both participants; one had difficulty understanding the re-worded recall period. The one participant asked about the response options demonstrated an understanding. Following round two, modifications were made to the DENV-DD ObsRO instruction wording to align with changes to the DENV-DD PRO. Hypothesized conceptual framework Both the DENV-DD PRO (Fig. ) and ObsRO (Fig. ) assess the nine core dengue sign/symptom categories identified from CE interviews, original DII-RC and published literature . The PRO contains four additional pain items (back, leg, muscle, and bone pain), while the ObsRO captures an additional concept of ‘feeling grumpy’. Individual items capturing illness intensity, impact of illness, treatments taken, and temperature are also included in DENV-DD versions. These individual items will likely be scored separately to the DENV-DD total score, with findings from future psychometric validation evaluation informing final item inclusion, scoring domains and algorithms. The global illness intensity and impact of illness items will also be used as anchor measures to support psychometric evaluation of the DENV-DD. Comparison between age groups Experiences of dengue illness across the whole sample were generally very similar. However, adults and older children/adolescents were able to provide highly detailed descriptions of their experience, whereas caregivers of younger children were only able to communicate what they had observed or been told by their child (e.g. fewer caregivers reported their child experienced pain and eye/vision-related symptoms). The similarity in reporting by adults and older children/adolescents during CE, and the fact that no major differences in item understanding and concept relevance were identified between these age-groups during CD, supports use of the PRO in adults and older children/adolescents. The difference in reporting by caregivers supports the need for an ObsRO for assessment of younger children . Quantitative assessment sample demographic and clinical characteristics Nine participants were recruited as part of the DHIM study. Starting on Day 2–4 post-inoculation, all individuals were clinically diagnosed with dengue illness. Participants ranged in age from 22 to 40 years and were generally evenly split between female (n = 5/9, 56%) and male (n = 4/9, 44%). All participants had a high school diploma or higher. Two thirds of participants (n = 6/9, 67%) were hospitalized for 2–5 days (mean: 3.2 days) (Table ). Quantitative assessment results DENV-DD form completion rate was high (89.7%), with three participants completing the DENV-DD every day for 28 days (Additional file : Figure S1). Item-level completion rate, for completed forms, was very high at 99.5% (Additional file : Figure S2), demonstrating feasibility of completing the entire DENV-DD every day. Participants endorsed the full range of response options on the DENV-DD during the 28-day diary completion period with no evidence of ceiling effects. The intensity of individual symptom reporting for each participant peaked between Days 6–13 (Fig. ). Feverish, pain, and fatigue symptoms were experienced earliest following inoculation, followed by skin, mouth/nose/throat and eye/vision symptoms. Average symptom duration was 12 days. 
Back pain and tiredness were experienced for the longest duration (20–24 days), while bad taste and red eyes were experienced for the shortest duration (2–4 days). Of the 24 symptom items assessed in the DENV-DD, most were experienced by at least one participant (21/24). Vomiting, bleeding and bruising were not experienced. A total of 48 participants were interviewed: 20 adults (round 1: n = 13, round 2: n = 7), 20 adolescents/older children with their caregiver (round 1: n = 12, round 2: n = 8), and eight caregivers of children (round 1: n = 6, round 2: n = 2). Participants were recruited from Peru (n = 39/48, 81%) and Ecuador (n = 9/48, 19%). Patients ranged in age from 1 to 53 years and were generally evenly split between female (n = 25/48, 52%) and male (n = 23/48, 48%), although there was a larger proportion of male patients from Ecuador (n = 7/9, 78%) compared to Peru (n = 16/39, 41%) (Table ). Within each age group, there was representation of participants across different school grades/levels of education. From those reported, most patients had DENV-1 (n = 30/33, 91%), and one patient reported having had dengue previously. Only patients from Peru were debriefed on the electronic device. Caregivers ranged in age from 22 to 52 years, and most were the child’s mother (n = 25/28, 89%) (Table ). Participant experience of dengue The findings from both rounds of CE interviews are summarized in Fig. . Participants reported a total of 59 signs/symptoms, which broadly extended across 10 categories: feverish, gastrointestinal, pain, fatigue, skin, mouth/nose/throat, eye/vision, neurological, bleeding, and other (see Table for example quotes; Spanish translations: Additional file : Table S1). At the symptom-level, fever (n = 48/48, 100%), headache (n = 43/48, 90%), body ache/pain (n = 39/48, 81%), loss of appetite (n = 34/48, 71%), and body weakness (n = 34/48, 71%) were most frequently reported (see Fig. for a breakdown of signs/symptoms reported by participant type and Fig. for a breakdown of signs/symptoms reported either spontaneously or when probed). Total symptom duration ranged from 3 to 15 days, with 5 days (n = 7/35, 20%) and 7 days (n = 8/35, 23%) most frequently reported. Fever (n = 16/37, 43%) and headache (n = 14/37, 38%) were reported to be the most bothersome symptoms. Findings indicated symptom occurrence was transient across an individual’s illness, and frequency and intensity of individual symptoms was highly variable across individuals. All participants reported how dengue impacted HRQoL. Impacts to activities of daily living (n = 43/48, 89%), physical functioning (n = 41/48, 85%), sleep (n = 38/48, 79%), emotional wellbeing (n = 37/48, 77%), work/school (n = 22/48, 46%), finances (n = 18/48, 38%), and social functioning (n = 14/48, 29%) were mentioned. Almost all participants (n = 47/48, 98%) discussed using supportive treatments to alleviate dengue symptoms, with paracetamol and increased fluid intake (n = 33/47, 70%, each) most frequently mentioned. Concept saturation Concept saturation was achieved for the total sample, with no relevant concepts emerging in the final set of interviews (grouped into four sets of 12 interviews). At the age-group level, concept saturation was achieved for both the adult and older children/adolescent samples (both independently grouped into four sets of five interviews). 
For caregivers of younger children (grouped into four sets of two interviews), three symptom concepts emerged in the final set of interviews, including muscle and back pain, which are core symptoms of dengue , suggesting further interviews may have elicited additional concepts in this age group.
This study added to the understanding of the outpatient dengue illness experience . Consistent with previous literature , fever, headache, musculoskeletal pain, fatigue, rash, and nausea/vomiting were identified as core signs/symptoms of dengue. Less commonly reported signs/symptoms in the literature (i.e. bad taste and sore throat) were also identified, both of which were reported as important to the dengue experience in a recent community-based perspective study . Some signs/symptoms (e.g. craving acidic foods, allergy-like symptoms, watery eyes, and loss of consciousness) were each reported by only one participant. None of these signs/symptoms are supported in the literature as being related to dengue , thus may be attributable to a co-morbid condition(s).
Findings also support the previous literature that dengue signs/symptoms fluctuate throughout the course of illness and this experience often varies between individuals with the disease. This variability in illness trajectory and symptom presentation/intensity supports the need for instruments to capture the full dengue patient experience. Content validity of the DENV-DD The current study provides evidence to support the content validity of the DENV-DD. Most participants understood the DENV-DD PRO and ObsRO instructions, items, and response options as intended and indicated most concepts assessed by the DENV-DD were relevant to their (or their child’s) dengue experience. Minor changes to instruction/item wording were made across rounds of interviews based on participant and SC feedback to enhance understanding and relevance to the dengue population. Items assessing weak body, bad taste, sore throat, bleeding, dizziness, and treatments taken were added to the DENV-DD PRO and ObsRO; an item assessing fever was added to the ObsRO (already included in the PRO); and items assessing back pain and leg pain were added to the PRO. While relevance was consistently reported to be low for diarrhea, bruising, and bleeding items, input from the scientific committee recommended retaining the items as they are established (albeit less common) symptoms of dengue illness , with bruising and bleeding potentially indicating progression to clinically severe dengue . Given the nature of dengue illness (symptoms fluctuating in intensity over the course of illness), the response scale was deemed suitable for capturing the gradation in symptom intensity. Some participants had difficulty explaining what the 24-h recall period meant. Difficulty in understanding was likely exacerbated due to participants no longer having dengue at the time of the interview and being asked to reflect on their experience over the time of their dengue illness. From an instrument development perspective, a 24-h recall period was deemed most appropriate to minimize recall bias and to capture daily symptom fluctuation . Results of the quantitative assessment conducted with symptomatic participants with recent dengue onset provided evidence of the feasibility of daily DENV-DD completion, indicating that any critical issues with recall period were resolved when completing the diary in real-time. This led to the 30-item DENV-DD PRO and 28-item DENV-DD ObsRO. Study considerations In Peru and Ecuador, dengue is an illness principally experienced by low to middle socio-economic populations . Local research investigators and healthy individuals were engaged to ensure cultural appropriateness and understandability of the Spanish translations across all ages and literacy levels. A broader definition of ‘caregiver’ (e.g. siblings, grandparents) was applied to better reflect the family dynamics in South America where children often live in multi-generational households or with individuals other than their parents. Interviews were conducted mostly in-person to improve discourse and rapport between interviewers and participants and to mitigate issues with poor internet connectivity. Although a range of age groups were included in this study, the number of participants recruited in the caregiver sample representing younger children who had experienced dengue was quite small, thus concept saturation for this sample was not achieved. 
This may reflect a lower prevalence of symptomatic dengue in younger children in South America , but also is likely due to a 13-month interruption to data collection during the COVID-19 pandemic. Half of the caregiver sample was also inadvertently debriefed on the DENV-DD PRO instead of the ObsRO, however data for PRO items that were conceptually equivalent to ObsRO items, and CE findings, were used to extrapolate data on understanding and relevance. Further interviews with caregivers could be conducted to confirm content validity of the ObsRO. The qualitative interview findings should not be considered an estimate of symptom prevalence. Most participants were identified through clinics based on the presence of 'classic' dengue symptoms (e.g. fever). Hospitalized patients were excluded, resulting in clinically severe dengue not being captured, despite symptoms of bruising and bleeding which are indicative of a more severe dengue symptomology being discussed. Symptom prevalence was not an objective of this study but could be examined using the DENV-DD in future research studies. While the items more indicative of severe dengue were not tested in a population clinically diagnosed with severe dengue, the study sample was selected from a population that is at risk for severe dengue illness (i.e. dengue endemic populations). This study sample would be sufficient to demonstrate item understanding. The ongoing psychometric validation study will evaluate the performance of items, in a sample with differentiating disease intensities, allowing the opportunity for items demonstrating low relevance to be removed at this stage. Further, although the quantitative assessment provided early evidence of the feasibility of DENV-DD completion, instrument performance and insight into the trajectory of the signs/symptoms of dengue illness, due to the small sample size robust conclusions cannot be drawn from the data. Additionally, as participants in this research study were healthy trial volunteers who were experimentally infected, the typical patient experience may not have been captured. Further observational work is currently underway in Southeast Asia to provide more robust real-world data to support use of the DENV-DD in outpatient populations. Next steps To evaluate item performance, psychometric properties and develop a scoring algorithm and score interpretation guidelines, an observational study is underway in Southeast Asia where DENV-DD responses will be collected daily from patients and caregivers of patients with dengue illness. Items will be reviewed to minimise redundancy and to optimize the balance between conceptual coverage and burden of completion. The cultural relevance of the DENV-DD in new populations will also be assessed.
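Since the scoring algorithm is still to be defined in the ongoing psychometric work, the following is only a naive sketch of one obvious candidate: an unweighted daily total over the symptom items (assumed here to be rated 0–3), together with a simple form-level completion rate of the kind reported for the DHIM participants. The item names, response range, and diary entries are assumptions for illustration and do not represent the DENV-DD's actual content or scoring.

```python
# Naive illustration only: this is NOT the DENV-DD scoring algorithm, which is
# still being developed in the ongoing psychometric validation study.
SYMPTOM_ITEMS = ["fever", "headache", "muscle_pain", "tiredness", "rash"]  # assumed subset

def daily_total(responses):
    """Sum the assumed 0-3 intensity ratings for one day; None if an item is missing."""
    if any(item not in responses for item in SYMPTOM_ITEMS):
        return None
    return sum(responses[item] for item in SYMPTOM_ITEMS)

def form_completion_rate(diary):
    """Share of diary days on which every symptom item was answered."""
    completed = sum(daily_total(day) is not None for day in diary)
    return completed / len(diary)

# Two hypothetical diary days for one participant.
diary = [
    {"fever": 2, "headache": 3, "muscle_pain": 2, "tiredness": 3, "rash": 0},
    {"fever": 1, "headache": 2, "muscle_pain": 1, "tiredness": 2},  # rash item left blank
]

print([daily_total(day) for day in diary])                          # [10, None]
print(f"form completion rate: {form_completion_rate(diary):.0%}")   # 50%
```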
The DENV-DD was developed with qualitative input from patients/caregivers of patients who had recently experienced dengue, in accordance with regulatory guidance . The DENV-DD PRO and ObsRO versions have documented evidence of face and content validity for the assessment of signs/symptoms of dengue illness for use with outpatient children, adolescents, and adults. Observational study data will be used to evaluate the psychometric measurement properties of the DENV-DD to support its use to characterize dengue burden and assess therapeutic value in future clinical research studies and real-world applications.
Additional file 1. Table S1. Key sign/symptom categories, where n = number of participants reporting each sign/symptom concept, with example participant quotes in Spanish. Figure S1. Form-level completion of the 28-item DENV-DD PRO. Figure S2. Item-level completion of the 28-item DENV-DD PRO.
Teaching communication skills in medical education
854a1c87-be6b-4149-962d-7bb704efdc55
10449999
Internal Medicine[mh]
In the future, communication with patients will play an increasingly important role in the everyday work of physicians . In 2017, the German Master Plan for Medical Studies 2020 defined sound training in medical communication as one of the core objectives . Implementation of the resulting high requirements formulated in the National Competence-Based Learning Objectives Catalogue for Medicine (NKLM) is to take place across the board in medical studies in Germany from 2025 . Radiation oncology practice is accompanied by a variety of communicative challenges. In particular, communication ranges from technical/physical aspects of the therapy to giving a cancer diagnosis, guiding a patient through the course of radiotherapy, and accompanying them in end-of-life situations. All these occasions of conversation require good special knowledge and different skills. Beyond profound oncological knowledge, these consultations also demand essential psychological and emotional skills. Furthermore, the complex multimodal therapies and the presence of unclear fears and reservations regarding radiation treatment in many patients lead to extraordinary counselling situations. Radiation oncologists should therefore develop a special competence for such situations during their residency training and professional life. Thus, radiation oncology is particularly suited, both through the teaching physicians and the patients involved, to sensitize medical students to this topic and to train them competently . Therefore, we have designed a communication seminar for fourth- to fifth-year medical students with the aim of introducing the experiences and competencies of radiation oncology into the training of physicians in the best possible way. We report on initial experiences and evaluation results after 1 year. Concept The course was planned as an interdisciplinary project in 2018 and conducted for the first time in 2019 and again in 2022 after a pandemic-related break. The entire project was funded by the medical faculty as an innovative teaching project. The three initiators and leaders were two radiation oncologists, one of whom is a palliative care specialist, and a psycho-oncologist working in radiation oncology. Additional lecturers from other disciplines were invited to the seminar with reference to the requirements of the NKLM . The course was an optional elective course for fourth- and fifth year medical students. At this point, medical students have left the exclusively theoretical part of their studies. At the beginning of the herein presented course, all students participated in counselling of patients prior to radiation therapy. The subsequent main part of the course was a 1-week block seminar. The curriculum and evaluation questionnaire were developed using a two-stage Delphi process . After written surveys of physician employees of different experience levels and specialties ( n = 10, consisting of radiation oncologists n = 2, medical oncologists [both palliative care specialists] n = 2, general practitioner n = 1, radiologist n = 1, resident doctors n = 4) and joint discussion, those items that scored at least an average of 3.6 on a scale of 0 (disagree completely) to 4 (agree completely) were included in the curriculum. The topics covered a broad spectrum of the competency areas defined in the NKLM (Table ). The number of student participants in the seminar was limited to approximately 15 participants because of the practical parts. 
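To make the Delphi selection rule above concrete, the short sketch below averages hypothetical expert ratings on the 0 (disagree completely) to 4 (agree completely) scale and keeps only topics reaching the 3.6 cut-off; the topic names and ratings are invented for illustration and are not the survey data.

```python
# Illustration only: hypothetical ratings from 10 raters on the 0 (disagree completely)
# to 4 (agree completely) scale; the topic names and numbers are invented.
CUTOFF = 3.6

ratings = {
    "breaking bad news": [4, 4, 4, 3, 4, 4, 4, 3, 4, 4],
    "shared decision-making": [4, 3, 4, 4, 4, 4, 3, 4, 4, 4],
    "billing and administration": [2, 1, 3, 2, 2, 1, 2, 3, 2, 2],
}

def mean(values):
    return sum(values) / len(values)

included = {topic: round(mean(r), 2) for topic, r in ratings.items() if mean(r) >= CUTOFF}
excluded = {topic: round(mean(r), 2) for topic, r in ratings.items() if mean(r) < CUTOFF}

print("included in curriculum:", included)
print("excluded:", excluded)
```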
The block seminar included the following topics: debriefing of the counselling participated in; communication models; augmentative and alternative communication; presence and self-care: voice and breath; shared decision-making (SDM); understanding of roles; speaking/writing about patients; self-care and mindfulness; talking with children about their sick parents; ethics in medicine; talking about emotions; breaking bad news; processing mechanisms under (di)stress; psycho-oncology; and clinical ethics. Content, didactics, learning goals, and relation to the NKLM are listed in Table . Interprofessional participation The course unit "presence and self-care: voice and breath" was designed and taught by a trained opera singer and a performance artist, the course unit "augmentative and alternative communication" by a communication pedagogue, and the course unit "talking to children about their sick parents" by the leaders of a corresponding working group of the hospice initiative. The learning units "ethical case discussion" and "ethics in medicine" were newly introduced in 2022; these learning units were designed by a philosopher and a clinical ethicist . Practical exercises In addition to participation in counselling of patients, approximately 50% of the sessions in the block seminar had a workshop character with practical exercises, roleplays, and discussions and methods of self-reflection . Learning objectives of the practical exercises included improvement of self and role understanding, training of communication skills, and the topics of "breaking bad news," "shared decision-making," and decision discussions. Evaluation Before the seminar began, students were surveyed regarding their motivation for participating in the communication seminar. In addition, the participants were asked to rate their own communication skills with patients and to indicate the extent to which it would be true to want to work with patients later in medical practice. There were two parts of evaluation: for "each module" , and at the end of the course for the whole course "overall" . For both parts participants answered to a) criteria defined within the Delphi process on a five-point Likert scale (geometric equidistant points with two verbal poles: totally agree–totally disagree), b) German grade (1–6), and c) optional free-text comment. In addition, for each module, participants rated by self-report their achievement of learning objectives and their perceived usefulness with a five-point Likert scale (see above: totally disagree, totally agree). There was no knowledge test. Evaluation was pseudonymized.
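As a rough illustration of how the per-module ratings described above could be tabulated, the following sketch summarizes hypothetical responses; the module names, the numeric coding of the five-point Likert scale (1 = totally agree to 5 = totally disagree) and the German grades are assumptions for demonstration, not the study's data or scoring procedure.

```python
# Illustration only: hypothetical per-module evaluation records. Coding assumed as
# 1 = totally agree ... 5 = totally disagree on the Likert items, plus a German grade (1-6).
from statistics import mean

responses = [
    {"module": "breaking bad news", "likert": 1, "grade": 1.0},
    {"module": "breaking bad news", "likert": 2, "grade": 1.3},
    {"module": "shared decision-making", "likert": 1, "grade": 1.7},
    {"module": "shared decision-making", "likert": 3, "grade": 2.0},
]

for module in sorted({r["module"] for r in responses}):
    rows = [r for r in responses if r["module"] == module]
    likert_mean = mean(r["likert"] for r in rows)
    grade_mean = mean(r["grade"] for r in rows)
    agree_share = sum(r["likert"] <= 2 for r in rows) / len(rows)
    print(f"{module}: mean Likert {likert_mean:.2f}, mean grade {grade_mean:.2f}, "
          f"{agree_share:.0%} (rather) agreeing")
```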
A total of 30 students (26 women, 4 men) participated in the teaching project, 13 of them in 2019 and 17 in 2022. The students were at least in their seventh semester of study; 12 had already completed a professional training. Only five of the participants indicated prior experience with communication seminars. All but one indicated a later aspiration to work as a physician with direct patient contact. Reasons for participation Participants rated their communication skills with patients as relatively good even before the seminar began (2.55 ± 0.55 on a scale of 1 = very good to 5 = poor). The most frequently cited reasons for attending the communication seminar were the desire to acquire better competence in breaking bad news and confidence in conducting conversations (Fig. ). Evaluation results Both evaluation parts, "each module" and "overall," showed very positive results (Table "overall," Appendix: "each module"). In particular, the expectations formulated in advance regarding the acquisition of competencies for specific situations (e.g., breaking bad news) were met. In the free-text comments, the interprofessional participation and the practical exercises with roleplays were also specifically highlighted as instructive.
Good communication with patients and active participation of patients in treatment decisions are playing an increasingly important role. In the US, radiation oncologists are also involved in the nationwide "choosing wisely" campaign with various questions . This international initiative has developed brief recommendations for various specialities to reduce overuse and misuse. In Germany, a corresponding program for all clinics and departments has been established at the UKSH in Kiel as a national competence center for shared decision-making . All physicians (> 90% of each clinic) underwent a multistage training program to improve communication for doctor–patient contacts. The scientific evaluation shows consistently positive effects, not only for patient satisfaction, but also in terms of cost efficiency . The results confirm that communication skills in the medical field can be learned and taught on the one hand and will produce positive effects on the other hand . Communication will become even more important in the medical profession in the future.
Challenges for physicians not only concern communication with patients in an increasingly complex medical environment which is rapidly changing due to progress, but also the function as a member and leader of a multiprofessional team . Therefore, communication skills have been integrated into the training curricula for medical students in many countries and will be given even greater consideration in medical studies as part of the implementation of the NKLM in Germany. Radiation oncology is predestined to play an important role in these training segments. The seminar presented here was conducted by radiation oncologists with the support of a multiprofessional non-physician team and with the cooperation of radio-oncological patients. The very positive evaluation shows the need for such events among medical students on the one hand; on the other hand, our results can be seen as an indication that the topics based on radio-oncological everyday life, e.g., communication of complicated scientific facts and complex treatments, breaking bad news, addressing feelings, and being part of a multiprofessional team, are particularly well suited to teach communication. This is supported by a variety of studies about teaching projects in radiation oncology and probably also applies for digitally based learning . The course was an optional offer provided to medical students. The number of participants, however, had to be limited for organizational reasons. Therefore, the participants (about 5 to 10% of a study year) cannot be considered a representative sample, so that the evaluation results cannot necessarily be applied to the entirety of medical students. In addition, there might be a selection bias. Only motivated and interested students participated, which might cause a positive bias for evaluation. Moreover, it is also possible that the desire for face-to-face teaching after corona-related teaching restrictions positively influenced the evaluation results . Nevertheless, in our opinion, such an event is certainly appropriate for teaching communication skills in future curricula. From our experience and considering the comments of the student participants, a group size of about 15 students seems to be optimal to ensure sufficient familiarity and supervision. Furthermore, repetition of the topic in the curriculum seems appropriate to increase the sustainability of the learning effect. In our opinion, the interprofessional participation chosen for this project is particularly well suited to best reflect the broad range of topics covered by the NKLM. Communication is one of the central competencies of physician action and activity. The findings of this innovative teaching project show that the field of radiation oncology is very well suited to teach this topic in medical school. Our goal with this seminar was to cover topics that are required in the NKLM but have been underrepresented in the medical curriculum so far. This represents a chance for radiation oncology to gain influence and establish a future role in shaping the minds of a future generation of doctors. We therefore hope that this course will be included in a future curriculum for medical students and may serve as a blueprint for innovative teaching in radiation oncology.
The effect of prenatal education on health anxiety of primigravid women
97db76d2-17cc-4228-9fdf-8472fac31c9c
11325567
Patient Education as Topic[mh]
Health anxiety is characterized by extreme fears and delusions about health and physical symptoms. Health anxiety includes four components: emotional, cognitive, behavioral and perceptual. The emotional component includes health concerns. The cognitive component refers to a strong belief that one is sick despite medical evidence to the contrary. The behavioral component includes reassurance-seeking behaviors intended to reduce the fear of illness, and the perceptual component includes mental preoccupation with physical symptoms and feelings. People with health anxiety visit medical centers frequently and incur significant medical costs. Health anxiety is often associated with other mental disorders, and affected individuals are often not aware that their physical symptoms are caused by depression and anxiety. According to the evidence, anxiety is one of the psychological problems of pregnant mothers. The reported prevalence of pregnancy anxiety varies and it commonly goes undiagnosed; in addition, the mental state of the mother during pregnancy affects the childbearing process and the health of the baby. This problem can take the form of illness and affect the mental health of the mother and the baby, is associated with mental health problems in adulthood, and can even have a profound effect on the whole of life. Maternal anxiety increases the possibility of shortness of breath in the newborn period, impairs the relationship between mother and baby and reduces the mother's ability to fulfil the maternal role, and is associated with spontaneous abortion, premature birth, preeclampsia, cesarean delivery, bacterial vaginosis, low birth weight, small head circumference, neuroendocrine disorders, low Apgar scores, more crying and a more unstable condition in the baby, and decreased production and secretion of breast milk in the postpartum period. Anxiety and worry during pregnancy have a significant relationship with young maternal age in the first pregnancy, lack of support from the husband and a history of abortion. There is no single specific vulnerable period of gestation; the effects of prenatal stress vary across gestational ages, possibly depending on the developmental stage of specific brain areas and circuits, the stress system and the immune system. Because of its many side effects on the baby, drug therapy to reduce anxiety and worry during pregnancy is only one of the ways to deal with anxiety. Although some nonpharmacologic interventions can be convenient and low cost, the results reported for these modalities are inconsistent. Expectant mothers have high rates of anxiety and depressive disorders, many are susceptible to a variety of stressors during pregnancy, and continuous childbirth education programs for pregnant women in different antenatal care settings are highly recommended. Based on the evidence, encouraging and supporting women to attend the full course of educational classes has the potential to increase women's preference for vaginal birth, resulting in a reduction in the caesarean section rate and fear of childbirth, improved self-efficacy, and greater satisfaction in coping with labor pain. Since the effect of courses held in clinics of comprehensive health service centers on the health anxiety of primiparous pregnant women has not been investigated in Shahrekord city, this study aims to "determine the effect of prenatal education on health anxiety of primigravid women referring to comprehensive health service centers of Shahrekord city".
The current study (ethics code IR.SKUMS.REC.1396.208) was a quasi-experimental study conducted in 2019 in the population of primiparous pregnant women referred to the comprehensive health service centers of Shahrekord (a city in the southwest of Iran and the capital of Chaharmahal and Bakhtiari province). After giving written consent, participants were randomly divided into an intervention group and a control group. Inclusion criteria were: first pregnancy, gestational age of 20 weeks, uncomplicated pregnancy, absence of anxiety disorders and depression based on the General Health Questionnaire (GHQ) score, and non-use of psychoactive and herbal drugs. Exclusion criteria included unwillingness to continue the study, absence from courses, occurrence of stressful events during the study (such as the death of relatives) and pregnancy complications during the study (such as fetal abnormality or fetal death). The sample size was obtained based on available continuous sampling, the information obtained from similar studies and the sample size formula, and, including 10% possible attrition, was 65 people for each group. The file numbers of the selected pregnant women were written on identical cards and put in a box, and the women were then randomly assigned to the study groups (intervention and control). The information was collected by the researcher using the health anxiety questionnaire and the pregnancy outcomes registration checklist. In the first stage of the study, both groups completed the questionnaires in the 20th week of pregnancy (before the intervention). To measure health anxiety, the short form of the health anxiety questionnaire designed by Warwick and Salkovskis in 2002 was used. This questionnaire has three parts (18 questions): probability of contracting the disease, negative consequences of contracting the disease, and general health concern. In this questionnaire, option A is assigned a score of 0 (no health anxiety) and option D a score of 3 (most health anxiety). The range of scores is between 0 and 54, and higher scores indicate more health anxiety. Studies have shown that the health anxiety questionnaire has good validity and reliability. Participants were asked to select the option that most accurately described their situation over the past 6 months. In the second stage the questionnaire was completed in the 28th week and in the third stage in the 37th week, in both groups. The intervention group participated in 8 sessions of 1.5 h, once every two weeks, from week 20 to week 37. The educational content provided to the intervention group is shown in Table . During this period, the control group received the routine prenatal care specific to Iran. In the maternity hospital, the researcher completed the checklist of pregnancy outcomes (newborn weight, Apgar score, labor duration, type of delivery and first breastfeeding time). Data were analyzed using SPSS version 16 software (IBM). Data were expressed as frequencies or as means and standard deviations. Chi-square tests, t-tests and regression analysis were used to analyze the data. Findings One hundred twenty-two primiparous pregnant women were included in the present study. The mean GHQ score was 21.68 ± 8.60 (maximum 39, minimum 6). The attrition rate was 6.1 percent (8 women in total across the intervention and control groups), owing to absence from the courses, preterm labor and failure to complete the questionnaires.
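As a worked illustration of the scoring described in the methods above (18 items, option A = 0 up to option D = 3, total range 0–54) and of the kind of between-group comparison reported, the sketch below totals one hypothetical response set and runs an independent t-test; the letter-to-score mapping follows the questionnaire description, while the answers and group values are invented for demonstration.

```python
# Illustration only: hypothetical answers and group values, not the study data.
from scipy import stats

SCORE = {"A": 0, "B": 1, "C": 2, "D": 3}  # scoring rule described for the questionnaire

def total_score(answers):
    """Total health anxiety score for 18 one-letter answers (possible range 0-54)."""
    assert len(answers) == 18
    return sum(SCORE[a] for a in answers)

print(total_score("ABBCDABBACABBDABCA"))  # one hypothetical respondent

# Hypothetical week-37 total scores for an intervention and a control group.
intervention = [12, 15, 9, 14, 11, 13, 10, 16]
control = [18, 21, 17, 22, 19, 20, 23, 18]
t_stat, p_value = stats.ttest_ind(intervention, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```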
Normal vaginal delivery was 80.3% in the intervention group and 88.5% in the control group. The comparison of the components of the health anxiety questionnaire in the study groups shown in Table . The probability of contracting the disease and the overall health concern score before the intervention in the intervention and control groups were significantly different ( P < 0.05). There was no significant difference between the study groups in the 28th and 37th weeks ( P > 0.05). In the 28th week, compared to before the intervention, the probability of contracting the disease, the negative outcome of contracting the disease, and the overall health concern score in the intervention group decreased by 2.21, 0.44, and 2.62, respectively, and in the control group, the probability of contracting the disease and the overall health concern score increased by 2.33 and 2.13 points, respectively, and the score of the negative consequence of contracting the disease was reduced by 0.2 points. In the 37th week, compared to before the intervention, the probability of contracting the disease, the negative outcome of contracting the disease, and the overall health concern score in the intervention group decreased by 3.42, 0.93, and 4.36, respectively, and in the control group, the scores increased by 2.82, 0.03 and 2.86, respectively. The overall average of the various components of health anxiety before intervention vs 28th week, 28th week vs 37th week, before intervention vs 37th week didn’t significantly different ( P > 0.05), but only the average score of “ negative outcomes of contracting the disease” in the 37th week compared to before of intervention significantly decreased ( P < 0.03). According to the results of Table , there was no significant difference of gestational age, mother’s weight gain, height, head circumference and Apgar score of the baby, duration of hospitalization, and first breastfeeding time between the two groups. The duration of the active and latent phase of labor in the intervention group was significantly less than the control group ( P < 0.05). The weight of the baby in the intervention group was significantly higher than the control group ( P < 0.05). One hundred twenty-two primiparous pregnant women were included in the present study. Mean GHQ score was 21.68 ± 8.60, maximum 39 and minimum 6(. The attrition rate of the sample was 6.1 percent (totally 8 persons in intervention and control groups), because absence in the courses, pre-term labor and failure of filling the questionnaires. Normal vaginal delivery was 80.3% in the intervention group and 88.5% in the control group. The comparison of the components of the health anxiety questionnaire in the study groups shown in Table . The probability of contracting the disease and the overall health concern score before the intervention in the intervention and control groups were significantly different ( P < 0.05). There was no significant difference between the study groups in the 28th and 37th weeks ( P > 0.05). In the 28th week, compared to before the intervention, the probability of contracting the disease, the negative outcome of contracting the disease, and the overall health concern score in the intervention group decreased by 2.21, 0.44, and 2.62, respectively, and in the control group, the probability of contracting the disease and the overall health concern score increased by 2.33 and 2.13 points, respectively, and the score of the negative consequence of contracting the disease was reduced by 0.2 points. 
The present study investigated the effect of prenatal education on the health anxiety of primigravid women. Overall, the results show that health anxiety in women attending the education courses decreased over time, whereas the control group experienced an increase in health anxiety from week 20 to week 37. The components of the health anxiety questionnaire did not differ significantly between the intervention and control groups in the 28th and 37th weeks. This may be because the two groups were not comparable at baseline: in the intervention group, the health anxiety score before the intervention was significantly higher than in the control group. In the studies of Doaltabadi et al. , Najafi et al. and Khorsandi et al. (2014), pregnant women who participated in labor preparation and relaxation courses had a significantly lower mean fear of delivery than the group receiving usual care . A quasi-experimental clinical trial by Mosavi et al. (2021) investigated the effect of virtual childbirth preparation training on primiparous women; the outcomes assessed included the pregnancy experience, fear of childbirth, birth preference, and type of delivery . In the research of Hassanzadeh et al. (2022), attending prenatal classes was associated with a positive childbirth experience and a low postpartum depression score . Delaram et al. reported in 2011 that counseling and informing women in the third trimester of pregnancy reduced their anxiety at the onset of labor . In a review, relaxation programs significantly reduced trait-state anxiety and stress in pregnant women . In general, lack of awareness increases fear and anxiety about pregnancy and labor, especially in primiparous women, and increases the likelihood of morbidity and complications during delivery and postpartum . In the present study, there was no significant difference between the group attending the training courses and the control group in maternal weight gain, gestational age, or the newborn's height, head circumference and Apgar score.
Birth weight in the group participating in the labor preparation courses was significantly higher than in the control group. In line with these results, in the study of Mehdizadeh et al. no significant difference was observed between the course participants and the control group in the newborn's height, head circumference and Apgar score, but birth weight was slightly higher in the intervention group . In the study by Bastani et al. in 2006, pregnant women participating in a relaxation program had significantly fewer premature, low-birth-weight babies than the control group, which is consistent with the present results . In general, maternal psychological stress and distress are among the main predictors of adverse pregnancy outcomes, including low birth weight and preterm birth . During pregnancy, exposure to stress and anxiety releases large amounts of several hormones into the blood, including corticotropin-releasing hormone (CRH), adrenocorticotropic hormone (ACTH), cortisol, noradrenaline, and beta-endorphin . Increased beta-endorphin levels are associated with decreased placental-fetal blood flow, leading to fetal hypoxia . Maternal stress and increased catecholamine secretion can constrict blood vessels, which can lead to a range of fetal injuries, including brain damage . Maternal mental stress is a strong predictor of low birth weight and preterm birth, which are among the major causes of infant mortality and morbidity . The higher birth weight among women attending the pregnancy preparation courses compared with the control group can therefore be attributed to the reduction of their stress and anxiety, which improved pregnancy outcomes, including birth weight. In the present study, the latent and active phases of labor were significantly shorter in the intervention group than in the control group. Consistent with this, in Mehdizadeh et al.'s study (2005) the latent phase of labor was significantly shorter in the group participating in labor preparation courses than in the control group, and the active phase of labor was significantly longer in the control group than in the course participants . The shorter duration of labor may also reflect the effect of the courses in reducing the mother's fear and anxiety and increasing her sense of comfort. In the present study, there was no significant difference in the rate of normal delivery between the two groups, whereas in the studies of Mehdizadeh et al. and Najafi et al. the rate of normal delivery among pregnant women attending labor preparation courses was significantly higher than in the control group . In Hassanzadeh et al.'s study (2021), women reported that participation in childbirth preparation classes prepared them well for a vaginal birth, and these classes were perceived to be associated with a positive childbirth experience . In the studies of Sydsjo et al. (2014), Najafi et al. and Bastani et al. (2006), the rate of emergency cesarean section among pregnant women attending training courses or a relaxation program was significantly lower than in the control group . Training during pregnancy is a dynamic process in which parents receive information about the physical and mental changes of pregnancy, childbirth, parenting, neuromuscular exercises, and supportive techniques in pregnancy.
These factors reduce the effect of the various psychological and biological stressors that arise during pregnancy and increase the comfort and confidence of pregnant women . Nerum et al. reported that after a discussion with women who feared delivery and had chosen cesarean section, most of them (93%) changed their decision and preferred normal delivery . In the present study, there was no significant difference in the time of first breastfeeding between the control and intervention groups. The time of first breastfeeding was slightly longer in the intervention group than in the control group, which could be due to the greater number of cesarean births in this group; the results may differ in a larger population. Limitations and implementation problems of the research included the occurrence of pregnancy complications, such as premature birth, which led to exclusion from the study, and pregnant women not returning to attend the courses; we tried to prevent withdrawal by telephone follow-up and by offering a small gift. In the present study, the scores of the components of the health anxiety questionnaire decreased in the group attending the labor preparation courses, indicating that the educational courses reduced health anxiety, shortened the active and latent phases of labor, and increased birth weight. Based on these results, it is suggested that labor preparation courses be offered as a method without complications to reduce the health anxiety of pregnant women. To protect population health, policy makers should include pregnancy education courses in health programs.
Exploring plastic biofilm formation and
2c2e8666-bb37-4f31-89ea-38e89f953c25
11196126
Microbiology[mh]
Plastic debris has become ubiquitous in aquatic ecosystems, spanning from heavily impacted areas to the most pristine environments (González‐Pleiter et al., ; McCormick et al., ). Improperly disposed plastic, estimated at 58% of total plastic waste, accumulates in landfills or in natural environments, where it can persist from 58 years (PET bottles) to 1200 years (HDPE pipes) (Chamas et al., ; Geyer et al., ). The most ambitious scenario suggests that between 20 and 53 Mt of plastic waste could enter aquatic ecosystems by 2030 (Borrelle et al., ). Once in seawater, plastic surfaces rapidly adsorb organic and inorganic matter, generating a nutrient-rich layer within hours and facilitating the attachment of bacteria in <24 h (Qian et al., ). The term plastisphere defines the microbial community thriving on plastics (Zettler et al., ), which reaches densities ranging from 1.5 × 10⁵ to 8.7 × 10⁶ 16S rRNA gene copies (gc) mm⁻² on floating surfaces (Liang et al., ). Biofilms function as complex self-sustaining ecosystems shaped by cooperation and competition among microorganisms (Nadell et al., ). The biofilm provides protection against grazing, desiccation, solar radiation, and exposure to antibiotics. Bacteria within the biofilm can readily absorb nutrients produced as wastes by neighbouring bacteria, while horizontal gene transfer enhances the dissemination of novel functions within the plastisphere (Qian et al., ). Considering that any substrate immersed in seawater is rapidly colonised by bacteria, the large amount of plastic already afloat represents a huge new artificial substrate that drifts through water bodies carrying microorganisms and persists longer than natural floating substrates. Bacteria identified on plastic biofilms include hydrocarbon degraders such as Pseudoalteromonas spp. or Phormidium spp. but also members of potentially pathogenic clades such as Campylobacteraceae, Enterobacteriaceae, Pseudomonas, or Vibrio (McCormick et al., ; Wu et al., ; Zettler et al., ). These potential pathogens are normally identified by sequencing a 300 bp fragment of the 16S rRNA gene, which limits their classification to the genus level and provides little information about viability or pathogenicity. Because pathogens are difficult to monitor directly in water samples, bacterial faecal indicators such as E. coli and Enterococci are used as proxies and are included in water quality regulations such as the European bathing directive (European Commission, ). A recent study detected both indicators on plastics collected from coastal waters with human impact, with higher densities of E. coli per item than in the surrounding water (Liang et al., ). Moreover, these faecal indicators have demonstrated the capacity to adhere to plastic biofilms (Metcalf et al., ) and have also been detected on plastic debris found on beaches (Hernández‐Sánchez et al., ; Rodrigues et al., ). The presence of E. coli attached to plastics raises concerns about the potential presence of faecal pathogens. Incubation experiments in sewage have revealed the colonisation of plastics by potential pathogens such as Pseudomonas, Arcobacter, and Mycobacterium and by bacteria carrying the antibiotic resistance genes sul I and tet M (Martínez‐Campos et al., ). However, although another study detected E. coli producing extended-spectrum beta-lactamase in water, it was not detected in plastic polymers incubated in the same water (Song et al., ). Therefore, it seems that E.
coli can attach to plastic biofilms, but there may be differences in colonisation patterns due to environmental conditions and the detection techniques used. To address these knowledge gaps, this study aimed to investigate the ability of E. coli, serving as a proxy for bacterial faecal pathogens, to colonise plastics in seawater. Additionally, we sought to evaluate the persistence and stability of E. coli within the evolving biofilm, which becomes enriched by marine bacteria over time. Four independent microcosm experiments were conducted, involving the incubation of environmental plastic pellets with different strains of E. coli in seawater. The abundance of E. coli, as well as the composition and abundance of marine bacteria, were monitored using a combination of culture-based and molecular techniques. By elucidating the colonisation dynamics of E. coli on plastic surfaces, this study contributes to our understanding of the risks associated with plastic pollution in marine ecosystems. The findings will provide valuable insights into the behaviour of E. coli as a faecal indicator bacterium in biofilms, aiding the development of models for assessing the risk associated with plastic biofilms and informing effective mitigation strategies. Microcosms and sampling process We performed four independent microcosms (MC1–4). Each microcosm was set up in one aquarium and included (i) 170 plastic pellets collected from a beach, (ii) seawater provided by the ‘Centres Científics i Tecnològics’ of the Universitat de Barcelona (CCiTUB), and (iii) a mix of environmental strains of E. coli (Figure ). The microcosms were run at different time periods, so the seawater differed between them (Table ). We used environmental plastics because they have undergone a natural weathering process and therefore allow a realistic environmental colonisation to be reproduced; a mix of plastic pellets was collected from the sand of a beach in La Pineda (Tarragona, Spain). Since the plastic pellets were collected from the environment, they varied in composition, size, and degree of abrasion. They had a mean diameter of 4.6 mm (3.9–5.6 mm) and an estimated mean sphere surface area of 77 mm² (58–107 mm²). Fourier-transform infrared spectroscopy performed at the CCiTUB with a Perkin Elmer Frontier FT-IR spectrometer identified them as polyethylene (PE) and polypropylene (PP) (80% and 20%, respectively). To reduce variability between pellets, for each sampling and each analysis we pooled a given number of randomly selected pellets and mixed them (Schneider et al., ). Before use, the plastic pellets were disinfected with 7.5% H₂O₂ for 6 h and washed with sterile seawater. Disinfected pellets were used as controls for the different analyses. For MC1 and MC2 we used a mix of three E. coli strains isolated from sewage, and for MC3 and MC4 we used a mix of three E. coli strains isolated from plastic samples collected in coastal waters in a previous study (Liang et al., ). The strains were isolated from sewage or plastic pellets using the selective and differential medium Chromocult and identified using API20E strips (bioMerieux, Paris, France) and by Sanger sequencing of the 16S rRNA gene with the universal primers 27f and 1492r (Table ). The selected strains showed >98% identity with E. coli. The aquaria were filled with 20 L of seawater from the wet lab facilities, and the absence of E. coli was verified by culture.
We spiked the aquaria with 1.1 × 10⁴–6.4 × 10⁴ cfu mL⁻¹ of the mixed overnight-grown E. coli strains, and we added the 170 disinfected plastic pellets. Aquaria were kept under stable conditions in the wet lab of the CCiTUB. Water was kept at 20°C (±2°C), recirculated using a pump, oxygenated using an aerator, and subjected to alternating light and dark every 12 h. A total of 20 plastic pellets were collected randomly at days 1, 2, 5, 7, 12, 19, and 26 to characterise and enumerate bacteria from the plastisphere and were used for the different analyses. Water was collected at the beginning (day 0) and at the end of the experiment (day 26) to monitor potential changes. Physicochemical characteristics were measured: pH, dissolved oxygen and salinity with a portable multiparameter probe HI-98194 (Hanna Instruments), and total organic carbon, inorganic carbon, total carbon, and total nitrogen at the CCiTUB with a TOC analyser multi N/C 3100 (Jena). Enumeration of E. coli and marine bacteria by culture media E. coli was measured using Chromocult® Coliform Agar (Merck, Darmstadt, Germany) with the E. coli/Coliform selective supplement (Merck) (2.5 mg of cefsulodin and vancomycin per 500 mL of Chromocult) and incubated at 37°C for 24 h (ISO, 2000a). The abundance of heterotrophic marine bacteria was quantified using Marine Agar 2216 (Difco, Madrid, Spain) after incubation for 48 h at 20°C. To evaluate culturable bacteria, we pooled five plastic pellets for each time point and each microcosm. Bacteria on the plastic biofilm were detached in sterile seawater after 1 min of sonication in an ultrasound bath, obtaining a bacterial suspension. The water samples and the biofilm bacterial suspension were diluted with sterile seawater if the bacterial concentration was too high; alternatively, if the bacterial concentration was too low, they were concentrated by filtration through a 0.45 μm pore size filter (EZ-PAK, Millipore, Darmstadt, Germany) before plating on the required media. Results were expressed as cfu mm⁻² for plastic pellets or cfu mL⁻¹ for water. Enumeration of microorganisms by molecular methods DNA extraction For each time point and microcosm, we pooled five plastic pellets to avoid differences between plastic pellets and to extract enough DNA. In addition, we extracted DNA from 0.5 L of seawater concentrated by filtration through a 0.22 μm pore size cellulose ester membrane (SO-PAK, Millipore, Darmstadt, Germany) at the beginning (T0) and at the end of the experiment (T26) to evaluate changes in the water microbial community. The DNeasy PowerBiofilm Extraction Kit (Qiagen, Hilden, Germany) was used following the manufacturer's instructions, and the extracted DNA was eluted in a final volume of 100 μL. DNA extraction controls, including disinfected pellets and filtered seawater, were run together with the samples. Quantification of E. coli and total 16S rRNA gene Total E. coli was quantified by qPCR targeting a fragment of the 16S rRNA gene, as previously described (Huijsdens et al., ). The total 16S rRNA gene was quantified using the primers 341F and 534R (Muyzer et al., , ). Amplification of E. coli was performed using TaqMan Environmental Master Mix 2.0 (Applied Biosystems, Foster City, CA, USA) on a StepOne Real-Time PCR System (Applied Biosystems, Foster City, CA, USA).
Each mixture, with a final volume of 20 μL, was composed of 10 μL of TaqMan Environmental Master Mix 2.0 (Applied Biosystems), 300 nM of the primers and 100 nM of the probe (Table ), 5 μL of the DNA template, and nuclease-free water to reach the final volume. Amplifications were done under the following conditions: an initial denaturation of 10 min at 95°C, followed by 40 cycles of 15 s of denaturation at 95°C and 1 min of annealing and extension at 60°C. PCR amplification of the 16S rRNA gene was carried out in a 20 μL reaction mixture with 10 μL of PowerUp SYBR Green Master Mix (Thermo Fisher Scientific, Waltham, MA, USA), 1000 nM of the primers 341F and 534R (Table ), 1 μL of the DNA template, and nuclease-free water to reach the final volume. The PCR program was initiated at 95°C for 10 min, followed by 40 cycles of denaturation at 95°C for 15 s, annealing at 60°C for 15 s, and extension at 60°C for 1 min. All samples, negative controls, and extraction and filtration blanks were run in duplicate. Molecular results for microorganisms from plastics and water were expressed as gc mm⁻² and gc mL⁻¹, respectively. Five points of the standard curves were included in duplicate in each run and were generated from 10-fold serial dilutions of a gBlock gene fragment (Integrated DNA Technologies, Coralville, IA, USA) containing the target sequences. The qPCR quality controls and the description of the standard curves, including the slope, intercept, R², efficiency and limit of detection, are shown in Table . Only amplification efficiencies between 90% and 110% were considered acceptable for quantification. The limit of detection was 6 gene copies per reaction for E. coli and 80 gene copies per reaction for the 16S rRNA gene. Scanning electron microscopy For each time point and microcosm, three plastic pellets were fixed with 2.5% glutaraldehyde in phosphate buffer at pH 7.4 and kept at 4°C until processing (less than 3 weeks). Samples were successively washed with phosphate buffer at pH 7.4 (4 × 10 min), fixed with 1% osmium tetroxide, washed with Milli-Q water (4 × 10 min), and dehydrated with a graded series of EtOH in water: 50% (1 × 10 min), 70% (overnight), 80% (1 × 10 min), 90% (3 × 10 min), 96% (3 × 10 min), and 100% EtOH (3 × 10 min). Finally, the samples were dried using an Emitech K850 critical point dryer, mounted on double-coated carbon conductive tape and carbon coated to improve their conductivity. Scanning electron microscope (SEM) observation was done with a JEOL JSM 7001F (JEOL, Akishima, Japan) at the CCiTUB. Illumina 16S rRNA amplicon sequencing Sample sequencing of two microcosms (MC2 and MC3) was performed using the Illumina MiSeq platform at the Genomics Unit of the Centre for Genomic Regulation Core Facilities (CRG, Barcelona). The V4 region was amplified from DNA sample extracts using the primers from the Earth Microbiome Project [515F (Parada et al., ) (5′-GTGYCAGCMGCCGCGGTAA-3′) and 806R (Apprill et al., ) (5′-GGACTACNVGGGTWTCTAAT-3′)] (following IUPAC ambiguity codes for nucleotide degeneracy: Y = C, T; M = A, C; W = A, T; V = A, C, G; N = A, C, G, T). The PCR included a primer concentration of 0.2 mM and KAPA HiFi HotStart ReadyMix (Roche) in a final volume of 25 μL. Cycling conditions consisted of an initial denaturation of 3 min at 95°C, followed by 25 cycles of 95°C for 30 s, 55°C for 30 s, and 72°C for 30 s, and a final elongation step of 5 min at 72°C. Reactions were purified using AgenCourt AMPure XP beads (Beckman Coulter).
The first PCR primers contained overhangs allowing the addition of full-length Nextera adapters with barcodes for multiplex sequencing, yielding libraries with approximately 450 bp insert sizes. Five μL of the first amplification was used as template for the second PCR with Nextera XT v2 adaptor primers in a final volume of 50 μL, using the same PCR mix and thermal profile as for the first PCR but with only 8 cycles. After the second PCR, 25 μL of the final product was used for purification and normalisation with the SequalPrep normalisation kit (Thermo Fisher Scientific), according to the manufacturer's protocol. Libraries were eluted and pooled for sequencing. The final pooled libraries were analysed using an Agilent Bioanalyzer or Fragment Analyzer High Sensitivity assay to estimate quantity and check size distribution, and were then quantified by qPCR using the KAPA Library Quantification Kit (KapaBiosystems) prior to sequencing on an Illumina MiSeq (2 × 300 bp). Sequencing included negative controls: blanks from the DNA extraction process, DNA extracted from disinfected pellets, and blanks from the DNA amplification. The data are available in the Mendeley Data public repository (doi: 10.17632/zp6htysmy2.1). Bioinformatic analyses Cutadapt was used to trim adapters, primers, barcodes and leading Ns from the sequencing reads. Sequences were processed to amplicon sequence variants (ASVs) using the default parameters of the Dada2 workflow (Callahan et al., ). First, reads were quality filtered and truncated to 220 bp (forward reads) and 175 bp (reverse reads), with a maximum of two expected errors allowed per read (EE = 2); this parameter has been shown to be a better filter than simply averaging quality scores (Edgar & Flyvbjerg, ). Filtered sequences were dereplicated, the forward and reverse reads were aligned and merged, chimeras were removed, and an amplicon sequence variant (ASV) table was obtained. Taxonomy was assigned to the resulting ASVs using the SILVA SSU 138 reference database and imported into the phyloseq R package for microbiome analyses. To obtain a more accurate profile of the microbial communities, the ‘decontam’ R package (Davis et al., ) was used to remove sequences derived from contaminating DNA present in extraction or sequencing reagents. In addition, chloroplast and mitochondrial reads were removed. Data analyses Microbial abundances were log10-transformed and analysed by descriptive statistics and plotted using the statistical software R version 4.0.3 (R Development Core Team, ) through the RStudio interface, including the packages ‘Rmisc,’ ‘reshape2’ and ‘ggplot2’ v. 3.0.1 (Wickham, , ).
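As a minimal illustration of the quantification step above (not the authors' script), the sketch below back-calculates gene copies from a Cq value via the standard-curve slope and intercept, derives the amplification efficiency from the slope, and scales the result to gene copies per pellet and per mm². The slope, intercept and Cq values are invented; the 5 μL template out of a 100 μL eluate from five pooled pellets and the 77 mm² mean pellet surface area are taken from the methods above.

```python
# Minimal sketch (hypothetical standard curve and Cq): qPCR quantification.

def copies_per_reaction(cq, slope, intercept):
    """Back-calculate copies from the linear fit Cq = slope*log10(copies) + intercept."""
    return 10 ** ((cq - intercept) / slope)

def efficiency(slope):
    """Amplification efficiency from the standard-curve slope (1.0 == 100%)."""
    return 10 ** (-1 / slope) - 1

slope, intercept = -3.4, 38.0                   # hypothetical standard-curve fit
cq = 29.5                                       # hypothetical sample Cq

copies_rx = copies_per_reaction(cq, slope, intercept)
copies_per_pellet = copies_rx * (100 / 5) / 5   # whole 100 uL eluate, 5 pooled pellets
copies_per_mm2 = copies_per_pellet / 77         # mean pellet surface area (mm^2)

print(f"efficiency = {efficiency(slope):.1%}")  # ~97% for a slope of -3.4
print(f"{copies_rx:.0f} gc/reaction -> {copies_per_pellet:.0f} gc/pellet "
      f"-> {copies_per_mm2:.1f} gc/mm^2")
```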
In this experiment, we studied the colonisation of environmental plastic pellets by E. coli and marine bacteria in aquaria with controlled conditions.
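Bacterial counts below are reported both per pellet and per mm² of pellet surface. The following sketch (with hypothetical plate counts and an assumed suspension volume) illustrates how a plate count from the sonicated biofilm suspension is scaled back to those units, using the five-pellet pools and the estimated mean pellet surface area of 77 mm² described in the methods.

```python
# Minimal sketch (hypothetical numbers, not the authors' worksheet):
# normalising plate counts from the detached biofilm to cfu per pellet
# and cfu per mm^2 of pellet surface.

POOLED_PELLETS = 5            # pellets pooled per sample (methods)
MEAN_AREA_MM2 = 77.0          # estimated mean pellet surface area (methods)

def cfu_per_pellet(colonies, dilution_factor, plated_ml, suspension_ml):
    """Scale a plate count back to the whole suspension, then per pooled pellet."""
    total_cfu = colonies * dilution_factor * (suspension_ml / plated_ml)
    return total_cfu / POOLED_PELLETS

# e.g. 130 colonies from 0.1 mL of a 1:100 dilution of a 10 mL suspension
per_pellet = cfu_per_pellet(colonies=130, dilution_factor=100,
                            plated_ml=0.1, suspension_ml=10.0)
per_mm2 = per_pellet / MEAN_AREA_MM2
print(f"{per_pellet:.2e} cfu per pellet ~ {per_mm2:.2e} cfu per mm^2")
```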
We sampled plastic pellets regularly for 4 weeks, and we measured the attachment and persistence of E. coli within the plastisphere using culture and molecular methods. Colonisation of marine bacteria on plastic pellets Characterisation of the water The seawater samples used in the experiment had a total organic carbon concentration of 2.8 ± 1.12 ppm, an inorganic carbon concentration of 26.28 ± 2.62 ppm, and a total nitrogen concentration of 23 ± 0.35 ppm, which remained relatively stable throughout the experiment (Table ). The water temperature was maintained at 20 ± 2°C, with daily variations not exceeding 1°C. Salinity was around 38.5 ± 1 PSU, pH was 8.0 ± 0.1, and dissolved oxygen was 4.5 ± 0.3 mg/L. The initial bacterial population in seawater, as determined by the abundance of 16S rRNA gene copies, was 6.3 × 10⁶ (±4.1 × 10⁶) gc mL⁻¹, while culturable bacteria on marine agar were 3.6 × 10⁵ (±3.7 × 10⁵) cfu mL⁻¹ (Table ), representing 6% of the 16S rRNA gene copies. The abundance of bacteria, as measured by qPCR, remained relatively stable, with a final count of 5.9 × 10⁶ (±4.2 × 10⁶) gc of the 16S rRNA gene per mL at the end of the experiment. However, the abundance of culturable marine bacteria on marine agar decreased by 0.9 to 2.5 logs depending on the microcosm, reaching 5.0 × 10⁴ (±7.3 × 10⁴) cfu mL⁻¹ (Figure ). Characterisation of the plastisphere The abundance of culturable marine bacteria in the plastic biofilm was measured by plating on marine agar after sonicating the biofilm. Within just 24 h, we detected 2.6 × 10⁵ (±2.7 × 10⁵) cfu per pellet, which corresponds to 3.3 × 10³ (±3.6 × 10³) cfu mm⁻² based on the average surface area of a plastic pellet (77 mm²). This concentration remained relatively stable throughout the experiment, with a slight decrease in 3 out of the 4 microcosms (Figure ). Quantification of the 16S rRNA gene by qPCR showed a substantial increase within 24 h, with 1.4 × 10⁷ (±1.1 × 10⁷) gc per pellet, corresponding to 1.8 × 10⁵ (±1.5 × 10⁵) gc mm⁻². This concentration increased slightly further to 6.1 × 10⁷ (±6.2 × 10⁷) gc per pellet after 26 days (Figure ). Scanning electron microscopy images revealed early colonisation, with single cells, predominantly coccobacilli, observed within the first 24 h (Figure ). Some cells exhibited signs of division, indicating active growth, and the production of extracellular polymers for substrate attachment. On day 2, a higher density of bacteria with similar characteristics confirmed the colonisation of the plastic pellets. From day 5 to day 26, an increased presence of the exopolysaccharide matrix covering significant areas of the pellets was observed, along with the appearance of filamentous bacteria, clusters of cells characteristic of mature biofilms, and protists displaying morphological similarities to Choanozoa and Ciliophora (Figure ). These protozoa are mainly bacterivores. Colonisation of plastic pellets by E. coli We spiked the seawater with a mixture of three E. coli strains at an initial concentration of approximately 3.4 × 10⁴ (±2.2 × 10⁴) cfu mL⁻¹, corresponding to 7.2 × 10⁴ (±6.0 × 10⁴) gc of the E. coli 16S rRNA gene per mL. The E. coli strains for MC1 and MC2 were isolated from sewage, while the strains for MC3 and MC4 were isolated from plastic biofilms collected in coastal waters (Liang et al., ). Characterisation of the water In seawater, culturable E.
coli was detected for 5 or 12 days, depending on the microcosm (Figure ). The inactivation of culturable E. coli, measured as the time required to reduce the initial population by 1 logarithm (T90), ranged from 1.2 to 3.6 days in seawater. However, E. coli DNA was detected until the last day of the experiment, with an average concentration of 2.1 (±2.0) gc of the 16S rRNA gene per mL, representing a reduction of approximately 4.0 to 5.3 logarithms over 26 days. Characterisation of the plastisphere In the plastic pellets, we detected culturable E. coli for 2–7 days, depending on the microcosm (2 days in MC4, 5 days in MC1 and MC2, and 7 days in MC3) (Table ). The highest abundance was typically observed after 2 days of incubation, with a mean of 4.3 × 10² (±7.6 × 10²) cfu per pellet (corresponding to 5.6 (±10.0) cfu mm⁻²), followed by a gradual decrease (Table , Figure ). Moreover, E. coli DNA was detected for 5 days in MC4, 12 days in MC3, 19 days in MC2 and throughout the 26-day experiment in MC1 (Table ). The highest abundance was reached after 5 days, with average levels of 3.3 × 10³ (±2.9 × 10³) gc per pellet (4.2 × 10¹ (±3.8 × 10¹) gc mm⁻²), showing a slight decrease thereafter (Figure , Table ). In MC1, the abundance decreased to 1.3 × 10² gc per pellet on the last day of the experiment. Taxonomic composition of the bacterial communities colonising the pellets The microbial communities of MC2 and MC3 were analysed using high-throughput sequencing of the 16S rRNA gene. A total of 2,475,520 reads were obtained after denoising and quality filtering the raw sequencing data. In MC2, the number of amplicon sequence variants (ASVs) varied between time points, with 415 ASVs on day 1, 751 ASVs on day 5 and a decrease to 373 ASVs by day 26, whereas in MC3 the number of unique microbial taxa (ASVs) increased from 185 ASVs on day 1 to 424 on day 26 (Table ). We focused our analysis on sequences affiliated with the domain Bacteria, as the detection of Archaea in the plastic biofilms was low (233 reads in MC2 and just 10 reads in MC3). Distinct bacterial community structures were observed in MC2 and MC3, as visualised by non-metric multidimensional scaling (nMDS) of beta-diversity (Bray-Curtis) coefficients (Figure ) and by hierarchical clustering analysis (Figure ). Moreover, within each microcosm two clusters were clearly defined: (i) an initial biofilm cluster comprising the biofilm communities from days 1, 2 and 5 as well as the initial water sample, and (ii) a mature biofilm cluster consisting of the microbial communities from pellets collected on days 12, 19, and 26, along with the microbial community of the water on day 26 (Figure ). This clustering pattern was consistent in both microcosms. Taxonomic composition of the water The microbial community composition of the water of MC2 and MC3 exhibited notable differences (Figure ). Both bacterial communities were predominantly composed of the phylum Proteobacteria, accounting for 85% and 86% of the total reads (Figure ). Within Proteobacteria, in MC2, 41% of the reads belonged to the class Alphaproteobacteria, while 44% were classified as Gammaproteobacteria (Figure ). In MC3, by contrast, Gammaproteobacteria was the most abundant class within Proteobacteria, accounting for 80% of the reads, while Alphaproteobacteria represented only 6% of the reads (Figure ).
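As a side note to the taxonomic results, the T90 values reported above for culturable E. coli can be reproduced by a simple log-linear fit of counts over time; the sketch below uses made-up counts and assumes numpy is available.

```python
# Minimal sketch (made-up counts): estimating T90, the time for a 1-log10
# reduction of culturable E. coli, from a log-linear decay fit.
import numpy as np

days = np.array([0, 1, 2, 5, 7])                          # sampling days (illustrative)
cfu_ml = np.array([3.4e4, 1.5e4, 6.0e3, 4.0e2, 5.0e1])    # hypothetical counts

slope, intercept = np.polyfit(days, np.log10(cfu_ml), 1)  # log10(cfu) = slope*day + b
t90_days = -1.0 / slope                                   # days per 1-log10 reduction
print(f"decay slope = {slope:.2f} log10/day, T90 ≈ {t90_days:.1f} days")
```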
In MC2 at the end of the experiment, the bacterial community had shifted and Proteobacteria accounted for 84% of the identified affiliations, with the genus Porticoccus (class Gammaproteobacteria) representing 50% of the reads. Remarkably, this genus represented only 0.08% of the reads at the beginning of the experiment. In fact, just 20% of the ASVs were shared between the water at T0 and T26 (Figure ). Interestingly, the Escherichia-Shigella group, which accounted for 2% of the reads at day 0, was not detected after 26 days, although it could still be detected using qPCR. Within MC3, the Escherichia-Shigella group comprised 21% of the reads at the beginning of the experiment, followed by the genera Marinobacterium (12%) and Thalassotalea (11%) (Figure ). After 26 days, there was a notable shift in the microbial community of MC3; in fact, only about 8% of the ASVs were shared between the communities at T0 and T26 (Figure ). It is worth noting that neither the phylum Campylobacterota nor the Escherichia-Shigella group was detected at the end of the experiment, indicating a significant change in their relative abundance over time. Taxonomic composition of the plastisphere In the plastic pellets from MC2, the phylum Proteobacteria accounted for 91% and 88% of the reads during the first 2 days (Figure ). During this period, the order Enterobacterales was prominent, representing 59% and 44% of the reads. However, its abundance gradually declined, falling to 12% of the reads on day 5 and ultimately representing only 0.3% of the ASVs at T26 (Figure ). The primary genus identified was Alteromonas, which comprised 10% of the reads until day 5 but decreased to <1% on day 26. The Shigella-Escherichia group represented 0.06% and 0.02% of the sequences on days 1 and 2, respectively, and was not detected thereafter. On day 5, there was an increase in diversity at the phylum level, with Proteobacteria representing 69% of the reads, Bacteroidota and Planctomycetota accounting for 10% of the reads each, and Bdellovibrionota comprising 4% of the reads; the latter phylum consists of obligate predators that feed on other bacteria. By day 12, the alpha diversity had decreased compared with day 5 (Table ). The main orders identified were Pseudomonadales (22%), Rhodobacterales (19%), Flavobacteriales (14%), and Caulobacterales (8%) (Figure ). The abundance of the order Pseudomonadales continued to increase, becoming dominant by day 26 and representing 77% of the reads. Within this order, the genus Halioxenophilus became the most abundant (56%), followed by an unclassified genus of the family Spongiibacteraceae (14%). In the water microbial community, the order Pseudomonadales also dominated (61%), but the most abundant genus was Porticoccus. On the pellets of MC3, the phylum Proteobacteria accounted for 89% and 88% of the reads during the first 2 days. Reads belonging to the orders Enterobacterales and Pseudomonadales dominated during the first 5 days. Specifically, Enterobacterales accounted for 47%, 35%, and 25% of the reads on days 1, 2, and 5, respectively. The Shigella-Escherichia group was detected until day 12, representing 0.01% of the sequences on days 1 and 2 and 0.04% on day 5. Pseudomonadales represented 35%, 40%, and 24% of the reads on the same respective days (Figure ). After day 5, the abundance of Enterobacterales decreased, representing only 3% of the reads on day 26; nevertheless, Shigella-Escherichia was still detected until day 12, representing 0.003% of the sequences.
Pseudomonadales remained relatively constant, representing between 24% and 39% of the reads (Figure ). During the initial days (1 and 2), the most prevalent genera were Thalassotalea (Enterobacterales) (26%–16%) and Aestuariicella (Alteromonadales) (25%–27%). By day 5, although both genera remained dominant, overall diversity had increased, with a greater presence of the order Flavobacteriales (phylum Bacteroidota) (Figure ). On day 12, Thalassotalea and Aestuariicella decreased to 0.6% and 3% of the reads, respectively, while the genus Methylophaga (order Nitrosococcales) (16% of the reads), Marinobacter (order Alteromonadales) (12%), known for its involvement in hydrocarbon degradation, and a genus of the order Flavobacteriales (16%) increased in abundance. Similar main genera were detected on day 19, with the addition of Alcanivorax (11%). By day 26, a higher diversity was observed, with Marinobacter being the most represented genus; additionally, an increased presence of the phylum Planctomycetota was detected. For a more detailed taxonomy of all samples at different hierarchical levels, see the Krona diagrams in Figure . We compared the ASVs shared between water and pellets, separating early and late biofilms, as hierarchical clustering defined a clear separation between them (Figure ). In MC2, we identified 244 ASVs that were shared between the water and the early biofilm pellets (T1, T2 and T5), while 336 ASVs were detected only in water and 37 ASVs were detected exclusively in plastic pellets at all three sampling times (Figure ). Notably, pellets collected at T5 shared a higher number of ASVs with the water microbial community than pellets collected at T1 and T2 (105, 32 and 12 ASVs, respectively). Within MC3, we found 104 ASVs shared between water and early biofilm pellets, with 325 ASVs detected exclusively in water and 43 ASVs detected exclusively in pellets at the three sampling times, without being detected in water (Figure ). Regarding the late biofilm in MC2, we observed 181 ASVs shared between the pellets collected at T12, T19 and T26 and the water sample from T26; additionally, 414 ASVs were detected exclusively in water, while 33 ASVs were detected exclusively in pellets across the three sampling times (Figure ). In MC3, we found 135 ASVs shared between the three sampling times and the water, with 74 ASVs exclusively present in water and 72 ASVs exclusively detected in pellets across all three sampling times (Figure ).
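The shared and exclusive ASV counts reported above can be derived with simple set operations on the ASV lists of each sample; the sketch below uses toy ASV identifiers purely for illustration.

```python
# Minimal sketch (toy ASV identifiers): shared vs exclusive ASVs between the
# water community and pellet communities from three sampling times.
water = {"ASV1", "ASV2", "ASV3", "ASV7"}
pellets_t1 = {"ASV1", "ASV4"}
pellets_t2 = {"ASV1", "ASV2", "ASV4"}
pellets_t5 = {"ASV2", "ASV4", "ASV5"}

pellets_any = pellets_t1 | pellets_t2 | pellets_t5
shared = water & pellets_any                    # detected in water and on pellets
water_only = water - pellets_any                # exclusive to the water community
pellets_all_times_only = (pellets_t1 & pellets_t2 & pellets_t5) - water
print(len(shared), len(water_only), len(pellets_all_times_only))   # 2 2 1
```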
However, the abundance of culturable marine bacteria on marine agar showed a decrease ranging from 0.9 to 2.5 logs, reaching abundances of 5.0 × 10 4 (±7.3 × 10 4 ) cfu mL −1 depending on the microcosms (Figure ). Characterisation of the plastisphere The abundance of culturable marine bacteria in the plastic biofilm was measured by plating on marine agar after sonicating the biofilm. Within just 24 h, we detected 2.6 × 10 5 (±2.7 × 10 5 ) cfu per pellet, which corresponds to 3.3 × 10 3 (±3.6 × 10 3 ) cfu mm −2 corresponding to the average surface area of the plastic pellet (77 mm 2 ). This concentration remained relatively stable throughout the experiment, with a slight decrease in 3 out of the 4 microcosms (Figure ). Quantification of the 16S rRNA gene by qPCR showed a substantial increase within 24 h, with 1.4 × 10 7 (±1.1 × 10 7 ) gc per pellet, corresponding to 1.8 × 10 5 (±1.5 × 10 5 ) gc mm −2 . This concentration increased slightly further to 6.1 × 10 7 (±6.2 × 10 7 ) gc per pellet after 26 days (Figure ). Scanning electronic microscopy images revealed the early colonisation observing single bacteria, predominantly coccobacillus, within the first 24 h (Figure ). Some cells exhibited signs of division, indicating active growth, and the production of extracellular polymers for substrate attachment. On day 2, a higher density of bacteria with similar characteristics confirmed the colonisation of the plastic pellets. From day 5 to day 26, an increased presence of the exopolysaccharide matrix covering significant areas of the pellets was observed, along with the appearance of filamentous bacteria, clusters of cells characteristic of mature biofilms, and protists displaying morphological similarities to Choanozoa and Ciliophora (Figure ). These protozoa are mainly bacterivores. The seawater samples used in the experiment had a total organic carbon concentration of 2.8 ± 1.12 ppm, inorganic carbon concentration of 26.28 ± 2.62 ppm, and total nitrogen concentration of 23 ± 0.35 ppm, which remained relatively stable throughout the experiment (Table ). The water temperature was maintained at 20 ± 2°C, with daily variations not exceeding 1°C. The salinity was around 38.5 ± 1 PSU, pH was 8.0 ± 0.1, and dissolved oxygen was 4.5 ± 0.3 mg/L. The initial bacterial population in seawater, as determined by the abundance of the 16S rRNA gene copies, was 6.3 × 10 6 (±4.1 × 10 6 ) gc mL −1 , while the culturable bacteria on marine agar were 3.6 × 10 5 (±3.7 × 10 5 ) cfu mL −1 (Table ), representing 6% of the 16S rRNA gene copies. The abundance of bacteria, as measured by qPCR, remained relatively stable throughout the experiment, with a final count of 5.9 × 10 6 (±4.2 × 10 6 ) gc of the 16S rRNA gene per ml at the end of the experiment. However, the abundance of culturable marine bacteria on marine agar showed a decrease ranging from 0.9 to 2.5 logs, reaching abundances of 5.0 × 10 4 (±7.3 × 10 4 ) cfu mL −1 depending on the microcosms (Figure ). The abundance of culturable marine bacteria in the plastic biofilm was measured by plating on marine agar after sonicating the biofilm. Within just 24 h, we detected 2.6 × 10 5 (±2.7 × 10 5 ) cfu per pellet, which corresponds to 3.3 × 10 3 (±3.6 × 10 3 ) cfu mm −2 corresponding to the average surface area of the plastic pellet (77 mm 2 ). This concentration remained relatively stable throughout the experiment, with a slight decrease in 3 out of the 4 microcosms (Figure ). 
E. coli

We spiked the seawater with a mixture of three E. coli strains at an initial concentration of approximately 3.4 × 10⁴ (±2.2 × 10⁴) cfu mL⁻¹, corresponding to 7.2 × 10⁴ (±6.0 × 10⁴) gc of the E. coli 16S rRNA gene per mL. The E. coli strains for MC1 and MC2 were isolated from sewage, while the strains for MC3 and MC4 were isolated from plastic biofilms collected in coastal waters (Liang et al., ).

Characterisation of the water

In seawater, culturable E. coli was detected for 5 or 12 days, depending on the microcosm (Figure ). The inactivation of culturable E. coli, measured as the time required to reduce the initial population by 1 logarithm (T₉₀), ranged from 1.2 to 3.6 days in seawater. However, E. coli DNA was detected until the last day of the experiment, with an average concentration of 2.1 (±2.0) gc of the 16S rRNA gene per mL, representing a reduction of approximately 4.0 to 5.3 logarithms over 26 days.

Characterisation of the plastisphere

In the plastic pellets, we detected culturable E. coli for 2–7 days, depending on the microcosm (2 days in MC4, 5 days in MC1 and MC2, and 7 days in MC3) (Table ). The highest abundance was typically observed after 2 days of incubation, with a mean of 4.3 × 10² (±7.6 × 10²) cfu per pellet (corresponding to 5.6 (±10.0) cfu mm⁻²), followed by a gradual decrease (Table , Figure ). Moreover, E. coli DNA was detected for 5 days in MC4, 12 days in MC3, 19 days in MC2 and throughout the 26-day experiment in MC1 (Table ). The highest abundance was reached after 5 days, with average levels of 3.3 × 10³ (±2.9 × 10³) gc per pellet (4.2 × 10¹ (±3.8 × 10¹) gc mm⁻²), showing a slight decrease thereafter (Figure , Table ). In MC1, the abundance decreased to 1.3 × 10² gc per pellet on the last day of the experiment.
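The T₉₀ defined above (the time needed for a 1-log₁₀ reduction of the culturable population) can be estimated from a count time series by log-linear regression. A minimal sketch follows; it is not the authors' code, and the example counts are hypothetical.

```python
# Minimal sketch, not the authors' pipeline: estimating T90 from a culturable-count
# time series by fitting log10(counts) against time. Example values are hypothetical.
import numpy as np

days = np.array([0, 1, 2, 5])
cfu_per_ml = np.array([3.4e4, 1.2e4, 3.5e3, 1.5e2])   # culturable E. coli in seawater

slope, intercept = np.polyfit(days, np.log10(cfu_per_ml), 1)
t90 = -1.0 / slope                                     # days per 1-log10 reduction

print(f"T90 = {t90:.1f} days")
```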
The microbial communities of MC2 and MC3 were analysed using high-throughput sequencing of the 16S rRNA gene. A total of 2,475,520 reads were obtained after denoising and quality filtering the raw sequencing data. In MC2, the number of amplicon sequence variants (ASVs) varied between time points, with 415 ASVs on day 1, 751 ASVs on day 5 and a decrease to 373 ASVs by day 26. In MC3, however, the number of unique microbial taxa (ASVs) increased from 185 ASVs on day 1 to 424 on day 26 (Table ). We focused our analysis on sequences affiliated with the domain Bacteria, as the detection of Archaea in the plastic biofilms was low (233 reads in MC2 and just 10 reads in MC3). Distinct bacterial community structures were observed between MC2 and MC3, as visualised by non-metric multidimensional scaling (nMDS) of beta-diversity (Bray-Curtis) coefficients (Figure ) and by hierarchical clustering analysis (Figure ). In addition, within each microcosm two clusters were clearly defined: (i) an initial biofilm cluster comprising the biofilm communities from days 1, 2 and 5 as well as the initial water sample, and (ii) a mature biofilm cluster consisting of the microbial communities from pellets collected on days 12, 19 and 26, along with the microbial community from water on day 26 (Figure ). This clustering pattern was consistent in both microcosms.

Taxonomic composition of the water

The microbial community composition of the water of MC2 and MC3 exhibited notable differences (Figure ). Both bacterial communities were predominantly composed of the phylum Proteobacteria, accounting for 85% and 86% of the total reads (Figure ). Within Proteobacteria in MC2, 41% of the reads belonged to the class Alphaproteobacteria, while 44% were classified as Gammaproteobacteria (Figure ). In MC3, by contrast, Gammaproteobacteria was the most abundant class within Proteobacteria, accounting for 80% of the reads, while Alphaproteobacteria represented only 6% of the reads (Figure ). In MC2 at the end of the experiment, the bacterial community had shifted and Proteobacteria accounted for 84% of the identified affiliations, with the genus Porticoccus (class Gammaproteobacteria) representing 50% of the reads. Remarkably, this genus represented only 0.08% of the reads at the beginning of the experiment. In fact, just 20% of the ASVs were shared between the water samples from T0 and T26 (Figure , ). Interestingly, the Escherichia-Shigella group, which accounted for 2% of the reads at day 0, was not detected after 26 days, although it could be detected using qPCR.
Within MC3, the Escherichia-Shigella group comprised 21% of the reads at the beginning of the experiment, followed by the genera Marinobacterium (12%) and Thalassotalea (11%) (Figure ). After 26 days, there was a notable shift in the microbial community of MC3. In fact, only about 8% of the ASVs were shared between the microbial communities of T0 and T26 (Figure ). It is worth noting that neither the phylum Campylobacterota nor the Escherichia-Shigella group was detected at the end of the experiment, indicating a significant change in their relative abundance over time.

Taxonomic composition of the plastisphere

In the plastic pellets from MC2, the Proteobacteria phylum accounted for 91% and 88% of the reads during the first 2 days (Figure ). During this period, the order Enterobacterales stood out, representing 59% and 44% of the reads. However, its abundance gradually declined, falling to 12% of the reads by day 5 and ultimately representing only 0.3% of the ASVs at T26 (Figure ). The primary genus identified was Alteromonas, which comprised 10% of the reads until day 5 but decreased to <1% on day 26. The Shigella-Escherichia group represented 0.06% and 0.02% of the sequences on days T1 and T2, respectively, and was not detected thereafter. On day 5, there was an increase in diversity at the phylum level, with Proteobacteria representing 69% of the reads, Bacteroidota and Planctomycetota accounting for 10% of the reads each, and Bdellovibrionota comprising 4% of the reads. The latter phylum consists of obligate predators that feed on bacteria. By day 12, the alpha diversity had decreased compared to day 5 (Table ). The main orders identified were Pseudomonadales (22%), Rhodobacterales (19%), Flavobacteriales (14%), and Caulobacterales (8%) (Figure ). The abundance of the order Pseudomonadales continued to increase, becoming the dominant order by day 26 and representing 77% of the reads. Within this order, the genus Halioxenophilus became the most abundant (56%), followed by an unclassified genus of the Songiibacteraceae family (14%). In the water microbial community, the Pseudomonadales order also dominated (61%), but the most relevant genus was Porticoccus. On the pellets of MC3, the Proteobacteria phylum accounted for 89% and 88% of the reads during the first 2 days. Reads belonging to the Enterobacterales and Pseudomonadales orders dominated during the first 5 days. Specifically, Enterobacterales accounted for 47%, 35%, and 25% of the reads on days 1, 2, and 5, respectively. The Shigella-Escherichia group was detected until day T12, representing 0.01% of the sequences on days T1 and T2 and 0.04% at T5. Pseudomonadales represented 35%, 40% and 24% of the reads on the same respective days (Figure ). After day 5, the abundance of Enterobacterales decreased, representing only 3% of the reads on day 26; however, Shigella-Escherichia was still detected until day T12, where it represented 0.003% of the sequences. Pseudomonadales remained relatively constant, representing between 24% and 39% of the reads (Figure ). During the initial days (1 and 2), the most prevalent genera were Thalassotalea (Enterobacterales) (26%–16%) and Aestuariicella (Alteromonadales) (25%–27%). By day 5, although both genera still dominated, overall diversity increased, with a greater presence of the order Flavobacteriales (phylum Bacteroidota) (Figure ). On day 12, Thalassotalea and Aestuariicella decreased to 0.6% and 3% of the reads, respectively.
The genera Methylophaga (order Nitrosococcales; 16% of the reads) and Marinobacter (order Alteromonadales; 12%; known for its involvement in hydrocarbon degradation), together with a genus from the order Flavobacteriales (16%), increased in abundance. Similar main genera were detected on day 19, with the addition of Alcanivorax (11%). By day 26, a higher diversity was observed, with Marinobacter being the most represented genus. Additionally, an increase in the presence of the Planctomycetota phylum was detected. For a more detailed taxonomy of all samples at different hierarchical levels, see the Krona diagrams in Figure . We compared the ASVs shared between water and pellets, separating early and late biofilm stages, as hierarchical clustering defined a clear separation between them (Figure ). In MC2, we identified 244 ASVs shared between water and early biofilm pellets (T1, T2 and T5), while 336 ASVs were detected only in water and 37 ASVs were detected exclusively in plastic pellets at all three sampling times (Figure ). Notably, pellets collected at T5 exhibited a higher number of ASVs shared with the water microbial community than pellets collected at T1 and T2 (105, 32 and 12 ASVs at T5, T1 and T2, respectively). Within MC3, we found 104 ASVs shared between water and early biofilm pellets, with 325 ASVs detected exclusively in water and 43 ASVs detected exclusively in pellets at the three sampling times without being detected in water (Figure ). Regarding the late biofilm in MC2, we observed 181 ASVs shared between pellets collected at T12, T19 and T26 and the water sample from T26. Additionally, 414 ASVs were detected exclusively in water, while 33 ASVs were detected exclusively in pellets across the three sampling times (Figure ). In MC3, we found 135 ASVs shared between the three sampling times and water, with 74 ASVs exclusively present in water and 72 ASVs exclusively detected in pellets across all three sampling times (Figure ).
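The shared/exclusive ASV tallies above are essentially set operations on per-sample ASV inventories. The sketch below illustrates the idea with placeholder identifiers; it follows the grouping described in the text only approximately and is not the authors' code.

```python
# Minimal sketch of a Venn-style ASV breakdown; ASV identifiers are placeholders.
water_asvs = {"ASV001", "ASV002", "ASV003", "ASV004"}
pellet_asvs_by_day = {
    "T1": {"ASV001", "ASV010"},
    "T2": {"ASV001", "ASV002", "ASV010"},
    "T5": {"ASV001", "ASV002", "ASV003", "ASV011"},
}

pellet_union = set.union(*pellet_asvs_by_day.values())
pellet_core = set.intersection(*pellet_asvs_by_day.values())   # present at all three times

shared = water_asvs & pellet_union            # detected in water and in early-biofilm pellets
water_only = water_asvs - pellet_union        # detected only in water
pellet_only = pellet_core - water_asvs        # in pellets at all three times, never in water

print(len(shared), len(water_only), len(pellet_only))
```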
The increase in plastic debris in coastal waters, resulting from human mismanagement, has led to the formation of persistent floating surfaces that are quickly colonised by microorganisms. One of the risks associated with the plastisphere is the potential for carrying microbial pathogens. Potential pathogens, such as members of the Campylobacteraceae and Enterobacteriaceae, Mycobacterium sp., Pseudomonas, or Vibrio, have already been identified in the plastisphere using high-throughput sequencing (Jiang et al., ; Li et al., ; McCormick et al., ; Wu et al., ; Zettler et al., ). Other studies have used culture methods, specifically targeting Vibrio spp. and Enterobacteria, to detect 'active' bacteria and facilitate further strain identification (Kirstein et al., ; Liang et al., ; Silva et al., ). In this study, we investigated the colonisation of plastic pellets by marine bacteria and by environmental strains of E. coli, used as a proxy for faecal pathogens, as commonly employed in water quality management. The experiments were conducted in seawater aquaria under stable conditions. The plastic pellets were collected from a polluted beach and consisted of a mixture of 80% PE and 20% PP, with an average surface area of 77 mm². The E. coli strains were obtained from raw sewage and from plastic samples from coastal waters. We observed that seawater bacteria rapidly colonised the plastic pellets within 24 h, dividing and generating exopolysaccharide substances while new bacteria attached to the pellets. Within 2 days, bacterial populations reached densities ranging from 4.5 × 10⁴ to 6.8 × 10⁵ gc of the 16S rRNA gene mm⁻² and remained stable over the course of 26 days, reaching values of 2.5 × 10⁵–5.0 × 10⁵ gc of the 16S rRNA gene mm⁻². The highest density observed was 2.0 × 10⁶ gc mm⁻², equivalent to 1.5 × 10⁸ gc per pellet. These densities were similar to those found in biofilms on coastal plastics, which ranged from 1.5 × 10⁵ to 8.7 × 10⁶ gc of the 16S rRNA gene mm⁻² (Liang et al., ) and between 1.1 × 10³ and 1.9 × 10⁵ cells mm⁻² (Dussud et al., ) (one cell can be assumed to correspond to about 8 gc, based on the mean number of 16S rRNA gene copies per cell). Similar abundances were also observed in colonisation experiments, which detected around 10⁴–10⁵ cells mm⁻² (Odobel et al., ; Schlundt et al., ). The surrounding aquarium water exhibited a concentration of 6.0 × 10⁶ gc mL⁻¹, indicating that the surface of one plastic pellet carried 25 times the bacteria found in 1 mL of water.
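As a back-of-the-envelope check of the figures above (the values and the 8 gc per cell conversion factor are taken from the text; the snippet itself is only illustrative):

```python
# Back-of-the-envelope check of the density figures discussed above.
GC_PER_CELL = 8.0             # assumed mean 16S rRNA gene copies per cell, as cited in the text
PELLET_SURFACE_MM2 = 77.0     # average pellet surface area

peak_density_gc_mm2 = 2.0e6   # highest observed biofilm density
water_gc_per_ml = 6.0e6       # 16S rRNA gene copies per mL of surrounding water

gc_per_pellet = peak_density_gc_mm2 * PELLET_SURFACE_MM2   # ~1.5e8 gc per pellet
cells_per_mm2 = peak_density_gc_mm2 / GC_PER_CELL          # ~2.5e5 cells mm-2
pellet_vs_water = gc_per_pellet / water_gc_per_ml          # ~25x the content of 1 mL of water

print(f"{gc_per_pellet:.1e} gc per pellet, ~{pellet_vs_water:.0f}x the bacteria in 1 mL of water")
```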
E. coli attached to plastic pellets within the first day of incubation, and its presence was confirmed by culture for a period of 2 to 7 days. The density of attached culturable E. coli varied among the microcosms, with maximum concentrations ranging from 2 cfu per pellet to 1.6 × 10³ cfu per pellet. E. coli thus attached to plastic pellets and remained 'active' for at least 7 days. The observed variations in E. coli attachment and persistence could not be attributed solely to the source of the strains, since differences were observed among microcosms even when the strains originated from the same source. This means that the strains used did not show a different capability of forming biofilms on plastics under these conditions. Moreover, environmental factors cannot account for these variations, as they remained consistent across all four microcosms. E. coli could also be detected by qPCR for 5–26 days, depending on the microcosm. However, qPCR results capture not only viable cells but also cells in a viable-but-non-culturable state and cells that are already dead. The initial concentration of E. coli in the water aquaria ranged from 1.1 × 10⁴ to 6.4 × 10⁴ cfu mL⁻¹; these values are similar to those found in poorly treated sewage effluent (Carrey et al., ). The inactivation of culturable E. coli in the water, measured through the T₉₀, was within 1.2 to 3.6 days, slightly higher than, but still consistent with, observations from other experiments ranging from 0.1 to 2.9 days (Jeanneau et al., ; Sagarduy et al., ). Although E. coli could be detected in the water by culture for 5 to 12 days, the rate of inactivation was faster than in the biofilm. Additionally, on the final day on which E. coli was detected on plastic pellets by qPCR, the concentration on the pellets was higher than in the water, indicating a slower decline in biofilms compared to the water. The decrease in E. coli abundance in water is expected, since faecal bacteria are adapted to persist in digestive tracts with stable conditions of temperature, light, pH and redox, as well as a high concentration of nutrients. Therefore, when gut-adapted bacteria encounter a harsh environment like seawater, they do not persist for long. These findings highlight that plastic biofilms can act as a protective environment for faecal bacteria like E. coli (Rodrigues et al., ), although their dynamics may depend on the biofilm bacterial community or on stochastic factors. For example, in one incubation experiment, researchers detected the attachment of E. coli to wood particles, but not to high-density PE or tyre wear particles (Song et al., ). Despite the growing number of studies focusing on the plastisphere, there is a disparity in the findings among them.
For instance, some studies have identified variations between colonised surfaces, while others have not (Pinto et al., ; Wright et al., ), and certain studies have reported a higher diversity of microbial communities on plastic than in water, while others describe a less diverse community (Pinto et al., ; Wright et al., ). Explaining these differences is challenging because of the multifactorial parameters that influence each colonisation process. These parameters encompass both deterministic and stochastic processes (Niederdorfer et al., ), further complicating the interpretation of the results. Thus, the variability observed in the plastisphere can be attributed to various factors, from experimental design to geographic, temporal, substrate and environmental differences (Amaral-Zettler et al., ; Wright et al., ). These include dynamic environmental conditions such as salinity, temperature, light and turbidity, as well as the diversity of the initial microbial community present in the water. Interactions between early colonisers, the presence of grazers, the experimental methodology, and characteristics of the substrate such as polymer type, substrate weathering and plastic additives have also been reported to contribute to this variability (Wright et al., ). In this experiment, the environmental conditions, substrate and experimental methodology were the same; therefore, the observed differences may be explained by the autochthonous water communities that colonise the biofilm or by stochastic factors, which may also influence the attachment and evolution of E. coli in plastic biofilms. Moreover, although a biofilm acts as a protective environment that can shelter bacteria, including faecal bacteria, for longer, it is also a full ecosystem with different communities interacting and trying to survive. Therefore, each biofilm may evolve differently depending on the adsorption of biomolecules during biofilm conditioning (Bhagwat et al., ), the autochthonous bacterial community of the water, environmental conditions, early bacterial interactions, or simply stochastic events. E. coli becomes part of the plastisphere, but its concentration decreases over time; however, the persistence of E. coli DNA on plastic pellets for a more extended period than in seawater implies that plastic surfaces might serve as a reservoir or provide a substrate for the retention of microbial genetic material. When comparing the plastisphere of the two microcosms, notable differences were present in the microbial communities, which can likely be attributed to variations in the microbial community of the surrounding water, which seeds the colonisation process. These bacterial communities differed from those observed in environmental plastics by Liang et al. . However, the bacteria that attached to the pellets and generated the biofilm were less abundant in the water. Bacteria that prefer a biofilm state over a planktonic state are probably positively selected when they find a surface to which they can attach and thrive. Furthermore, a clear distinction in the composition of the biofilm community was observed between the first 5 days of incubation and after 12 days. Other studies have also observed differences in microbial communities between early (<7 days of incubation) and late (>7 days of incubation) colonisation experiments (Wright et al., ). Typically, Proteobacteria dominate the earlier time points, whereas Bacteroidetes increase at late time points (Wright et al., ).
Our results confirm that the microbial community of the plastisphere is primarily influenced by the community of the surrounding environment, although it evolves over time, with two clearly distinct colonisation stages. In general, the plastisphere is characterised by the dominance of the Proteobacteria phylum, followed by Bacteroidetes and Planctomycetes (De Tender et al., ; Oberbeckmann et al., ; Wu et al., ). Additionally, bacteria known for their potential to degrade plastics or hydrocarbons, such as Alcanivorax sp., Aestuariicella sp., Marinobacter sp., and Alteromonas sp., have been commonly identified on plastics (Dussud et al., ; Wright et al., ). We also observed the presence of these bacteria, primarily in MC3. In our experiment, E. coli attachment and inactivation were not only measured by culture and by qPCR; the Escherichia-Shigella group could also be detected by high-throughput sequencing (HTS) for 2 or 12 days, a window similar to that observed by culture and shorter than that observed by qPCR. The difference in detection between qPCR and HTS can be explained by the fact that the specific primers used in the E. coli 16S rRNA gene qPCR select the E. coli 16S rRNA gene among all others, whereas in HTS the amplification of the 16S rRNA gene of the highly abundant bacterial community of the plastisphere masks the low concentration of E. coli sequences. This finding helps corroborate that if E. coli is found on plastics by HTS, it is probably in a culturable state and so may still be a health concern. Our study highlights the potential role of biofilms as reservoirs for E. coli and indicates that biofilms may contribute to the persistence and survival of faecal bacteria in aquatic systems. However, further research is necessary to fully understand the mechanisms and implications of the differences in E. coli dynamics (attachment and inactivation) in biofilm environments, and of other potential pathogens, beyond the structure of the developing biofilm microbial communities. Plastic debris in coastal waters serves as a persistent floating surface that quickly becomes colonised by marine bacteria, forming a biofilm, which was already detected 24 h after the introduction of the plastic pellets into the seawater. Faecal bacteria, specifically E. coli, were found to attach to and persist within the plastic biofilms for 2 to 7 days in a cultivable state. This suggests that plastic biofilms may facilitate the survival and transport of faecal bacteria in aquatic environments. E. coli was detected over a similar period by high-throughput sequencing but for much longer by qPCR, which detects DNA from cells that are probably dead or viable but not cultivable. Although E. coli attached to pellets in all the microcosms, different concentrations and patterns were observed among them, showing that the colonisation and evolution of the biofilm may depend on the overall bacterial community or on stochastic factors. The composition of the microbial communities within the biofilms was primarily influenced by the surrounding environment. Temporal shifts were observed within the first 5 days of incubation and after 12 days, indicating changes in community structure over time. Bacteria known for their plastic- or hydrocarbon-degrading potential, such as Alcanivorax sp., Aestuariicella sp., Marinobacter sp., and Alteromonas sp., were commonly found.
Bacterivorous species, such as those from the Bdellovibrionota phylum, were detected, and protozoa such as Choanozoa and Ciliophora appeared mainly in the late stages of biofilm formation, limiting biofilm growth. Therefore, the observed differences in E. coli colonisation may be explained by the autochthonous communities that colonise the biofilm or by stochastic factors. Further research is needed to develop comprehensive models of E. coli colonisation and persistence on plastics. The results of this study have implications for environmental monitoring, risk assessment, and the development of mitigation strategies to address the growing problem of plastic pollution in our oceans.

Elisenda Ballesté: Conceptualization (lead); formal analysis (lead); funding acquisition (lead); investigation (lead); methodology (lead); project administration (lead); supervision (lead); writing – original draft (lead). Hongxia Liang: Formal analysis (equal); investigation (equal); methodology (equal). Laura Migliorato: Data curation (equal); methodology (equal). Laura Sala-Comorera: Data curation (equal); writing – review and editing (supporting). Javier Méndez: Software (equal); validation (equal); visualization (equal); writing – review and editing (equal). Cristina Garcia-Aljaro: Investigation (equal); methodology (equal); resources (equal); writing – review and editing (equal).

The authors declare no conflicts of interest.

Table S1. Physicochemical and bacterial characteristics of the seawater of the different microcosms (MC1, MC2, MC3, MC4).
Table S2. Information on oligonucleotide primers and probes of molecular markers used in real-time quantitative PCR.
Table S3. Performance characteristics for all qPCR assays.
Table S4. Total ASVs and alpha-diversity (Chao and Shannon indices) of the microbial communities of the plastic pellets at different times (from T1 to T26) and of water at T0 and T26 in microcosms 2 and 3.
Figure S1. Representation of the experiment developed in this study to evaluate the colonisation of plastic pellets. MC, microcosms.
Figure S2. Hierarchical clustering analysis using Euclidean distance, separating both microcosms (MC2 and MC3) and early (T1, T2, T5) and late biofilm (T12, T19, T26).
Figure S3. Taxonomic affiliation of ASVs considering Phylum (A), Class (B), Order (C), Family (D) and Genus (E) in pellets (MP) collected at different times (T1, T2, T5, T12, T19 and T26) in both microcosms (MC2 and MC3) and water.
Figure S4. Venn diagrams showing the distribution and sharing of the different ASVs between water and pellets at different times, considering an early biofilm (T1, T2, T5) and a late biofilm (T12, T19 and T26) in both microcosms (MC2 and MC3).
Figure S5. Krona plots of the relative-abundance reads of bacteria detected by 16S metabarcoding at all sampling times (T0, T1, T2, T5, T12, T19, T26) in water samples and plastic pellets (MP) from microcosms 2 (MC2) and 3 (MC3). Taxonomic profiles are simultaneously displayed by hierarchy levels from kingdom to genus by selecting taxonomic depths: 1: Kingdom 2: Phylum 3: Class 4: Order 5: Family 6: Genus.
Multi-omics dissection of metabolic dysregulation associated with immune recovery in people living with HIV-1
e5ca1bb3-6129-4223-9272-48fb127faf50
11786453
Biochemistry[mh]
Human immunodeficiency virus (HIV) infection remains a major global health issue, with almost 39 million people worldwide currently living with HIV. Antiretroviral therapy (ART) effectively controls HIV-1 viral replication and restores immune function, markedly improving the life expectancy of people living with HIV-1 (PLWH) . However, approximately 15% to 30% of PLWH fail to achieve the recovery of CD4 T cell counts despite successful suppression of HIV-1 replication . Termed immunological non-responders (INRs), these individuals experience higher rates of acquired immunodeficiency syndrome (AIDS) and non-AIDS events than those who achieve optimal CD4 T cell recovery . HIV-1 infection leads not only to alterations in T cell subsets but also to systemic immune activation and inflammation. Our previous study reported decreased frequencies of CD4 T N and CD8 T N cells, as well as increased frequencies of CD4 T EM and CD8 T EM cells, in PLWH compared to healthy controls (HCs) . In addition, increased CD8 T cell activation and elevated levels of inflammation-related markers, including CXCL10, IL6, CRP, sCD14, and sCD163, were observed in PLWH . Interestingly, growing evidence indicates that dysregulated metabolism plays a critical role in the pathogenesis of HIV-1 . For example, CD4 T cells with higher rates of glycolysis, glutaminolysis, and oxidative phosphorylation (OXPHOS) are more susceptible to infection by HIV-1 . HIV-1-specific CD8 T cells from spontaneous HIV-1 controllers exhibit enhanced survival and anti-HIV-1 effector function, attributed to these cells having diverse metabolic resources, unlike those from ART-treated non-controllers, which were largely glucose-dependent . The loss of Th17 cells and increase in Treg cells were associated with disease progression in PLWH, and a perturbed Th17/Treg balance was associated with indoleamine 2,3-dioxygenase-mediated tryptophan metabolism . Additionally, an increased kynurenine-to-tryptophan ratio was associated with higher plasma lipopolysaccharide levels in PLWH . These findings highlight that systemic metabolism is tightly involved in HIV-induced immune response and inflammation. A previous metabolomics study on plasma of PLWH on successful long-term ART uncovered perturbed amino acid metabolism and elevated levels of phospholipids and ceramides (Cer) compared to HCs . A comprehensive metabolic characterization of the systemic effect of ART treatment, as well as of the metabolic pathways underpinning the differential outcome of immunological responders (IRs) relative to INRs, however, has so far been lacking. Exploring these treatment-related metabolic differences can potentially unveil novel intervention targets to enhance immunological recovery in PLWH and facilitate patient management according to treatment response. In this study, we conducted quantitative lipidomics and metabolomics on plasma of PLWH before ART treatment (treatment-naïve PLWH, TNs), of PLWH after ART treatment categorized by treatment outcome (IRs and INRs), and of HCs. We provide a systematic overview of the differential metabolites associated with disease pathology and ART treatment, and we examined the correlations between distinct metabolites and various immunological and inflammation-related indices to decipher the role of systemic metabolites in HIV-1 pathogenesis. Importantly, we present a metabolite panel that effectively reflects the extent of immune recovery in PLWH on ART.
Our work leverages a metabolomics-driven approach to underscore key metabolic aberrations during HIV-1 infection, potentially offering new treatment strategies to improve immunological recovery, particularly in INRs, by targeting specific metabolic pathways.

Study participants

The study cohort comprised 108 participants, including 24 TNs and 68 PLWH on ART (33 IRs and 35 INRs). IRs were defined as having CD4 T cell counts > 350 cells/μl, and INRs as having CD4 T cell counts ≤ 350 cells/μl, after having received ART for more than 2 years with a plasma HIV-1 viral load of < 20 copies/ml. Exclusion criteria included pregnancy, any glucose, kidney, or liver function abnormalities, or co-infection with hepatitis, tuberculosis, syphilis, or other infectious diseases. Additionally, 16 HIV-negative HCs were enrolled. All blood samples used in this study were collected after an overnight fast. Participant characteristics are presented in Table .

Precise metabolomics

Metabolites were extracted, separated, and detected as previously described . Data analysis was performed using MarkerView and PeakView software as previously described . Detailed methods and instrument parameters are available in Supplementary Material 12: Supplementary Methods. A repertoire of 512 metabolites was characterized and quantitated using isotope-labeled internal standards (Supplementary Material 1: Figure S1). To ensure analytical precision, a quality control sample was injected after every ten experimental samples. Principal component analysis (PCA) showed that the quality control samples clustered together and that their metabolomes were highly correlated, with Spearman correlation coefficients (r) ranging from 0.99 to 1 (Supplementary Material 2: Figure S2), validating the high stability and reproducibility of the metabolomic data.

Targeted lipidomics

Lipids were extracted from plasma using a modified Bligh and Dyer method and analyzed by LC–MS/MS as previously described . Polar lipids were separated using normal-phase high-performance liquid chromatography and analyzed using multiple reaction monitoring. Quantitation was performed using stable isotope dilution. Detailed methods and instrument parameters are provided in Supplementary Material 12: Supplementary Methods. A repertoire of 512 lipid species was confidently detected, and quality control analyses are provided (Supplementary Material 1: Figure S1–Supplementary Material 2: Figure S2).

Plasma inflammatory markers

Inflammation-related proteins were measured using proximity extension assay (PEA) technology (Olink Bioscience AB, Uppsala, Sweden) as described in our previous study . Key inflammation-related proteins closely related to HIV-1 infection, including CSF-1, CXCL11, CCL23, IL-15RA, IL-12B, CCL19, TNFRSF9, IL10, CXCL9, TNF, CXCL10, SLAMF1, IL-18R1, CST5, CDCP1, and IL18, were screened based on our study and used for subsequent correlation analyses between distinct metabolites and inflammation-related indices.

Flow cytometry

Peripheral blood mononuclear cells (PBMCs) were stained with the following human antibodies: CD3-BV650 (clone OKT3), HLA-DR-BV605 (clone L243), CD38-PE-Cy7 (clone HB-7), and PD-1-BV510 (clone EH12.2H7), purchased from BioLegend (San Diego, California, USA), and CD8-BUV737 (clone SK1), purchased from BD Biosciences (San Jose, California, USA). Events were detected on a BD FACSymphony A5 flow cytometer (BD Biosciences) and the data were analyzed using FlowJo software V10 (Tree Star, Ashland, OR, USA).
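Relating to the quality-control procedure described under Precise metabolomics above, a minimal sketch of how such a check could be run is shown below (simulated data, not the authors' pipeline): pooled QC injections should form a tight PCA cluster and be highly rank-correlated with one another.

```python
# Illustrative QC check on simulated data: repeated QC injections should cluster
# in PCA space and show high Spearman correlation with one another.
import numpy as np
from scipy.stats import spearmanr
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
base = rng.lognormal(mean=1.0, sigma=1.0, size=512)          # metabolite-specific base levels
study = base * rng.lognormal(0.0, 0.6, size=(100, 512))      # biological samples
qc = base * rng.lognormal(0.0, 0.05, size=(10, 512))         # repeat QC injections

X = np.log2(np.vstack([study, qc]))
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
qc_scores = scores[-10:]                                     # should form a tight cluster

rho, _ = spearmanr(qc[0], qc[1])
print(f"PC1 spread of QCs: {qc_scores[:, 0].std():.2f}; Spearman r between two QCs: {rho:.3f}")
```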
Bioinformatics and statistical analyses

Statistical analyses of clinical, metabolomic, and lipidomic data were conducted to identify significant differences among groups and their associations with clinical parameters. Key statistical approaches included the Chi-squared test and nonparametric tests (Mann–Whitney and Kruskal–Wallis tests). Differential metabolites and lipids were analyzed using established R packages, with adjustments for covariates such as age, sex, and BMI. Pathway enrichment and correlation analyses were performed to identify biologically significant changes and their associations with clinical parameters. Details of all statistical software and packages are provided in the supplementary methods.
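The differential analyses were performed with established R packages; purely as an illustration of the covariate adjustment described above, a Python analogue for a single metabolite might look like the sketch below (all data and column names are simulated placeholders, not the study dataset).

```python
# Illustrative covariate-adjusted test for one metabolite: group effect on log2
# abundance, adjusting for age, sex and BMI. Simulated data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 60
df = pd.DataFrame({
    "group": np.repeat(["HC", "TN"], n // 2),
    "age": rng.normal(40, 10, n),
    "sex": rng.choice(["F", "M"], n),
    "bmi": rng.normal(23, 3, n),
})
# Simulated metabolite: higher in TNs, with a small age effect
df["log2_abundance"] = (rng.normal(10, 1, n)
                        + 0.8 * (df["group"] == "TN")
                        + 0.02 * df["age"])

fit = smf.ols("log2_abundance ~ C(group) + age + C(sex) + bmi", data=df).fit()
print(fit.params["C(group)[T.TN]"], fit.pvalues["C(group)[T.TN]"])
```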
Clinical characteristics and immunological profiles

Participant characteristics are presented in Table . No statistical differences were observed in sex or BMI among HCs, TNs, IRs, and INRs. Furthermore, there were no statistical differences in the regimen or duration of ART between IRs and INRs. Consistent with previous studies , INRs were older and exhibited lower CD4 T cell counts and a lower CD4/CD8 ratio than IRs. The frequency of CD8 T N cells was decreased in INRs compared with IRs, while the frequencies of CD4 T EM cells and CD8 T EM cells were increased. Frequencies of HLA-DR⁺CD38⁺ CD8 T cells and PD-1⁺ CD8 T cells were significantly increased in TNs compared with HCs. Even in IRs, CD8 T cell counts remained significantly higher than those in HCs. Hence, ART failed to fully restore normal immunological profiles in PLWH, particularly in INRs, which necessitates new intervention targets to attain better immunological restoration.

Metabolic dysregulation in treatment-naïve people living with HIV-1

Plasma levels of 161 polar metabolites (P < 0.05) and 148 lipids (P < 0.05) were significantly altered between TNs and HCs (Supplementary Material 8: Table S1). Among these differential metabolites, 103 were increased while 58 were decreased. As for the differential lipids, 102 species were elevated while 46 species were reduced. Heatmaps illustrate the top 50 differential polar metabolites (Fig. A) and lipids (Fig. B) between TNs and HCs. The differential polar metabolites were predominantly fatty acyls, carboxylic acids and derivatives, organooxygen compounds, hydroxy acids and derivatives, as well as indoles and derivatives (Fig. ). Systemic lipid aberrations were evident, with significant increases in TAG, acylcarnitine, Cer, as well as free fatty acid (FFA) species (Fig. , Supplementary Material 4: Figure S4). PLS-DA revealed significant segregation among HCs, TNs, and INRs for both the plasma metabolome and lipidome, while the separation between INRs and IRs was less apparent (Supplementary Material 3: Figure S3). The differential polar metabolites and lipids across distinct pairwise comparisons of groups are presented in UpSet plots and volcano plots (Supplementary Material 3: Figure S3–Supplementary Material 4: Figure S4). Accordingly, the smallest set of differential metabolites was observed between INRs and IRs.
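For readers unfamiliar with the PLS-DA ordination mentioned above, a minimal sketch of the general idea (PLS regression against one-hot group labels, here via scikit-learn on simulated data) is shown below; it is not the authors' implementation.

```python
# Minimal PLS-DA-style sketch on simulated data: fit PLS against one-hot group
# labels and use the latent components as a score plot, analogous to the ordination.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
groups = np.repeat(["HC", "TN", "IR", "INR"], 25)
X = rng.normal(size=(100, 512))                      # samples x metabolites
X[groups == "TN", :50] += 1.5                        # make TNs separable on some features

Y = np.column_stack([(groups == g).astype(float) for g in ["HC", "TN", "IR", "INR"]])
Xs = StandardScaler().fit_transform(X)
pls = PLSRegression(n_components=2).fit(Xs, Y)
scores = pls.transform(Xs)                           # (100, 2) components for the score plot
print(scores.shape)
```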
We then evaluated the correlations between the top 50 differential polar metabolites (Fig. A) and lipids (Fig. B) and the immunological and inflammation-related indices using Spearman correlation. We observed that aspartyl-threonine, ribothymidine, and thymine were positively correlated with CD4 T cell counts, while 3-hydroxyoctanoic acid, hydroxyisocaproic acid, and pseudouridine were negatively correlated. Notably, thymine was also negatively correlated with HIV-1 viral load. L-Kynurenine, which was significantly elevated in TNs relative to controls (Supplementary Material 8: Table S1), was positively correlated with several inflammation-related markers, while tryptophan, the metabolic precursor of L-kynurenine, was appreciably reduced in TNs relative to HCs (Supplementary Material 8: Table S1). For plasma lipids, 14:0-carnitine, 14:1-carnitine, Cer d18:0/16:0, and Cer d18:1/24:1 were positively correlated with HIV-1 viral load. With regard to CD8 T cell activation, 3-hydroxyoctanoic acid, 3-oxododecanoic acid, 5-hydroxy-L-tryptophan, 5-hydroxyindoleacetic acid, oleoylcarnitine, and pseudouridine were positively correlated, while phosphorylcholine, ribothymidine, and thymine were negatively correlated. Metabolites correlated with CD8 T cell activation were also associated with the levels of inflammation-related markers in the same direction. Interestingly, 3-oxododecanoic acid and 5-hydroxy-L-tryptophan were negatively correlated with the frequency of CD8 T N cells, while thymine was positively correlated. As for lipids, Cer d18:0/16:0, Cer d18:0/24:1, Cer d18:1/24:1, and GM3 d18:0/16:0 were positively correlated with CD8 T cell activation and markers of inflammation, while Cer d18:0/24:1 was also negatively correlated with the frequency of CD8 T N cells. The bubble plot depicts the fold enrichment and P-values of the top 20 significantly altered pathways between TNs and HCs. Biosynthesis of unsaturated fatty acids emerged as the top altered pathway, ascribed to increased levels of alpha-linolenic acid, linoleic acid, oleic acid, arachidonic acid, and docosahexaenoic acid (DHA) in TNs relative to HCs (Fig. A). To investigate perturbed metabolic coregulation in TNs, we leveraged the R package MEGENA to construct networks of differentially correlated metabolite and lipid pairs identified by DGCA. Significant correlations (P < 0.05) were visualized, revealing key hubs within the global network. Two distinct modules, with decadienedioic acid and Cer d18:0/24:1 as central hubs, were boxed and magnified for visualization (Supplementary Material 5: Figure S5). Module I comprised decadienedioic acid connected to several acylcarnitines, lyso-phosphatidylcholines (LPC) and lyso-phosphatidylethanolamines (lyso-PEs) by teal lines (+/0), indicating positive coregulation between these lipids and decadienedioic acid in HCs that was lost in TNs. Module II illustrates positive coregulation between Cer d18:0/24:1 and numerous odd-chain TAGs in HCs that was lost in TNs, i.e., teal lines (+/0). DGCA analysis thus underscores the complexity of the metabolic and lipid alterations in TNs, offering insights into pathologically relevant changes and the interconnectedness of metabolic pathways.
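As an illustration of the correlation screen described above, a sketch of Spearman correlations of each metabolite against one immune index with Benjamini-Hochberg adjustment is shown below (simulated data; not the authors' code).

```python
# Illustrative correlation screen on simulated data: Spearman rho of each
# metabolite against CD4 T cell counts, with BH-adjusted p-values.
import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(3)
n_samples, n_metabolites = 90, 50
metabolites = rng.lognormal(size=(n_samples, n_metabolites))
cd4_counts = rng.normal(500, 150, n_samples) + 60 * np.log(metabolites[:, 0])  # one true association

rho = np.empty(n_metabolites)
pvals = np.empty(n_metabolites)
for j in range(n_metabolites):
    rho[j], pvals[j] = spearmanr(metabolites[:, j], cd4_counts)

reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
top = np.argmax(np.abs(rho))
print(f"{reject.sum()} metabolites pass FDR < 0.05; strongest rho = {rho[top]:.2f}")
```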
Metabolic aberrations in people living with HIV-1 on ART

We next investigated plasma metabolome and lipidome changes in PLWH on ART relative to HCs (Fig. ). We found 146 polar metabolites and 113 lipids significantly different in INRs compared with HCs (Supplementary Material 9: Table S2), and 143 metabolites and 178 lipids between IRs and HCs (Supplementary Material 10: Table S3). We further evaluated the correlation of the top 50 differential metabolites (left panel) and lipids (right panel) with immunological and inflammation-related indices (Supplementary Material 6: Figure S6). The CD4/CD8 ratio is a biomarker of immune recovery and of non-AIDS-related events in PLWH on ART . We observed that 16 alpha-hydroxy DHEA 3-sulfate, cysteineglutathione disulfide, O-phosphoethanolamine, phosphorylcholine, and thymine were positively correlated with the CD4/CD8 ratio, while 3-hydroxyoctanoic acid was negatively correlated. Furthermore, 16 alpha-hydroxy DHEA 3-sulfate, 3-hydroxyoctanoic acid, and O-phosphoethanolamine were positively correlated with inflammation-related markers, whereas phosphorylcholine and thymine were negatively correlated. Amongst plasma lipids, positive correlations between several TAGs and CD8 T cell counts emerged in PLWH on ART, which were absent in TNs. In addition, these plasma TAGs were positively correlated with CCL19 and negatively correlated with IL-18R1, whereas no correlation between TAGs and inflammatory markers was observed in TNs.

Plasma metabolite panel for distinguishing INRs and IRs

A LASSO regression model with sixfold cross-validation, performed on all samples, was used to select plasma polar metabolites and lipids that distinguished INRs from IRs. A panel of 9 polar metabolites was selected from a pool of 60 differential metabolites (Supplementary Material 11: Table S4) based on the area under the ROC curve (AUC), which included glycylhydroxyproline, gamma-Glutamylalanine, C16-Sphingosine-1-phosphate, dihydrouracil, deoxyeritadenine, creatinine, trans-S-(1-Propenyl)-L-cysteine, N6,N6,N6-Trimethyl-L-lysine, and glucosamine. The study cohort was divided into training and test sets at a ratio of 7:3. The model ROC curve indicated satisfactory predictive accuracy (AUC = 0.980) (Fig. A–C). Similarly, LASSO regression picked out 8 lipids among a repertoire of 52 differential lipids (Supplementary Material 11: Table S4), including LPC16:1, LPC16:0, FFA20:4, PE38:7 (16:1_22:6), GM3 d18:1/22:0, TAG53:3 (19:1), TAG54:7 (22:6), and TAG56:7 (22:6) (Fig. D–F), that differentiated INRs from IRs. It is noteworthy that DHA-containing lipids were significantly reduced in the plasma of INRs relative to IRs (Supplementary Material 7: Figure S7). Pathway enrichment analysis revealed that, similar to TNs, biosynthesis of unsaturated fatty acids was the top significant pathway between PLWH on ART (IRs/INRs) and HCs (Fig. C). The pathway of primary bile acid biosynthesis was additionally enriched in INRs relative to HCs, attributed to altered levels of taurine, chenodeoxycholic acid, cholic acid and glycine.
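Purely to illustrate the classifier workflow described above (L1-penalised logistic regression with sixfold cross-validation, a 7:3 train/test split and ROC AUC), a scikit-learn sketch on simulated data is given below; it is not the authors' code, and the cohort sizes are only borrowed for shape.

```python
# Illustrative LASSO-penalised classifier on simulated data, mirroring the
# described workflow: 6-fold CV for the penalty, 7:3 split, test-set ROC AUC.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
X = rng.normal(size=(68, 60))                  # 68 PLWH on ART x 60 differential metabolites
y = np.array([0] * 33 + [1] * 35)              # 0 = IR, 1 = INR
X[y == 1, :5] += 1.0                           # make a handful of features informative

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_tr)
clf = LogisticRegressionCV(cv=6, penalty="l1", solver="liblinear",
                           scoring="roc_auc").fit(scaler.transform(X_tr), y_tr)

selected = np.flatnonzero(clf.coef_[0])        # features retained by the L1 penalty
auc = roc_auc_score(y_te, clf.predict_proba(scaler.transform(X_te))[:, 1])
print(f"{selected.size} features selected; test AUC = {auc:.3f}")
```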
We then used the R package MEGENA to construct networks from differentially correlated metabolite and lipid pairs (calculated with the R package DGCA) to study perturbed metabolite coregulation (P < 0.05) between IRs and INRs (Fig. ). Module I contained methionine sulfoxide as a central hub connected to numerous short-chain TAGs (C46–C50) by red lines, indicating negative correlations in IRs that became positive in INRs. Module II comprised indolelactic acid as a hub extending to several DHA-containing lipids via purple (±) and blue (0/−) lines, reflecting negative coregulation between indolelactic acid and these DHA-containing lipids that emerged only in INRs.
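The differential-correlation idea behind DGCA can be sketched as follows (a Fisher z-test comparing a metabolite-lipid Spearman correlation between IRs and INRs; simulated data, not the authors' analysis).

```python
# Illustrative differential-correlation test in the spirit of DGCA: compare the
# correlation of one metabolite-lipid pair between two groups via Fisher's z.
import numpy as np
from scipy.stats import norm, spearmanr

def differential_correlation(x1, y1, x2, y2):
    """Return (r_group1, r_group2, two-sided p) for a difference in correlation."""
    r1, _ = spearmanr(x1, y1)
    r2, _ = spearmanr(x2, y2)
    z1, z2 = np.arctanh(r1), np.arctanh(r2)                 # Fisher z-transform
    se = np.sqrt(1.0 / (len(x1) - 3) + 1.0 / (len(x2) - 3))
    return r1, r2, 2 * norm.sf(abs((z1 - z2) / se))

rng = np.random.default_rng(5)
x_ir = rng.normal(size=33)
y_ir = x_ir + rng.normal(scale=0.5, size=33)    # pair correlated in IRs
x_inr = rng.normal(size=35)
y_inr = rng.normal(size=35)                     # same pair uncorrelated in INRs

print(differential_correlation(x_ir, y_ir, x_inr, y_inr))
```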
With regard to CD8 T cell activation, 3-hydroxyoctanoic acid, 3-oxododecanoic acid, 5-hydroxy-L-tryptophan, 5-hydroxyindoleacetic acid, oleoylcarnitine, and pseudouridine were positively correlated, while phosphorylcholine, ribothymidine, and thymine were negatively correlated. Metabolites correlated with CD8 T cell activation were also associated with the levels of inflammation-related markers in the same direction. Interestingly, 3-oxododecanoic acid and 5-hydroxy-L-tryptophan were negatively correlated with the frequency of CD8 T N cells, while thymine was positively correlated. As for lipids, Cer d18:0/16:0, Cer d18:0/24:1, Cer d18:1/24:1, and GM3 d18:0/16:0 were positively correlated with CD8 T cell activation and markers of inflammation, while Cer d18:0/24:1 was also negatively correlated with the frequency of CD8 T N cells.

A bubble plot depicts the fold enrichment and P-values of the top 20 significantly altered pathways between TNs and HCs. Biosynthesis of unsaturated fatty acids emerged as the top altered pathway, ascribed to increased levels of alpha-linolenic acid, linoleic acid, oleic acid, arachidonic acid, and docosahexaenoic acid (DHA) in TNs relative to HCs (Fig. A).

To investigate perturbed metabolic coregulation in TNs, we leveraged the R package MEGENA to construct networks of differentially correlated metabolite and lipid pairs identified by DGCA. Significant correlations (P < 0.05) were visualized, revealing key hubs within the global network. Two distinct modules, with decadienedioic acid and Cer d18:0/24:1 as central hubs, were boxed and magnified for visualization (Supplementary Material 5: Figure S5). Module I comprised decadienedioic acid connected to several acylcarnitines, lyso-phosphatidylcholines (LPCs), and lyso-phosphatidylethanolamines (LPEs) by teal lines (+/0), indicating positive coregulation between these lipids and decadienedioic acid in HCs that was lost in TNs. Module II illustrates positive coregulation between Cer d18:0/24:1 and numerous odd-chain TAGs in HCs that was likewise lost in TNs (teal lines, +/0). DGCA analysis thus underscores the complexity of metabolic and lipid alterations in TNs, offering insights into pathologically relevant changes and the interconnectedness of metabolic pathways.

We next investigated plasma metabolome and lipidome changes in PLWH on ART relative to HCs (Fig. ). We found 146 polar metabolites and 113 lipids significantly different in INRs compared to HCs (Supplementary Material 9: Table S2), and 143 metabolites and 178 lipids between IRs and HCs (Supplementary Material 10: Table S3). We further evaluated the correlations of the top 50 differential metabolites (left panel) and lipids (right panel) with immunological and inflammation-related indices (Supplementary Material 6: Figure S6). The CD4/CD8 ratio is a biomarker of immune recovery and of non-AIDS-related events in PLWH on ART . We observed that 16 alpha-hydroxy DHEA 3-sulfate, cysteine-glutathione disulfide, O-phosphoethanolamine, phosphorylcholine, and thymine were positively correlated with the CD4/CD8 ratio, while 3-hydroxyoctanoic acid was negatively correlated. Furthermore, 16 alpha-hydroxy DHEA 3-sulfate, 3-hydroxyoctanoic acid, and O-phosphoethanolamine were positively correlated with inflammation-related markers, whereas phosphorylcholine and thymine were negatively correlated. Amongst plasma lipids, positive correlations between several TAGs and CD8 T cell counts emerged in PLWH on ART, which were absent in TNs.
In addition, these plasma TAGs were positively correlated with CCL19 and negatively correlated with IL-18R1, whereas no correlation between TAGs and inflammatory markers was observed in TNs.

Plasma metabolite panel for distinguishing INRs and IRs
A LASSO regression model with sixfold cross-validation, applied to all samples, was used to select plasma polar metabolites and lipids that distinguished INRs from IRs. A panel of 9 polar metabolites was selected from a pool of 60 differential metabolites (Supplementary Material 11: Table S4) based on the area under the ROC curve (AUC); it comprised glycylhydroxyproline, gamma-glutamylalanine, C16-sphingosine-1-phosphate, dihydrouracil, deoxyeritadenine, creatinine, trans-S-(1-propenyl)-L-cysteine, N6,N6,N6-trimethyl-L-lysine, and glucosamine. The study cohort was divided into training and test sets at a ratio of 7:3. The model ROC curve indicated satisfactory predictive accuracy (AUC = 0.980) (Fig. A–C). Similarly, LASSO regression selected 8 lipids from a repertoire of 52 differential lipids (Supplementary Material 11: Table S4) that differentiated INRs from IRs: LPC16:1, LPC16:0, FFA20:4, PE38:7 (16:1_22:6), GM3 d18:1/22:0, TAG53:3 (19:1), TAG54:7 (22:6), and TAG56:7 (22:6) (Fig. D–F). It is noteworthy that DHA-containing lipids were significantly reduced in the plasma of INRs relative to IRs (Supplementary Material 7: Figure S7). Pathway enrichment analysis revealed that, as in TNs, biosynthesis of unsaturated fatty acids was the top significant pathway between PLWH on ART (IRs/INRs) and HCs (Fig. C). The pathway of primary bile acid biosynthesis was additionally enriched in INRs relative to HCs, attributed to altered levels of taurine, chenodeoxycholic acid, cholic acid, and glycine.

We then used the R MEGENA package to construct networks from differentially correlated metabolite and lipid pairs (calculated by the R package DGCA) to study perturbed metabolite coregulation (P < 0.05) between IRs and INRs (Fig. ). Module I contained methionine sulfoxide as a central hub connected to numerous short-chain TAGs (C46–C50) by red lines, indicating negative correlations in IRs that became positive in INRs. Module II comprised indolelactic acid as a hub extending to several DHA-containing lipids via purple (±) and blue (0/−) lines, reflecting negative coregulation between indolelactic acid and these DHA-lipids that emerged only in INRs.
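The panel selection just described combines LASSO-penalized logistic regression, sixfold cross-validation, a 7:3 train/test split, and ROC evaluation. The following is a minimal, hypothetical R sketch of that general workflow rather than the authors' actual code; it assumes a numeric matrix met of metabolite abundances (samples in rows) and a factor group with levels "IR" and "INR", and uses the standard CRAN packages glmnet and pROC. Names and settings are illustrative.

library(glmnet)  # penalized (LASSO) logistic regression
library(pROC)    # ROC curves and AUC

set.seed(1)
y <- as.numeric(group == "INR")   # 1 = INR, 0 = IR
x <- scale(met)                   # autoscaled abundance matrix (simplification)

idx <- sample(seq_along(y), size = round(0.7 * length(y)))   # 7:3 split
cvfit <- cv.glmnet(x[idx, ], y[idx], family = "binomial",
                   alpha = 1, nfolds = 6, type.measure = "auc")

# Metabolites with non-zero coefficients at the selected penalty form the panel
coefs <- coef(cvfit, s = "lambda.min")
panel <- setdiff(rownames(coefs)[coefs[, 1] != 0], "(Intercept)")

# Performance of the fitted model on the held-out test set
prob <- predict(cvfit, newx = x[-idx, ], s = "lambda.min", type = "response")
auc(roc(y[-idx], as.numeric(prob)))

In practice, scaling and penalty selection would be confined to the training data and the resampling scheme would follow the study design; the sketch only illustrates the overall logic of penalized feature selection followed by held-out ROC evaluation.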
In the present study, we leveraged an omics-driven approach to dissect metabolic dysregulation underlying the pathogenesis and treatment response of PLWH. The stark metabolic differences between PLWH on ART and HCs, even in IRs, indicate the persistence of systemic residual metabolic abnormalities, irrespective of ART, which could influence both immune recovery and inflammation in PLWH, thereby possibly contributing to poor prognosis and adverse outcomes.

Ribothymidine and thymine emerged as top altered metabolites, being significantly reduced in TNs. Moreover, the levels of ribothymidine and thymine exhibited positive correlations with CD4 T cell counts, the CD4/CD8 ratio, and the proportion of CD4 T N cells, while showing negative correlations with CD8 T cell activation and inflammation-related markers. Furthermore, the thymine level was inversely associated with HIV-1 viral load. Despite continual ART in PLWH, thymine remained positively correlated with CD4 T cell counts and the CD4/CD8 ratio, while remaining negatively correlated with inflammation-related markers. These observations indicate that the reduction in thymine is pathology-dependent rather than treatment-related, and that lowered plasma thymine persists despite continual ART. The negative association between plasma thymine and viral load suggests that thymine is expended during HIV-1 replication, since thymine and its nucleotide derivatives are critical for efficient HIV-1 replication . Indeed, deoxy analogs of thymine were shown to exhibit potent and specific activity against HIV-1 via inhibition of HIV-1 reverse transcriptase . Thymine also sensitizes Gram-negative pathogens to antibiotic killing by increasing cellular respiration and hence reactive oxygen species production, thus converting bacteria from an inactive to an active state with higher susceptibility to killing . Likewise, lower systemic thymine may promote immune escape in PLWH. Ribothymidine, also known as 5-methyluridine, is a pyrimidine nucleoside and a methylated form of uridine. After entering the cell, ribothymidine is phosphorylated by kinases to mono- or triphosphates that inhibit viral replication. Ribothymidine is a key intermediate in the synthesis of anti-AIDS drugs such as stavudine (d4T) and zidovudine (AZT). A recent study reported that ribothymidine was decreased in CD4 T cells of patients with systemic lupus erythematosus .

Elevated 3-hydroxyoctanoic acid in TNs was negatively correlated with CD4 T cell counts and positively correlated with inflammation-related markers, irrespective of ART. 3-Hydroxyoctanoic acid, also known as 3-hydroxyoctanoate, is a medium-chain fatty acid that activates the G-protein-coupled receptor 84 (GPR84). GPR84 activation promotes chemotaxis and pro-inflammatory cytokine release in leukocytes and macrophages . L-Kynurenine, a metabolite generated from tryptophan degradation via indoleamine 2,3-dioxygenase upon induction by inflammatory cytokines , was elevated in TNs and positively correlated with inflammation-related markers. Our observation aligns with preceding studies reporting associations between kynurenine and markers of inflammation and immune activation in PLWH . Our results also highlight the significant impact of HIV-1 infection on the biosynthesis of unsaturated fatty acids, which are known to regulate the synthesis and release of pro-inflammatory mediators such as IL-6 and TNF-α . Indeed, previous work has shown that HIV replication per se enhances fatty acid biosynthesis in T cells, consistent with observations of metabolic disturbances, including insulin resistance and dyslipidemia, in TNs .

With respect to the plasma lipidome in TNs, Cer was negatively correlated with the proportion of CD4 T N cells, while both Cer and GM3 were positively correlated with inflammation-related markers. The higher Cer levels corroborate a preceding report of increased plasma Cer in PLWH . Specific Cer species, including C16:0 and C24:1 Cer, have been associated with immune activation and inflammation . Interestingly, Zhao et al. reported that the concentrations of TNF, IL-6, and IL-1β in adipose tissue were positively correlated with Cer . Furthermore, C16 Cer can induce the production of reactive oxygen species in an iron-dependent manner, leading to apoptosis and activation of the TLR4/NF-κB inflammatory pathway in macrophages . Incorporation of host-cell-derived GM3 by HIV-1 was demonstrated to facilitate capture by mature dendritic cells, promoting virus capture and dissemination .
Plasma GM3 also increased progressively with disease severity and was inversely correlated with CD4 T cell counts in COVID-19 patients . In addition, GM3 has been shown to be induced by TNF treatment in cultured adipocytes .

Our multi-omics characterization underscores dysregulated lipid metabolism in PLWH on ART. As an example, the positive correlations between CD8 T cell counts and plasma TAGs emerged only in PLWH on ART, not in TNs. Plasma TAGs were also positively correlated with CCL19 but inversely associated with IL-18R1, whereas no correlation between TAGs and inflammatory markers was observed in TNs. Previous studies also reported elevated TAGs in AIDS patients compared to HCs, and positive associations between plasma TAGs and IFN-alpha levels . Consistent with our results, stronger correlations were also detected between TAGs and inflammatory markers in treated PLWH that were absent in TNs . TAGs were also positively correlated with CCL19, a critical regulator of T cell activation and immune tolerance, in the adipose tissue of obese people . Furthermore, DGCA analyses revealed positive coregulation between methionine sulfoxide and plasma short-chain TAGs that emerged in INRs relative to IRs, underscoring the relevance of TAG metabolism to the treatment response to ART. Methionine sulfoxide is produced from the reversible oxidation of methionine and can be further oxidized irreversibly to methionine sulfone . Methionine sulfone was significantly increased in the plasma of both INRs and IRs relative to HCs (Supplementary Material 4: Figure S4). Oxidation of methionine residues in human apolipoprotein A (apoA) generates modified apoA, which remains functional in extracting lipids from immune cells but triggers strong pro-inflammatory responses from monocytes and macrophages .

We also identified distinct panels of 9 metabolites and 8 lipids, respectively, that effectively distinguished INRs from IRs. For the metabolites, glucosamine was increased in INRs compared with IRs. As a natural glucose derivative and essential component of glycoproteins and proteoglycans, glucosamine has been shown to directly inhibit CD3-induced T cell proliferation, interfere with antigen-presenting cell functions , and suppress T cell and DC activation . Notably, DHA-containing lipids emerged as critical metabolites differentiating INRs from IRs. Supporting the results of the LASSO regression, DGCA analysis identified a module comprising indolelactic acid as a hub connected to several DHA-containing lipids, indicative of negative coregulation between indolelactic acid and DHA-lipids that emerged specifically in INRs. Indolelactic acid is a tryptophan metabolite predominantly produced by the gut commensal Bifidobacterium . Metabolic activity of Bifidobacterium was previously shown to upregulate the production of omega-3 fatty acids, including DHA, in the adipose tissue of mice . The present findings hence implicate gut microbial activity in the treatment response of PLWH, given the known role of DHA-lipids in the resolution of inflammation . The reduction in DHA-containing lipids in PLWH relative to HCs is aligned with a previous report , and the incorporation of DHA enhances the nano-delivery of ART across the blood–brain barrier to target HIV-1 reservoirs in the brain . Indeed, DHA supplementation was reported to reduce proinflammatory and oxidative stress factors by modulating gut microbiota and fecal metabolomic profiles in PLWH with neurocognitive impairment .
Our trans-omics data integration thus underscores the possible involvement of Bifidobacterium metabolism, mediated by DHA-lipids, in ART treatment outcome. Furthermore, pathway enrichment analysis revealed that "primary bile acid biosynthesis" additionally emerged in INRs relative to HCs, whereas it was not significant in IRs compared to HCs. Bile acid perturbations were ascribed to reductions in taurine and glycine in the plasma of INRs. Importantly, hydrolysis of conjugated bile acids, including taurocholates and glycocholates, is a key metabolic feature of the Bifidobacterium genus . The reduction in hydrolysis products of conjugated bile acids in INRs thus supports a perturbed balance of gut commensals, particularly Bifidobacterium , in INRs compared to IRs, which might be associated with poor immune recovery and heightened inflammation.

Targeting the gut microbiome may offer a strategic approach to enhance immune reconstitution following ART. In clinical trials (NCT02640625, NCT02164344, and NCT02005900), Bifidobacterium and DHA-lipids have been explored as therapeutic strategies to modulate inflammation and improve immune function in PLWH. For example, probiotic supplementation including Bifidobacterium increased the abundance of Bifidobacterium in the terminal ileum of INRs, and this increased abundance correlated with an increased fraction of CD4 T cells in the same compartment and an increased CD4/CD8 ratio in peripheral blood . Significant reductions in T cell activation and CRP were observed in PLWH receiving ART after Bifidobacterium supplementation . These results suggest that Bifidobacterium is a promising adjunctive therapy for PLWH. Similarly, interventions aimed at restoring DHA-lipid homeostasis may mitigate systemic inflammation and promote metabolic balance, ultimately supporting immune recovery. A previous study showed that TAG, arachidonic acid, and CRP levels were decreased in PLWH receiving DHA-lipids, and that DHA supplementation downregulated inflammatory gene expression, including TNF and MCP-1 , which may benefit the prognosis of INRs.

In conclusion, our study explored the plasma metabolome and lipidome of PLWH, revealing significant metabolic dysregulation distinct to TNs, IRs, and INRs compared to HCs. Our findings indicate that the dysregulated primary bile acid pathway and DHA-containing lipids may influence the response to ART, and they unveil host-commensal metabolic interactions that possibly contribute to differential treatment outcomes. Overall, these insights into the metabolome and lipidome offer potential therapeutic targets for intervention and markers for monitoring CD4 T cell immune recovery.

Supplementary Material 1: Figure S1. The bar chart shows the number of metabolites (A) and lipids (B) identified in each class. In total, 512 metabolites and 512 lipids were identified.
Supplementary Material 2: Figure S2. Quality control (QC) of the precise metabolomics (A, B) and lipidomics (C, D) analyses. (A, C) PCA analysis of the patient samples and QC runs. (B, D) Spearman's correlation coefficients of the QC runs.
Supplementary Material 3: Figure S3. Metabolic and lipidomic profiles in healthy controls (HCs), treatment-naïve people living with HIV-1 (TNs), immunological responders (IRs), and immunological non-responders (INRs). (A, C) PLS-DA score plots for metabolites (A) and lipids (C) in HCs, TNs, IRs, and INRs. (B, D) UpSet plots of differential metabolites (B) and lipids (D) in HCs, TNs, IRs, and INRs.
Differential compounds were screened based on a P value of less than 0.05.
Supplementary Material 4: Figure S4. Volcano plots visualize the −log10 (P value) on the y axis and the log2 (fold change [FC]) on the x axis for differential metabolites (A) and lipids (B) compared between the various groups (TN vs HC, IR vs HC, INR vs HC, IR vs INR).
Supplementary Material 5: Figure S5. The multiscale embedded gene coexpression network shows differential correlation analyses of metabolites and lipids from healthy controls (HCs) to treatment-naïve people living with HIV-1 (TNs). The node color indicates metabolite class and the edge color indicates the differential correlation class: −/+ (137) means a negative correlation in HCs and a positive correlation in TNs. A total of 137 lipid pairs connected by lines in the global network displayed this pattern of change . +/++ means a higher positive correlation in TNs. CE: cholesteryl esters; Cer: ceramides; DAG: diacylglycerols; FFA: free fatty acids; Gb3: ceramide trihexoside; GluCer: glucosylceramides; LacCer: lactosylceramides; LPC: lyso-phosphatidylcholines; LPE: lyso-phosphatidylethanolamines; PC: phosphatidylcholines; PE: phosphatidylethanolamines; PI: phosphatidylinositols; SM: sphingomyelins; TAG: triacylglycerols.
Supplementary Material 6: Figure S6. Correlations of the top 50 differential metabolites (A) and lipids (B) with clinical parameters (T cell counts, viral load, and T cell subsets) and inflammation-related markers in people living with HIV-1 on ART. Spearman correlation coefficients were calculated and are shown in a heatmap in which significant correlations are indicated by color: red and blue for positive and negative correlations, respectively. *** representing p
Supplementary Material 7: Figure S7. Metabolic (A) and lipid (B) markers that distinguish immunological non-responders (INRs) from immunological responders (IRs).
Supplementary Material 8: Table S1. Differential metabolites (A) and lipids (B) between TNs and HCs.
Supplementary Material 9: Table S2. Differential metabolites (A) and lipids (B) between INRs and HCs.
Supplementary Material 10: Table S3. Differential metabolites (A) and lipids (B) between IRs and HCs.
Supplementary Material 11: Table S4. Differential metabolites (A) and lipids (B) between INRs and IRs.
Supplementary Material 12: Supplementary Methods.
Digital pathology structure and deployment in Veneto: a proof-of-concept study
c4d7bde7-37dd-45b3-b21e-395dd659d8de
11415458
Pathology[mh]
In the era of targeted medicine, pathologists must deal with an increasing daily workload to render extremely detailed diagnoses that guide patient-tailored therapies. In this scenario, digital pathology (DP) and whole-slide imaging (WSI) represent disruptive technologies with great potential to meet the needs of the modern medical community. These technologies not only allow pathologists to quickly share challenging cases with experienced, remotely located colleagues to obtain second opinions, but they also enable scanned slides to be stored in virtual archives from which they may be easily and quickly retrieved . Furthermore, DP and WSI perfectly suit the application of artificial intelligence (AI)-based algorithms, which can improve the accuracy of diagnosis and the development of personalized treatment plans, assisting clinicians with decision-making. AI can also provide healthcare systems across multiple locations with the fundamental process of slide screening, recognizing morphological patterns to quickly guide proper diagnosis and therapy ; however, despite great strides in recent years, the adoption of DP remains limited to only a few pathology laboratories. While generally exploited for research purposes , DP has also been progressively employed for primary diagnosis by several institutions . Such innovative tools have been variably adopted either by pathology laboratories alone or by complex healthcare networks of hospitals , as in South Tyrol, Italy , and the DigiPatICS project in Catalonia, Spain . These networks, however, have found that connecting an extensive network of hospitals is a huge challenge. Having a group of modern but different hospitals share a single model may be complex in terms of identifying the proper hardware (scanners, displays, etc.) and software tools for each laboratory. Furthermore, despite technological progress increasing the application of DP and its affordability, widespread adoption of this technology for primary diagnosis by the pathology community is still lagging behind , hampered by technical and managerial issues . These considerations are of particular concern for specific fields, such as cytopathology, which still lack broadly acknowledged validation guidelines .

Azienda Zero is the broadest public healthcare provider in the northern Italian region of Veneto, and almost 5 million people benefit from its healthcare system. It is organized into two leading academic hospitals (the Universities of Verona and Padova), a comprehensive cancer center/research ("Istituto di Ricovero e Cura a Carattere Scientifico") hospital ("Istituto Oncologico Veneto;" IOV), and nine hospital networks ("Unità Locale Socio Sanitaria;" ULSSs); these latter networks include 26 smaller local districts. As far as pathological anatomy is concerned, the nine ULSSs, the two academic hospitals, and the IOV collect samples from institutions located throughout the entire Veneto territory . While all of them are equipped with basic facilities, such as intraoperative frozen-section microscopy, a few hospitals provide more specific services.
For example, (i) pathological samples from specialist surgery are analyzed by the laboratories of the two academic hospitals and three community hospitals; (ii) the Padova University Hospital and seven ULSSs gather cytological material from the whole region for the cervical cancer screening program; and (iii) an on-call pathologist is available 24/7 in the two academic hospitals and two community hospitals for intraoperative evaluation of transplantation specimens. The routine workflow of such laboratories globally accounts for about 240,000 slides per month and nearly 3 million slides per year (Fig. ).

In light of the above, the pathology laboratories in the Veneto region suit the adoption of a widely shared DP system. The aim of the present work is to describe the planning and effort required to enable the network of pathology laboratories in Veneto to go fully digital. In the following sections, we detail the characteristics of the hardware and software components considered in relation to clinical needs, describing this crucial step of the digital transformation journey for pathological anatomy and providing a structured approach to assess feasibility, ensure compliance, and optimize the implementation of DP in the healthcare domain.

The DP program was supported with European funds to optimize pathological diagnosis in the network of public hospitals in the Veneto region. After a comprehensive assessment of the data and needs provided by the individual hospitals, the regional governance outlined a strategic framework with specific clinical needs. A market investigation called "Technologies for Digital Pathology" was carried out to verify the presence of appropriate tools tailored to the network's needs. We want to stress that although technological tools, such as scanners and displays, represent one of the main points of the "digital pathway," going fully digital is not only about buying tools; the transformation of a conventional pathology laboratory should encompass all the processes and the people routinely working there, who must be adequately trained to embrace the cultural change. Considering this step mandatory for each laboratory, in the following sections we describe the characteristics of the items necessary to realize the project according to the clinical needs.

Image-acquisition system
Scanners are essential for the digital workflow, and a variety of models are now available on the market . Consultations with various suppliers were planned according to the predicted services for each community and academic hospital. Four scanner categories were selected based on their specific technical characteristics:
Batch 1: high-throughput scanners. Such devices were meant to satisfy the needs of routine diagnosis in high-volume institutions (i.e., the academic hospitals of Verona and Padova, and the community hospitals of Treviso, Venezia, and Vicenza) generated by high-volume specialist surgical services. Therefore, these had to be capable of quickly and accurately digitizing a large number of hematoxylin and eosin (H&E)-stained slides and immunohistochemistry (IHC) preparations.
Batch 2: medium-throughput scanners. For the smaller community hospitals, there would have been no point in acquiring expensive hardware performing far beyond the required daily workflow. Instead, these hospitals were more likely to benefit from less-expensive medium-capacity scanners, proficiently supplying the demands of their inpatient and outpatient services.
Batch 3: low-throughput scanners.
This type of scanner was intended to address two critical tasks of pathology laboratories: fluorescent microscopy and on-site intraoperative examination, both required for transplantation surgery. Such services would not have benefited from the bulky and heavy scanners commonly employed for routine diagnosis, as they usually deal with only a small number of slides. Conversely, light (but still fast) devices better suit these applications. For transplantation, quick digitization also allows physicians to share challenging cases with remotely located colleagues, which can help them to address the pressing demands of the operating theater.
Batch 4: cytopathology. Digitalization of cytological material always carries some issues linked to the peculiar characteristics of these specimens. Compared to histology, cytological material is often not homogeneously distributed on slides, so scanning of multiple focal planes ("Z-stacking") is often required to obtain appropriate WSIs . The Z-stacking process typically results in longer scanning times and larger digital files. To overcome this problem, some companies have incorporated enhanced methods of Z-stacking into their products, for example, software that selects and combines the sharpest image from each focal plane into one single image ("Extended Focus"). As screening cytology accounts for a noteworthy percentage of the daily workflow of different hospitals, specific consultations were carried out to identify the most suitable devices, potentially integrating with already available AI-based support systems .

Workstation and viewing software
Digital displays (i.e., monitors) are an indispensable component of the workflow, and choosing the best display is just as important as selecting the most appropriate microscopy setup for a pathologist providing formal diagnoses. Displays vary greatly in parameters such as size, shape, esthetic design, resolution, brightness, contrast ratio, refresh rate, screen reflection, and viewing angle. A total of 230 workstations were used to let pathologists, residents, students, other professionals, and meeting attendees visualize and navigate WSIs. Each workstation was made up of a personal computer with an i7 CPU, 16 GB of RAM, a 512 GB SSD, and a full HD graphics card connected to two diagnostic, medical-grade, US Food and Drug Administration-approved full HD monitors (resolution 1920 × 1080 pixels) at least 27 inches in size. Users could use one display to show histological images and the other to view data. Regardless of the use case, there is a paucity of data on display specifications, particularly their definitions, how they apply to different display categories, and how the specifications can be used to help choose the right display for the intended use. Based on the tasks that they were intended for , we identified two categories of displays, distilled from manufacturer and computing hardware websites: (i) medical grade (MG) and (ii) consumer off-the-shelf (COTS). MG displays are typically expensive, built for multiyear use, and have standardized features to provide a uniform experience for their users. These are contrasted with COTS displays, which are general-purpose displays. Most pathologists today use COTS displays as their primary display, provided to them as part of a standard core workstation configuration. Once slides are scanned, pathologists can analyze the displayed WSIs using dedicated viewing software provided by the supplier, coupled with an internal laboratory integrative system (LIS).
Apart from visualizing virtual slides, the viewing software allows physicians to navigate WSIs with a range of annotation functions, including drawing regions of interest, zooming in and out, rotating, and measuring.

Laboratory integrative system (Fig. )
One of the key points for the realization of the project was the unification of the workflow for all laboratories, which were conceived as a coordinated network. The distribution of pathology laboratories across multiple locations, with two high-volume academic hospitals, greatly suits this project. Therefore, to achieve this goal and following a market consultation, the PATHOXWEB 2.0 software provided by TESI Group was chosen as the regional LIS to be uniformly shared by all the institutions. PATHOXWEB 2.0 is a state-of-the-art web-based software application developed to support professionals in handling the whole workflow of pathology laboratories going fully digital, from sample registration to definitive diagnosis and archiving. The web-based configuration of the system allows its remote use on workstations and computers by any physician, technician, biologist, or other professional. Thanks to a middleware program, the software can be easily interfaced with local healthcare informatics systems and, of particular relevance to a DP laboratory, has the following functions :
Acceptance of requests, check-in, and registration of samples collected from inpatient and outpatient services by reading unique barcodes, and assignment of a unique identifier to the accepted exam;
Assistance in specimen traceability and workflow quality checks in all laboratory phases. The software can be interfaced to laboratory devices that carry out sample tracking, including cassette printers, scanners, tissue processors, stainers, and gross-picture acquisition programs;
Management of the storage and retrieval of residual material, blocks, and slides thanks to the interaction with specifically configured archiving desks;
Interface with WSI viewing software to support the visualization of scanned digital slides;
Assisted reporting with archiving of standard sentences and use of International Collaboration on Cancer Reporting-based checklists;
TelePathox program, a web-based consultation service allowing online correspondence on challenging cases with remotely located, experienced colleagues to obtain trusted second opinions.

Storage
According to the guidelines, "tissue blocks and glass slides should be kept long enough to ensure that the patient is treated properly". Such a period has been operatively quantified by the UK's Royal College of Pathologists as 10 years for histology slides and smears, and the patient's entire life for blocks . Planning a storage solution for digital files requires the consideration of several factors, including the number and size of the files, the number of users who will access the files, the interface between users, the level of data protection and security, and the budget. In the context of the realization of the current project, the digital storage of WSIs focused on the following points:
Cloud-based archiving of digital images and data in central/federated vendor-neutral archives, allowing easy and fast case identification and retrieval, with planning for a long-term central archive after initial local management at the laboratory (short-term local archive). The purpose of this distinction is to streamline the management of the digital archive through centralization.
It would be possible either to automatically recover each patient's slides whenever a new examination is performed or to let the local pathologist decide which cases to keep in the short-term archive;
Association of digitally saved data with other healthcare-related information uploaded from the internal LIS;
Image backup option, able to provide disaster recovery and continuity of exchange with external health organizations;
Indexing and research tool employing the topography and morphology Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) vocabulary.

Artificial intelligence
The academic community of the Veneto region has always welcomed the adoption of new technologies as long as validated safety requirements support them. In pathology, as in many other scientific fields, AI is revolutionizing the approach to historically problematic issues, speeding up the decision-making process, and increasing the standardization of reports. Thus, with the goal of enhancing diagnostic accuracy as well as reducing subjective interpretation and turn-around times, the present project chose to fully embrace the currently approved AI-based deep-learning tools, including:
Support for prostate biopsy screening for cancer and Gleason score grading ;
Automatic evaluation of breast cancer IHC biomarkers (estrogen receptors, progesterone receptors, and Ki-67 proliferative index) ;
Computer-assisted PD-L1 scoring in lung cancer specimens .
The use of the abovementioned products for in vitro diagnostic purposes has been endorsed by international regulatory entities after strict evaluation of their evidence-supported safety standards and performance.

Specialist networks
Specialist networks act to exchange knowledge, experience, best practices, and training. In Veneto, the nephropathology network consists of numerous points of care that, however, cannot all maintain the necessary expertise, given the diagnostic complexity of these cases. The project plans to structure reporting workstations within the network's nephrology departments, similar to those in the pathology laboratories. This will allow the entire regional nephrology network to be connected, to manage the most complex cases, carry out process quality controls, standardize reporting, and consequently improve the therapeutic management of patients.

System tracking, quality checks, and archiving of tissue specimens
Standardized tracking, archiving, and conservation of materials are pivotal to patient care and may significantly influence diagnostic accuracy and, ultimately, clinical management. To ensure precision and repeatability of the whole workflow, the pathology laboratories involved in the project had several quality control checkpoints. Barcode-scanning equipment integrated with the local LIS was employed to keep the rate of manual, human errors as low as possible. First, each specimen received by the laboratory was accompanied by an e-request providing relevant clinical data, created in local electronic healthcare record systems by the sending clinicians. After registration, the sample was given a unique identification number labeled with a DataMatrix-type barcode. Such an identification code is essential to track the specimen through its entire "life" within the laboratory, notifying professionals of its status and of any actions required from registration to the final released diagnosis.
Specifically, (i) labels were applied to the containers for sampling and material storage; (ii) barcodes were printed on each paraffin block cassette; and (iii) labels including the DataMatrix code were automatically printed on each H&E, IHC, and/or cytological slide related to the specimen. Most hardware and computers (i.e., tissue processors, scanners, stainers, etc.) of the involved institutions will be equipped with barcode readers integrated with the LIS, enabling the working team to check the samples safely during the subsequent processing steps. As previously mentioned, for some regulatory entities , the recommendations advise prolonged tissue conservation after the final diagnosis, covering the patient's whole life. The LIS automatically records a complete audit trail of each block and slide. The archive relies on a modular system, to which more units can progressively be added without limiting the number of items that can be collected for short and long storage periods.

Validation
Validation is defined as a process that demonstrates that WSIs will perform as expected for their intended use and environment before they are used for patient care. Regarding the validation of the entire digital network (scanners, displays, AI tools, etc.) for primary diagnosis, the recommendation for each institution is to follow the College of American Pathologists (CAP) guidelines , to ensure the safety and robustness of the project. The validation process must include a sample set of at least 60 cases for one application, or use case, that reflects the spectrum and complexity of specimen types. Several pathologists per center are experienced in DP and will add value to the validation process, randomly evaluating the selected WSIs previously assessed by conventional light microscopy (LM) after the washout period. Although all discordances between WSI and glass-slide diagnoses discovered during validation need to be reconciled, a 95% diagnostic concordance threshold for the same observer between LM and WSI will be the goal, according to the CAP guidelines. As far as cytopathology is concerned, widely acknowledged validation guidelines are still lacking . Hence, an expert-based consensus on the non-inferiority of virtual to glass slides reviewed by dedicated cytopathologists was chosen as a reliable criterion for safely adopting the WSI technique in this field.
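In practice, the CAP-based validation described above reduces to measuring intra-observer concordance between light-microscopy and WSI diagnoses over at least 60 cases and checking it against the 95% target. The following is a minimal, hypothetical R sketch of how such a concordance check could be summarized; it is not part of the project's actual tooling, and the numbers and variable names are invented for illustration.

# Hypothetical example: intra-observer concordance between glass-slide (LM)
# and WSI diagnoses for a validation set, with an exact binomial 95% CI.
agree   <- c(rep(TRUE, 58), rep(FALSE, 2))   # e.g., 58 of 60 paired reads concordant
n_cases <- length(agree)
conc    <- mean(agree)                        # observed concordance rate
ci      <- binom.test(sum(agree), n_cases)$conf.int
cat(sprintf("Concordance: %.1f%% (95%% CI %.1f-%.1f%%), n = %d\n",
            100 * conc, 100 * ci[1], 100 * ci[2], n_cases))
conc >= 0.95   # does the run meet the CAP-recommended threshold?

A threshold check alone is not sufficient: as the guideline requires, every discordant case would still need to be reviewed and reconciled individually before the digital workflow is signed off for primary diagnosis.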
Since the first pioneer institutions such as the Memorial Sloan Kettering Cancer Center in the USA , many laboratories have followed in their footsteps, resulting in the modern broad comprehensive regional systems such as the Catalonian DigiPatICS , among others. Regarding Italy, WSI-based routine diagnosis has been limited to a few isolated (but still illuminating) models, such as the frozen-section telepathology service in South Tyrol and the locally connected network in Eastern Sicily . Thus, the digital transformation of the network of public pathology laboratories in the Italian Veneto region reported herein certainly represents a milestone for the further widespread adoption of this revolutionary technology in Italy. The realization of the present project involved a network of 12 hospitals producing more than 2.5 million slides per year, with a number that is likely to increase. Although such a process has required a large amount of public funds and resources, it will remarkably decrease operational and asset costs in the coming years. Several studies have outlined an estimated saving of more than $250,000 per institution per year after the implementation of DP , making the achievement of such a goal in the Veneto region likely as well. Alongside the economic aspects, the broad deployment of DP, WSI, and AI in laboratories in Veneto will play a crucial role in facilitating the exchange of information among the involved centers. Not only will this enhance the standardization and reproducibility of pathology reports but it will also help physicians establish experienced second opinions to improve overall patients’ clinical management. Apart from the evaluation of routine biopsy and surgical specimens, it is worth mentioning the implications of knowledge and opinion-sharing in two particular fields of pathology: intraoperative consultation and cytopathology. For the former, on-site evaluation of frozen sections is supposed to quickly provide surgeons in the operating theater with the most appropriate pathological response possible. Also, especially in the transplantation setting, rendering widely reproducible diagnoses is not usually straightforward due to subtle histological changes influencing diagnostics classifications. Modern DP-based cockpits significantly help overcome such an issue , offering the opportunity to get real-time opinions from remote, experienced colleagues. Hence, the broad network of online connected hospitals will let pathologists, particularly the youngest ones, exploit a teleconsultation service to share the most challenging cases regardless of their physical location. Together with obvious advantages in terms of clinical care, this system will provide a striking tool to help young professionals face the compelling situations of intraoperative consultation. For cytopathology, to date, cervical screening smears and other materials (e.g., thyroid fine-needle aspirations and effusion cytology) account for a noteworthy percentage of many laboratories’ routine workflows . Nevertheless, the CAP still considered cytopathology as immature for diagnostic DP adoption . Although soon to be released, validation guidelines for employing WSIs for primary diagnosis are still missing . This fact is probably linked to the particular characteristics of cytological specimens and the lack of specifically designed validation studies. 
As proposed in other recent studies, at the time of registration of the current project, the investigators agreed to a consensus-based method to judge the feasibility and safety of WSIs in cytopathology; however, the increasing amount of data that will be progressively collected and shared by the connected hospitals of the network will provide the basis for a broad multicenter study strictly following new guidelines to support the validation of WSIs for cytopathology. Finally, the huge number of archived WSIs will create large pathology repositories for educational purposes and future technological developments. In particular, a considerable number of linked institutions gathering samples from a highly variable range of inpatient and outpatient services will help to build and expand web-based libraries of pathology fields that are often limited by the scarcity of examples constituting the cohorts (transplantation and neurosurgery, etc.). This will represent an exceptional teaching opportunity for residents, students, and young pathologists to widen their knowledge. Similarly, the massive amount of easily accessible digital data produced will certainly advance the further adoption of AI-guided diagnoses, serving both as external validation datasets for already available algorithms and as the basis for developing new home-built ones. In conclusion, this proof of concept describes the assessments that guided the choice of tools and organizational processes in planning the DP project of the Veneto region. The Veneto DP program aims to transform 12 pathology laboratories from public hospitals producing more than 2.5 million slides per year into a coordinated network of services completely adopting WSI technology. Such an outstanding achievement has been made possible by the integration of all the health information systems of the participating institutions, along with the tireless work of the dedicated pathologists, informaticians, engineers, and clinicians involved in the project. The preliminary results emerging from the use of the employed LIS, devices, and AI tools support the safety and effectiveness of the entire project.
Suspension-associated dislocation of the jaw in hanging
346b553d-6c8b-4297-ab62-c35172bdaf72
10421754
Pathology[mh]
Hanging is a common type of death presented to forensic pathologists in countries with requirements that unnatural and unexpected deaths be independently investigated. In Australia, the most recent cause of death statistics showed that among the 1937 deaths classified as hanging and strangulation, 98.5% (n = 1909) were determined to be intentional self-harm, 0.7% (n = 13) were unintentional, 0.6% (n = 11) were assault, and 0.2% (n = 4) were undetermined. The medical investigation of such deaths by a forensic pathologist requires not only determination of the cause of death, but also provision of information to assist the relevant authority in the determination of the manner of death (be it the medical examiner themselves or the coroner). This requires consideration to be given to the circumstances of death and for the medicolegal death investigation to rule out any other competing cause of death which may suggest a manner other than suicide. Therefore, in cases of death from hanging, one looks to document not only injuries that are known to be associated with hanging (such as an upsloping ligature abrasion to the neck), but also to identify those that may be inconsistent with self-inflicted hanging or that may suggest the involvement of a third party in the death. Consequently, the forensic pathologist must be cognizant of the spectrum of injuries that are known to be associated with death from hanging, so that cases with atypical injuries may be appropriately escalated for police investigation. Conversely, it is vital that injuries are not incorrectly attributed to assault, which would create unnecessary investigational work for police and unwarranted distress for the deceased’s next of kin. We present two cases of deaths reported to the coroner that were believed to be caused by self-inflicted hanging and that were observed to have unexpected unilateral dislocation of the temporomandibular joint (TMJ) identified on routine post-mortem computed tomography (PMCT), without any suspicious circumstances surrounding the deaths. This injury was unexplained and had not been previously observed at our Forensic Institute, nor was it identified after a review of the scientific research literature using one biomedical database (Medline via Ovid). Preamble Approximately 7000 deaths per year are reported to the coroner in our jurisdiction, with reportable cases being deaths that are unexpected; unnatural; following a medical procedure; or occurring in those in care or custody. Medicolegal death investigation initially comprises a preliminary review by a forensic pathologist of scene images, the circumstances as known at that time, examination of a full-body PMCT scan, and a comprehensive external assessment of the deceased. The wishes of the next of kin with regard to autopsy and any concerns they may have surrounding the death are determined. This informs the coroner’s decision on whether or not to direct a full autopsy examination. In the two cases presented here, a full autopsy examination was not performed. It is beyond the remit of this case report to discuss the complexities of the decision regarding the scope of the post-mortem examination in cases such as this, and there is available literature on this topic elsewhere. All individuals admitted to our institute undergo a whole-body PMCT scan using a mortuary-located dual-source SOMATOM Definition Flash CT scanner (Siemens Healthcare, Erlangen, Germany).
Images were viewed using syngo.via version VB60A software (Siemens Healthcare, Erlangen, Germany). CT technique at our institution includes a head-to-toe scan range, at 120 kVp, 280 effective mAs, 1.5 mm slice thickness, pitch of 0.6, rotation time of 0.55 s, 500 mm field of view, and reconstruction kernel of B30f medium smooth. Case 1 A 67-kg, 173-cm-tall male aged in his twenties was located with a rope noose encircling his neck, fully suspended from a tree. He had a history of depression and recent interpersonal life stressors. The ligature (Fig. ) was a 1.5-cm diameter woven rope fashioned as a “hangman-style” noose with a suspension point behind the left ear (left posterior). Examination of the post-mortem CT scan showed a closed mouth with isolated right mandibular condylar head displacement anterior to the articular eminence, resulting in a left crossbite and left mandibular excursion (Fig. ). The left condylar head position was congruous with the condylar fossa, as expected in a mouth-closed position. A fracture was present in the left greater cornu hyoid, and there was plastic deformation of the right greater cornu. External examination showed an adult male with established rigor mortis, fixed hypostasis, and no signs of decomposition. There was a deviation of the jaw toward the left (Fig. ). The neck skin had a moderately deep ligature abrasion that comprised a dried, yellow–brown furrow with diagonal indentations within it in keeping with the ligature. Its lowest point was the right anterior neck, and it was upsloping across the neck, above the laryngeal prominence anteriorly, fading into the hairline behind the ears and with sparing of the posterior neck. The suspension point was behind the left ear. There were no other injuries to the neck or elsewhere on the body. There were no ocular or facial petechiae. Case 2 A 111-kg, 181-cm-tall male aged in his thirties was located with a rope noose encircling his neck, partially suspended in a factory. He had a history of recent interpersonal life stressors. The ligature was a 1.0-cm diameter braided rope that was described as encircling the neck, but the details of the configuration were not provided, and the ligature was removed at the scene by first responders. Examination of the post-mortem CT scan (Fig. ) showed a near-closed mouth with isolated left mandibular condylar head displacement anterior to the articular eminence, resulting in a right crossbite and right mandibular excursion. The right condylar head position was congruous with the condylar fossa, as expected in a mouth-closed position. There were right hyoid and superior cornu thyroid cartilage fractures. External examination showed an adult male with established rigor mortis, fixed hypostasis, and no signs of decomposition. There was a subtle deviation of the jaw toward the right (Fig. ). The neck skin had a dried, brown ligature abrasion in keeping with the ligature. Its lowest point was the left anterior neck, above the laryngeal prominence anteriorly and diagonally upsloping across the left and right sides of the neck (steeper on the right than the left). It traversed the posterior neck just below the hairline posteriorly (higher on the right than the left). The proposed suspension point was behind the right ear (Fig. ), just below the hairline (right posterior). There were scattered conjunctival and scleral petechiae. There were no other injuries to the neck or elsewhere on the body. 
It is well recognized that hanging causes a range of injuries primarily to the head and structures of the neck, the most typical of which are an upsloping ligature abrasion of the neck skin, cutaneous and ocular petechiae, fractures of the hyoid bone, laryngeal cartilages, and cervical spine . Injuries to the limbs are also well recognized in these deaths . When presented with a death purporting to be self-inflicted hanging, it is the remit of the forensic pathologist to assess the deceased’s injuries to not only assist in independently confirming this proposition, but to critically evaluate the finding of any injury that may suggest otherwise. We have described two cases of unilateral TMJ dislocation (TMJD) in cases of lethal hanging. To our knowledge, this is not a previously recognized finding in such deaths and, as such, prompts consideration to the issues of TMJD causation, significance, and why this entity has not been previously identified. The cause of the TMJD is of the utmost importance in assessing the significance of this finding. A range of possibilities exist, including artifact, antemortem trauma, or ligature-related dislocation, and clearly, the implications of each vary. Post-mortem artifacts as mimics of injury are commonly encountered , and forensic pathologists should be familiar with the decomposition of human remains and injury mimics that may arise as a consequence of decomposition. Well-recognized artifactual hemorrhages have been described in the neck which may cause consternation in hanging deaths if not correctly recognized . As decomposition progresses, disarticulation of joints may occur , which includes the TMJ. The two cases presented showed only early decompositional changes, not to the extent that skeletal changes would be expected, and this hypothesis would appear unlikely in these cases. It is known that the condylar heads of the mandible normally translate forwards from the condylar fossa in an open-mouth position , but many hanging deaths have a closed-mouth position (as in the cases presented), and unilateral TMJD is not in keeping with this. The discovery of unexpected injury raises the specter of antemortem blunt force trauma or even the involvement of another in the death. In these two cases, the decedents had either diagnosed depression or an increase in life stressors as a potential trigger for suicide.
A thorough police investigation disclosed no history of recent assault and found no suspicious features in the scene or circumstances. The external examination of the body did not reveal facial bruising or injuries elsewhere to suggest an assault. Notwithstanding that the distinction between homicidal and self-inflicted hanging may be problematic , there was no suggestion of antemortem trauma, and no mechanism of injury relating to body handling was identified in either case. It therefore seems possible that TMJD represents a previously unrecognized artifact of ligature suspension. In our two cases, the TMJD occurred on the side opposite the point of ligature suspension. A postulated mechanism is that the mandibular head on the side of suspension is pulled up into the mandibular fossa, fixing it in position, but allowing distraction of the contralateral mandibular head out of the fossa by the pull of the ligature opposite the suspension point, triggering the unilateral dislocation. It is unknown if the decedents had an element of pre-existing TMJ dysfunction, increasing their vulnerability to TMJD. Dislocation may be assisted by muscle relaxation in the post-mortem period. The TMJD persisted even after removal of the ligature, but rigor mortis was fully fixed in both men (including the pterygoid and masseter muscles) which may have impacted the ability of the condyle to relocate. To our knowledge, the finding of TMJD in deaths from hanging has not been previously recognized or reported. While injuries are typically identified and documented during a full autopsy examination, the TMJ region is not routinely examined, and this occurs for a combination of reasons. Facial asymmetry due to unilateral TMJD may not be appreciated upon external examination as the face is commonly distorted as a result of ligature suspension and tongue protrusion through the lips and teeth; therefore, it may be attributed to this instead. Additionally, rigor mortis makes mouth opening difficult or impossible, so jaw movement is not easily assessed. The TMJD is not associated with an open-mouth position as it is during life, so clinical diagnostic criteria cannot be employed after death. In relation to the internal examination procedure, the head incision is bicoronal and located retroauricular; thus, it does not expose the TMJ (anterior to external auditory canal). A neck dissection may or may not include a facial subcutaneous dissection, and during this, the masseter and parotid are left in place; thus, the TMJ is not exposed. PMCT is a well-established and validated tool used in medicolegal death investigation providing a permanent digital record of the deceased and revealing significant findings prior to autopsy. It is a superior technique to autopsy for the demonstration of skeletal trauma and is an invaluable adjunct to autopsy in modern forensic pathology practice. Not all centers have the capacity to integrate PMCT into routine practice, and in those that do, it may be a recent addition to the forensic pathology diagnostic armamentarium. The authors speculate that this may have contributed to the lack of recognition of TMJD before now. This case report highlights a new observation revealed by PMCT in a commonly encountered type of death—temporomandibular joint dislocation in cases of hanging. Further work is required to elucidate the prevalence of this finding and its characteristics among the population of hanging deaths.
New Horizons: Novel Adrenal Regenerative Therapies
528526df-658a-4c0d-91e1-0ca4c2178de9
7398608
Physiology[mh]
The adrenal gland is composed of 2 main tissue types that establish a bidirectional connection: the catecholamine-producing chromaffin cells in the medulla and the primarily steroid-producing cells in the cortex. Close interactions between these 2 components are necessary for differentiation and morphogenesis of the gland. The adrenal cortex consists of 3 different zones: the outer zona glomerulosa, the zona fasciculata, and the zona reticularis, which surrounds the medulla. Each of these zones produces distinct steroid hormones (mineralocorticoids, glucocorticoids, and androgens, respectively), which are involved in a variety of physiological functions. In mice, the zona reticularis is absent, but during certain postnatal developmental stages an X-zone, derived from the fetal adrenal, is present. In the adult adrenal gland, different populations of progenitor cells, residing in the adrenal capsule, cortex, and medulla, have been described. Homeostasis and regeneration in the adrenal cortex are maintained through capsular and cortical progenitors. In response to paracrine and endocrine signaling, these cells proliferate, migrate centripetally, and differentiate into cortical steroid-producing cells. In mice, cortical cell renewal appears to be sex-dependent, and females show a 3-fold higher tissue turnover than males. This sexual dimorphism is under hormonal control, and it has been shown that androgens suppress the recruitment of capsular cells to the steroidogenic lineage and inhibit proliferation of steroidogenic cells. In vitro, testosterone has been shown to affect proliferation of human mesenchymal stem and endothelial progenitor cells, but whether androgens also have an effect on adrenocortical cell turnover in humans still needs to be elucidated. In the adrenal medulla, progenitor cells give rise to neurons, glia, and chromaffin cells. Adrenal insufficiency is a condition in which the adrenal gland does not produce adequate amounts of glucocorticoids and/or mineralocorticoids. These steroids play a central role in the body’s homeostasis of energy, salt, and fluids; thus, adrenal insufficiency is a potentially life-threatening condition. Primary adrenal insufficiency is due to impairment of the adrenal gland, with ~80% of the cases being from autoimmune adrenalitis (Addison disease). Other cases of primary adrenal insufficiency might be idiopathic, caused by adrenal metastases, or due to congenital adrenal hyperplasia (CAH). CAH is a group of inherited autosomal recessive disorders encompassing deficiencies in the adrenal steroidogenesis pathway leading to impaired cortisol biosynthesis, with 21-hydroxylase deficiency due to genetic changes in the CYP21A gene being the most common cause. In addition, adrenal insufficiency can be induced by infectious diseases and, as seen lately in an increasing number of patients, by novel medications targeting hypertension and cancer. For example, steroid hormone synthesis inhibitors or immune checkpoint inhibitors lead to an impairment of the hypothalamic-pituitary-adrenal (HPA) axis. Last, surgical bilateral adrenalectomy, as required for certain adrenal tumors, causes complete adrenal insufficiency, necessitating an effective hormone substitution therapy. Despite decades of evaluating various treatment algorithms, the management of adrenal insufficiency remains a major therapeutic challenge.
Pharmacological corticosteroid substitution is a lifesaving procedure; however, it cannot restore physiological feedback regulation of an intact HPA axis, the essential circadian and pulsatile hormone secretion cannot be mimicked, and physiologically necessary adaptations to variable demands do not occur automatically. Moreover, there is a high incidence of side effects from inappropriate glucocorticoid substitution. Drastic therapeutic measures such as bilateral adrenalectomy are effective in treating female infertility in CAH patients, but, because of the risk of adrenal crisis, this option is only considered in patients in whom other treatments have failed. Prenatal therapy with steroids such as dexamethasone may also bear particular risks. Current emerging therapeutic alternatives include new and more physiological means of glucocorticoid delivery, inhibitors, or antagonists at the level of CRH or adrenocorticotropic hormone (ACTH) secretion and/or action, as well as gonadotropin-releasing hormone analogs, anti-androgens, aromatase inhibitors, and estrogen receptor blockers. Many of these new treatment options show promising results in preclinical studies, but still require additional clinical validation. For example, different approaches where the glucocorticoid concentration has been optimized through delayed release do show an improved cortisol pattern, but cannot fully recapitulate the normal physiological circadian rhythm. Autoimmune Addison disease is generally regarded as an irreversible progressive disease in which the adrenal gland is destroyed. However, numerous case reports have shown that in some patients, spontaneous remission has occurred. This suggests a high degree of heterogeneity in adrenal function in patients with Addison disease. Furthermore, recent studies have shown that shortly after the onset of Addison disease, residual adrenal function might be restored using B-lymphocyte-depleting immunotherapy and ACTH treatment. Because the autoimmune response is directed against steroidogenic enzymes expressed in adrenocortical cells and not in the adrenocortical stem cells, ACTH might stimulate the differentiation of the persisting progenitor or stem cells and, in combination with the B-cell depletion, the newly differentiated steroidogenic cells will not be destroyed by the immune cells. Currently, success using this strategy is limited, with only small increases in serum cortisol and urine steroid metabolites. Nevertheless, this therapy has the potential to at least improve adrenal function. Gene therapy might be a possibility for restoring steroidogenesis in CAH, which is a monogenic disease. Approaches in this direction ~20 years ago based on adenovirus-mediated gene transfer of human CYP21A into 21-hydroxylase-deficient mice indeed showed improvements in animal models of CAH. Because of various functional limitations and regulatory restrictions, these approaches did not progress beyond the early stages. Nevertheless, very recently, several research groups have taken up this idea again, and it was shown that transfer of the human CYP21A gene to mouse models of CAH ameliorates systemic steroid metabolism. These promising results suggest that gene therapy could in fact be a feasible option for treatment of CAH. However, 1 limitation is that the effect seems to be only temporary because of cortical cell turnover. This means that achieving a long-term correction requires that the transgene be inserted into the genome of adrenocortical stem cells.
Adrenal cell transplantation would be a desirable therapeutic alternative for patients with adrenal insufficiency. Such a strategy would go beyond the evident goal of replacing insufficient hormones and rather allow for restoration of HPA axis function, representing a possible curative strategy. Cellular therapies aim to repair the mechanisms underlying disease initiation and progression, and such therapies are designed for various disorders, including cardiovascular, neurological, and autoimmune diseases, among others. Multiple cell types can be used in these therapies, including primary, stem, or progenitor cells. For pancreatic islets, significant success has been achieved in replacement strategies for the treatment of type 1 diabetes, and human islet transplantation has become an established therapy for patients with type 1 diabetes. However, because of the general limitations of human allogeneic cell and organ replacement strategies, alternative cell sources such as xenogeneic cells are being generated to form the basis for cell replacement strategies of endocrine tissues. The availability of such alternative cell sources has generated renewed attention to macroencapsulation strategies as a safeguard to control the potential risks of inflammatory reactions and graft rejection. Furthermore, endocrine tissues such as pancreatic islets and the adrenal are complex multicellular functional entities that are highly dependent upon their specific microenvironment. To overcome these limitations, optimization of adrenal transplantation and use of immuno-isolating biomaterials and/or devices have been initiated. In this regard, novel biomaterials have been developed to improve cell viability of engineered cells upon implantation. Biomaterials are mostly based on the principle of biomimicry and aim to resemble the extracellular matrix around the cells to establish a niche that supports cell viability and function. In their natural niche, cells interact with the extracellular matrix, which consists of glycosaminoglycans and proteins. This matrix supports the cells with anchor points where they can attach, such as integrin-binding motifs on fibronectin or laminin. In the biomaterial field, the extracellular matrix is mimicked by natural or synthetic polymers. These can often take up large amounts of water, establishing a 2-dimensional or 3-dimensional hydrogel environment for the desired cells. Some of these polymers also offer the possibility of releasing growth factors or establishing gradients for cell differentiation. An approach for the differentiation of steroid-producing cells could be to place progenitor cells into such hydrogels, whose stiffness has been adapted to resemble that of the adrenal. Furthermore, anchor points could be provided that establish cell-matrix interactions specific to steroid-producing cells. A variety of systems has been reported that implement novel customized polymers that recapitulate key properties of the endocrine environment and therefore may promote survival and function, reduce inflammatory reactions, and, by barrier-creating methodologies, might allow for transplantation in the absence of immunosuppression. For example, our group has developed a macroencapsulation system, originally designed for pancreatic islet encapsulation, that provides sufficient immune-isolation while maintaining regular cell function.
This concept has been tested successfully in various preclinical models of diabetes, in a first clinical case of type 1 diabetes, and has now entered the clinical trial application phase. Based on this experience, this concept has been adopted for the development of an adrenal cell replacement therapy, the ultimate alternative for patients suffering from adrenal insufficiency. In this approach, bovine adrenocortical cells embedded in alginate and encapsulated in an immunoisolating device were transplanted into adrenalectomized rats. This procedure was shown to be an effective method to avoid the need for chronic immunosuppression. Furthermore, this model provided a microenvironment that ensured 3-dimensional cell–cell interaction and where fibrosis was suppressed. The transplants were highly vascularized and viable for at least 4 weeks. For encapsulated adrenocortical cells, cortisol production was markedly increased when compared with transplantation of nonencapsulated cells. Approaches to optimize viability have shown that the presence of stem cells increases the functionality of the transplant. Thus, instead of using whole organs or fully differentiated adrenocortical cells, stem cell therapy is an additional option. Stem cell therapy has evolved from the early stages of using embryonic cells to advanced applications of attempting to repair and restore human organs. Pluripotent stem cells (PSCs) offer the possibility of an unlimited, renewable source of replacement cells and tissues to restore adrenal functionality. Cellular therapies based on PSCs and cell reprogramming techniques have been developed for different tissues; however, such approaches have been largely overlooked in the adrenal field. Different cell types have been used as a substrate to generate cells with steroidogenic potential. Adult stem cells from different sources, such as urine, adipose tissue, bone marrow, umbilical cord, or blood, have been successfully reprogrammed to steroid-producing cells by overexpressing SF-1/Ad4BP. Embryonic stem cells or induced PSCs, which have the potential to generate all cell types in the body, have also been differentiated toward a steroidogenic phenotype upon gene manipulation and/or modulation of specific pathways. Such cells express steroidogenic enzymes and are able to produce a range of different steroid hormones in vitro. However, only limited in vivo studies have been performed using reprogrammed mouse or human cells in the adrenal field. Although cells are viable after implantation into mice, full functionality and responsiveness to adrenal stimuli have not been reported. These observations might be due to incomplete differentiation and limited steroidogenic potential of the cells before transplantation. Therefore, another possibility is to generate steroid-producing cells directly from adult adrenocortical stem cells/progenitors. In this context, cell replacement therapies involving donor-derived stem cells or patient-derived stem cells, in which a specific mutation has been corrected, would represent an attractive alternative. Looking forward, the generation of bona fide steroidogenic cells from humans combined with novel biomaterials and encapsulation in immune-isolating devices might offer alternative therapies for patients with adrenal insufficiency. Cellular replacement strategies using adrenal cells have also been considered for diseases other than adrenal insufficiency.
Since the 1980s, different research groups have tested the possibility of using chromaffin cells for the treatment of pain because these cells produce several neuroactive substances, including catecholamines and opioid peptides, which can cause pain relief. Although these strategies are still in their infancy in pain neurorestoratology, cell-based therapies could open up new avenues for the relief of pain. Chromaffin cells from the adrenal medulla have also been considered a potential source of dopamine-producing cells to treat neurodegenerative conditions like Parkinson disease (PD). The physiopathology of PD is associated with the loss of dopaminergic neurons, and because of the close relation of chromaffin cells to catecholaminergic neurons, a substantial number of studies have promoted the use of chromaffin cells for this purpose. However, transplantation of chromaffin cells in patients produced only partial motor improvements, and these were only transient and highly variable among subjects. In addition, high morbidity and mortality were associated with this grafting procedure. Therefore, within the past few years, adrenomedullary stem cells have been tested for cell replacement therapies of PD. Following this strategy, a significantly higher proportion of cells was shown to acquire a dopaminergic phenotype when compared with fully differentiated chromaffin cells. Current treatments for adrenal insufficiency are limited to glucocorticoid, mineralocorticoid, and DHEA replacement strategies. Glucocorticoids are secreted following a circadian rhythm, which is impossible to adequately recapitulate using current replacement therapies with synthetic glucocorticoids; therefore, patients often suffer from poor quality of life and increased mortality. Promising results have recently been obtained using cell replacement therapies. In particular, encapsulation of adrenocortical (stem) cells opens new prospects for successful transplantation. Furthermore, stem cells from the adrenal medulla might have the potential to be used for the treatment of neurodegenerative diseases. However, more studies and optimization are needed before a long-term functioning transplantable graft is available for the treatment of adrenal insufficiency or non-adrenal diseases using adrenal cells and stem cells. Furthermore, such an ambitious endeavor necessarily requires a high level of interdisciplinarity, involving physicians, cell biologists, immunologists, and materials scientists.
Effectiveness of a suction device for containment of pathogenic aerosols and droplets
f0d12d16-91dd-4eab-b9b1-86f452afffea
11268607
Microbiology[mh]
The COVID-19 pandemic caused over 4.9 million known infections in Canada and continues to have lasting impacts on our healthcare system. Throughout the pandemic, healthcare providers were also at high risk of being exposed to COVID-19. In the initial wave alone, healthcare providers accounted for 19.4% of infections. Given that COVID-19 and other respiratory illnesses will continue to have an impact on healthcare providers, it is important to look for innovative devices to control the rapid spread of pathogenic aerosols in closed spaces to avoid such aerosol-triggered pandemics in the future, especially in settings that do not have access to adequate ventilation systems. Numerical simulations have demonstrated that, in a clinical environment, suction devices significantly lower healthcare workers’ exposure. A few studies on aerosol removal devices have recently been introduced in the scientific literature. However, there remains a lack of comprehensive data on the performance of these suction devices in terms of spatial and temporal measurements. Furthermore, the literature highlights a significant oversight: the lack of a potentially community-driven, iterative design process that facilitates rapid prototyping and global deployment with minimal resources. This study introduces several novelties to address these gaps: (i) the development and testing of a novel, rapidly manufacturable suction device designed for iterative improvements, versatile enough to be adapted for use in multiple scenarios; and (ii) a comprehensive and controlled experimental framework to directly measure droplet concentrations, enabling a more accurate evaluation of suction device efficacy and significantly advancing the methodological tools available for such assessments. In our experimental investigation, we first report on the design of an adaptable, rapidly manufacturable suction canister device. We then conducted comprehensive measurements of droplet concentration across different spatial configurations and quantified the half-lives of droplets within various size categories to demonstrate the device’s efficacy. Canister design and fabrication The study introduces a new suction device specifically engineered to mitigate aerosol and droplet generation through high-powered suction. The primary goal is to lower the potential risk of infection in the surrounding environment by minimizing aerosol particle count. After multiple design consultations with two clinicians and fluid dynamics specialists, a scalable canister design was selected to achieve this objective. The canister is cylindrical with a height of 250 mm and a diameter of 110 mm, featuring chamfered corners with a radius of 40 mm. The canister’s total surface area is approximately 0.1 m². The inside of the canister has a cavity for suction with a skin thickness of 5 mm. At the bottom of the canister, there is an attachment neck standing 50 mm high with a diameter of 50 mm. This neck features a 1-1/2 inch National Pipe Taper (NPT) threading at the tip to facilitate attachment to flat surfaces such as tables. Additionally, the neck contains a concentric inner hole with a ½ inch NPT threading that can connect to fitting adapters for suction. The canister design features four identical rectangular trenches on the side surface of the main body, arranged in a circular pattern to ensure 360° coverage. The purpose of these trenches is to potentially house planar UVC lights, a significant component for fomite disinfection.
The canister has 368 holes, each with a diameter of 3 mm, thereby providing a total suction area of 2601 mm². These holes were located on the canister’s side and top walls. Canister models were fabricated using Fused Deposition Modelling (FDM) 3D printing, leveraging its speed and versatility for rapid prototyping and complex shape production. A cartesian FDM printer (ANYCUBIC Chiron model, Shenzhen, China) with a substantial build volume of 400 x 400 x 450 mm was utilized. The models were fabricated from polylactic acid (PLA) thermoplastic polyester through a 0.4 mm nozzle. The printing settings included a layer height of 0.2 mm, line width of 0.4 mm, wall thickness corresponding to 4-line counts, and 40% infill density. The use of 3D printing for fabrication opens up the possibility of a rapid iterative design process, flexibility for different configurations, and fast global deployment for future pandemics. A CAD model and an image of the 3D printed prototype of the canister are presented in . Droplet generation To evaluate the suction device’s effectiveness in capturing aerosols from respiratory activities, we used an artificial cough generator to create a droplet-laden flow that mimics a natural cough. The SARS-CoV-2 virus is approximately 0.1‒0.5 μm in size. On the other hand, saliva ejecta can start as small as 1 μm and can extend to sizes up to 100 times larger, with an average size of about 10 μm. Generally, cough dynamics are bifurcated into two main categories: large, inertia-dominated droplets (≥10 μm) and puff cloud aerosols (≤10 μm). Droplets measuring 100 μm or more are too heavy to stay airborne, whereas aerosols ranging between 5 and 10 μm remain suspended in the air, as delineated by the Wells Curve. Large droplets tend to follow a ballistic trajectory, whereas aerosol-sized droplets circulate within a bulk puff cloud. However, it is crucial to understand that the transition from aerosol to droplet is not abrupt, and size is not the sole determinant of cough dynamics. Relying exclusively on the Wells Curve for infection control can lead to misjudgements, especially since the 2-metre distancing recommendation overlooks the turbulent eddy cloud transporting the droplets and instead zeroes in on droplet size alone. In this study, for clarity, both aerosols and droplets will be collectively referred to as "droplets" since we are measuring particle numbers. The details of this experimental setup have been previously outlined in ; thus, only a brief description is provided here. The artificial cough generator was connected to pressurized air lines, the pressure of which was regulated. The maximum cough flow rate was managed by an acrylic flow controller fitted with a needle valve. Aspects of the cough release, such as duration, were controlled by solenoid valves, the opening timings of which were managed by a digital delay generator (Model 575, Berkley Nucleonics, USA). A droplet generator (LaVision, Bielefeld, Germany) was attached to the solenoid valves’ outlet to seed the flow with a polydisperse distribution of droplets. A propylene glycol solution was atomized into the flow to simulate airway secretions. A bypass airflow and an integrated pressure regulator allowed partial control of the droplet loading and size distribution.
Finally, the flow outlet was connected to the thoracic cavity of an airway manikin (Ht-Man, Hawktree, Ottawa, Canada), from which the artificial cough was released. We used a commercially available particle counter (Kanomax, New Jersey, USA) to collect samples across six size bins: 0.3, 0.5, 1.0, 3.0, 5.0, and 10.0+ μm. The particle counter was equipped with an integrated pump and operated at a sampling rate of 0.14 Hz. Unlike the approach in , the particle counter in our study was used to measure free space droplet counts and evaluate the suction device’s efficacy by directly sampling from within the suction cavity. Given that there are only a few studies on the efficacy of suction devices, the experimental methodology in this field is not yet well-established. For instance, a study employed particle image velocimetry to analyze general flow structures and used water-sensitive filter paper to detect the presence and settling of droplets. In contrast, our study emphasizes direct sampling from specific locations to assess the temporal variation of suspended droplets. Droplet measurement Several experiments were performed to assess the droplet containment efficacy of the suction device. The experimental equipment was set up inside a sealed enclosure, which was 2.5 m long, 1.6 m wide, and 1.9 m tall. The orientation of the airway manikin was adjusted using an articulated arm, where the mouth was fixed at 1 m height from the ground. The first two experiments evaluated the suction device’s capacity to capture airborne particles from the surrounding environment as designed. In the first configuration, the suction device was operated at a flow rate of 400 L/min. Simultaneously, an artificial cough generator created a single cough lasting 500 ms, positioned 50 cm away from the suction device, aligned with the manikin’s face. The particle counter, equipped with a 6.35 mm diameter and 30 mm long sampling probe, was used for measurements. A small hole was drilled just above the neck of the canister (50 mm from the base), and the sampling probe was inserted into this hole to measure droplet loading within the suction stream. The particle counter, sampling the flow at 6-second intervals and a suction rate of 2.83 L/min, outputted particle counts as #/m 3 . By knowing the suction flow rate generated by the canister and the individual sampling duration, it was possible to calculate the particle removal efficacy of the device. In the second configuration, we maintained the same conditions as before, with one key modification: instead of a single cough event, an artificial cough was repeated every minute following the initial cough at the 5-minute mark within the measurement domain. In the third and final series of experiments, the particle counter was positioned at angles of 0, 45, 90, and 135 degrees relative to the manikin, as depicted in . Unlike previous configurations, the particle counter probe was no longer inserted into the suction canister. Instead, it directly sampled the atmosphere within the cough chamber. This alteration provided a broader perspective on the distribution and behaviour of the droplets post-cough. We kept the suction canister’s location constant throughout these experiments and placed it 50 cm from the manikin. This controlled positioning allowed us to focus on the influence of varying the particle counter’s location, contributing to a more comprehensive understanding of the suction device’s overall droplet containment capabilities. 
These experimental approaches collectively provided direct counts of particles captured directly by the suction device and those in the spatial and temporal environmental aerosol load under controlled conditions during single and repeated cough events. Outcomes The primary outcome for the device efficiency is the droplet removal rate, determined by sampling air from inside the suction canister. The droplet removal rate quantifies the number of droplets the canister eliminates per second. This is based on readings from the particle counter, estimated from the duration of particle counter sampling, the suction flow rate of the particle counter, the total suction flow rate of the canister device, and droplet counts from the particle counter itself.
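To make this calculation concrete, the following Python sketch converts a single particle-counter reading into a canister removal rate using the flow rates and sampling interval described above. The raw count value and the function name are fabricated for illustration, and the authors' exact computation may differ in detail.

# Sketch of the droplet removal rate calculation described above.
SAMPLE_DURATION_S = 6.0       # particle counter sampling interval (s)
COUNTER_FLOW_LPM = 2.83       # particle counter suction rate (L/min)
CANISTER_FLOW_LPM = 400.0     # canister suction flow rate (L/min)

def removal_rate(raw_count):
    """Droplets removed by the canister per second, for one counter sample."""
    sampled_volume_m3 = (COUNTER_FLOW_LPM / 1000.0) * (SAMPLE_DURATION_S / 60.0)
    concentration = raw_count / sampled_volume_m3              # droplets per cubic metre
    canister_flow_m3_s = (CANISTER_FLOW_LPM / 1000.0) / 60.0   # 400 L/min ~ 0.0067 m^3/s
    return concentration * canister_flow_m3_s                  # droplets per second

# Illustrative reading: 1,000 droplets counted in one 6-second sample corresponds to
# roughly 3.5 million droplets per cubic metre and about 24,000 droplets/s removed.
print(f"{removal_rate(1000):.0f} droplets per second")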
The droplet removal rate from a single cough event over 35 minutes for droplet size bins of 0.3, 0.5, and 1.0 μm is illustrated in . The droplet removal rate, as illustrated in the figure, indicates that the background droplet loading was relatively low, as evidenced by the data from the first 5 minutes. Five minutes into the measurement period, a single cough was introduced. This resulted in a significant increase in droplets being propelled into the suction device due to the strong airflow from the cough, as evidenced by a substantial spike at t = 5 minutes. At this juncture, the suction device removed over 6,000 droplets per second for each size bin.
After the initial spike, the observations indicate a diffusion timescale of several minutes for these small droplets. The peak concentration of diffused aerosol in the environment occurs approximately 5 minutes post-cough. Our previous findings suggest that an exponential decay function aptly represents the decrease in droplet nuclei. The calculated half-lives of the droplets for the configuration considered in were 11.0, 6.4, and 2.9 minutes for droplet size bins of 0.3, 0.5, and 1.0 μm, respectively. In the second experiment, the artificial cough was repeated every minute following the initial cough at the 5-minute mark of the measurement period. The droplet removal rate from this setup is presented in . The continuous generation of aerosols resulted in a steady removal of droplets by the suction device. Initially, the droplet removal rate quickly escalated to over 6,000 droplets per second for all size bins. In the final series of experiments, we considered four distinct experimental configurations in which the particle counter was positioned at angles of 0, 45, 90, and 135 degrees relative to the manikin, while the positions of the manikin and the suction canister device were fixed, as illustrated in . As the particle counter probe was no longer inserted into the suction canister, the results represent the droplet count measurements from the particle counter in the enclosed environment rather than the suction removal rates of the canister; that is, the results represent the ambient droplet concentrations in these experiments. Droplet count measurements for selected locations 1 and 4 are displayed in . The normalized aggregated droplet counts and volumes for all locations and size bins are depicted in . The normalized aggregate total droplet volume, calculated from the droplet counts and particle size bins, showed a similar trend. The suction device lowered the peak total droplet volume and accelerated its decay rate. These measurements indicate that the half-life of the total droplet volume decreased from 23.6 minutes to 15.6 minutes with the application of the suction device. The aggregate peak droplet count was reached approximately 8 minutes after the cough event. The peak droplet count with the suction device operational was about 10% lower. At 20 minutes, i.e., approximately 12 minutes after the peak droplet count, 82% of the peak droplet count remained suspended with the suction device off and 66% with the suction device on. However, since the peak droplet concentration was also reduced with the suction device, these percentages translate to an 18% removal rate without the suction device and a 23% removal rate with the suction device from their respective peak values. At 30 minutes, or about 22 minutes after the peak droplet count, the reduction from peak counts was 24% without the suction device and 43% with the suction device. The experiment's findings confirm the suction device's capability to effectively remove droplets from the environment, making it a valuable tool for enhancing indoor air quality. The sustained performance of the suction device irrespective of single or multiple cough events demonstrates its potential utility in reducing the risk of airborne disease transmission. This information further reinforces the idea that small droplets can remain airborne for extended periods in constrained spaces with limited air circulation.
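The half-lives quoted above follow from fitting an exponential decay to the post-peak curves. A minimal fitting sketch with synthetic data is shown below; the fitting window and software actually used are not restated here, so this illustrates the computation of the half-life t½ = ln(2)/λ rather than reproducing the study's analysis.

```python
# Minimal sketch: fit N(t) = N0 * exp(-lambda * t) to the post-peak decay and
# report the half-life t_1/2 = ln(2) / lambda. Synthetic data are used here
# purely to show the mechanics; the study's raw time series are not reproduced.
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, n0, lam):
    return n0 * np.exp(-lam * t)

# Synthetic post-peak series roughly mimicking an 11-minute half-life (0.3 um bin)
t_min = np.arange(0, 30, 0.1)                      # minutes after the peak
true_lam = np.log(2) / 11.0
rng = np.random.default_rng(0)
counts = 6000 * np.exp(-true_lam * t_min) * rng.normal(1.0, 0.05, t_min.size)

(n0_fit, lam_fit), _ = curve_fit(exp_decay, t_min, counts, p0=(counts[0], 0.1))
half_life = np.log(2) / lam_fit
print(f"fitted half-life ~ {half_life:.1f} min")   # ~11 min for this synthetic example
```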
The calculated half-lives of the droplets in the different size bins provide evidence that smaller droplets remain suspended in air longer than larger ones. This is consistent with our previous research and the related literature . The results from the sustained coughing in the second experiment (see ) highlight the robustness of the suction device, especially its consistent performance over time. Figs and provide a comprehensive view of the suction device's performance under varying conditions. Whether handling a single cough event or coping with repeated events at regular intervals, the device demonstrated an ability to remove droplets from the environment efficiently. Given that the total viral load is likely proportional to the total aerosol volume , the total viral loading will be reduced by the deployment of these devices. This makes it a promising tool for enhancing indoor air quality and potentially reducing the risk of airborne disease transmission, although further study is required to confirm this. For the series of experiments considering different experimental configurations, it is evident that the position and angle of the particle counter play a crucial role in the outcome. Notably, there was a significant difference in droplet counts when the particle counter was placed at different locations. Studies have indicated that cough droplets spread extensively sideways , with the turbulent flow of the initial puff cloud facilitating mixing. Despite some variability and noise in the measurements, a clear decay pattern was observed, irrespective of whether the canister was switched on or off. The canister reduced aerosol counts more quickly than background dissipation. Yet, at location 1, aerosol droplets persisted for up to an hour, albeit at reduced concentrations. We also observed that the suction device proved more effective at location 4. This could be attributed to location 4 being off-axis and positioned behind the manikin, implying that droplets could only reach this location via diffusion. Given that the cough was directed towards the suction device, it effectively eliminated particles diffusing in the opposite direction once the cough stream had halted. The drop in the half-life of the total droplet volume with the device in operation underscores its efficiency. Analysis of the normalized aggregated droplet count reveals that the peak droplet counts typically occurred around the 7-minute mark on average. When the suction device was in operation, this peak count was roughly 15% lower and appeared earlier than when no suction was applied. Even though droplets continued to be present, the rate at which droplet counts diminished increased noticeably compared with scenarios where no suction was used. Our results are consistent with those of other similar devices. For example, K. Okuhata et al. evaluated a tabletop HEPA filter device's effectiveness using particle image velocimetry, measuring the planar velocity field and visualizing aerosol particle dispersion. Their results indicated that, when used, the suction device reduced aerosol droplets by 91.8% and deposited droplets by 68.7%. However, caution should be exercised when comparing such measurements across studies, as the removal rates depend on several factors, including ambient conditions, cough composition, room size and geometry, and the measurement technique.
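The aggregate total droplet volume referred to above is obtained by weighting the per-bin counts by a per-droplet volume. The sketch below shows one way to do this; representing each bin by its lower-edge diameter is our assumption (the counter only reports size bins), so the absolute volumes are illustrative, although the dominance of the largest bins, and hence the different behaviour of volume versus count, is a robust feature.

```python
# Sketch of aggregating per-bin droplet counts into a total droplet volume.
# Each bin is represented by a single diameter (here simply the bin's lower
# edge, an assumption); volume per droplet = (pi/6) * d^3.
import math

BIN_DIAMETERS_UM = [0.3, 0.5, 1.0, 3.0, 5.0, 10.0]  # counter size bins (lower edges)

def total_volume_um3(counts_per_bin):
    """Total droplet volume (um^3) for one sample, given counts in each size bin."""
    return sum(n * (math.pi / 6.0) * d**3
               for n, d in zip(counts_per_bin, BIN_DIAMETERS_UM))

def normalize(series):
    """Normalize a time series to its peak value, as in the aggregated plots."""
    peak = max(series)
    return [v / peak for v in series]

if __name__ == "__main__":
    sample = [5200, 3100, 900, 40, 8, 1]  # illustrative counts per bin
    # Note how the few largest droplets dominate the volume, which is why the
    # volume half-life behaves differently from the count half-lives.
    print(f"total volume ~ {total_volume_um3(sample):.0f} um^3")
```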
Several other studies have looked at portable local exhaust systems and extra-oral devices, primarily for dental, ENT and GI procedures . Most studies show that portable devices can significantly reduce droplet counts. T. Maurais et al. even demonstrated that portable devices perform better in isolation (compared with a combination of room ventilation and devices). The increased turbulence from negative-pressure room ventilation is thought to disperse droplets that could otherwise be removed by a portable device close to the droplet source. Also, although studies confirm that other designs, such as patient hoods or extra-oral suction devices, are effective, they are limited to certain clinical environments . Other advantages of our design include:
• it is more portable and can be connected to most standard hospital suction ports;
• the design is adaptable and can be easily modified;
• 3D printing makes it possible to scale up production quickly in the case of another respiratory pandemic;
• it does not require the use of a HEPA filter (but can be easily adapted to include one);
• the design already provides space for UVC filters that can be added for decontamination.
Finally, our results align with other studies demonstrating that portable suction devices can significantly reduce aerosol droplet concentrations, even in real-world environments , including homes, offices, schools and hospital rooms. Further study is warranted to evaluate the effectiveness of our device in other settings outside of the laboratory.
Limitations and sources of error
Our study's primary sources of error stem from the optical properties of the aerosols generated by the simulated cough, the chaotic nature of cough flows due to turbulence, and the measurement accuracy of the particle counter. The particle counter used in this study has a sampling flow rate accuracy of ±5% and a size resolution of 15%. Its calibration is traceable to the National Institute of Standards and Technology (NIST). The counter employs a light-scattering technique to determine particle concentration: particles are introduced into an optical chamber and their scattering intensity is recorded. The correlation between scattered light intensity and particle concentration is governed by the properties of the particles. In situations where these properties are not defined, as in this study, there may be significant deviations in the absolute particle counts. However, since our primary objective hinges on comparing particle counts against a baseline, the observed trends should remain consistent. Furthermore, turbulence is chaotic in nature, with its behaviour predictable predominantly in a statistical sense owing to its extreme sensitivity to initial conditions. This high degree of variability presents challenges in studying phenomena such as cough flows. To account for variations attributed to turbulence, we repeated the experiments several times. Our findings are represented through mean and standard deviation values, as illustrated in . Further details on the robustness of the cough generator manikin, the measurement technique, and the associated errors are discussed in . In that reference, cough images captured from the manikin and spatial measurements taken without a suction device were compared with a computational fluid mechanics analysis incorporating humidity and temperature effects. The standard deviation values, shaded around the mean in , highlight significant decay and trends that cannot be attributed solely to variations.
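As a concrete illustration of how repeated runs are reduced to the mean and standard-deviation bands referred to above, a minimal aggregation sketch follows; the number of repetitions and the synthetic series are placeholders, not the study's data.

```python
# Sketch of how repeated cough runs can be reduced to the mean and standard
# deviation bands shown in the figures. Each row of `runs` is one repetition
# of the same experiment sampled on a common time grid.
import numpy as np

def mean_and_band(runs: np.ndarray):
    """Return (mean, std) across repetitions for each time point."""
    mean = runs.mean(axis=0)
    std = runs.std(axis=0, ddof=1)   # sample standard deviation
    return mean, std

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.arange(0, 35, 0.5)  # minutes
    # Placeholder signal: flat background, then exponential decay after the cough at t = 5 min
    base = 6000 * np.exp(-np.log(2) / 11 * np.clip(t - 5, 0, None))
    runs = np.array([base * rng.normal(1.0, 0.1, t.size) for _ in range(5)])
    mean, std = mean_and_band(runs)
    # mean +/- std would be plotted as the shaded band around the mean curve
```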
However, it should be noted that inherent variability is an inevitable aspect of studies involving the chaotic nature of turbulence, which predominates cough flow dynamics. Consequently, measurements were repeated, and mean values were presented to ensure reliability and consistency in our findings. This study describes the design and fabrication of a suction canister from thermoplastic polyester using 3D FDM printing. The design and deployment of the device prioritized flexibility, speed, and affordability and could be used for rapid deployment in future pandemics. We confirmed its ability to effectively remove droplets from the momentum-driven initial cough stream and the later droplet-filled ambient air. The device's suction efficiency also consistently maintained a stable droplet removal rate across multiple cough events, showcasing its reliable performance. While the results did fluctuate depending on location, ambient conditions, and characteristics of the turbulent cough, the aggregated data demonstrated a substantial reduction in aerosol half-life, dropping from 23.6 minutes to 15.6 minutes. This research highlights the potential of a straightforward, cost-effective device in mitigating airborne disease transmission.
The scalability of this solution is impressive, as multiple devices could be deployed rapidly and at low cost in public spaces, thereby improving public health safety. Its affordability and swift deployment via FDM 3-D printing make this device particularly well-suited for use in settings with limited resources.
Diagnosis and therapy of minimal change glomerulopathy in adults – 2023
50a2d4dd-7a67-4d91-81a0-ca01ef9947ee
10511370
Internal Medicine[mh]
The traditional term minimal change glomerulopathy (minimal change disease [MCD]) was coined for a heavily proteinuric glomerular disease that is histologically characterized by the absence of light-microscopic changes, yet shows the typical electron-microscopic finding of a usually complete loss of the podocyte foot processes. This prominent alteration of cell structure suggests an essential role of the podocytes in the pathogenesis of the proteinuria. Diseases with such histological characteristics (besides MCD, also certain forms of focal segmental glomerulosclerosis [FSGS]) are therefore grouped under the term podocytopathies. Although their exact mechanism is usually unknown, it is assumed to be autoimmune-mediated. In the following, MCD and FSGS are considered separately; note, however, the ongoing discussion as to whether MCD and FSGS really represent two separate disease entities or should rather be regarded as different manifestations or stages of a common underlying disease . Indeed, a paradigm shift has recently been emerging in which the classification and treatment of patients with "primary podocytopathy" are no longer guided purely by the histological pattern of injury (MCD vs. FSGS) but rather by the underlying pathogenesis, e.g. primary/idiopathic/autoimmune vs. external noxious agent vs. genetic . Podocytopathy in the form of MCD is by far the most common cause of nephrotic syndrome in childhood, but it also underlies about 10–15% of all cases of primary nephrotic syndrome in adults . Clinically, MCD presents as an acute-onset nephrotic syndrome that is usually characterized by a good response to immunosuppressants, in particular glucocorticoids (GC). The diagnosis is established by kidney biopsy. In childhood, because of the good response rate to GC therapy and the high diagnostic pre-test probability, a diagnostic kidney biopsy is usually not performed initially, and the disease is accordingly termed steroid-sensitive (SSNS) or steroid-resistant nephrotic syndrome (SRNS). MCD can occur at any age; men are affected more often than women. The prognosis is good and the risk of end-stage kidney disease is low. Nevertheless, serious disease- or therapy-related complications can occur during the course of the disease. Complicated courses include steroid-dependent (SA), steroid-resistant (SR) and frequently relapsing (HR) forms. These require the use of alternative, steroid-sparing immunosuppressants such as calcineurin inhibitors (CNI), cyclophosphamide (CYC), mycophenolic acid (MFS) or B-cell-depleting agents such as rituximab (RTX). The mechanism by which MCD develops is not understood and is presumably multifactorial. Viral infections or allergens are regarded as possible triggers. Dysregulation of certain T-cell subsets has also been described . The successful use of B-cell-depleting agents such as RTX argues for a pathophysiological role of B cells in the development of MCD . As yet unidentified "permeability factors" ultimately lead to podocyte injury. In a recently published study, anti-nephrin autoantibodies were demonstrated for the first time in about 30% of the MCD patients investigated . MCD usually occurs without an identifiable trigger (primary). Particularly at first manifestation, however, a careful evaluation for secondary causes must be performed (see Table ; ). Further diagnostic work-up is individualized. As part of the basic work-up, the following aspects should be considered: At first manifestation with heavy proteinuria (e.g. on urine dipstick), and whenever initiation or intensification of immunosuppressive therapy is being considered, the extent of proteinuria should be quantified by measuring the protein-creatinine and/or albumin-creatinine ratio, ideally from a first morning urine or from an aliquot of the attempted collection interval. Alternatively, proteinuria can be quantified in a 24-hour urine collection (advantage: more precise; disadvantages: more laborious, frequent collection errors). Caution: creatinine-based assessment of kidney function (both clearance and eGFR) is not validated for the nephrotic syndrome; owing to increased tubular secretion of creatinine, the GFR is systematically overestimated in the nephrotic syndrome . Testing for anti-phospholipase A2 receptor antibodies (anti-PLA2R Ab) by ELISA and/or indirect immunofluorescence (IIFT) should be performed in every adult at first manifestation of a nephrotic syndrome, alongside exclusion of other potential secondary causes (see Table ). In adults, the clinical picture of an acute-onset nephrotic syndrome without a directly identifiable cause (such as detection of anti-PLA2R Ab) constitutes in principle an indication for a diagnostic kidney biopsy. The diagnosis of MCD can only be made histologically. The absence of light-microscopic abnormalities gives this entity its name. Immunofluorescence is also usually unremarkable or shows at most mild mesangial accentuation and occasionally predominantly discrete IgM and/or C1q deposits without an electron-microscopic correlate. This appears to have no prognostic relevance, and the response to GC therapy is comparable to that of patients without immunofluorescence abnormalities . Only electron microscopy demonstrates the characteristic pronounced flattening ("effacement") of the podocyte foot processes. Particularly in older patients with MCD, focal interstitial fibrosis and tubular atrophy may also be found, reflecting nonspecific ageing processes. Any additional pathological findings automatically exclude the diagnosis of MCD . MCD usually manifests abruptly (within days to a few weeks) with the full picture of a nephrotic syndrome, defined as proteinuria ≥ 3.5 g/24 h or a protein-creatinine ratio ≥ 3 g/g, accompanied by hypoalbuminaemia (laboratory-specific cut-offs), oedema and hyperlipidaemia. Patients can often name the exact time at which the oedema was first noticed. The considerable fluid retention is frequently accompanied by weight gain. In older patients, atypical presentations with hypertension and impaired kidney function also occur . Microhaematuria is found in up to half of the cases (Table ). Disease manifestation can also be associated with acute kidney injury (AKI), in individual cases even requiring dialysis .
The long-term renal prognosis of steroid-sensitive MCD is very good and the risk of end-stage kidney disease is extremely low . However, serious disease- or therapy-associated complications such as thromboembolism or infections can occur, particularly with frequent relapses. MCD carries a high tendency to relapse, and relapses can still occur decades after the first manifestation. Although the individual episodes can usually be well controlled with GC, this is thus a potentially chronically relapsing disease, about which patients should be informed at the time of diagnosis. Spontaneous remissions are possible but usually occur late, so that, in view of the disease-associated complications, immunosuppressive therapy should generally be given. Up to 15% of all children and adolescents with SSNS continue to relapse as adults. Given the relatively high prevalence on the one hand and the tendency to relapse on the other, transition, i.e. a planned transfer from paediatric to adult care, is not an unusual scenario. Such situations should, where possible, be used to obtain an accurate summary of the disease course to date, the cumulative immunosuppression and any manifest toxicity (growth retardation, etc.), to perform baseline diagnostics (metabolic status, bone metabolism), and to close individual vaccination gaps. The evidence on the optimal treatment of MCD in adults is limited. Current treatment strategies are mostly extrapolated from paediatric studies or are based on retrospective investigations or small case series . For the majority of patients, GC are the first choice for the initial treatment of MCD. However, compared with children and adolescents, the treatment response is delayed and overall less good. Only half of the patients achieve a complete remission (CR) after 4 weeks of guideline-based treatment, the median time to remission is 2 months , and in up to 25% no remission is achieved at all. In contrast to other nephrotic glomerulopathies, a treatment response in MCD is accompanied by complete normalization of the urinary findings, i.e. the response generally follows an "all-or-nothing" principle. A partial remission is atypical for MCD and not infrequently reflects another underlying podocytopathy (e.g. "FSGS") that was not recognized in the initial work-up. The response to a first course of GC also allows an early prognostic assessment; an early relapse is already a predictor of an unfavourable course in the sense of further relapses. According to the KDIGO guidelines, the first manifestation of MCD should be treated as follows: prednisolone at a dose of 1 mg/kg/d (maximum dose 80 mg/d), taken as a single dose in the morning between 7 and 8 a.m. The alternative alternate-day regimen (prednisolone 120 mg every second day) is not customary in Central Europe. Because of the transient oedema-related weight gain, the body weight before disease manifestation should be used as the reference value. National guidelines and current controlled trials (TURING: ISRCTN16948923 ) limit the maximum dose to 60 mg/d . The minimum duration of this high-dose GC therapy is 4 weeks (by which time half of the patients achieve a CR). If an early CR is not achieved, KDIGO recommends maintaining the initial dose for a maximum of 16 weeks, over which period about 80% of patients then achieve a CR. We take a critical view of such a long course of high-dose GC, which in individual cases corresponds to a total dose of up to 10 g prednisolone equivalent, particularly in older patients, and recommend considering alternative treatment strategies earlier if remission is not achieved, for example after 8 weeks, although corresponding trials (RIFIREINS: NCT03970577 ) were still recruiting at the time of writing (see alternative therapies) . After a CR has been achieved, GC therapy should be continued at the unchanged dose for a further 2 weeks before stepwise dose reduction ("tapering") can begin. The optimal duration of GC therapy is unclear, and uniform recommendations for tapering are also lacking; KDIGO recommends a dose reduction of 5–10 mg prednisolone equivalent per week once a CR has been achieved . The total duration of treatment should not exceed 24 weeks. According to older studies, tapering too early or too quickly carries the risk of relapse and the resulting problem of an overall higher GC exposure . A proposal for a contemporary low-dose GC regimen is shown in Fig. and is based on the protocol of the TURING trial [TURING: ISRCTN16948923]. In particular, the right-hand arm (no response by week 8) deviates markedly from KDIGO and limits high-dose therapy from that point onwards to a maximum of 60 mg/d prednisolone for at most a further 4 weeks . The cumulative total dose and the treatment duration fall well below the recommendations of current guidelines and are to date not supported by evidence from controlled trials. The authors nevertheless consider such a regimen to be practicable, efficient and safe. Faster tapering of GC was also effective and safe in a recent publication from Japan; with rapid tapering, relapses occurred earlier but not significantly more often . If a CR has still not been achieved after 16 weeks of therapy (with modified high-dose GC therapy [see above] or a corresponding alternative strategy), steroid/treatment resistance is assumed (SR-MCD). In such cases, treatment adherence should first be discussed. If adherence appears adequate, the next step is to re-evaluate the diagnosis, and the indication for a re-biopsy should be considered generously. An FSGS is then frequently found that was either missed earlier because of a sampling error or developed only during the course of the disease. In principle, steroid resistance implies a worse prognosis.
Relapse (recurrence of proteinuria > 3.5 g/d after achieving a CR): About two thirds of adult patients who initially achieve a CR relapse after discontinuation of GC therapy, and even among "early responders" a relapse occurs within one year in a good 20% of cases . Around half of all relapses occur in the first 6 months after discontinuation of steroid therapy . Younger patients are more prone to relapse than older ones.
In the case of "infrequent relapses" (by KDIGO definition < 4 per year), the originally chosen therapy should be repeated, although in the case of GC therapy a shorter treatment duration is usually sufficient. After 4 weeks or once remission is achieved, the GC therapy is reduced in 5 mg steps every 3–5 days and stopped after 1–2 months. In the case of frequent relapses (defined as ≥ 2 episodes within 6 months or ≥ 4 within 1 year), an alternative treatment should be considered in order to avoid the therapy-associated adverse effects of prolonged GC therapy (see below). Occasionally, self-limiting relapses occur during the course of the disease and are noticed only incidentally (e.g. because of a positive urine dipstick). In asymptomatic patients with stable kidney function, it is then possible to wait a few days for a spontaneous remission before restarting immunosuppressive therapy. Alternatively, a short, low-dose course of GC can also be effective. In principle, it should be noted that the definition of "infrequent relapses" can in individual cases entail a considerable cumulative GC exposure, and alternative treatment strategies should also be considered in such courses that do not formally qualify as "frequently relapsing". RTX in particular appears to us to be a good option here (details below). Given the potentially chronically relapsing nature of MCD, it is advisable to record the cumulative GC dose prospectively. Furthermore, the start of therapy should be clearly traceable from the medical reports in order to ensure continued structured outpatient care.
Alternative therapies: In view of the considerable adverse effects of prolonged high-dose GC therapy, alternative treatment approaches are gaining importance, analogous to the management of other autoimmune kidney diseases. A possible approach is summarized in Fig. . "GC-sparing" strategies for first-line therapy of MCD include the two CNIs tacrolimus (TAC) and cyclosporin A (CsA), as well as RTX or MFS, with RTX in particular appearing to us an attractive option. The choice is guided by local experience, comorbidities and patient preference. Possible regimens are:
• TAC monotherapy: 0.05–0.1 mg/kg bid (possible trough level 5–10 ng/ml) . In the MinTac trial, an initially low trough level (4–8 ng/ml) was increased to 9–12 ng/ml only in the case of insufficient response after 8 weeks . A CR was achieved later under TAC than under GC, with comparable results only after 8 weeks; with a comparatively short treatment duration, the relapse rate was high . For treatment duration, see below.
• Combination therapy TAC + low-dose GC: 0.05 mg/kg bid [trough level 6–8 ng/ml] in combination with low-dose GC (starting dose: prednisolone 0.5 mg/kg/d) .
• CsA monotherapy: 3–5 mg/kg/d bid, trough level 100–175 ng/ml. TAC has a more favourable adverse-effect profile than CsA and is therefore now generally preferred. However, the two substances partly differ in their adverse effects, so that a switch may be indicated.
• Combination therapy GC + RTX: possible protocols for this indication are discussed in the section below (SA/HR MCD).
• Combination therapy MFS + low-dose GC: 1000 mg (mycophenolate mofetil) or 720 mg (mycophenolate sodium) bid, in combination with low-dose GC (prednisolone 0.5 mg/kg/d; maximum 40 mg/d). Once a CR is achieved, GC are tapered off within 4–6 weeks, and MFS is continued for a total of 24 weeks.
Because of its considerable potential for adverse effects, we no longer regard CYC as an adequate first-line therapy when GC are contraindicated; data on alternative treatment approaches are now also available for the second line (for the role of CYC in HR/SA courses, see below). For both CNI- and MFS-based regimens there are no clear recommendations regarding treatment duration. If well tolerated, TAC can be taken for 1–2 years. Tapering the therapy slowly reduces the risk of relapse compared with abrupt discontinuation . After one year, the TAC dose can be reduced in quarterly steps of 25% each . The higher risk of disease relapse after stopping therapy must be weighed against the problem of CNI nephrotoxicity with long-term use and discussed with the patient. Particularly in patients with pre-existing impairment of kidney function and/or tubulointerstitial fibrosis on biopsy, CNI treatment should not be continued for too long.
Steroid-dependent (SA) and frequently relapsing (HR) MCD: In these scenarios, according to the current KDIGO guideline, oral CYC, CNI, RTX or MFS can be used as equivalent options (comparable remission rates of around 70–90%) . The decision should be made after careful individual weighing of the respective advantages and disadvantages (in particular the adverse-effect profile), availability and patient preference. Recommended regimens for CNI and MFS correspond to those given above. However, we now regard RTX as a suitable option with few adverse effects for SA or HR MCD; moreover, in the case of a possibly antibody-mediated disease, it could be regarded as a targeted therapy (see above). RTX therapy is usually preceded by restarting GC (at the standard dose of 1 mg/kg body weight). There is no standard RTX regimen; dosing usually follows one of the two established schedules (rheumatological/haematological), either 2 × 1 g two weeks apart or 4 × 375 mg/m² at weekly intervals, although lower doses are also used (e.g. 375 mg/m² at 1- or 2-week intervals). In a study of patients with SA MCD, the treatment effect was better (lower relapse rate) when RTX was administered after a CR had been achieved . If anti-CD20 therapy is given while the nephrotic syndrome is still manifest, renal loss of the drug may necessitate earlier re-dosing. For these reasons, we consider the RTX dosing from the TURING protocol (2 × 1 g two weeks apart) appropriate (for details of RTX therapy, see the separate article "General recommendations for the treatment of glomerular diseases"). Concomitant GC can then be tapered off within 1–2 months or even faster. In individual cases, particularly in an HR disease course, maintenance therapy with RTX is established to prevent further relapses (e.g. 500 mg at six-monthly intervals). This strategy is increasingly also chosen for patients with formally "infrequent relapses" over the long-term course . Because of its considerable toxicity, CYC therapy should only be used in complicated disease courses. Oral CYC is used (2 mg/kg/d; maximum dose 200 mg/d), adjusted for age and kidney function, for 12 weeks. This results in a cumulative CYC dose of 168 mg/kg (2 mg/kg/d over 84 days; estimated threshold for gonadal toxicity in men: 200–250 mg/kg) . CYC therapy should not be given for longer than 3 months and should not be re-administered later in the course. The toxicity of the treatment must be discussed in detail with the patient beforehand (for details of CYC therapy, see the separate article "General recommendations for the treatment of glomerular diseases").
Steroid-resistant (SR) MCD (absence of remission after 16 weeks of high-dose GC therapy): SR MCD occurs in up to 10% of adult cases. If correct treatment adherence is assumed, the original diagnosis must be questioned. A re-biopsy should therefore be considered; it usually either confirms the initial diagnosis (MCD) or reveals FSGS. However, even a repeat finding of MCD does not exclude an underlying FSGS (sampling error). In both cases the subsequent therapy initially follows the recommendations for SR FSGS, so that a trial of a CNI may optionally also be undertaken before re-biopsy. Particularly in young patients and/or those with a corresponding family history, the possibility of underlying genetic causes and, where appropriate, genetic work-up must also be considered at this point (see the ÖGN consensus: FSGS). Usually, treatment with a CNI is started while the ongoing GC therapy is reduced or tapered off in parallel. A treatment response should become apparent after a few months of CNI treatment. Concomitant low-dose GC therapy (5–7.5 mg/d) can be given in addition. In the case of a response, the treatment should be slowly tapered off after one year at standard dosing. The role of RTX in SR MCD is less well established in adults than in children. In a small cohort that also included 5 patients with steroid resistance, all patients achieved a remission after RTX administration .
Supportive therapy: In principle, the general recommendations for all proteinuric glomerulopathies apply (see the separate article "General recommendations for the treatment of glomerular diseases"). Because remission is often rapid, early treatment with RASi or statins and dietary protein restriction are not strictly necessary, but become so only if an early response fails to occur. In selected cases, thromboprophylaxis is indicated (see the separate article "General recommendations for the treatment of glomerular diseases"). Supportive measures include salt restriction, diuretics ("negative NaCl balance"), blood pressure control and smoking cessation. Already at the time of initial diagnosis, attention should be paid prospectively to possible future metabolic and infectious adverse effects of immunosuppression. At the latest after remission has been achieved, baseline densitometry should be arranged depending on other risk factors for osteoporosis (female sex, older age, smoking, etc.) (see the separate article "General recommendations for the treatment of glomerular diseases"). In remission, the vaccination status should be updated and a structured vaccination programme discussed.
This includes, in addition to annual influenza vaccinations, a two-part pneumococcal vaccination (PCV13 followed by PPSV23) and basic hepatitis B immunization. Although temporal associations between vaccinations and de novo MCD or disease relapses have been reported in rare cases , vaccination against COVID-19, including booster doses, is also strongly recommended once clinical remission has been achieved.
Owing to the favourable course of the disease, kidney transplantation in MCD is a rarity and more often reflects an earlier misdiagnosis, usually in the sense of an initially "missed" FSGS or another second pathology. Nevertheless, MCD can recur in the transplant . De novo occurrence of MCD, i.e. without any relation to the underlying disease, is somewhat more frequent . Manifestation occurs early (almost always within 4 months after transplantation); living-donor transplants may carry a somewhat higher risk. A CR is almost always achieved with GC. De novo MCD in pregnancy is extremely rare. Treatment recommendations are based on case reports; GC, TAC and RTX have been used successfully . Patients should be informed about the often chronically relapsing course of the disease. Self-monitoring with urine dipsticks (for example during an infection or, after remission has been achieved, at monthly intervals) allows early diagnosis of a relapse and prompt initiation of therapy. Even in sustained remission, we consider nephrological follow-up, for example at yearly intervals, to be sensible. In this context, blood pressure and metabolic parameters should also be monitored, weight documented, and the vaccination status addressed.
Fully digital pathology laboratory routine and remote reporting of oral and maxillofacial diagnosis during the COVID-19 pandemic: a validation study
88d1fd03-7e2d-426c-9bcc-88a1356c0f92
7955219
Pathology[mh]
The current outbreak of COVID-19 has instigated the need to adapt several aspects of modern life with consideration for social distancing, since aerosolized particles of the SARS-CoV-2 virus can remain airborne for up to 3 hours and, through speaking, coughing, and sneezing, contaminate surfaces for several hours to days . Mitigation strategies include horizontal isolation, which directly affects the functionality of several primary services, including health care. Dental practice usually requires the use of high-speed rotational drills cooled with water, which spread aerosols widely throughout a small office with limited ventilation. Infectious aerosols were considered a key etiologic factor in prior coronavirus outbreaks . In addition, direct or indirect contact with exposed mucosa is related to a higher risk of transmission . Previous studies reported a high rate (91.7%) of SARS-CoV-2 in saliva even before lung tissue involvement . Given these circumstances, it is strongly recommended that elective dental procedures be postponed. Recently, recommendations have been made regarding the urgent need to reorganize services involved in the diagnosis of head and neck cancer, since a delayed histological diagnosis can delay treatment and have an adverse effect on patient prognosis . In this context, incisional/scalpel biopsies remain the standard procedure and should be prioritized as usual despite the risk of SARS-CoV-2 transmission. The need for histological processing of surgical specimens requires pathology laboratories to maintain and monitor their workload while operating with few staff members (depending on the availability of safe space). Technology can mitigate this issue with a single additional step of glass slide digitalization, which can easily be conducted by one technician. Evidence shows that whole slide images (WSI) are suitable for histopathological diagnosis, with performance non-inferior to light microscopy in several subspecialties of human pathology, including head and neck pathology . Digital pathology can also be a very useful tool for remote discussion of these cases in Multidisciplinary Team Meetings (MDTMs) to plan patient management, in addition to sharing with trainees and consultation with other pathologists. Despite the strong evidence regarding the diagnostic usefulness of WSI systems in several subspecialties of human pathology, their utilization for primary diagnosis remains limited to only a few services in the UK , the Netherlands , and Spain . However, the need to maintain pathology services despite disruptive challenges such as the COVID-19 pandemic has reinforced the need for digital pathology as an alternative and reliable diagnostic method. Accordingly, a recent survey of experience in the UK shows that several services are already conducting validation studies for the implementation of digital pathology. In addition to the validation of digital pathology (of histology), the current outbreak has also accelerated remote reporting by pathologists outside their usual work environment, highlighting the need to validate remote pathology reporting . We have previously validated the use of WSI for primary diagnosis in oral and maxillofacial pathology . In this study, we propose the validation of remote WSI assessment and reporting for the diagnosis of oral and maxillofacial pathologies to substantiate the creation of the first fully digital oral pathology laboratory in Brazil.
Adaptations required in laboratory infrastructure and workflow
Professionals enrolled in laboratory workflow tasks had their temperature routinely monitored, any occasional symptoms were kept under close surveillance, and staff rotation was adopted to avoid overexposure to the virus, since the correlation between viral load and symptoms/prognosis is still not clear . Pathology laboratories are usually designed to contain fire incidents and are equipped with refrigeration and air conditioning, making them poorly ventilated, which can contribute to aerosol suspension. Therefore, some adaptations to the infrastructure and workflow of the pathology laboratory were made, and additional precautions were taken to provide safe working conditions for the staff, as seen in Fig. : (1) reduction of the number and circulation of staff in the laboratory (one pathologist (1a) and one technician (1b) were involved in gross analysis) and reduction of the frequency of staff exposure (the team should follow a schedule with rotational shifts); (2) exhaust systems (fume hood and exhaust vents) were kept functioning to minimize the potential suspension of microdroplets, and a HEPA filter was installed; (3) disinfection of surfaces, the floor, and the shoes of the staff prior to laboratory use was encouraged; (4) disinfection of plastic containers holding biopsy tissue was reinforced, owing to the high risk of contamination while the clinician manipulates these containers during surgery and given the time SARS-CoV-2 remains viable on this material ; (5) after processing the material, masks were also disinfected before being discarded . For appropriate disinfection, a solution of 62–71% ethanol or 0.1% sodium hypochlorite applied for 1 minute is indicated . (6) Stringent and regular handwashing by all staff members was also reinforced. Further processing steps required only one experienced technician, including (7) automated tissue processing; (8) embedding within paraffin blocks; (9) tissue sectioning; (10) staining; and (11) scanning. The majority of adaptations to the laboratory workflow were related to work safety. There were no changes in quality control procedures for gross examination, processing, embedding, microtomy, and staining. To ensure digital image quality control, an additional step of macroscopic evaluation of the glass slides prior to scanning was conducted, including checking for clean, well-placed and dried coverslips, as well as artifact- and bubble-free slides. After scanning, the digitized images were checked for out-of-focus areas, missing tissue, and striping artifacts. The scanned slides were then uploaded to the institutional server and made available through a link, which redirected the pathologist to a web-based WSI viewer enabling online visualization, navigation, and flipping functions.
Study design
The methodology for this study is based on the guidance for remote reporting of digital pathology slides during periods of exceptional service pressure , previously detailed in the RCPath guidance for digital pathology implementation . We undertook the present learning process of validation over nine months to simultaneously train 8 PhD students in remote reporting of oral and maxillofacial pathologies.
This report, however, is based on the learning process of 3 trainees and the lead pathologist, who evaluated the same set of cases over a 5-week period with a washout interval of 1 month, as we considered our team sufficiently trained and calibrated after almost 9 months of digital pathology experience for primary diagnosis purposes. To further consolidate the learning curve and ensure risk-reduction strategies, digital pathology diagnosis and conventional microscope diagnosis were compared whenever challenging cases or cases with a clinical suspicion of malignancy were assessed. These cases were deferred to glass and reviewed by the lead pathologist using conventional light microscopy (CLM). These risk-reduction strategies were combined with expert interconsultation when needed. The sample comprised oral biopsies consecutively and prospectively selected from the cases processed in the Pathology Laboratory of FOP-UNICAMP over 5 weeks. Only first-time evaluation cases were considered in this analysis. Previously assessed cases and those requiring further evaluation were included only when the entire set of digital slides (hematoxylin and eosin (H&E), immunohistochemical, and special stains) for each such case was evaluated within the 5-week period. A total of 162 glass slides from 109 patients were included in this validation and were assessed by all 4 pathologists in both analyses (digital and conventional) to allow intraobserver comparison. The glass slides were scanned using the Aperio Digital Pathology System (Leica Biosystems, Wetzlar, Germany) with a spatial sampling of 0.47 μm per pixel, automated focusing, and magnification at ×20. WSI included all of the tissue present on the conventional glass slides. The workstation specifications are listed in Table . Prints of each screen are shown in Fig. to demonstrate visual differences in the pathologists' perception of the colors and brightness of the images. All equipment used in this validation passed the University of Leeds Point of Use QA tool test ( http://www.virtualpathology.leeds.ac.uk/research/systems/pouqa/ ). The histopathological routine workflow before the COVID-19 pandemic was conducted at a five-position multi-header CLM, enhanced by the possibility of sharing the glass slide view via a television set receiving live video from a camera attached to the CLM. After the glass slide evaluation, a brief description of the histological characteristics is usually provided by the lead pathologist and annotated by one trainee in a temporary sheet and by another trainee in the histological description field of the FOP-UNICAMP Anatomopathological Examination Request Website for Biopsies' Records and Histopathological Reports ( https://w2.fop.unicamp.br/patologia/index.php?sid=2 ). Our online system provides a fully digital environment, which enables pathologists to see the biopsy request form (as well as clinical photographs, serological exams, and radiographs), to consult clinicians about the patient's evolution and disease course, and to review pending cases awaiting reporting. The diagnostic method used to validate remote WSI assessment was an online meeting platform (Google Meet). The lead pathologist shared his screen with the pathology trainees, allowing a fully digital workflow for case discussion and remote reporting.
Digital slides and the clinical information registered on the website were also shown to all participants for clinicopathological correlation, closely resembling the histopathological routine workflow before the COVID-19 pandemic. The digital assessment was conducted via an online meeting (Google Meet) for 5 weeks and, after a 1-month washout period, the glass slides were analyzed to compare both methods. Diagnoses were classified as (1) concordant; (2) slightly discordant, with no clinical or prognostic implications; or (3) discordant. Discordant cases were reviewed by the lead pathologist to reach a preferred diagnosis using both the digital and conventional methods, and the discordant cases were stratified as having (1) a low, (2) a moderate, or (3) a high degree of difficulty. Additionally, the pathologists were required to list any problems (technical or case related) which may have impaired the digital and/or glass slide evaluation. The time taken for each analysis was not assessed as a performance metric, because it would have been necessary to register the time for each individual case to provide median values, and asking participants to time their own analyses would have introduced bias into this study.
Statistics
We assessed the global percentage of agreement ( P o ) and the κ statistic (Fleiss's kappa) to establish the agreement between the digital method (DM) and the conventional method (CM), since intraobserver agreement is the primary form of analysis and the preferred measurement. κ values < 0.00 indicate poor agreement, 0.0–0.2 slight agreement, 0.2–0.4 fair agreement, 0.4–0.6 moderate agreement, 0.6–0.8 substantial or good agreement, and > 0.8 excellent or almost perfect agreement. Interobserver concordance was not considered in this analysis, since it is more suitable for evaluating a pathologist's performance than the method's performance. Statistical analyses were conducted using the Real Statistics Resource Pack for Excel.
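For readers who want to reproduce the agreement metrics, the minimal sketch below shows how the global percentage of agreement and a Fleiss-type kappa can be computed from one pathologist's paired DM and CM readings, together with an error rate against a reference diagnosis. The category labels and counts are hypothetical toy data, and the published analysis itself was run with the Real Statistics Resource Pack for Excel rather than with this code.

from collections import Counter

def agreement_and_kappa(dm, cm):
    """Global percentage of agreement (Po) and Fleiss's kappa for two ratings
    (DM and CM) of the same cases by the same pathologist."""
    assert len(dm) == len(cm) and dm, "paired, non-empty rating lists expected"
    n_cases = len(dm)
    # Observed agreement: with two ratings per case, each case either agrees or not.
    p_o = sum(d == c for d, c in zip(dm, cm)) / n_cases
    # Chance agreement from the pooled category proportions over both readings.
    pooled = Counter(dm) + Counter(cm)
    p_e = sum((n / (2 * n_cases)) ** 2 for n in pooled.values())
    kappa = (p_o - p_e) / (1 - p_e)
    return p_o, kappa

def error_rate(dm, reference):
    """Share of DM diagnoses that differ from the reference (preferred) diagnosis."""
    return sum(d != r for d, r in zip(dm, reference)) / len(reference)

# Hypothetical toy data for six cases; the reference list stands in for the
# preferred diagnoses established after review of discordant cases.
dm = ["fibroma", "OSCC", "mild dysplasia", "fibroma", "dentigerous cyst", "OSCC"]
cm = ["fibroma", "OSCC", "mild dysplasia", "fibroma", "odontogenic keratocyst", "OSCC"]
reference = ["fibroma", "OSCC", "mild dysplasia", "fibroma", "odontogenic keratocyst", "OSCC"]
po, kappa = agreement_and_kappa(dm, cm)
print(f"Po = {po:.2f}, kappa = {kappa:.2f}, ER = {error_rate(dm, reference):.2%}")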
The intraobserver metrics were calculated based on fixed categories of the final diagnosis, and the interpretation of descriptive diagnoses was based on agreement on certain aspects of the report, such as dysplasia grading, microinvasion, and specific structures such as perineural invasion, basal layer pigmentation, vascular structures, and the presence of odontogenic components. The intraobserver agreement between DM and CM was considered almost perfect ( κ ranged from 0.85 to 0.98, with 95% CIs ranging from 0.81 to 1). All intraobserver metrics (the percentage of global agreement for each pathologist, the kappa statistics, disagreements, and error rates, as well as each pathologist's experience and the reported reasons for diagnostic agreement by method) are shown in Table . Among the lead pathologist's discordant cases, one concerned a borderline diagnosis between severe dysplasia and OSCC (which was deferred to glass and reported as OSCC), and the other was a squamous cell carcinoma with a keratoacanthoma-like configuration, which was referred for remote interconsultation with a dermatopathologist. The most frequent pitfall of this form of DM was lag in screen mirroring, followed by the lack of radiographic information and clinical photographs/information. Although infrequent, other reported pitfalls included (1) necrosis, identified as a pitfall for the diagnosis of OSCC by pathologist 2; (2) the need for a higher magnification to assess dysplasia, for pathologist 3; and (3) inflammation, identified as a pitfall for the diagnosis of an orthokeratinized odontogenic cyst (misinterpreted as a dentigerous cyst) by pathologist 4. Among all 39 discordant diagnoses, the majority of preferred diagnoses were reached by CM (42, 65.6%), while 30 (46.8%) were reached by DM. The individual percentages of preferred diagnoses are shown in Table . The most significant and frequent disagreements among the trainees involved epithelial dysplasia grading and the differentiation between severe dysplasia (carcinoma in situ) and microinvasive OSCC. Among the 12 slides requiring dysplasia grading for diagnosis, pathologist 2 had an intraobserver disagreement on 8 (66.6%) slides, pathologist 3 on 6 (50%) slides, and pathologist 4 on 9 (75%) slides. Special attention was required when assessing 7 borderline cases to differentiate between severe dysplasia (carcinoma in situ) and microinvasive OSCC: on 2 slides for pathologist 2, none for pathologist 3, and 2 slides for pathologist 4. Among all discrepant slides, 6 were common to all pathologists (Table ). Other disagreements common to all pathologists were related to the differentiation between (1) paracoccidioidomycosis and coccidioidomycosis fungal particles; (2) periapical granuloma and periapical inflammatory cyst; and (3) odontogenic keratocyst and dentigerous cyst.
The role of digital pathology in remote reporting has grown largely because of the COVID-19 pandemic. Prior to the pandemic, the use of WSI in our institution was limited to teaching, pathology conferences, and occasional consultation with experts around the world. Additionally, cases discussed in MDTMs or referred for interconsultation are not usually included in the digital workflow. In our routine practice, interconsultations referred to us are almost always glass slide-based, whereas our cases referred to external pathologists are digitally shared through a secure link.
MDTMs for expert interconsultation have broken down conventional geographic barriers, providing a more definitive diagnosis in less time for challenging cases and reducing the costs associated with transporting paraffin-embedded blocks and glass slides. In this dataset, interconsultation was needed for one case in which a squamous cell carcinoma presenting in the neck was interpreted as a keratoacanthoma. According to Patel and coworkers, a wide variety of hardware specifications have been used worldwide for interconsultations (telepathology), even under suboptimal conditions. In the present validation, the display specification was not ideal as stated by Williams (a resolution of 3 megapixels or greater and 24 in. or more for a desktop display), but all equipment passed the University of Leeds Point of Use QA tool test ( http://www.virtualpathology.leeds.ac.uk/research/systems/pouqa/ ). Experts advise that pathologists should be aware of display specifications (including luminance and contrast) and of natural light interference when assessing digital images, not only for interconsultations but also for remote diagnosis in exceptional circumstances such as COVID-19. Although we had previously conducted a validation study for the primary diagnosis of oral pathology cases, daily pathology practice had largely remained CLM-based. This scenario changed drastically when restrictive measures were introduced for staff safety in our institution. According to Browning, the number of pathologists using WSI for remote reporting is higher now than before the outbreak, with validation processes due to start soon or almost completed in many institutions in the UK. This highlights the need to maintain quality-control standards while these essential health care services are being provided. Numerous reference/consultation centers worldwide have already transitioned to, or have introduced, a fully digital pathology service, with advantages including the possibility of consulting an expert on difficult cases, better ergonomics, a larger field of view, improved workflow, and the quantification of prognostic parameters. In addition, the adoption of digital microscopy in a pandemic scenario not only offers a way to maintain essential health care but also contributes to a re-evaluation of best practice in pathology. Back and neck pain associated with microscope use has been reported by pathologists as one specific issue that can be overcome by digital pathology. This same burden was also the starting point of the need to innovate in how reports are produced so as to further improve ergonomics. Some studies have also stated that digitalization, despite being an extra step, allows better quality control and streamlines the histotechnical processing of special and immunohistochemical stains. Continuing education is another important role of digital pathology in this unprecedented scenario, and it must be based on best-practice recommendations. The conventional training and education of pathology trainees can now be complemented by international meetings and discussions with experts in their respective specialties around the world, a resource that is likely to remain valuable even in a post-pandemic scenario. In the present validation study, the intraobserver agreement was considered almost perfect, with P o = 98.7% for the lead pathologist.
Excellent performance was also reported by Hanna and coworkers, who undertook a randomized, prospective study in which remote reporting was validated with a high percentage of agreement (100%) between DM and CLM. The error rate (ER) obtained when comparing the DM diagnosis with a reference standard (in this study, the lead pathologist's diagnosis after reviewing the intraobserver-discordant cases and establishing a preferred diagnosis between them) should be around 3% and is acceptable if no higher than 4%. The ER of DM diagnosis was 1.2% for the lead pathologist, 8.6% for pathologist 2, 7.4% for pathologist 3, and 8% for pathologist 4. Despite the high ER for the trainees, the lead pathologist's ER suggests that experience may play a crucial role when evaluating WSI for primary diagnosis. Therefore, it is important not only to consider the individual aspects of the lesions involved in such discordances (challenging, subjective, or borderline cases) but also to take previous digital pathology experience into account. The majority of disagreements identified in this validation involved borderline cases (differentiation between severe dysplasia (carcinoma in situ) and microinvasive OSCC, and between squamous cell carcinoma and keratoacanthoma), as well as dysplasia grading, similar to previous reports. These disagreements are related to the subjectivity of dysplasia assessment and may be explained by the trainees' experience and previous background, especially if we consider the disagreements for each pathologist in the dysplasia category alone (seven, eight, and five slides for pathologists 1, 2, and 3, respectively) and compare them with the single discordant slide for the lead pathologist (considered a challenging case with a high degree of difficulty). Experience should be taken into account, as the trainees came from different previous services, and their interpretation may vary depending on how involved they were in routine diagnosis and on their respective training time. The degree of difficulty in interpreting such cases is highly variable. One epithelial dysplasia slide was classified as a case with a high degree of difficulty, since it required new sections to allow a better assessment of the dysplasia grade, while the other epithelial dysplasia WSI was considered a case with a low degree of difficulty. It is also important to emphasize the importance of carefully analyzing all the tissue present on the slide (conventional or digital), since deeper sections can be expected to reveal different configurations of the epithelium, in the same way that it is not unusual to observe different grades of dysplasia in two synchronous halves of the same biopsy. Additionally, the presence of inflammation can lead to confusion in differentiating reactive epithelial atypia from oral epithelial dysplasia (Fig. ). Other disagreements were related to missing epithelium on the digital slides (in those cases, an inflammatory odontogenic cyst was interpreted as a periapical granuloma on DM), which is probably related not to misinterpretation, lack of knowledge, or training time/experience, but to a lack of time and, sometimes, a lack of patience to locate all fragments of a curettage biopsy on the screen, since this was not a case with a high degree of difficulty. Molin and collaborators likewise reported lack of time as a reason for possible diagnostic discordance in one case of a missed fungus in an esophageal biopsy.
In this particular situation (missing epithelium), the slight discordance does not drastically interfere with treatment decisions. On the other hand, the differentiation between borderline entities such as severe epithelial dysplasia (carcinoma in situ) and early-stage OSCC, as well as dysplasia grading, presents a great dilemma, since it influences the choice of treatment and the management of premalignant lesions. An odontogenic keratocyst was misinterpreted as a dentigerous cyst on DM by pathologists 2 and 4 and on CM by pathologist 3 (as shown in Table ). Although the trainees did not report any challenges regarding this specific case, inflammation was identified on reassessment as a possible pitfall for this disagreement, since it can cause epithelial changes. Another case, an orthokeratinized odontogenic cyst discordant only for pathologist 4, was also misinterpreted as a dentigerous cyst owing to extensive inflammation that caused several reactive alterations. These cases are not highly complex once the pathologist has the clinical course and information on previous procedures (curettage or marsupialization) that can cause inflammation and epithelial alterations. Another disagreement common to all pathologists occurred in a case of paracoccidioidomycosis in which the trainees diagnosed the fungal particles as coccidioidomycosis. A non-representative biopsy, with only a few larger-than-expected fungal structures (but still within the size range compatible with both hypotheses) and without typical sprouts, was raised as the reason for disagreement on this slide, which was considered a case with a moderate degree of difficulty. Since both infections are clinically similar and present with granulomatous inflammation, the morphology of the fungal structures and the geographic patterns of the endemic areas help to reach the proper diagnosis. Sphere sizes in coccidioidomycosis are larger (10 to 80 μm), with a doubly refractile outer wall containing several endospores (no sprouts), while in paracoccidioidomycosis the sphere size is around 6 to 20 μm, surrounded by sprouts. In another case, discordant only for pathologist 3, the need for a higher magnification to assess dysplasia was reported. This technology pitfall has been reported by other groups for different evaluation tasks, for example when assessing Helicobacter pylori in gastric biopsies or when counting mitoses in breast specimens. Dysplasia grading is usually associated with the intrinsic subjectivity of its analysis, but not usually with the need for higher magnification, since a contextual evaluation takes place based not only on cytological characteristics but also on architectural criteria and on comparing neighboring cells for pleomorphism. Cross et al. summarized a series of diagnostic difficulties pathologists may experience, highlighting the assessment of dysplasia, the detection of metastases and micrometastases, and the identification of mitotic figures, eosinophils, and fine nuclear details as some of the challenges. All participants in our study reported lag in screen mirroring as the most frequent pitfall. Participants in this study did not use the institutional virtual private network (VPN). The average home broadband connection in Brazil is around 4.84 Megabits per second (Mbps), and, combined with the display resolutions used here, we achieved good performance despite recurrent pixelation.
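To put these connection figures into perspective, the rough estimate below relates the scanner's 0.47 μm per pixel sampling and the ~4.84 Mbps average connection to the size of a whole slide image. The scanned tissue area and the compression ratio are illustrative assumptions, which is why WSI viewers stream small compressed tiles on demand (and why screen sharing over such links can lag) rather than transferring whole files.

# Rough estimate of WSI size and transfer time; tissue area and compression are assumptions.
PIXEL_SIZE_UM = 0.47                    # spatial sampling reported for the Aperio scanner at x20
TISSUE_W_MM, TISSUE_H_MM = 15.0, 15.0   # hypothetical scanned tissue area
BYTES_PER_PIXEL = 3                     # 24-bit RGB
COMPRESSION_RATIO = 20                  # assumed JPEG-type compression inside the WSI file
LINK_MBPS = 4.84                        # average home broadband speed cited for Brazil

width_px = TISSUE_W_MM * 1000 / PIXEL_SIZE_UM
height_px = TISSUE_H_MM * 1000 / PIXEL_SIZE_UM
raw_gb = width_px * height_px * BYTES_PER_PIXEL / 1e9
compressed_gb = raw_gb / COMPRESSION_RATIO
transfer_min = compressed_gb * 8e9 / (LINK_MBPS * 1e6) / 60

print(f"{width_px:,.0f} x {height_px:,.0f} px, ~{raw_gb:.1f} GB raw, "
      f"~{compressed_gb:.2f} GB compressed, ~{transfer_min:.0f} min at {LINK_MBPS} Mbps")

With these assumptions, a single slide is on the order of gigabytes uncompressed and would still take several minutes to download in full, so on-demand tile streaming is the only practical option on a home connection.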
The majority of challenges reported in the literature are related to internet speed and workstation limitations, but without any impairment of the digital pathology workflow. On one glass slide, the lead pathologist reported a lack of inflammatory cell detail by DM. According to Browning, the most frequently reported challenge among students is technical difficulty in accessing WSI. Samueli and coworkers reported a survey in which students were trained in 4 modules: self-assigned reading, a remote lecture (Zoom), a quiz based on digital slide sets, and a frontal review of WSI (Zoom). Eighty percent of the students reported technical issues while accessing slides as a challenge, but despite this, 67% stated they were more inclined to attend online classes. Given the challenging and unprecedented scenario of the current pandemic, adaptations in the laboratory workflow and digital pathology can help mitigate the impact of the COVID-19 outbreak on oral and maxillofacial pathology laboratory routines. This validation study demonstrates the feasibility of remote histopathological diagnosis while ensuring social distancing and limiting the spread of SARS-CoV-2.
Overview of results and pathologist perception
Flipping (rotation of the image with a single click) is a great advantage of DM. The wide view provided by a scanned image, automated focus, and easy navigation across different magnifications allow fast recognition of regions of interest, overcoming the light, focus, and magnification handling issues characteristic of CLM. Pathologists should be cautious not to miss important histological structures on DM as their confidence increases: relying on the wide view provided by DM, pathologists may feel secure giving a diagnosis at a lower magnification and thus be prone to error, which is not a technology limitation. Training time (experience) and calibration in pathology are crucial for good performance. The pitfalls reported when using a digital environment were as follows.
Technology-related pitfalls: lag in screen mirroring, lack of detail of inflammatory cells, and the need for a higher magnification to assess dysplasia.
Case-related pitfalls: poor-quality clinical photographs; challenging/borderline cases; clinical information and hypotheses that do not match the histological characteristics; lack of clinical photographs/information; lack of radiographs; misleading clinical diagnoses/hypotheses; necrosis; non-representative biopsies/small amounts of tissue; the need for special staining; and the subjectivity of dysplasia analysis.
Technical processing-related pitfalls: artifacts, fixation, the thickness of the tissue section, embedding, staining, and cases that required deeper tissue sectioning.
Expert consensus on the clinical strategies for orthodontic treatment with clear aligners
947c4be1-c7f1-4497-94bd-97e26060ba78
11904224
Dentistry[mh]
Malocclusion is a common oral disease, with an estimated prevalence in the general population ranging from 43.5% to 67.2%. It is associated with a risk of various oral dysfunctions and esthetic concerns, which may have detrimental effects on mental health and quality of life. Recent years have witnessed the growing popularity of clear aligners among patients owing to their esthetic appeal, comfort, and convenience for oral hygiene maintenance. However, as a novel technology distinct from traditional fixed orthodontic appliances, clear aligner treatment (CAT) presents new challenges in case selection, treatment strategy, aligner design, and follow-up monitoring, which are associated with differences in material characteristics and properties and in treatment outcomes. Therefore, guidance on the key clinical aspects of CAT is needed to help improve treatment efficacy and to promote the continued development and dissemination of this clinical technique. Clear aligners are removable orthodontic appliances that were first introduced two decades ago and have been used to treat nearly 20 million patients worldwide. Since their launch, significant innovations have been achieved in the development of clear aligner materials. The use of big data analyses and design software has enabled aligners to tightly envelop the tooth surface and apply a gentle continuous force that can be designed according to the desired direction and distance of movement for each tooth. The optimal sequence of tooth movement can be calculated precisely to ensure that each tooth moves in the desired direction. Furthermore, clinical solutions have evolved from optimizing individual to optimizing group tooth movements, while clinical indications have expanded from simple to complex cases, including surgical cases. Consequently, CAT has become the main innovation trend in orthodontics. To date, over 5000 publications on clear aligners have been indexed in PubMed, including case reports, clinical trials, retrospective clinical studies, and reviews, highlighting the ongoing interest in this field. The purpose of this expert consensus is to summarize the core technology of CAT and provide clinical guidance for practitioners in terms of indications, treatment strategies, aligner design, and follow-up monitoring.
Indications and contraindications
The current indications for CAT are comparable to those for fixed orthodontics. Clear aligners can be used to treat nearly all types of malocclusion, especially in patients with high esthetic and comfort requirements, poor periodontal conditions, susceptibility to caries, or enamel developmental defects. However, clear aligners are not recommended for patients with clinically short crowns, those requiring extensive mesial movement of the posterior teeth, or those showing poor compliance. Moreover, the difficulty of clear aligner therapy varies greatly among cases. Thus, we suggest difficulty-grading criteria for CAT.
Grading of treatment difficulty
Clear aligners are made of elastic materials, and teeth are moved by the rebound force generated by the elastic deformation of the aligner material when the aligners are seated. Thus, aligners mainly provide a "pushing force", and their clinical efficiency varies among different types of tooth movement (Fig. ). Therefore, it is crucial to accurately assess treatment difficulty and select the most suitable cases.
We developed the CAT-CAT difficulty assessment tool, which assigns scores based on model analysis, X-ray examination results, and clinical examination results. According to the literature and the authors' clinical experience, clinical cases were divided into four grades: easy, moderate, difficult, and challenging (Table ). Owing to the biomechanical differences between CAT and traditional fixed orthodontics, it is imperative for clinicians to fully understand the characteristics of CAT and to implement treatment gradually according to the difficulty level of each case, so as to minimize the associated risks. Unlike traditional fixed orthodontic appliances, clear aligners are made of elastic materials that cover the whole or part of the clinical crowns and create a "pushing" force produced by material deformation of the aligners. Thus, theoretically, the force can be designed to act on any part of the tooth crown, as long as that part is closely covered by the aligner. The crown surface area and the fit of the aligners are therefore key to the success of treatment. Attachments used in clear aligner treatment are bonded to the crowns and can not only increase the surface area but also provide additional points of force application. Attachments of various shapes and sizes can be designed to supplement clear aligners for different biomechanical demands. In addition, several types of arch wires, made from different materials and in different shapes and/or sizes, are used in traditional fixed treatment. In general, arch wires progress from thin to thick, round to rectangular, and NiTi to stainless steel, and therefore from soft and flexible to solid and stable, over the course of treatment. In this way, tooth movement can be controlled in a predictable way.
However, in clear aligner treatment, each brand uses the same aligner material throughout the whole course of treatment, and this material is neither as flexible as a NiTi wire nor as stable as a stainless-steel wire. Thus, to move individual teeth and/or groups of teeth, the tooth movements need to be designed in a stepwise mode according to the nature of the specific movements. Moreover, the elastic force of an aligner is directly proportional to the amount of material deformation within a certain range, whereas excessive deformation leads to plastic deformation and a loss of force. Additionally, any elastic force decreases over the time the material remains deformed. Therefore, when designing clear aligners, a series of intermediate positions is used to bridge the initial and final status. The aligners are replaced regularly, helping the teeth move gradually to the desired position under a continuous gentle force (Fig. ). Thus, the initial, intermediate, and final positions are the three keys to the success of clear aligner therapy. The initial position is determined from the patient's characteristics, especially the digital dental models that capture the intraoral dentition and occlusion. The intermediate positions aim to ensure that the path and rate of tooth movement comply with the biological and biomechanical principles of orthodontic tooth movement. The ideal final position requires well-aligned dental arches, a normal anterior overjet/overbite, and perfect posterior interdigitation. Therefore, CAT is essentially a process of repositioning teeth in three dimensions. A critical aspect of this process is the acquisition and redistribution of space. There are currently five main methods for gaining space: arch expansion, molar distalization, incisor proclination, interproximal reduction (IPR), and extraction. Clinical treatment plans should be designed case by case. Next, we discuss specific strategies for various clear aligner treatments in detail, organized by the method of gaining space. As illustrated in Fig. , clear aligner treatment encompasses nine procedures in clinical practice, starting with diagnosis, assessment of treatment difficulty based on CAT-CAT, acquisition of digital models, and aligner treatment planning. Once the aligner treatment plan is ready, aligner fabrication ensues. Clear aligner treatment then progresses to the clinical phase, which involves fitting of the initial set of aligners, follow-up appointments and monitoring, and the end of active clear aligner treatment. Lastly, retention is required and important following orthodontic treatment.
Diagnosis
The precise initial position of the teeth requires complete and accurate patient data, so data collection for CAT is essential, including facial and intraoral photographs, radiographic data [panoramic tomography, cephalometric radiographs, and cone beam computed tomography (CBCT) scans], and digital dental models obtained either from silicone rubber (PVS) impressions or by intraoral scanning. Based on these patient data, a meticulous diagnosis is established.
CAT-CAT aligner difficulty assessment
Orthodontic treatment goals are similar regardless of treatment modality. CAT plans should be based on the patient's complaints, presentation, and diagnosis. CAT can make orthodontic treatment easier, faster, and more effective. However, before patients are recommended for CAT, the difficulty level should be assessed (Table ) to ensure patient suitability; a schematic sketch of such a scoring scheme is given below.
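The consensus table containing the actual CAT-CAT items and cut-offs is not reproduced here, so the sketch below only illustrates the structure of such a tool: component scores from model analysis, radiographic examination, and clinical examination are summed and mapped to the four grades. All point values and thresholds in this example are placeholders, not the published criteria.

def cat_cat_grade(model_score, radiographic_score, clinical_score,
                  thresholds=(5, 10, 15)):
    """Map a summed difficulty score to one of the four CAT-CAT grades.

    The three component scores correspond to model analysis, X-ray examination,
    and clinical examination, as in the consensus; the point values and the
    cut-offs used here are placeholders, not the published criteria.
    """
    total = model_score + radiographic_score + clinical_score
    easy_max, moderate_max, difficult_max = thresholds
    if total <= easy_max:
        grade = "easy"
    elif total <= moderate_max:
        grade = "moderate"
    elif total <= difficult_max:
        grade = "difficult"
    else:
        grade = "challenging"
    return total, grade

# Hypothetical patient: moderate crowding on the models, a low maxillary sinus on CBCT,
# and short clinical crowns noted on examination.
print(cat_cat_grade(model_score=4, radiographic_score=3, clinical_score=2))
# -> (9, 'moderate') with these placeholder weights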
Clinicians should also ensure that they have made the correct diagnosis and an appropriate treatment plan. For some difficult or challenging cases, such as patients with severe periodontitis or those needing surgical treatment, multi-disciplinary treatment (MDT) and specialist guidance are necessary.
Digital models
As mentioned above, digital models can be acquired through either intraoral scanning or PVS impression taking.
Aligner treatment planning
Recently, we developed a novel clear aligner treatment philosophy, namely biomechanics-guided, esthetics-driven, periodontium-supported and temporomandibular joint-compatible clear aligner therapy (BEPT-CAT), which can guide practitioners in aligner treatment planning. Most cases of malocclusion are caused by an "incorrect" tooth position, resulting in a discrepancy between the necessary and the available space. The treatment principles therefore focus on either increasing the amount of available space or reducing the amount of tooth material. Common clinical methods for increasing the available space include arch expansion, molar distalization, and incisor proclination, while methods for reducing the amount of tooth material include IPR and extraction.
Arch expansion
Indications: Narrow dental arch: a narrow dental arch can be identified from the relationship between the most prominent points on the buccal surfaces of the crowns of the lower posterior teeth and the WALA ridge. The Pont index analysis and the Howes value can also assist in the width assessment. Pretreatment CBCT can be used to clarify the spatial relationship between the roots and the alveolar bone, which helps avoid excessive expansion that may result in bone fenestration or dehiscence. Excessive buccal corridor: an excessive buccal corridor refers to excess negative space between the dental arch and the buccal mucosa of the oral cavity. Previous studies have shown that an excessive or insufficient buccal corridor jeopardizes smile esthetics. An excessive buccal corridor is an indication for arch expansion.
Considerations for final position design: Factors that must be considered include arch symmetry, arch coordination, and an appropriate amount of expansion to prevent bone fenestration or dehiscence. The volume of basal bone on the buccal side should be analyzed on CBCT to determine the upper limit of the expansion. An expansion of up to 2 mm on each side is safe in most cases. In adolescents, the greater regenerative potential of alveolar bone remodeling makes arch expansion much safer. To prevent buccal inclination of the crowns during expansion, the final position should be designed so that all the expanded posterior teeth are in lingual inclination (from the lateral view, the palatal cusps are invisible) (Fig. ).
Attachment design: Attachments are required on the buccal surfaces of the teeth during arch expansion to prevent buccal inclination. For teeth with an inadequate lingual cusp height, lingual attachments may be placed simultaneously.
Considerations for staging: A staged expansion is recommended for any expansion exceeding 1 mm unilaterally, for example a "V-pattern" design like that used for molar distalization (a simple schedule sketch follows this section). Homonymous teeth in the same jaw should be expanded simultaneously, because they can act as reciprocal anchorage for each other. By adhering to these principles, clinicians can effectively incorporate arch expansion into clear aligner treatment plans, ensuring optimal outcomes in patients with dental arch discrepancies.
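As a simple illustration of staged ("V-pattern"-style) movement, the sketch below lays out which tooth groups are active at each aligner step when each group starts once the preceding group has covered half of its planned distance, in the spirit of the molar distalization staging described in the next section. The movement per aligner (0.25 mm) and the uniform half-distance trigger are assumptions for illustration, not values prescribed by the consensus.

def staged_schedule(groups, step_mm=0.25, start_fraction=0.5):
    """Sketch of a staged ('V-pattern'-style) movement schedule.

    groups: list of (name, total_mm) ordered from the teeth that move first
    (e.g. second molars) to those that move last; each group starts once the
    preceding group has covered `start_fraction` of its planned distance.
    step_mm and start_fraction are illustrative assumptions only.
    """
    moved = [0.0] * len(groups)
    schedule = []
    while any(moved[i] < total for i, (_, total) in enumerate(groups)):
        active = []
        for i, (name, total) in enumerate(groups):
            prev_ok = i == 0 or moved[i - 1] >= start_fraction * groups[i - 1][1]
            if moved[i] < total and prev_ok:
                moved[i] = min(total, moved[i] + step_mm)
                active.append(name)
        schedule.append(active)
    return schedule

# Hypothetical 2 mm distalization of second molars, then first molars, then second premolars.
plan = staged_schedule([("second molars", 2.0), ("first molars", 2.0), ("second premolars", 2.0)])
print(len(plan), "aligner steps;", "step 1 moves:", plan[0])

With these assumptions the whole sequence spans 14 aligner steps, which makes concrete why sequential staging keeps anchorage demands low at the cost of a longer treatment duration.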
Molar distalization
Indications: An almost normal facial pattern with a distal (Class II) or mesial (Class III) molar relationship may be an indication for molar distalization. It may be accompanied by mild to moderate crowding, a deep overjet, or an anterior crossbite/edge-to-edge bite. However, molar distalization is generally not recommended when the molar relationship is neutral (Class I). Sufficient space in the posterior dental arch is necessary for molar distalization. CBCT evaluation from a three-dimensional perspective is recommended when more than 2 mm of molar distalization is planned. Vertically, a low maxillary sinus increases the difficulty of upper molar distalization, especially when the molar roots penetrate the sinus cavity. Third molar extraction is recommended to reduce the resistance to distalization and provide more space.
Considerations for final position design: The upper limit of molar distalization in clear aligner treatment depends on the available retromolar space. The third molars can be extracted if there is insufficient space. Distalization of less than 2 mm per side is considered predictable in most cases, although the mesio-distal inclination of the posterior teeth and the growth potential of children and adolescents should be taken into consideration. Based on the literature and clinical experience, the predictability of molar distalization with clear aligners is approximately 88%. Thus, it is feasible to design the final position based on the actually required distalization distance (i.e., that needed to obtain a neutral molar relationship), with no or minimal overtreatment. Additionally, to prevent labial fenestration and/or dehiscence in the lower anterior region, labial movement of the lower anterior teeth, particularly of the roots, should be avoided. This is because Class II intermaxillary elastics are commonly applied during upper molar distalization; these exert a mesial force on the lower arch and push the lower anterior teeth labially.
Attachment design: Molar distalization does not in itself require supplementary attachments. However, attachments are recommended to enhance the grip on teeth with short crowns. Moreover, molar distalization is often accompanied by other complex movements such as intrusion and rotation, and attachments are usually required to improve the success rates of these movements and prevent off-tracking. Traditional rectangular attachments are generally designed for the canines to increase the retention of the aligners and minimize the impact of the precision cuts.
Intermaxillary elastics: When clear aligners exert a pushing force to achieve molar distalization via material deformation, the counteracting force may procline the anterior teeth. Thus, if anterior proclination is undesirable, the anchorage of the anterior teeth should be reinforced. Intermaxillary elastics are commonly used in practice to achieve this. In maxillary molar distalization, precision cuts are designed at the maxillary canines, whereas buttons are bonded to the buccal surfaces of the mandibular first molars (with cut-outs on the lower aligners) to allow the use of Class II intermaxillary elastics (Fig. ). If simultaneous eruption of a canine is desirable (e.g., a low-positioned or insufficiently erupted canine), a button can be bonded to the labial surface of the canine near the gingival margin to facilitate eruption (Fig. ). However, precision cuts at the mandibular molars are prone to causing aligner displacement or off-tracking and are not recommended.
Additionally, if necessary, implant devices can be used to enhance anchorage, provided they do not obstruct molar distalization. On the other hand, if proclination of the anterior teeth is desirable (e.g., in Class II Division 2), it can be designed simultaneously with molar distalization, with the two movements acting as reciprocal anchorage and eliminating the need for elastics. Nevertheless, anterior proclination and molar distalization should be closely monitored during follow-up appointments so that real-time adjustments can be made.
Considerations for staging: The staging of tooth movements involves consideration of anchorage. Typically, molar distalization is designed with "V-pattern" staging, in which the second molars are moved first, the first molars start to move once the second molars have reached the halfway point of their total moving distance, and the second premolars start to move once the second molars have completed their "journey" (Fig. ). Thus, no more than four teeth are distalized at each stage (V-pattern). Finally, the space created by canine distalization can be used to align and/or retract the anterior teeth. With this approach, the anchorage is often adequate for most distalization cases; however, a long treatment duration is unavoidable. In some cases, to shorten the treatment duration and increase patient compliance and cooperation, alignment of the anterior teeth is performed simultaneously with molar distalization, allowing patients to observe quick esthetic changes (Fig. ). In addition, implant screws can be used to strengthen anchorage, allowing more teeth to be distalized simultaneously and shortening the treatment duration (Fig. ).
Proclination of anterior teeth
Indications: Patients presenting with a straight or concave facial profile and retro-inclined or upright anterior teeth accompanied by mild crowding, such as cases with a deep overbite caused by lingual inclination of the upper anterior teeth, are candidates for proclination of the anterior teeth, which can be combined with other methods to obtain enough space.
Considerations for final position design: The sagittal position and proclination of the anterior teeth, especially the upper anterior teeth, are crucial for facial esthetics and are among the main indicators in profile analysis. Thus, the degree of proclination of the anterior teeth should be carefully evaluated based on facial morphology, and combination with other methods of acquiring sufficient space should be considered. For patients with a severely lingually inclined deep anterior overbite, the root-and-bone relationship should be considered: the roots need to be positioned within the cancellous region of the alveolar bone. Theoretically, a proclination of 1 mm (2.5°) in the anterior segment provides 2 mm of space. Therefore, the proclination designed into the final position is based on the amount of space required, the facial morphology, and the root-and-bone relationship.
Attachment design: More than 3° of incisor proclination activates the power ridge in the design software, which applies a labial-torquing force to the crowns and a lingual-torquing force to the roots, effectively achieving root-controlled movement of the anterior teeth. Traditional attachments on the canines are recommended to reduce the risk of aligner off-tracking in the anterior segment.
Considerations for staging: A minor proclination can be synchronized with the alignment of mild crowding. However, in cases with a lingually inclined deep overbite, staged tooth movement is required.
Proclination is performed first, to torque the roots into the cancellous bone, and is then followed by intrusion and retraction of the anterior teeth.
Interproximal reduction (IPR)
Indications: Although IPR is a method for gaining space, it has always been controversial because of the potential damage to the enamel and the resulting risk of caries. The authors suggest that IPR should be used as a supplement to other methods of gaining space rather than as the primary method. The following situations warrant an IPR design: a Bolton discrepancy due to missing or malformed teeth; gingival embrasure defects (black triangles) due to periodontal disease; and poor crown morphology with contact points near the incisal edge.
Considerations for final position design: In general, IPR is designed in the anterior segment if needed. It is advisable to limit the maximum amount of IPR to 0.25 mm on each proximal surface of a tooth. Studies have shown that IPR amounting to no more than 50% of the enamel thickness generally does not increase the risk of caries.
Considerations for staging: Since the IPR site is the anatomical contact point of the crown rather than the actual contact point, restoring normal contact points first undoubtedly makes IPR easier to perform. In practice, however, there may be situations in which insufficient space hinders alignment of the dental arch, which requires a comprehensive assessment of the timing of IPR. Graded IPR is recommended to ease this conflict. Fluoride application after IPR is suggested.
Tooth extraction
Tooth extraction is a common method for reducing the amount of tooth material in orthodontic treatment and is mainly indicated when the discrepancy between the available and required space exceeds 8 mm, such as in cases with severe crowding or severe maxillary and/or mandibular protrusion. Two extraction patterns are commonly used in clear aligner treatment: extraction of a lower incisor and extraction of premolars (first or second).
Extraction of lower incisors
Indications: An almost normal facial pattern with a stable posterior occlusion, no indication for extraction in the upper arch, and a total required space in the mandible exceeding 6 mm. A Bolton ratio discrepancy due to missing or malformed teeth in the maxilla. A poor prognosis of a lower incisor due to periodontal disease or dental trauma.
Considerations for final position design: The extraction of a lower incisor means that the lower dental arch no longer has a true midline; instead, the long axis of the remaining lower central incisor may be designed as the lower midline. In most cases, IPR of the upper anterior teeth is necessary to resolve the Bolton ratio discrepancy and achieve a normal anterior overbite and overjet.
Attachment design: It is recommended to design vertical rectangular attachments or root-control attachments on the teeth adjacent to the extraction space, which facilitate the reciprocal movement of the adjacent teeth, especially their roots.
Considerations for staging: Extracting a lower incisor can effectively relieve crowding in the lower anterior segment and provide space for intrusion of the lower anterior teeth, resulting in a high rate of treatment success. Therefore, special staging considerations are generally not required.
Extraction of the first premolars
Based on the symmetry principle, extraction of the four first premolars is the most common extraction pattern in orthodontic practice.
However, cases requiring the extraction of four premolars are classified as difficult in CAT (Table ), and clinicians need a certain level of orthodontic experience to complete such treatment.
Indications: Extraction of the four first premolars is indicated when the discrepancy between the available and required space exceeds 8 mm, such as in cases with severe crowding and/or bimaxillary protrusion.
Considerations for final position design: Most extraction cases are challenging to treat, as extensive tooth movement is unavoidable and requires three-dimensional repositioning of the teeth. Treatment success relies on torque control of the anterior teeth and on preventing mesial tipping of the posterior teeth. Therefore, the final position requires an over-treatment design, as follows. The anterior teeth exhibit a labial inclination with incisor angles of approximately 120°; to prevent excessive lingual inclination, adequate labial inclination and torque control (root-lingual torque) should be designed throughout the entire anterior retraction, and cases with more lingual inclination at the outset and/or longer retraction distances require a larger positive torque in the design. The anterior teeth are placed in a shallow overjet/overbite or an edge-to-edge position without occlusal contact; the pendulum effect of anterior retraction, compounded by any pre-existing deep bite, may require over-treatment of anterior intrusion. The canines are mesially tipped, with their roots closer to the extraction space. The posterior teeth are distally tipped, with additional negative torque to prevent buccal inclination of the molars and loss of posterior anchorage.
Attachment design: In such cases, the attachment design should consider the following. A power ridge on the incisors is recommended to aid torque control of the anterior teeth; it is activated when more than 3° of root-lingual torque is designed. Optimized attachments with strong root control, or traditional rectangular attachments, are recommended for the canines. Horizontal rectangular attachments with strong retention are recommended for the posterior teeth.
Intermaxillary elastics: To increase posterior anchorage, Class II elastics can be designed during anterior retraction (precision cuts at the upper canines and buttons bonded to the buccal surfaces of the lower first molars). Alternatively, implant anchorage can be used in the anterior region to assist the intrusion and bodily retraction of the anterior teeth. The different modes of elastic traction with or without mini-implants, and their corresponding biomechanics, are displayed in Fig. .
Considerations for staging: A personalized design is suggested for each case, and the staging should vary according to the specific circumstances because of the complex and variable nature of extraction cases. In most cases, however, we recommend distalizing the canines and distally tipping the posterior teeth (anchorage preparation) first. Once the canines have completed the first third of their total moving distance, the six anterior teeth start to move simultaneously. Finally, mesial movement of the posterior teeth begins when the anterior tooth movement is completed. To prevent the "bowing effect", mesial movement of the posterior teeth should not be performed simultaneously with retraction of the anterior teeth.
Extraction of the second premolars
Indications: In the following cases, the second premolars are extracted instead of the first premolars, which usually increases the treatment difficulty.
Extraction of the second premolars

Indications: In the following situations, the second premolars are extracted instead of the first premolars, which usually increases treatment difficulty, so clinicians should design such treatment schemes cautiously:
• Serious damage to, or abnormality of, a second premolar and/or its periodontal tissue.
• A second premolar that is impacted or blocked out of the dental arch.
• A minimal-anchorage design is intended.

Considerations for final position design: Compared with first-premolar extraction cases, the molars should be designed with more distal inclination (anchorage preparation), since they are more prone to mesial tipping, especially in cases where more than 3 mm of mesial molar movement is required (minimal anchorage design); less over-treatment of the anterior teeth is needed.

Considerations for staging: We suggest, first, sequential distal movement of the first premolars and canines together with distal-tipping anchorage preparation of the first molars; then movement of the anterior teeth; and finally, sequential mesial movement of the molars.

Bite jump (surgical and growth jump)

A bite jump refers to a change in the three-dimensional position of the mandible and/or mandibular dental arch resulting from intermaxillary elastics, growth, and/or orthognathic surgery. The design of a bite jump should be tailored to the specific circumstances of the patient, and clinical feasibility should be considered. Except for orthognathic surgery, bite jumps produced by the other methods develop gradually and can span the whole course of treatment.

Indications:
• Adolescents with mild skeletal or functional mandibular hypoplasia or retrognathia;
• Functional Class III, with the mandible able to retrude to an edge-to-edge occlusion;
• Severe skeletal deformities requiring combined orthodontic-orthognathic treatment;
• Mandibular malposition caused by premature individual tooth contacts.

Intermaxillary elastics: A sagittal bite jump requires intermaxillary elastics or an orthodontic appliance with a mandibular-advancement function.

Considerations for staging: In the design software, the bite jump can be placed at any stage of treatment or distributed throughout it. The authors typically place the bite jump at the end of treatment, which makes it easier for clinicians to assess the amount and direction of the jump and to detect any abnormalities in a timely manner during clinical monitoring.

Below, we delve into some special considerations in clear aligner design that frequently cause confusion in practice.

Special considerations in clear aligner design

Over-treatment design: As discussed earlier, clear aligners mainly exert a "pushing force", and their clinical efficiency therefore varies among different types of tooth movement (Fig. ). To better achieve the intended tooth movement, an over-treatment design is recommended in some cases, in line with the predictability of CAT. For example, to intrude the anterior teeth and correct a deep bite, a shallow overbite or even an open bite is designed in the final position, while a large positive torque may be assigned to incisors that are initially lingually inclined or upright when anterior retraction is required to correct a convex profile. The appropriate amount of over-treatment must be determined case by case; to date, there is no consensus on this specific issue. In our experience, and according to previous clinical studies, the amount of over-treatment should be based on the initial status of the teeth and on the type and amount of tooth movement.
Challenges and strategies in complex tooth movements: Compared with expansion and molar distalization, intrusion, extrusion and torque control are more complex tooth movements in CAT and have much lower predictability (Fig. ). Over-treatment is therefore commonly designed for these types of movement. In addition, sufficient space for tooth movement should be taken into consideration: for intrusion, the root-and-bone relationship needs to be analyzed on CBCT images to make sure that the roots remain within cancellous bone, while extrusion requires adequate intermaxillary space; loose proximal contacts also favor the movement. Sufficient anchorage is likewise important, and there are usually two ways to strengthen it in CAT. One is to move teeth in a stepwise mode: we recommend a "frog pattern" staging for anterior intrusion, in which incisors and canines are intruded separately and in cycles (Fig. ), while extrusion of the posterior teeth is suggested in a "V-pattern" staging; in addition, a power ridge and positive torque are distributed throughout incisor retraction to provide better torque control. The other way is to use auxiliary devices and elastics, such as temporary anchorage devices (TADs) placed in the anterior region to provide extra intrusive force and root-lingual torque on the anterior teeth (Fig. ). Furthermore, appropriate attachment design gives clear aligners greater retention, which is key to CAT: traditional attachments on the premolars are recommended when intrusion of the anterior teeth is needed, while traditional attachments on the canines are suggested for incisor torque control.

Differences in CAT design between adolescents and adults: The main difference between adolescents and adults is growth potential, which may lead to different orthodontic treatment plans. Mandibular growth can produce an anteroposterior bite jump, so a bite jump design without surgery is more readily achieved in adolescents. The prevalence of caries is also higher in adolescents, so IPR should be used more cautiously. Moreover, traditional attachments or larger optimized attachments are recommended in adolescents because their clinical crowns are often short. A recently published expert consensus on orthodontic treatment in adolescents discusses this special issue in depth.

Aligner fabrication

Once aligner treatment planning is complete, the clear aligners that move the teeth incrementally can be fabricated by either thermoforming or 3D printing.

Fitting of the initial set of aligners

Patients are asked to attend the clinic for initial appliance placement once the clinicians have received the aligners. At this visit, the resin attachments are bonded onto the teeth according to the digital design, and the first set of aligners is tried in (the fit should be checked). Patients are then given a set of instructions covering the required wearing time, the method of aligner placement, and the use of chewies, and are reminded of the importance of oral hygiene. Additional information and instructions are provided as relevant to the tooth movement plan, for example for molar distalization, IPR, or extraction.
Follow-up monitoring

Patient compliance management

Regular follow-up visits are essential; they can be used to inform patients about treatment progress and challenges, helping them understand their role in the process and increasing their confidence, compliance, and cooperation. Sustaining cooperation over the long course of orthodontic treatment, and in particular wearing the clear aligners consistently day after day, is a major challenge for most people. Close contact with patients therefore helps clinicians track their status and offer help or timely reminders when needed. Pleasant communication and complimenting patients are effective in maintaining a good clinician-patient relationship, which also benefits cooperation. To encourage patients, practitioners can show them the changes that have already occurred by comparison with pre-treatment photographs and point out that these changes are due to their compliance and cooperation; making patients aware that their efforts pay off builds confidence in the treatment. In addition, smartphone applications registered to the patient's ID can record the wearing schedule and remind the patient to change to a new set of aligners, which is convenient in daily life.

Things to do at follow-up visits

To evaluate treatment progress, a comprehensive examination should be performed, including the following assessments:
• Tooth and periodontal status, including mobility, premature contacts, and occlusal trauma.
• Occlusal changes, including the sagittal relationship, occlusal contacts, inclination, upper/lower dental midlines, overjet, overbite, torque and space, compared with baseline and with the digital design.
• Temporomandibular joint health, checking for any pain, tenderness, or clicking in the joint area, especially in patients with temporomandibular disorders before treatment and in adult patients using intermaxillary elastics.
• Any debonding and/or abrasion of attachments, checked against the digital design.
• Aligner fit, which reflects the progress of tooth movement; particular attention should be paid to any gap at the incisal edges of the anterior teeth, the cusps of the posterior teeth, and the areas around the attachments and along the aligner margin.

Management of off-tracking

Off-tracking refers to incomplete fit between the teeth and the aligners, indicating a discrepancy between the direction and/or distance of actual tooth movement and that planned in the digital design (Fig. ). Management involves removing attachments and using the aligners, together with intra-/inter-maxillary elastics, to guide the off-tracking teeth back onto the desired path. Off-tracking can be categorized into the following three situations:
• Off-tracking in the vertical dimension, due to insufficient extrusion or insufficient anterior intrusion. Insufficient extrusion may manifest as uniform vacuoles at the incisal edges or cusps and can be managed by removing the attachments on the off-tracking teeth and applying intra-/inter-maxillary elastics (Fig. ). Insufficient anterior intrusion, which manifests as inadequate correction of an anterior deep bite, may be addressed with auxiliary devices such as implants, or by redesigning additional aligners with more staging for the tooth movement.
• Off-tracking in the horizontal dimension, which commonly occurs during rotation correction, especially of severely rotated premolars. Removing the attachments and using a power chain are helpful in most cases (Fig. ).
• Off-tracking in the sagittal dimension, characterized by mesial inclination of the posterior teeth and torque loss (lingual inclination) of the anterior teeth. On mesially inclined posterior teeth, mismatches between the attachments and their vacuoles in the aligners can be observed, as well as gaps between the mesial cusps and the aligners; distal uprighting of these off-tracking teeth must be performed with intermaxillary elastics and/or sectional archwires after removal of the attachments (Fig. ). Loss of anterior torque manifests as lingual inclination of the upper/lower anterior teeth, increased overbite, premature anterior contacts, and a posterior open bite; in such cases, the aligners may need to be redesigned to restart the program.

Timing and considerations of a program restart

Sometimes more than one series of clear aligners is needed to complete treatment. There are five possible reasons for this:
• A discrepancy between the designed and the actual tooth movement, resulting in incomplete correction of the malocclusion; this often occurs with complex movements such as intrusion, root control, and molar distalization of more than 3 mm. Further series of aligners are designed to accomplish the treatment goal.
• Unwanted tooth movement leading to reduced occlusal contacts or even a posterior open bite, possibly because the aligners act as an occlusal pad. Further series of aligners are designed to consolidate the occlusion.
• Additional teeth need to be included in treatment, as is common in adolescents with erupting second molars. A new series of aligners is usually designed to cover these second molars and any heterotopic or impacted teeth.
• A change in the occlusal relationship, due to mandibular growth and/or removal of occlusal interference. A completely new design should then be made according to the new, stable occlusal relationship.
• Poor patient cooperation, leading to serious off-tracking or even a complete loss of fit. A new series of aligners is designed based on the current status.

Treatment outcome

Treatment is complete after the final set of aligners has been worn, provided the treatment objective has been achieved. The criteria for ending CAT are consistent with those for ending traditional fixed orthodontic treatment. At the end of treatment, the attachments and other auxiliary devices are removed, and retainers are prescribed as usual.

Retention

Retention is of vital importance in clear aligner treatment. Different retention modalities can be chosen according to patient-specific characteristics, e.g., periodontal condition and caries vulnerability. Patients should be recalled to check tooth alignment, retainer fit, and signs of relapse.

A precise initial tooth position requires complete and accurate patient data. Data collection for CAT is therefore essential and includes facial and intraoral photographs, radiographic data [panoramic tomography, cephalometric radiographs, and cone-beam computed tomography (CBCT) scans], and digital dental models obtained from silicone rubber (PVS) impressions or intraoral scanning. Based on these patient data, a meticulous diagnosis is established. Orthodontic treatment goals are similar regardless of treatment modality; CAT plans should be based on the patient's complaints, presentation, and diagnosis. CAT can make orthodontic treatment easier, faster and more effective.
However, before CAT can be recommended, the difficulty level should be assessed (Table ) to ensure that the patient is suitable, and clinicians should make sure that the diagnosis is correct and the treatment plan appropriate. For difficult or challenging cases, such as patients with severe periodontitis or those needing surgical treatment, multidisciplinary treatment (MDT) and specialist guidance are necessary. As mentioned above, digital models can be acquired by either intraoral scanning or PVS impression taking.

Recently, we developed a novel clear aligner treatment philosophy—biomechanics-guided, esthetics-driven, periodontium-supported and temporomandibular joint-compatible clear aligner therapy (BEPT-CAT)—to guide practitioners in aligner treatment planning. Most malocclusions are caused by "incorrect" tooth position, resulting in a discrepancy between the required and the available space. The treatment principles therefore focus on either increasing the available space or reducing tooth mass. Common clinical methods for increasing the available space include arch expansion, molar distalization, and incisor proclination, while methods for reducing tooth mass include IPR and extraction.

Arch expansion

Indications:
• Narrow dental arch: a narrow dental arch can be identified from the relationship between the most prominent points on the buccal surfaces of the crowns of the lower posterior teeth and the WALA ridge; Pont index analysis and the Howes value can also assist in width assessment. Pretreatment CBCT can clarify the spatial relationship between root and alveolar bone, which helps avoid excessive expansion that could result in bone fenestration or dehiscence.
• Excessive buccal corridor: an excessive buccal corridor refers to excess negative space between the dental arch and the buccal mucosa. Previous studies have shown that an excessive or insufficient buccal corridor jeopardizes smile esthetics; an excessive buccal corridor is therefore an indication for arch expansion.

Considerations for final position design: Factors to be considered include arch symmetry, arch coordination, and an appropriate amount of expansion to prevent bone fenestration or dehiscence. The volume of basal bone on the buccal side should be analyzed on CBCT to determine the upper limit of expansion; expansion of up to 2 mm per side is safe in most cases, and in adolescents the greater regenerative potential of alveolar bone remodeling makes arch expansion considerably safer. To prevent buccal inclination of the crowns during expansion, the final position design should place all expanded posterior teeth in lingual inclination (from the lateral view, the palatal cusps are invisible) (Fig. ).

Attachment design: Attachments are required on the buccal surfaces of the teeth during arch expansion to prevent buccal inclination. For teeth with insufficient lingual cusp height, lingual attachments may be placed at the same time.

Considerations for staging: A staged expansion is recommended whenever the expansion exceeds 1 mm per side, for example a "V-pattern" design as used in molar distalization. Homonymous teeth in the same jaw should be expanded simultaneously because they can act as reciprocal anchorage (a simple numeric check of these limits is sketched below).

By adhering to these principles, clinicians can effectively incorporate arch expansion into clear aligner treatment plans, ensuring optimal outcomes in patients with dental arch discrepancies.
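The expansion limits mentioned above can be summarized as a simple numeric check. The sketch below is illustrative only: the thresholds come from the text, but the function and its messages are assumptions of this example rather than part of any design software.

```python
# Illustrative sketch only: checking a planned arch expansion against the
# guideline figures quoted above (<= 2 mm per side considered safe in most
# cases; a staged, "V-pattern" expansion recommended when more than 1 mm
# per side is planned).

def check_expansion_plan(planned_per_side_mm: float) -> str:
    if planned_per_side_mm > 2.0:
        return ("exceeds the ~2 mm per-side guideline: verify buccal basal "
                "bone on CBCT and reconsider the plan")
    if planned_per_side_mm > 1.0:
        return "acceptable, but a staged (V-pattern) expansion is recommended"
    return "within the range usually expanded in a single, simultaneous stage"

if __name__ == "__main__":
    for plan in (0.8, 1.5, 2.5):
        print(f"{plan:.1f} mm/side -> {check_expansion_plan(plan)}")
```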
Molar distalization

Indications: An almost normal facial pattern with a distal (Class II) or mesial (Class III) molar relationship may be an indication for molar distalization; it may be accompanied by mild to moderate crowding, a deep overjet, or an anterior crossbite/edge-to-edge bite. Molar distalization is not generally recommended when the molar relationship is neutral (Class I). Sufficient space in the posterior dental arch is necessary, and CBCT evaluation from a three-dimensional perspective is recommended when more than 2 mm of distalization is planned. Vertically, a low maxillary sinus increases the difficulty of upper molar distalization, especially when the molar roots project into the sinus cavity. Third molar extraction is recommended to reduce resistance to distalization and provide more space.

Considerations for final position design: The upper limit of molar distalization in clear aligner treatment depends on the available retromolar space; the third molars can be extracted if the space is insufficient. Distalization of less than 2 mm per side is considered predictable in most cases, although the mesiodistal inclination of the posterior teeth and the growth potential of children and adolescents should be taken into consideration. Based on the literature and clinical experience, the predictability of molar distalization with clear aligners is approximately 88%, so it is feasible to design the final position according to the actually required distalization distance (i.e., that needed to obtain a neutral relationship), with no or minimal over-treatment (a simple pre-check combining these figures is sketched below). Additionally, to prevent labial fenestration and/or dehiscence in the lower anterior region, labial movement of the lower anterior teeth, particularly of their roots, should be avoided, because the Class II intermaxillary elastics commonly applied during upper molar distalization exert a mesial force on the lower arch and push the lower anterior teeth labially.

Attachment design: Molar distalization does not in itself require supplementary attachments; however, attachments are recommended to enhance the grip on teeth with short crowns. Moreover, molar distalization is often accompanied by other complex movements such as intrusion and rotation, and attachments are usually required to improve the success rates of these movements and prevent off-tracking. Traditional rectangular attachments are generally designed on the canines to increase aligner retention and minimize the impact of precision cuts.

Intermaxillary elastics: When clear aligners exert a pushing force to achieve molar distalization through material deformation, the reaction force may procline the anterior teeth. If anterior proclination is undesirable, the anchorage of the anterior teeth should therefore be reinforced, and intermaxillary elastics are commonly used for this purpose. In maxillary molar distalization, precision cuts are designed at the maxillary canines, and buttons are bonded to the buccal surface of the mandibular first molars (with cut-outs on the lower aligners) to allow Class II intermaxillary elastics (Fig. ). If simultaneous eruption of a canine is desirable (e.g., a low-positioned or insufficiently erupted canine), a button can be bonded to the labial surface of the canine near the gingival margin to facilitate eruption (Fig. ). Precision cuts at the mandibular molars, however, are prone to cause aligner displacement or off-tracking and are not recommended. Additionally, implant devices can be used if necessary to enhance anchorage, provided they do not obstruct molar distalization. On the other hand, if proclination of the anterior teeth is desirable (e.g., Class II Division 2), it can be designed simultaneously with molar distalization, the two movements acting as reciprocal anchorage and eliminating the need for elastics; nevertheless, anterior proclination and molar distalization should be closely monitored at follow-up appointments so that adjustments can be made in real time.
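As a rough, illustrative pre-check of a distalization plan (not a validated predictor), the sketch below combines the figures quoted above: approximately 88% predictability, the ≤2 mm-per-side range usually considered predictable, and the advice on CBCT evaluation and third molar extraction. The expected-expression estimate and the advisory messages are simplifying assumptions of this example.

```python
# Illustrative sketch only: a rough pre-check of a molar distalization plan
# using the figures quoted above (~88% average predictability, <= 2 mm per
# side usually predictable, CBCT advised when more than 2 mm is planned).
# The expected-expression estimate treats predictability as an expression
# ratio, which is a simplifying assumption, not a validated predictor.

PREDICTABILITY = 0.88   # approximate clinical predictability of distalization

def distalization_precheck(planned_mm: float, retromolar_space_mm: float) -> dict:
    advice = []
    if planned_mm > retromolar_space_mm:
        advice.append("insufficient retromolar space: consider third molar extraction")
    if planned_mm > 2.0:
        advice.append("more than 2 mm planned: three-dimensional CBCT evaluation recommended")
    else:
        advice.append("within the <= 2 mm range usually considered predictable")
    return {
        "planned_mm": planned_mm,
        "expected_expression_mm": round(planned_mm * PREDICTABILITY, 2),
        "advice": advice,
    }

if __name__ == "__main__":
    print(distalization_precheck(planned_mm=1.5, retromolar_space_mm=3.0))
    print(distalization_precheck(planned_mm=2.5, retromolar_space_mm=2.0))
```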
Considerations for staging: The staging of tooth movements involves consideration of anchorage. Typically, molar distalization is designed with "V-pattern" staging, in which the second molars are moved first, the first molars follow once the second molars have covered half of their total distance, and the second premolars start to move once the second molars have completed their "journey" (Fig. ); in this way, no more than four teeth are being distalized at any stage (V-pattern). Finally, the space created by canine distalization can be used to align and/or retract the anterior teeth. With this approach the anchorage is adequate for most distalization cases, but a long treatment time is unavoidable. In some cases, to shorten treatment and increase patient compliance and cooperation, alignment of the anterior teeth is performed simultaneously with molar distalization, allowing patients to observe rapid esthetic changes (Fig. ). In addition, implant screws can be used to strengthen anchorage, allowing more teeth to be distalized simultaneously and the treatment time to be shortened (Fig. ).

Proclination of anterior teeth

Indications: Patients presenting with a straight or concave facial profile and retro-inclined or upright anterior teeth accompanied by mild crowding, such as cases with a deep overbite caused by lingual inclination of the upper anterior teeth, are candidates for proclination of the anterior teeth, which can be combined with other methods to obtain sufficient space.

Considerations for final position design: The sagittal position and proclination of the anterior teeth, especially the upper anterior teeth, are crucial for facial esthetics and are among the main indicators in profile analysis. The degree of proclination should therefore be carefully evaluated against the facial morphology, and combination with other space-gaining methods should be considered. For patients with a severely lingually inclined deep anterior overbite, the root-and-bone relationship must be considered: the roots need to remain within the cancellous region of the alveolar bone. Theoretically, 1 mm (about 2.5°) of proclination in the anterior segment provides 2 mm of space (a worked example of this estimate follows below). The proclination designed in the final position is therefore based on the amount of space required, the facial morphology, and the root-and-bone relationship.

Attachment design: More than 3° of incisor proclination activates the power ridge in the design software, which applies a labial torquing force to the crown and a lingual torquing force to the root, effectively achieving root-controlled movement of the anterior teeth. Traditional attachments on the canines are recommended to reduce the risk of aligner off-tracking in the anterior segment.

Considerations for staging: A minor proclination can be synchronized with the alignment of mild crowding. In cases with a lingually inclined deep overbite, however, staged tooth movement is required: proclination is performed first, to torque the roots into the cancellous bone, and is then followed by intrusion and retraction of the anterior teeth.
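The rule of thumb just quoted, about 2 mm of arch space per 1 mm (≈2.5°) of anterior proclination, can be applied directly. The sketch below simply scales that relationship linearly; the linearity and the example value are assumptions made for illustration, since the actual gain depends on arch form and on which teeth are proclined.

```python
# Illustrative sketch only: the rule of thumb quoted above, that each 1 mm of
# anterior proclination (about 2.5 degrees) yields roughly 2 mm of arch space.
# Linear scaling is a simplifying assumption used purely to show the arithmetic.

MM_SPACE_PER_MM_PROCLINATION = 2.0   # space gained per 1 mm of proclination
DEG_PER_MM_PROCLINATION = 2.5        # approximate angular change per 1 mm

def space_from_proclination(proclination_mm: float) -> tuple[float, float]:
    """Return (space gained in mm, approximate change in inclination in degrees)."""
    return (proclination_mm * MM_SPACE_PER_MM_PROCLINATION,
            proclination_mm * DEG_PER_MM_PROCLINATION)

if __name__ == "__main__":
    space, degrees = space_from_proclination(1.5)
    print(f"1.5 mm proclination ~ {degrees:.1f} deg, ~ {space:.1f} mm of space")
```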
Proclination is first performed to torque the roots into the cancellous bone, and then followed by intrusion and retraction of the anterior teeth. Interproximal reduction (IPR) Indications Although IPR is a method for gaining space, it has always been controversial because of the potential damage to the enamel and the resulting risk of caries. The authors suggested that IPR should be used as a supplement to other methods, rather than as the primary method, to gain space. The following situations warrant an IPR design , : Bolton discrepancy due to the missing teeth or malformed teeth. Gingival embrasure defects (black triangles) due to periodontal disease. Poor crown morphology with contact points nearby the incisal edge. Considerations for final position design In general, IPR is designed in the anterior segment, if needed. It is advisable to limit the maximum amount of IPR to 0.25 mm on the proximal surface of each tooth. Studies have shown that IPR amounting to no more than 50% of the enamel thickness generally does not increase the risk of caries. – Considerations for staging Since the IPR site is the anatomical contact point of the crown rather than the actual contact point, restoring normal contact points first undoubtedly facilitates IPR performance. However, in practice, there may be situations in which insufficient space hinders the alignment of the dental arch, which requires a comprehensive assessment of the timing of IPR. Graded IPR is recommended to alleviate this contradiction. Fluoride application after IPR performance is suggested. Tooth extraction Tooth extraction is a common method for reducing tooth amount in orthodontic treatment and is mainly indicated when the discrepancy between the available and required space exceeds 8 mm, such as in cases with severe crowding or severe maxillary and/or mandibular protrusions. Two types of tooth extraction patterns are commonly used in clear alignment treatment: extraction of lower incisor and extraction of premolars (first or second). Extraction of lower incisors Indications An almost normal facial pattern with stable posterior occlusion, no indication for upper extraction, and the total required space in the mandible exceeding 6 mm. Bolton ratio discrepancy due to missing teeth or malformed teeth in maxilla. Poor prognosis of a lower incisor due to periodontal disease or dental trauma. Considerations for final position design: The extraction of a lower incisor results in the lack of the midline of the lower dental arch. Instead, the long axis of the lower central incisor may be designed as the lower midline. In most cases, IPR of the upper anterior teeth is necessary to resolve the discrepant Bolton ratio and achieve normal anterior overbite and overjet. Attachment design: it is recommended to design vertical rectangular attachments or root-control attachments on the adjacent teeth to the extraction space, which facilitate the reciprocal movement of the adjacent teeth, especially their roots. Considerations for staging: extracting a lower incisor can effectively relieve crowding in the lower anterior section and provide space for the intrusion of the lower anterior teeth, resulting in a high rate of treatment success. Therefore, special staging considerations are generally not required. Extraction of the first premolars Based on the symmetry principle, the extraction of the first 4 premolars is the most common pattern of extraction in orthodontic practice. 
However, cases needing the extraction of 4 premolars belong to difficult level in CAT (Table ), and clinicians need to reach a certain level of orthodontic experience to complete the treatment. Indications: Extraction of the first 4 premolars is indicated when the discrepancy between the available and required space exceeds 8 mm, such as cases with severe crowding and/or bimaxillary protrusion, and etc. Considerations for final position design: most cases with tooth extraction are challenging to treat, as extensive tooth movement is unavoidable, requiring three-dimensional repositioning of these teeth. Treatment success relies on the torque control of the anterior teeth and the mesial-tipping avoidance of the posterior teeth. – Therefore, the final position requires an over-treatment design, as follows: Anterior teeth exhibit a labial inclination with incisor angles of approximately 120°. To prevent excessive lingual inclination, adequate labial inclination and torque control (root-lingual torque) should be designed during the whole procedure of anterior retraction. Cases with more lingual inclination at the initial and/or longer retraction distances require a larger positive torque in the design. Anterior teeth are in a shallow overjet/overbite or edge-to-edge position without occlusal contact. The pendulum effect of anterior retraction, compounded by any pre-existing deep bite condition, may require the over-treatment of anterior intrusion. Canines are mesially tipped with the roots closer to the extraction space. Posterior teeth are distally tipped, with additional negative torque to prevent buccal inclination of molars and loss of posterior anchorage. Attachment design: In such cases, attachment design should consider the following: Power ridge on incisors is recommended to aid in the torque control of the anterior teeth, which can be activated when more than 3° root-lingual torque is designed. Optimized attachments with strong root control or traditional rectangular attachments are recommended for the canines. Horizontal rectangular attachments with strong retention are recommended for posterior teeth. Intermaxillary elastics: To increase posterior anchorage, Class II elastics can be designed during anterior retraction (precision cuts at the upper canines and bonding of buttons on the buccal surface of the lower first molars). Alternatively, implant anchorage can be used in the anterior region to assist the intrusion and body retraction of the anterior teeth. – Different modes of elastic tractions with or without mini-implants and their corresponding biomechanics are displayed in Fig. . Considerations for staging: a personalized design is suggested for each case. The staging design should vary according to the specific circumstances because of the complex and variable nature of extraction cases. However, in most cases, we recommend distalizing canines and distal tipping of the posterior teeth (anchorage preparation) first. When canines complete the first third of the total moving distance, 6 anterior teeth start to move simultaneously by then. And finally, mesial movement of the posterior teeth begins when anterior teeth movement is completed. To prevent the “bowing effect”, it is suggested to avoid mesial movement of the posterior teeth simultaneously with the retraction of anterior teeth. Extraction of the second premolars Indications: In the following cases, second premolars are extracted instead of first premolars, which usually increases the treatment difficulty. 
Clinicians should be cautious to make a treatment scheme design like this: Serious damage/abnormality on the second premolar and/or its periodontal tissue. Second premolar is impacted or blocked-out of the dental arch. Minimal anchorage design. Considerations for final position design: Compared to those in the first premolar extracted case, molars should be designed with more distal inclination (anchorage preparation) since the molars are more prone to mesial tipping, especially in the cases that more than 3 mm mesial movement of molars is required (minimal anchorage design), while less over-treatment of anterior teeth is needed. Considerations for staging: we suggest, firstly, a sequential distal movement of the first premolars and canines, and distal-tipping anchorage preparation of the first molars. Then, move anterior teeth afterwards. And finally, mesially move the molars sequentially. Bite jump (surgical and growth jump) A bite jump refers to the changes in the three-dimensional position of the mandible and/or mandibular dental arch resulting from intermaxillary elastics, self-growth, and/or orthognathic surgery. It is important to note that the design of bite jump should be tailored based on the specific circumstances of the patient, and clinical feasibility should be considered. Except orthognathic surgery, bite jumps caused by other methods develop gradually in clinical practice, which can span the whole course of treatment. Indications: Adolescents with mild skeletal or functional mandibular hypoplasia or retrognathia; , Functional Class III, with the mandible being able to retrude to edge-to-edge occlusion; Severe skeletal deformities requiring orthodontic-orthognathic treatment ; Mandibular malposition caused by premature individual tooth contacts. Intermaxillary elastics: The sagittal bite jump requires the use of intermaxillary elastics or orthodontic appliances with mandibular advancement function. , Considerations for staging: In the design software, bite jump can be placed at any stage of the treatment or throughout the treatment process. The authors typically place bite jump at the end of the treatment, which makes it easier for clinicians to assess the amount and direction of the jump and detect any abnormalities in a timely manner during clinical monitoring. Below, we are going to delve into some special considerations in clear aligner design. A lot of clinicians are confused by these issues in practice. Special considerations in clear aligner design Over-treatment design: as we discussed before, clear aligners exert mainly a “pushing force”, and therefore their clinical efficiency varies among different types of teeth movements (Fig. ). To better realize the actual teeth movement, over-treatment design is recommended in some cases, which is related to the predictability of CAT. For example, to intrude anterior teeth and correct deep bite, a shallow overbite and even open bite is designed in the final position, while large positive torque may be given to the incisors which are lingual inclined or up-righted initially when retraction of anterior teeth is required to correct the convex profile. However, the appropriate amount of over-treatment design is determined case by case, and until now, there is no consensus on this specific issue. According to our experience and previous clinical studies, the amount of over-treatment should be designed based on the initial status of teeth and the type and amount of the teeth movement. 
, Challenges and strategies in the complex tooth movements: compared to expansion and molar distalization, intrusion, extrusion and torque control are more complex tooth movements in CAT, which have much lower predictability (Fig. ). Thus, over-treatment is commonly designed for these types of movements. Besides, sufficient space for tooth movements should be taken into considerations. For intrusion, the root-and-bone relationship needs to be analyzed in CBCT images to make sure that the roots are in the cancellous bones, while for extrusion, the intermaxillary space is required. And loose proximal contact points are always good for the movement. Then, sufficient anchorage for the movement is important. There are usually two ways to strengthen anchorage in CAT. One is to move teeth in a stepwise mode. We recommend a “Frog pattern” staging for anterior teeth intrusion, in which incisors and canines are intruded separately and in cycles (Fig. ). Extrusion of posterior teeth is suggested to be designed in a “V pattern” staging. Besides, power ridge design and positive torque is distributed in the whole procedure of incisor retracting to provide a better torque control. The other way to enhance anchorage is to use auxiliary devices and elastics, such as temporary anchorage devices (TAD) implanted in the anterior section to provide an extra intruding force and root lingual torque on anterior teeth (Fig. ). Furthermore, appropriate attachment design could provide clear aligners with greater retention, which is the key for CAT. Traditional attachments on premolars are recommended when intrusion of anterior teeth is needed while traditional attachments on canines are suggested for incisor’s torque control. Differences in the design of CAT between adolescents and adults: as we know, the main difference between adolescents and adults is growth potential which may lead to different orthodontic treatment plan. Mandible growth can result in anteroposterior bite jump, and thus, bite jump design without surgery is more possible to realize in adolescents. Besides, the prevalence of oral caries is higher in adolescents, and therefore, interproximal reduction (IPR) design should be used more cautiously. Moreover, traditional attachments or optimized attachments in larger size are recommended in adolescents due to their inadequate crowns. A recently published expert consensus on adolescents’ orthodontic treatment has deeply discussed this special issue. Indications Narrow dental arch: a narrow dental arch can be determined based on the relationship between the most prominent points on the buccal surfaces of the crowns of the lower posterior teeth and the Wala ridge. Pont index analysis and Howes value can also assist in the width assessment. Pretreatment CBCT can be used to clarify the spatial relationship between the root and alveolar bone, which helps avoid excessive expansion that may result in bone fenestration or dehiscence. Excessive buccal corridor: excessive buccal corridor refers to excess negative space between the dental arch and the buccal mucosa of the oral cavity. Previous studies have shown that an excessive or insufficient buccal corridor jeopardizes smile esthetics. , An excessive buccal corridor is indicative of the arch expansion. Considerations for final position design Factors that must be considered include arch symmetry, arch coordination, and appropriate expansion amount to prevent bone fenestration or dehiscence. 
The volume of basal bone on buccal side should be analyzed in CBCT to determine the upper limit of the expansion. The amount of up-to-2 mm expansion on each side is safe in most cases. As for adolescents, the greater regenerative potential of alveolar bone remodeling makes arch expansion much safer. To prevent buccal inclination of crowns during expansion, the final position design should ensure that all the expanded posterior teeth are in lingual inclination (from the lateral view, the palatal cusps are invisible) (Fig. ). Attachment design Attachments are required on the buccal surfaces of teeth during arch expansion to prevent buccal inclination. For teeth with inadequate height of lingual cusps, lingual attachments may be placed simultaneously. Considerations for staging It is recommended to design a staged expansion for any expansion exceeding 1 mm unilaterally, such as a “V-pattern” design like molar distalization. Homonymous teeth in the same jaw are suggested to expand simultaneously because they can act as reciprocal anchorages. By adhering to these principles, clinicians can effectively incorporate arch expansion into clear alignment treatment plans, ensuring optimal outcomes in patients with dental arch discrepancies. Narrow dental arch: a narrow dental arch can be determined based on the relationship between the most prominent points on the buccal surfaces of the crowns of the lower posterior teeth and the Wala ridge. Pont index analysis and Howes value can also assist in the width assessment. Pretreatment CBCT can be used to clarify the spatial relationship between the root and alveolar bone, which helps avoid excessive expansion that may result in bone fenestration or dehiscence. Excessive buccal corridor: excessive buccal corridor refers to excess negative space between the dental arch and the buccal mucosa of the oral cavity. Previous studies have shown that an excessive or insufficient buccal corridor jeopardizes smile esthetics. , An excessive buccal corridor is indicative of the arch expansion. Factors that must be considered include arch symmetry, arch coordination, and appropriate expansion amount to prevent bone fenestration or dehiscence. The volume of basal bone on buccal side should be analyzed in CBCT to determine the upper limit of the expansion. The amount of up-to-2 mm expansion on each side is safe in most cases. As for adolescents, the greater regenerative potential of alveolar bone remodeling makes arch expansion much safer. To prevent buccal inclination of crowns during expansion, the final position design should ensure that all the expanded posterior teeth are in lingual inclination (from the lateral view, the palatal cusps are invisible) (Fig. ). Attachments are required on the buccal surfaces of teeth during arch expansion to prevent buccal inclination. For teeth with inadequate height of lingual cusps, lingual attachments may be placed simultaneously. It is recommended to design a staged expansion for any expansion exceeding 1 mm unilaterally, such as a “V-pattern” design like molar distalization. Homonymous teeth in the same jaw are suggested to expand simultaneously because they can act as reciprocal anchorages. By adhering to these principles, clinicians can effectively incorporate arch expansion into clear alignment treatment plans, ensuring optimal outcomes in patients with dental arch discrepancies. Indications Almost normal facial pattern with distal (Class II) or mesial (Class III) molar relationship may be an indication for molar distalization. 
It may be accompanied by mild to moderate crowding, deep overjet, or an anterior crossbite/edge-to-edge bite. However, molar distalization is not generally recommended for neutral molar relationship (Class I). , Sufficient space in the posterior dental arch is necessary for molar distalization. CBCT evaluation from a three-dimensional perspective is recommended for molar distalization greater than 2 mm. Vertically, the presence of a low maxillary sinus increases the difficulty of upper molar distalization, especially when the molar roots penetrate the cavity. Third molar extraction is recommended to reduce distalization resistance and provide more space. , Considerations for final position design The upper limit of molar distalization of clear aligner treatment depends on the available retromolar space. The third molars can be extracted if there is no sufficient space. The amount of less than 2 mm molar distalization on one side is considered predictable in most cases while the mesio-distal inclination of posterior teeth and the potential of bone growth in children and adolescents should be taken into consideration. Based on the literature and clinical experience, the predictability of molar distalization using clear aligners is approximately 88%. Thus, it is feasible to design the final position based on the actually required distalization distance (i.e., to obtain a neutral relationship) where no or minimal overtreatment is required. Additionally, to prevent labial fenestration and/or dehiscence in the lower anterior region, it is necessary to avoid labial movement of the lower anterior teeth, particularly the roots. This is because class II intermaxillary elastics are commonly applied during upper molar distalization, which exert a mesial force on the lower arch and labially push the lower anterior teeth. Attachment design Molar distalization does not require the supplement of attachments. However, attachments are recommended to enhance the grip of teeth with short crowns. Moreover, molar distalization is often accompanied by other complex movements such as intrusion and rotation, and attachments are usually required to improve the success rates of these movements and prevent off-tracking. Traditional rectangular attachments are generally designed for the canines to increase the retention of aligners and minimize the impact of precision cuts. – Intermaxillary elastics When clear aligners exert a pushing force to achieve molar distalization via material deformation, the counteracting force may procline the anterior teeth. Thus, if anterior tooth proclination is undesirable, the anchorage of the anterior teeth should be reinforced. Intermaxillary elastics are commonly used in practice to achieve this aim. In maxillary molar distalization, precision cuts are designed at the maxillary canines, whereas buttons are bonded to the buccal surface of the mandibular first molars (cut out on lower aligners) to allow the use of Class II intermaxillary elastics (Fig. ). If simultaneous eruption of the canine is desirable (e.g., low positioned or insufficiently erupted canines), a button can be bonded to the labial surface of canine near the gingival margin to facilitate eruption (Fig. ). However, precision cuts at the mandibular molars are prone to aligner displacement or off-tracking and are not recommended. Additionally, if necessary, implant devices can be used to enhance the anchorage, provided they do not obstruct molar distalization. 
– On the other hand, if the proclination of anterior teeth is desirable (e.g., Class II Division 2), it can be designed simultaneously with molar distalization, acting as reciprocal anchorage to eliminate the need for any elastics. Nevertheless, anterior proclination and molar distalization should be closely monitored during follow-up appointments for real-time adjustments. Considerations for staging The staging of tooth movements involves the consideration of anchorage. Typically, molar distalization is designed in a “V-patten” staging, in which the second molars are moved first, and then the first molars once the second molars have reached the halfway point of their total moving distance; thereafter, the second premolars start to move once the second molars have completed their “journey” (Fig. ). Thus, no more than four teeth are distalized at each stage (V-pattern). Finally, the space created by canine distalization can be used to align and/or retract the anterior teeth. By doing so, the anchorage is often adequate for most distalization cases; however, a long-term treatment is unavoidable. In some cases, in order to shorten the treatment duration and increase patient compliance and cooperation, alignment of the anterior teeth is performed simultaneously with molar distalization, allowing patients to observe quick esthetic changes (Fig. ). In addition, implant screws can be used to strengthen anchorage, allowing more teeth to distalize simultaneously, to shorten treatment duration (Fig. ). – Almost normal facial pattern with distal (Class II) or mesial (Class III) molar relationship may be an indication for molar distalization. It may be accompanied by mild to moderate crowding, deep overjet, or an anterior crossbite/edge-to-edge bite. However, molar distalization is not generally recommended for neutral molar relationship (Class I). , Sufficient space in the posterior dental arch is necessary for molar distalization. CBCT evaluation from a three-dimensional perspective is recommended for molar distalization greater than 2 mm. Vertically, the presence of a low maxillary sinus increases the difficulty of upper molar distalization, especially when the molar roots penetrate the cavity. Third molar extraction is recommended to reduce distalization resistance and provide more space. , The upper limit of molar distalization of clear aligner treatment depends on the available retromolar space. The third molars can be extracted if there is no sufficient space. The amount of less than 2 mm molar distalization on one side is considered predictable in most cases while the mesio-distal inclination of posterior teeth and the potential of bone growth in children and adolescents should be taken into consideration. Based on the literature and clinical experience, the predictability of molar distalization using clear aligners is approximately 88%. Thus, it is feasible to design the final position based on the actually required distalization distance (i.e., to obtain a neutral relationship) where no or minimal overtreatment is required. Additionally, to prevent labial fenestration and/or dehiscence in the lower anterior region, it is necessary to avoid labial movement of the lower anterior teeth, particularly the roots. This is because class II intermaxillary elastics are commonly applied during upper molar distalization, which exert a mesial force on the lower arch and labially push the lower anterior teeth. Molar distalization does not require the supplement of attachments. 
However, attachments are recommended to enhance the grip of teeth with short crowns. Moreover, molar distalization is often accompanied by other complex movements such as intrusion and rotation, and attachments are usually required to improve the success rates of these movements and prevent off-tracking. Traditional rectangular attachments are generally designed for the canines to increase the retention of aligners and minimize the impact of precision cuts. – When clear aligners exert a pushing force to achieve molar distalization via material deformation, the counteracting force may procline the anterior teeth. Thus, if anterior tooth proclination is undesirable, the anchorage of the anterior teeth should be reinforced. Intermaxillary elastics are commonly used in practice to achieve this aim. In maxillary molar distalization, precision cuts are designed at the maxillary canines, whereas buttons are bonded to the buccal surface of the mandibular first molars (cut out on lower aligners) to allow the use of Class II intermaxillary elastics (Fig. ). If simultaneous eruption of the canine is desirable (e.g., low positioned or insufficiently erupted canines), a button can be bonded to the labial surface of canine near the gingival margin to facilitate eruption (Fig. ). However, precision cuts at the mandibular molars are prone to aligner displacement or off-tracking and are not recommended. Additionally, if necessary, implant devices can be used to enhance the anchorage, provided they do not obstruct molar distalization. – On the other hand, if the proclination of anterior teeth is desirable (e.g., Class II Division 2), it can be designed simultaneously with molar distalization, acting as reciprocal anchorage to eliminate the need for any elastics. Nevertheless, anterior proclination and molar distalization should be closely monitored during follow-up appointments for real-time adjustments. The staging of tooth movements involves the consideration of anchorage. Typically, molar distalization is designed in a “V-patten” staging, in which the second molars are moved first, and then the first molars once the second molars have reached the halfway point of their total moving distance; thereafter, the second premolars start to move once the second molars have completed their “journey” (Fig. ). Thus, no more than four teeth are distalized at each stage (V-pattern). Finally, the space created by canine distalization can be used to align and/or retract the anterior teeth. By doing so, the anchorage is often adequate for most distalization cases; however, a long-term treatment is unavoidable. In some cases, in order to shorten the treatment duration and increase patient compliance and cooperation, alignment of the anterior teeth is performed simultaneously with molar distalization, allowing patients to observe quick esthetic changes (Fig. ). In addition, implant screws can be used to strengthen anchorage, allowing more teeth to distalize simultaneously, to shorten treatment duration (Fig. ). – Indications Patients presenting with straight or concave facial profiles and retro-inclined or upright anterior teeth accompanied by mild crowding, such as cases with deep overbite caused by lingual inclination of the upper anterior teeth, are indicated for proclination of anterior teeth, which can be combined with other methods to obtain enough space. 
Considerations for final position design The sagittal position and proclination of the anterior teeth, especially the upper anterior teeth, are crucial for facial esthetics and are one of the main indicators for profile analysis. – Thus, the degree of proclination of the anterior teeth should be carefully evaluated based on facial morphology, and a combination with other methods that help acquire sufficient space should be considered. For patients with a severe lingually inclined deep anterior overbite, the roots-and-bone relationship should be considered. The roots need to be positioned within the cancellous region of the alveolar bone. , Theoretically, a proclination of 1 mm (2.5°) in the anterior segment provides 2 mm of space. Therefore, the proclination design in the final position is based on the amount of space required, facial morphology, and the roots-and-bone relationship. Attachment design More than 3° of incisor proclination activates the power ridge in the designing software system, which applies labial-torquing force on the crowns, whereas lingual-torquing force on the roots and effectively achieves root-controlled movement of the anterior teeth. Traditional attachments on canines are recommended to reduce the risk of aligner off-tracking in the anterior segment. Considerations for staging A minor proclination can be synchronized with the alignment of mild crowding. However, in cases with lingually inclined deep overbite, staged tooth movement is required. Proclination is first performed to torque the roots into the cancellous bone, and then followed by intrusion and retraction of the anterior teeth. Patients presenting with straight or concave facial profiles and retro-inclined or upright anterior teeth accompanied by mild crowding, such as cases with deep overbite caused by lingual inclination of the upper anterior teeth, are indicated for proclination of anterior teeth, which can be combined with other methods to obtain enough space. The sagittal position and proclination of the anterior teeth, especially the upper anterior teeth, are crucial for facial esthetics and are one of the main indicators for profile analysis. – Thus, the degree of proclination of the anterior teeth should be carefully evaluated based on facial morphology, and a combination with other methods that help acquire sufficient space should be considered. For patients with a severe lingually inclined deep anterior overbite, the roots-and-bone relationship should be considered. The roots need to be positioned within the cancellous region of the alveolar bone. , Theoretically, a proclination of 1 mm (2.5°) in the anterior segment provides 2 mm of space. Therefore, the proclination design in the final position is based on the amount of space required, facial morphology, and the roots-and-bone relationship. More than 3° of incisor proclination activates the power ridge in the designing software system, which applies labial-torquing force on the crowns, whereas lingual-torquing force on the roots and effectively achieves root-controlled movement of the anterior teeth. Traditional attachments on canines are recommended to reduce the risk of aligner off-tracking in the anterior segment. A minor proclination can be synchronized with the alignment of mild crowding. However, in cases with lingually inclined deep overbite, staged tooth movement is required. Proclination is first performed to torque the roots into the cancellous bone, and then followed by intrusion and retraction of the anterior teeth. 
Indications Although IPR is a method for gaining space, it has always been controversial because of the potential damage to the enamel and the resulting risk of caries. The authors suggested that IPR should be used as a supplement to other methods, rather than as the primary method, to gain space. The following situations warrant an IPR design , : Bolton discrepancy due to the missing teeth or malformed teeth. Gingival embrasure defects (black triangles) due to periodontal disease. Poor crown morphology with contact points nearby the incisal edge. Considerations for final position design In general, IPR is designed in the anterior segment, if needed. It is advisable to limit the maximum amount of IPR to 0.25 mm on the proximal surface of each tooth. Studies have shown that IPR amounting to no more than 50% of the enamel thickness generally does not increase the risk of caries. – Considerations for staging Since the IPR site is the anatomical contact point of the crown rather than the actual contact point, restoring normal contact points first undoubtedly facilitates IPR performance. However, in practice, there may be situations in which insufficient space hinders the alignment of the dental arch, which requires a comprehensive assessment of the timing of IPR. Graded IPR is recommended to alleviate this contradiction. Fluoride application after IPR performance is suggested. Although IPR is a method for gaining space, it has always been controversial because of the potential damage to the enamel and the resulting risk of caries. The authors suggested that IPR should be used as a supplement to other methods, rather than as the primary method, to gain space. The following situations warrant an IPR design , : Bolton discrepancy due to the missing teeth or malformed teeth. Gingival embrasure defects (black triangles) due to periodontal disease. Poor crown morphology with contact points nearby the incisal edge. In general, IPR is designed in the anterior segment, if needed. It is advisable to limit the maximum amount of IPR to 0.25 mm on the proximal surface of each tooth. Studies have shown that IPR amounting to no more than 50% of the enamel thickness generally does not increase the risk of caries. – Since the IPR site is the anatomical contact point of the crown rather than the actual contact point, restoring normal contact points first undoubtedly facilitates IPR performance. However, in practice, there may be situations in which insufficient space hinders the alignment of the dental arch, which requires a comprehensive assessment of the timing of IPR. Graded IPR is recommended to alleviate this contradiction. Fluoride application after IPR performance is suggested. Tooth extraction is a common method for reducing tooth amount in orthodontic treatment and is mainly indicated when the discrepancy between the available and required space exceeds 8 mm, such as in cases with severe crowding or severe maxillary and/or mandibular protrusions. Two types of tooth extraction patterns are commonly used in clear alignment treatment: extraction of lower incisor and extraction of premolars (first or second). Indications An almost normal facial pattern with stable posterior occlusion, no indication for upper extraction, and the total required space in the mandible exceeding 6 mm. Bolton ratio discrepancy due to missing teeth or malformed teeth in maxilla. Poor prognosis of a lower incisor due to periodontal disease or dental trauma. 
Considerations for final position design: The extraction of a lower incisor results in the absence of a true midline in the lower dental arch; instead, the long axis of the remaining lower central incisor may be designed as the lower midline. In most cases, IPR of the upper anterior teeth is necessary to resolve the discrepant Bolton ratio and achieve normal anterior overbite and overjet. Attachment design: It is recommended to design vertical rectangular attachments or root-control attachments on the teeth adjacent to the extraction space, which facilitate the reciprocal movement of these teeth, especially their roots. Considerations for staging: Extracting a lower incisor effectively relieves crowding in the lower anterior segment and provides space for the intrusion of the lower anterior teeth, resulting in a high rate of treatment success. Therefore, special staging considerations are generally not required. Extraction of the first premolars Based on the symmetry principle, the extraction of the four first premolars is the most common extraction pattern in orthodontic practice. However, cases needing the extraction of four premolars are classified as difficult in CAT (Table ), and clinicians need a certain level of orthodontic experience to complete the treatment. Indications: Extraction of the four first premolars is indicated when the discrepancy between the available and required space exceeds 8 mm, such as in cases with severe crowding and/or bimaxillary protrusion. Considerations for final position design: Most extraction cases are challenging to treat, as extensive tooth movement is unavoidable and requires three-dimensional repositioning of the teeth. Treatment success relies on torque control of the anterior teeth and avoidance of mesial tipping of the posterior teeth. Therefore, the final position requires an over-treatment design, as follows: Anterior teeth exhibit a labial inclination with incisor angles of approximately 120°. To prevent excessive lingual inclination, adequate labial inclination and torque control (root-lingual torque) should be designed throughout the procedure of anterior retraction; cases with more lingual inclination initially and/or longer retraction distances require a larger positive torque in the design. Anterior teeth are in a shallow overjet/overbite or edge-to-edge position without occlusal contact; the pendulum effect of anterior retraction, compounded by any pre-existing deep bite, may require over-treatment of anterior intrusion. Canines are mesially tipped with the roots closer to the extraction space. Posterior teeth are distally tipped, with additional negative torque to prevent buccal inclination of the molars and loss of posterior anchorage. Attachment design: In such cases, attachment design should consider the following: A power ridge on the incisors is recommended to aid torque control of the anterior teeth and can be activated when more than 3° of root-lingual torque is designed. Optimized attachments with strong root control or traditional rectangular attachments are recommended for the canines. Horizontal rectangular attachments with strong retention are recommended for the posterior teeth. Intermaxillary elastics: To increase posterior anchorage, Class II elastics can be designed during anterior retraction (precision cuts at the upper canines and buttons bonded on the buccal surface of the lower first molars).
Alternatively, implant anchorage can be used in the anterior region to assist the intrusion and bodily retraction of the anterior teeth. Different modes of elastic traction with or without mini-implants, and their corresponding biomechanics, are displayed in Fig. . Considerations for staging: A personalized design is suggested for each case, and the staging design should vary according to the specific circumstances because of the complex and variable nature of extraction cases. In most cases, however, we recommend distalizing the canines and distally tipping the posterior teeth (anchorage preparation) first. Once the canines have completed the first third of their total movement, the six anterior teeth begin to move simultaneously. Finally, mesial movement of the posterior teeth begins once the movement of the anterior teeth is completed. To prevent the "bowing effect", it is suggested that mesial movement of the posterior teeth not be performed simultaneously with the retraction of the anterior teeth. Extraction of the second premolars Indications: In the following cases, the second premolars are extracted instead of the first premolars, which usually increases the treatment difficulty, so clinicians should be cautious when designing such a treatment scheme: Serious damage/abnormality of the second premolar and/or its periodontal tissue. The second premolar is impacted or blocked out of the dental arch. A minimal anchorage design. Considerations for final position design: Compared with first-premolar extraction cases, the molars should be designed with more distal inclination (anchorage preparation), since they are more prone to mesial tipping, especially when more than 3 mm of mesial molar movement is required (minimal anchorage design), whereas less over-treatment of the anterior teeth is needed. Considerations for staging: We suggest, first, sequential distal movement of the first premolars and canines together with distal-tipping anchorage preparation of the first molars; then movement of the anterior teeth; and finally, sequential mesial movement of the molars. Bite jump (surgical and growth jump) A bite jump refers to the change in the three-dimensional position of the mandible and/or mandibular dental arch resulting from intermaxillary elastics, growth, and/or orthognathic surgery. It is important to note that the design of a bite jump should be tailored to the specific circumstances of the patient, and clinical feasibility should be considered. Except for orthognathic surgery, bite jumps produced by the other methods develop gradually in clinical practice and can span the whole course of treatment. Indications: Adolescents with mild skeletal or functional mandibular hypoplasia or retrognathia; functional Class III, with the mandible able to retrude to an edge-to-edge occlusion; severe skeletal deformities requiring orthodontic-orthognathic treatment; mandibular malposition caused by premature individual tooth contacts. Intermaxillary elastics: A sagittal bite jump requires the use of intermaxillary elastics or orthodontic appliances with a mandibular advancement function. Considerations for staging: In the design software, the bite jump can be placed at any stage of the treatment or throughout the treatment process. The authors typically place the bite jump at the end of the treatment, which makes it easier for clinicians to assess the amount and direction of the jump and to detect any abnormalities in a timely manner during clinical monitoring.
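As a recap of the space-gaining methods discussed above, the sketch below combines the quantitative thresholds quoted in the preceding subsections (about 2 mm of space per millimetre of proclination, at most 0.25 mm of IPR per proximal surface, premolar extraction when the space discrepancy exceeds 8 mm, and lower-incisor extraction when the required mandibular space exceeds 6 mm) into a rough space-budget check. It is a simplified illustration of the reasoning only, not a clinical decision rule; the function and variable names are invented for this example, and real planning also weighs facial profile, periodontal status and the root-and-bone relationship.

```python
# Illustrative space-budget check based on the thresholds quoted in this
# section. Not a clinical decision rule: profile, periodontium and the
# root-and-bone relationship are deliberately not modelled here.

MAX_IPR_PER_SURFACE_MM = 0.25        # per proximal surface (see IPR subsection)
EXTRACTION_DISCREPANCY_MM = 8.0      # premolar-extraction threshold quoted above
LOWER_INCISOR_THRESHOLD_MM = 6.0     # mandibular required-space threshold quoted above
SPACE_PER_MM_PROCLINATION = 2.0      # rule of thumb from the proclination subsection

def suggest_space_strategy(required_space_mm: float,
                           planned_proclination_mm: float = 0.0,
                           ipr_surfaces: int = 0) -> str:
    """Return a rough, illustrative suggestion for how the space demand might be met."""
    if required_space_mm > EXTRACTION_DISCREPANCY_MM:
        return "Discrepancy exceeds 8 mm: premolar extraction would usually be considered."
    gained = (planned_proclination_mm * SPACE_PER_MM_PROCLINATION
              + ipr_surfaces * MAX_IPR_PER_SURFACE_MM)
    if gained >= required_space_mm:
        return f"~{gained:.1f} mm from proclination/IPR may already cover the demand."
    if required_space_mm > LOWER_INCISOR_THRESHOLD_MM:
        return "Mandibular demand exceeds 6 mm: lower-incisor extraction might be discussed."
    return "Consider combining expansion, distalization, proclination and IPR."

if __name__ == "__main__":
    print(suggest_space_strategy(4.0, planned_proclination_mm=1.0, ipr_surfaces=8))
    print(suggest_space_strategy(9.5))
```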
Below, we delve into some special considerations in clear aligner design that often confuse clinicians in practice. Special considerations in clear aligner design Over-treatment design: As discussed before, clear aligners exert mainly a "pushing force", and their clinical efficiency therefore varies among different types of tooth movement (Fig. ). To better achieve the intended tooth movement, an over-treatment design is recommended in some cases, depending on the predictability of CAT. For example, to intrude the anterior teeth and correct a deep bite, a shallow overbite or even an open bite is designed in the final position; likewise, a large positive torque may be given to incisors that are initially lingually inclined or upright when retraction of the anterior teeth is required to correct a convex profile. However, the appropriate amount of over-treatment is determined case by case, and to date there is no consensus on this specific issue. According to our experience and previous clinical studies, the amount of over-treatment should be designed based on the initial status of the teeth and the type and amount of tooth movement. Challenges and strategies in complex tooth movements: Compared with expansion and molar distalization, intrusion, extrusion and torque control are more complex tooth movements in CAT and have much lower predictability (Fig. ). Thus, over-treatment is commonly designed for these types of movement. In addition, sufficient space for the tooth movements should be taken into consideration: for intrusion, the root-and-bone relationship needs to be analyzed on CBCT images to make sure that the roots lie in the cancellous bone, whereas for extrusion sufficient intermaxillary space is required, and loose proximal contact points always favor the movement. Sufficient anchorage for the movement is also important, and there are usually two ways to strengthen anchorage in CAT. One is to move the teeth in a stepwise mode: we recommend a "Frog pattern" staging for anterior intrusion, in which incisors and canines are intruded separately and in cycles (Fig. ), while extrusion of the posterior teeth is suggested in a "V pattern" staging; in addition, the power ridge design and positive torque are distributed over the whole procedure of incisor retraction to provide better torque control. The other way to enhance anchorage is to use auxiliary devices and elastics, such as temporary anchorage devices (TADs) implanted in the anterior region to provide an extra intruding force and root-lingual torque on the anterior teeth (Fig. ). Furthermore, appropriate attachment design can provide clear aligners with greater retention, which is key for CAT: traditional attachments on the premolars are recommended when intrusion of the anterior teeth is needed, while traditional attachments on the canines are suggested for incisor torque control. Differences in the design of CAT between adolescents and adults: The main difference between adolescents and adults is growth potential, which may lead to different orthodontic treatment plans. Mandibular growth can produce an anteroposterior bite jump, and thus a bite jump design without surgery is more feasible in adolescents. Besides, the prevalence of caries is higher in adolescents, and therefore interproximal reduction (IPR) should be designed more cautiously. Moreover, traditional attachments or larger optimized attachments are recommended in adolescents because of their inadequate crowns.
A recently published expert consensus on adolescents' orthodontic treatment has deeply discussed this special issue.
Once aligner treatment planning is ready, clear aligners that move the teeth incrementally can be fabricated by either thermoforming or 3D printing. Patients are asked to attend the clinic for initial appliance placement once the clinicians receive the aligners. On this day, the resin attachments are bonded onto the teeth according to the digital design, and the first set of aligners is tried in (the fit should be checked). Subsequently, patients are issued a set of instructions, including the required wearing duration, the method of aligner placement, and the use of chewies. Patients are also informed about the importance of oral hygiene. Additional information and instructions are provided as relevant, depending on the tooth movement plan, such as molar distalization, IPR, or extraction. Patient compliance management Regular follow-up visits are essential and can be used to inform patients about treatment progress and challenges, helping them understand their role in the process and increasing their confidence, compliance, and cooperation. Cooperating over the long duration of orthodontic treatment is a huge challenge for most people, especially persisting with daily aligner wear. Thus, close contact with patients helps clinicians follow their status and offer help or timely reminders when needed. Pleasant communication and compliments are effective in maintaining a good clinician-patient relationship, which also benefits cooperation. To encourage patients, practitioners can show them the changes that have already occurred by comparison with their pre-treatment photographs and point out that these changes are owing to their compliance and cooperation; making patients aware that their efforts pay off increases their confidence in the treatment. Besides, smartphone applications registered to the patient's ID can be used to record wearing dates and to remind the patient to change to a new set of aligners, which is convenient in daily life (a minimal example is sketched at the end of this subsection). Things to do in the follow-up visits To evaluate treatment progress, comprehensive examinations should be performed, including the following assessments: Tooth and periodontium status, including mobility, the presence of premature contacts, and occlusal trauma. Occlusion changes, including the sagittal relationship, occlusal contacts, inclination, midline of the upper/lower dental arch, overjet, overbite, torque and space, compared with the baseline and the digital design. Temporomandibular joint health, covering any pain, tenderness, and clicking in the joint area, especially in patients with temporomandibular disease before treatment and in adult patients using intermaxillary elastics. Any detachment and/or abrasion of attachments, checked against the digital design. Aligner fit, which reflects the progress of tooth movement, especially any gap observed at the incisal edges of the anterior teeth, the cusps of the posterior teeth, and the area around the attachments and along the aligner margin.
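Since the text above mentions smartphone applications that record wearing dates and remind patients to change to the next set of aligners, a minimal sketch of how such a reminder schedule could be generated is shown below. The 7-day wear interval per aligner is an assumption made purely for illustration; the actual interval is prescribed by the clinician and varies between cases.

```python
# Minimal sketch of an aligner-change schedule, assuming a fixed wear
# interval per aligner (7 days here, an assumption for illustration only).
from datetime import date, timedelta

def aligner_change_dates(start: date, n_aligners: int, wear_days: int = 7) -> list[date]:
    """Return the date on which each successive aligner should be started."""
    return [start + timedelta(days=i * wear_days) for i in range(n_aligners)]

if __name__ == "__main__":
    for idx, day in enumerate(aligner_change_dates(date(2024, 1, 1), n_aligners=5), start=1):
        print(f"Aligner {idx}: start on {day.isoformat()}")
```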
Management of off-tracking Off-tracking refers to incomplete fitting between the teeth and the aligners, indicative of a discrepancy between the direction and/or distance of the actual tooth movement and that planned in the digital design (Fig. ). The management of off-tracking involves removing the attachments and using the aligners, together with intra-/inter-maxillary elastics, to guide the off-tracking teeth back onto the desired path. Off-tracking manifestations can be categorized into the following three situations (summarized in the sketch at the end of this section): Off-tracking in the vertical dimension due to insufficient extrusion or anterior intrusion. Insufficient extrusion may manifest as uniform vacuoles emerging at the incisal edges or cusps and can be managed by removing the attachments on the off-tracking teeth and applying intra-/inter-maxillary elastics (Fig. ). Alternatively, in cases of insufficient anterior intrusion, which manifests as inadequate correction of the anterior deep bite, auxiliary devices such as implants may be added, or additional aligners may be redesigned to increase the staging of the tooth movement. Off-tracking in the horizontal dimension commonly occurs in rotation correction, especially of severely rotated premolars. Removal of the attachments and use of a power chain can be helpful in most cases (Fig. ). Off-tracking in the sagittal dimension is characterized by mesial inclination of the posterior teeth and torque loss of the anterior teeth (lingual inclination). Mismatches between the attachments and the vacuoles on the aligners can be observed on mesially inclined posterior teeth, as well as gaps between the mesial cusps and the aligners. Distal up-righting of these off-tracking teeth must be performed using intermaxillary elastics and/or sectional arch wires after removal of the attachments (Fig. ). Loss of anterior tooth torque manifests as lingual inclination of the upper/lower anterior teeth, increased overbite, early contact of the anterior teeth, and posterior open bite; in such cases, the aligners may need to be redesigned to restart the program. Timing and considerations of program restart Sometimes more than one series of clear aligners is needed to complete the treatment. There are five possible reasons for this: A discrepancy between the designed and the actual tooth movement, resulting in incomplete correction of the malocclusion; this often occurs with complex tooth movements such as intrusion, root control, and molar distalization of more than 3 mm. Further series of aligners are designed to accomplish the treatment goal. Unwanted tooth movement leading to reduced occlusal contacts or even an open bite in the posterior segment, which may be due to the occlusal-pad effect of the aligners. Further series of aligners are designed to consolidate the occlusion. More teeth need to be included in the treatment, which is common in adolescents with erupting second molars. A new series of aligners is usually designed to cover these second molars and any heterotopic or impacted teeth. A change in the occlusal relationship, due to mandibular growth and/or removal of occlusal interference. A completely new design should then be made according to the new, stable occlusal relationship. Poor patient cooperation, leading to serious off-tracking or even completely unfitting aligners. A new series of aligners is designed based on the current status.
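To summarize the off-tracking situations and the corresponding management options described above, the sketch below restates them as a simple lookup table. It is only a structured restatement of the text for quick reference; the category labels and function name are invented for this example, and actual management always depends on clinical assessment.

```python
# Structured restatement of the off-tracking management described above.
# Labels and wording are illustrative; management is always clinician-led.

OFF_TRACKING_MANAGEMENT = {
    "vertical - insufficient extrusion":
        "Remove attachments on the off-tracking teeth; apply intra-/inter-maxillary elastics.",
    "vertical - insufficient anterior intrusion":
        "Add auxiliary devices (e.g., implants) or redesign additional aligners with more staging.",
    "horizontal - rotation (e.g., severely rotated premolars)":
        "Remove attachments and use a power chain in most cases.",
    "sagittal - mesially inclined posterior teeth":
        "Distal up-righting with intermaxillary elastics and/or sectional arch wires after removing attachments.",
    "sagittal - anterior torque loss":
        "The aligners may need to be redesigned and the program restarted.",
}

def management_for(category: str) -> str:
    """Look up the suggested handling for a given off-tracking category."""
    return OFF_TRACKING_MANAGEMENT.get(
        category, "Reassess clinically; consider redesigning the aligner series.")

if __name__ == "__main__":
    for label, action in OFF_TRACKING_MANAGEMENT.items():
        print(f"{label}: {action}")
```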
Treatment is complete after wearing the final set of aligners, provided the treatment objective has been achieved. The criteria for ending CAT are consistent with those for ending traditional fixed orthodontic treatment. At the end of treatment, the attachments and other auxiliary devices are removed, and retainers are prescribed as usual. Retention is of vital importance in clear aligner treatment. Different retention modalities can be chosen based on patient-specific characteristics, e.g., periodontal condition and caries vulnerability. Patients should be recalled to check tooth alignment, retainer fit, and signs of relapse. CAT is associated with some risks to dental and periodontal health. Caries Poor oral hygiene during CAT can disrupt the oral microbiota, leading to white spot lesions or even caries. However, compared with patients undergoing fixed orthodontic treatment, patients wearing clear aligners have lower levels of white spot lesions, total bacterial plaque, and cariogenic bacteria in the saliva. This may be related to the smaller detrimental effect of clear aligners on oral hygiene. Root resorption CAT may lead to root resorption. However, it has been reported that CAT applies a gentler force, resulting in a lower rate and severity of root resorption than observed with fixed orthodontic treatment.
Factors such as post-treatment root position (relationship with the cortical bone), extraction, tooth position, and specific tooth movement patterns (intrusion and extrusion) are all risk factors for root resorption, with post-treatment root position being the most closely related. Therefore, reducing the risk of root resorption requires limiting root movement to within the cancellous bone and avoiding unnecessary reciprocal movement. Furthermore, clear aligner design software with a root-bone system makes the root-bone relationship visible in the digital design, which helps reduce the risk of root resorption. Periodontal damage Standard orthodontic treatment does not in itself cause periodontal damage. However, orthodontic appliances may increase the difficulty of maintaining oral hygiene, leading to a higher rate of gingivitis and periodontitis. Clinical trials have shown that, compared with fixed orthodontic appliances, clear aligners are more favorable for maintaining periodontal health. Moreover, for cases with an unsatisfactory periodontal status, design changes can help mitigate these risks by decreasing the speed of tooth movement, reducing the coverage of the teeth by the aligners, and prolonging the wearing duration of each set of aligners. Thus, clear aligners are recommended for patients susceptible to gingivitis and/or periodontitis. Meanwhile, alveolar bone defects (fenestration and dehiscence) are also a common complication of orthodontic treatment. A recent study found that the incidence of fenestration in patients treated with clear aligners and fixed appliances was 23.96% and 26.18%, respectively. Another investigation also showed that non-extraction CAT was associated with an increased presence of alveolar bone dehiscence and fenestration. Thus, the root-bone relationship should be considered and evaluated carefully, especially when arch expansion is designed. Relapse After orthodontic treatment, relapse tends to occur because of incomplete remodeling of the periodontal tissues and muscular system. In the literature, relapse has been linked mainly to occlusal stability, the types of tooth movement, root-bone relationships, and the balance of intraoral and extraoral muscle forces, with the type of orthodontic appliance used having minimal impact on relapse risk. The use of retainers and the correction of poor oral habits (such as tongue-thrust swallowing) are currently considered the most effective measures for reducing relapse risk.
The design of clear aligners continues to evolve, taking advantage of novel materials and of the insights generated by global big-data studies, reducing the difficulty of treating complex cases and allowing more patients worldwide to achieve better treatment outcomes with this technology. A novel clear aligner philosophy, namely biomechanics-guided, esthetics-driven, periodontium-supported and TMJ-compatible clear aligner therapy (BEPT-CAT), may be applied in clinical practice to guide aligner treatment planning and execution. Moreover, tiny attachments or attachment-free designs may become feasible, further improving patients' comfort and esthetics during treatment. In the future, individual dental practices may be equipped with devices that allow the elements of these novel designs to be 3D-printed, further increasing treatment personalization. Advances in science and technology are driving progress in orthodontics. Esthetic, comfortable, convenient, and efficient orthodontic treatment will be realized through digitally oriented invisible aligner technology, bringing CAT into mainstream use.
Additives in fibers and fabrics.
2f7825bc-ff99-405c-8971-2aae358ac83b
1475207
Preventive Medicine[mh]
CAR-T Therapy for Pediatric High-Grade Gliomas: Peculiarities, Current Investigations and Future Strategies
cb7ae2db-407c-4f95-a62e-70da757b606a
9115105
Pediatrics[mh]
Pediatric HGGs are among the most common malignant brain tumors in pediatrics and represent the leading cause of cancer-related death in childhood. Traditionally, pediatric and adult gliomas were commonly classified according to the WHO grading, with HGGs including WHO grade III and IV aggressive tumors. Over the last years, key differences in epigenomic and genetic features between pHGGs and their adult counterparts have emerged, despite analogies in aggressiveness and histology. In the 2021 WHO Classification of CNS tumors, pediatric gliomas are differentiated from adult gliomas, building on the 2016 WHO classification, and the term "glioblastoma" is abandoned in the pediatric oncologic setting. Four groups of pHGGs are now described: diffuse midline glioma, H3 K27-altered; diffuse hemispheric glioma, H3 G34-mutant; diffuse pediatric-type high-grade glioma, H3-wildtype and IDH-wildtype; and infant-type hemispheric glioma. Diffuse midline glioma, H3 K27-altered had already been included in the previous classification, but the term "altered" aims to include other mechanisms besides the previously reported H3 K27 mutations. This group encompasses diffuse intrinsic pontine glioma (DIPG), one of the most aggressive types of pHGG. Diffuse hemispheric gliomas, H3 G34-mutant typically arise in the cerebral hemispheres and are characterized by a G34R/V substitution of histone H3 due to a mutation of the H3F3A gene. Diffuse pediatric-type high-grade gliomas, H3-wildtype and IDH-wildtype encompass a heterogeneous group of pHGGs showing neither H3 nor IDH mutations. Finally, infant-type hemispheric gliomas include HGGs occurring in infants and newborns. The latter comprises three subgroups distinguished by molecular features: subgroup 1 involves alterations in one of the genes ALK, ROS1, NTRK1/2/3, or MET; subgroup 2, RAS/MAPK pathway alterations and hemispheric localization; and subgroup 3, tumors with RAS/MAPK mutations arising in midline structures. The molecular characterization of the first subgroup of this class has promoted the investigation of targeted therapies, such as Larotrectinib for NTRK-fusion-positive pHGGs (NCT02576431). Even though our knowledge of the biological and molecular features of pHGGs has increased greatly in recent years, therapeutic approaches remain limited and largely ineffective. Current multimodal treatments encompass surgery, radiotherapy and chemotherapy, reaching 5-year survival rates of less than 20%. New therapeutic strategies are necessary, and novel immunotherapies hold great promise for effective treatment. The principle of immunotherapy relies on restoring the physiological ability of the immune system to recognize and eliminate tumor cells. This goal can be achieved through a wide variety of approaches and, so far, the development of chimeric antigen receptor (CAR)-expressing T cells is one of the most significant advances in this field. CAR-T cells are T lymphocytes genetically modified by either viral vectors (retroviral or lentiviral) or non-viral approaches (Sleeping Beauty transposition) to express a chimeric construct derived from the fusion of the variable portions of a monoclonal antibody single chain to the signal transduction domain of the CD3 ζ chain. This structure combines the specificity of MHC-independent antibody recognition with the antitumor potential of T lymphocytes, thus allowing any antigenic specificity to be transferred to T cells.
In order to potentiate the antitumor efficacy of these constructs, second-generation CAR-T cells have been created by including one costimulatory domain, such as the CD28, 4-1BB or OX40 molecules, resulting in a higher capacity for cytokine production, greater expansion and longer persistence. Subsequently, the combination of two costimulatory domains in third-generation CAR constructs showed a further increase in the activation profile. CAR-T therapy has given outstanding results against several B-cell malignancies and myeloma, in both adults and children. The clinical trials reported so far on CAR-T cells directed towards the CD19 antigen, which is widely expressed by acute lymphoblastic leukemia (ALL) cells, have documented strong antitumor activity even in patients highly resistant to conventional treatments or relapsed after allogeneic transplantation, with complete remission (CR) rates of approximately 80%. Unfortunately, the results obtained so far with CAR-T cells in solid tumors have been less impressive, and fewer clinical trials or case reports have been published. Several limitations can hinder the development of CAR-T cells in solid tumors, including the difficulty of finding a suitable target antigen (the so-called "antigen dilemma"), the strongly immunosuppressive tumor microenvironment (TME), insufficient T-cell trafficking and homing to the tumor, and the limited in vivo persistence of exhausted T cells. The first antigens proposed for targeting pHGGs derive from studies conducted on adult gliomas, in particular HER-2, EphA2 and IL-13Rα2. HER-2 is a tyrosine kinase receptor overexpressed in HGGs, whose expression levels correlate with poor outcome not only in glioblastoma (GBM) but also in other pediatric tumors, such as medulloblastoma. HER-2 is detected at low levels in the normal, healthy brain, whereas it is overexpressed on CNS cancer stem cells, making it an effective and safe target for the treatment of HGGs. Monoclonal antibodies (moAbs) recognizing HER-2 have shown extraordinary results in the treatment of breast cancer, but the blood-brain barrier (BBB) limits their use for brain tumors. Unlike moAbs, T cells can cross the BBB and traffic from the bloodstream to brain tissue and cerebrospinal fluid to recognize tumor cells, as demonstrated by the clinical response of melanoma brain metastases after intravenous infusion of adoptive tumor-infiltrating lymphocytes and by the detection of CD19 CAR-T cells in the cerebrospinal fluid of patients with ALL. Indeed, Ahmed et al. tested the safety of intravenous injection of virus-specific (VS) CAR-T cells directed to HER-2 in a phase I clinical trial in patients affected by high-grade gliomas: with 17 patients treated, including 7 children <18 years, the approach proved safe, with no dose-limiting toxicity (DLT) reported. Interestingly, one of the adolescent patients, with an unresectable right thalamic HGG, showed a 30% reduction of the longest tumor diameter lasting 9.2 months and an overall survival (OS) of 34.2 months after two infusions of CAR-T cells. In order to target CNS tumors, CAR-T cells can also be delivered locally into the tumor resection cavity or the ventricular system, with improved tumor control even at lower doses and with reduced systemic circulation and toxicity, as recently highlighted by Theruvat et al. and Donovan et al. Currently, two clinical trials are evaluating intracranial injection of anti-HER-2 CAR-T cells for the treatment of pHGGs (NCT02442297; NCT03500991).
Preliminary results of the phase 1 trial "BrainChild-01" showed that repeated intracranial infusion of HER-2 CAR-T cells in a cohort of pediatric and young adult patients with CNS tumors is safe, well tolerated, and able to induce an immune response. Two other clinical trials, namely "BrainChild-02" and "BrainChild-03", are evaluating the effectiveness of locoregional infusion of CAR-T cells targeting EGFR and B7-H3, respectively (NCT03638167; NCT04185038). EphA2 is a tyrosine kinase receptor involved in oncogenic pathways in several tumors, including breast cancer, lung cancer and HGG. High expression of EphA2 has been correlated with worse clinical outcomes in adult HGG, and recently the same association was confirmed in pediatric HGGs. Preclinical studies reported promising antitumor activity of EphA2-redirected CAR-T cells for the treatment of HGG. Recently, the results of the first-in-human trial of intravenously administered EphA2 CAR-T cells in patients with high-grade gliomas showed safety and transient clinical responses. However, the trial enrolled only patients >18 years; therefore, the efficacy of EphA2 CAR-T cells in pediatric cohorts remains to be investigated. IL13Rα2 is a monomeric, high-affinity IL-13 receptor detected at high levels in more than 50% of HGGs, and its overexpression is associated with poor outcomes. Brown et al. started a clinical trial to evaluate the efficacy of intracranial infusion of IL13Rα2 CAR-T cells in adult and pediatric patients with HGGs (NCT02208362). The first published results showed the safety of this therapeutic approach and documented a complete radiographic response lasting up to 7.5 months in one patient. Recently, both GD2 and B7-H3 were found to be highly expressed in pediatric diffuse midline gliomas, including DIPG. Interestingly, Haydar et al. established a hierarchy of antigen expression in pediatric brain tumors, showing that, despite high heterogeneity, GD2 and B7-H3 maintain the highest expression compared with IL-13Rα2, HER2 and EphA2. These data suggest the importance of focusing on these targets for pHGGs rather than on antigens mostly relevant in adult gliomas. GD2 is involved in mechanisms of cell growth, motility and invasiveness in several pediatric tumors such as neuroblastoma, Ewing sarcoma and pHGGs. After the promising results of clinical trials using several generations of GD2 CAR-T cells to treat patients with neuroblastoma first, and other GD2-positive solid tumors thereafter (NCT03373097; NCT04099797), this approach is currently under preclinical and clinical evaluation for pHGG as well (NCT05298995). In particular, Mount and Majzner et al. demonstrated uniform and high levels of GD2 on H3 K27-altered diffuse midline glioma cells and developed a second-generation GD2 CAR construct that showed efficient antitumor activity in DMG patient-derived xenograft models, leading to the activation of a phase I clinical trial that is currently enrolling pediatric patients. They published the results of the first 4 patients with H3 K27-altered diffuse midline glioma (DMG) treated in the trial and showed that the toxicity profile strongly depends on tumor location and is manageable and reversible with intensive supportive care. In addition, a reduction of neurological deficits and radiological improvement after CAR-T administration were reported. In particular, in one of the treated patients, affected by a spinal cord DMG, a 90% reduction of the tumor volume was observed after the first intravenous administration, with a further 80% reduction of the tumor dimensions after a second intraventricular injection.
At Bambino Gesù Children's Hospital, we are currently activating a phase I clinical trial to test the safety and, preliminarily, the efficacy of third-generation GD2 CAR-T cells (incorporating 4-1BB and CD28 costimulatory domains), infused intravenously, for the treatment of pediatric patients affected by CNS tumors (NCT05298995). In particular, in view of the correlation between tumor location and toxicity, the study has an innovative design of sequential enrollment of patients into three different arms: the first comprising medulloblastoma and other embryonal tumors (arm A), the second hemispheric HGG (arm B), and the third tumors with a midline location, which carry a higher risk of severe toxicity, namely thalamic HGG, DMG and DIPG (arm C). B7-H3 is a transmembrane protein with important immune-inhibitory functions, belonging to the B7 family of immune checkpoint proteins, and is overexpressed in several pediatric malignancies, including pHGGs. Majzner et al. tested the efficacy of B7-H3 CAR-T cells in xenograft models of several different pediatric solid tumors, including osteosarcoma, medulloblastoma and Ewing sarcoma. Interestingly, they showed potent antitumor activity that was strictly dependent on the antigen density on tumors, which proved to be aberrantly high compared with healthy tissues. This result is extremely important, considering the wide expression of the antigen in normal tissues. Recently, phase 1 clinical trials exploring the safety of either locoregional or intravenous infusion of B7-H3 CAR-T cells for the treatment of pediatric patients with solid tumors, including pHGGs, have started (NCT04185038; NCT04897321). As already mentioned, the success of CAR-T cell therapy relies, among several factors, on the choice of target antigens highly expressed on tumor cells. However, the recognition of a single antigen is limited by the possible occurrence of the so-called "antigen escape" phenomenon: either down-regulation of the target antigen or, in highly heterogeneous tumors, selection of pre-existing antigen-negative subclones under the pressure of treatment can cause subsequent tumor recurrence. This issue is particularly relevant in the setting of pHGGs, which are extremely heterogeneous and often characterized by subpopulations of tumor cells with different antigen expression. For this reason, CAR-T cells able to recognize multiple antigens on target cells, by incorporating two antigen-targeting domains within one CAR construct, have been developed. For HGG, a tandem CAR (TanCAR) simultaneously targeting HER-2 and IL-13Rα2 was designed. TanCAR-T cells showed an elevated capacity to lyse glioma cells in vitro and in vivo and to prevent tumor recurrence in the animal model. Moreover, CAR-T cells targeting three glioblastoma-associated antigens (HER2, IL13Rα2, and EphA2) were also developed and proved even more effective than the bispecific CAR-T cells. Another interesting strategy to overcome antigen escape in HGG is the combination of CAR-T cells with bispecific T-cell engagers (BiTEs) targeting different antigens. Choi et al. developed EGFRvIII-redirected CAR-T cells secreting EGFR-targeting BiTEs at the tumor site. This strategy was tested in mouse models and proved capable of blunting the effects of antigen loss. Moreover, since systemic administration of an EGFR BiTE could induce off-tumor toxicities, the local release by these CAR-T cells reduces the risk of cytotoxic T-cell activity against healthy tissues expressing EGFR.
The complex intra- and inter-tumor heterogeneity of the tumor microenvironment (TME) plays a crucial role in mediating tumor progression and resistance to therapy. Several different cell types, including tumor-associated glioma stem cells (GSCs), stromal cells (resident brain glial cells, oligodendrocytes, astrocytes, ependymal cells, microglia) and infiltrating immune cells, are key regulators of the growth and vascularization of HGGs. In adult HGGs, the TME presents a wide range of immunosuppressive mechanisms preventing tumor recognition and eradication by the innate and adaptive immune system. The presence of immunosuppressive cytokines (TGF-β, IL-10), chemokines, and regulatory immunosuppressive cells, such as tumor-associated macrophages/microglia (GAMs) and myeloid-derived suppressor cells (MDSCs), limits the effectiveness of current therapies; these components are briefly reviewed below. GSCs are characterized by their self-renewal properties and play a key role in drug escape mechanisms, for example by driving radiation-induced DNA methylation changes after radiotherapy. GSCs can promote microinvasion of healthy tissue in areas that support their proliferation, such as the subventricular, perinecrotic and perivascular areas, evading therapeutic agents and contributing to disease recurrence. Recently, a mouse model of the disease derived by orthotopic transplantation of GSCs from pHGG samples was developed. Methylation, histology and clinical outcomes accurately resembled the patients' tumors of origin, indicating the ability of GSCs to differentiate into tumor cells. One of the most innovative approaches targeting the GSC niche in pHGGs is immunovirotherapy. Friedman et al. observed the ability of oncolytic HSV-1 to kill tumor cells and CD133+ GSCs in preclinical models. Results from a phase 1 clinical trial on the use of oncolytic HSV-1 in patients with pHGGs indicated the safety of this approach. Combining T-cell-based therapies with these modern immunological approaches affecting GSCs could significantly improve tumor control. GAMs represent an important component of the glioma microenvironment, constituting the main proportion of infiltrating cells in adult and pediatric HGGs (e.g., glioblastoma and DIPG). Inside the tumor, GAMs usually acquire a specific activation phenotype that favors tumor growth and angiogenesis and promotes invasion of the normal brain parenchyma. Most of the macrophages recruited into the TME polarize toward an M2-like phenotype and suppress the proliferation and functionality of tumor-infiltrating T cells through the production of anti-inflammatory cytokines. In adult HGG, GAMs have been extensively described as pivotal drivers of the progression and survival of malignant cells in the TME. Hence, several anti-GAM approaches are currently under evaluation for the treatment of adult GBM. Recently, Lin et al. studied TME and GAM features specifically in DIPG. Although DIPG tumor cells produce colony-stimulating factor 1 (CSF1), a cytokine associated with the M2 pro-tumorigenic phenotype, DIPG-associated macrophages do not seem to display the characteristics of type 2 macrophages. Transcriptome analyses have shown that GAMs from DIPG express lower levels of some inflammatory cytokines and chemokines (IL6, IL1A, IL1B, CCL3, CCL4) compared with adult HGG-associated macrophages. Moreover, Engler et al.
observed a negative correlation between levels of GAM accumulation and survival rate in adult HGGs but not in pediatric tumors, highlighting the marked biological differences between the pediatric and adult counterparts. In addition, the analysis of the lymphocyte infiltrate in primary DIPG tissues reveals a lack of tumor-infiltrating lymphocytes (TILs), defining it as an immunologically "cold" tumor. For this reason, adoptive immunotherapies (e.g. CAR-T cells) leading to the transfer of T cells into the TME represent a promising strategy to induce a strong inflammatory and antitumor response. MDSCs are involved in immune suppression in several types of cancers, including HGG. Although only a few studies have focused on their role specifically in pHGGs, Mueller et al. recently reported a correlation between high levels of circulating MDSCs and poor prognosis in patients with DIPG, suggesting a role in immunosuppression and tumor escape mechanisms in pHGGs as well. Moreover, MDSCs have been shown to impair the efficacy of immunotherapy in other pediatric tumors such as neuroblastoma, representing a relevant target to improve the efficacy of modern CAR-T cell therapies. Successful immune escape of tumor cells also includes the production of soluble factors in the microenvironment by tumor cells (TGF-β, LDH5), the induction of co-inhibitory molecules (PD-1, LAG-3 and TIM-3) and the release of immunosuppressive factors (CSF-1, VEGF, PGE2, NO, Arg I, IDO and Gal-1). In adult HGG, the production of lactate dehydrogenase isoform 5 (LDH5) and TGF-β impairs the cytotoxic function of NK cells, which is usually relevant for the elimination of glioma cells. Although in adult HGGs there is a strong infiltration of NK and myeloid cells into the tumor, a similar finding in pediatric brain tumors has not been reported. Solid tumors, including pHGGs, display important mechanisms of resistance to current CAR-T cell therapies. An important obstacle to CAR-T efficacy is the presence of immunosuppressive factors in the TME – including, but not limited to, TGF-β and PD-1- or CTLA4-mediated signals – which prevent CAR-T cell expansion and ultimately induce their exhaustion. For this reason, strategies able to circumvent these barriers are needed to improve the effectiveness of CAR-T therapies for pHGGs. Recently, CAR-T cells releasing transgenic cytokines after activation at the tumor site have been developed. This approach was conceived to overcome the insufficient production of pro-inflammatory cytokines by T cells accumulated in the tumor environment. In detail, T cells redirected for universal cytokine-mediated killing (TRUCKs) are fourth-generation CAR-T cells, armed with immune-stimulatory cytokines that improve CAR-T cell expansion and persistence. CAR-T cells can be engineered with an inducible expression cassette for the cytokine of interest, including IL-7, CCL-19, IL-15, IL-18, and IL-1. In particular, transgenic expression of IL-15 could be an appealing strategy to enhance CAR-T cell effector function in HGG patients, thanks to the well-known ability of this cytokine to induce a more memory stem cell-like phenotype in transduced T cells. The group of Krenciute et al., for example, showed increased persistence and greater antitumor activity of IL-13Rα2-CAR-T cells constitutively expressing IL-15 as compared to conventional IL-13Rα2-CAR-T cells. Also, anti-GD2 TRUCKs secreting IL-12 or IL-18 after activation have been developed, showing improved T-cell activation and increased monocyte recruitment in in vitro migration assays.
On the one hand, cytokine release following CAR-T cell activation results in improved CAR-T cell persistence and stronger antitumor activity, both in vitro and in vivo. On the other hand, continuous release of the secreted cytokine could cause toxicities, limiting the clinical therapeutic window of these CAR-T cells. This limitation could be overcome by the development of CAR-T cells expressing constitutively active cytokine receptors, such as the IL-2, IL-7, and IL-15 receptors, able to activate the corresponding intracellular axis instead of releasing the soluble cytokines. In particular, the expression of a constitutively signaling IL-7 receptor (C7R) on an EphA2-CAR-T produced promising results in terms of proliferation, survival and antitumoral activity of CAR-T cells both in vitro and in orthotopic xenograft models of HGG. Another major obstacle posed by the TME to the activation and expansion of CAR-T cells is represented by the widely expressed immune checkpoint receptors – such as PD-1, CTLA-4, TIM-3, and LAG-3. Conventional immune checkpoint inhibitors (ICIs) blocking CTLA-4 (i.e. ipilimumab) or PD-1 (i.e. nivolumab) have shown great success in some solid tumors, including non-small cell lung cancer and metastatic melanoma, but not in HGGs, probably owing to the negligible infiltration of effector T cells in these tumors and to the low mutational burden of pHGGs, which leads to few immunogenic tumor neoantigens. Nevertheless, combined with CAR-T cells, ICIs might improve the ability of the transgenic T cells to exert their antitumor activity, overcoming the exhaustion induced by the TME. In pre-clinical models of glioma, checkpoint blockade has been studied as an adjuvant to improve the efficacy of CAR-T therapy. For example, the combination of HER2-redirected CAR-T cells with an anti-PD-1 antibody enhanced CAR-T cell activity against HGG cells in vitro. Furthermore, Song et al. reported the ability of anti-PD-1 antibodies to improve the antitumoral effects of EGFRvIII-CAR-T cells in a mouse model of HGG, suggesting that PD-1 blockade might represent an effective strategy. Based on these promising pre-clinical studies, two phase I clinical trials are currently investigating second-generation CAR-T cells in combination with pembrolizumab or nivolumab for the treatment of adult patients with HGG (NCT03726515; NCT04003649). Interestingly, the role of some intracellular signaling pathways in the activity of CAR-T cells has been investigated, unveiling new potential approaches to improve CAR-T cell efficacy. For example, diacylglycerol kinase (DGK), a physiologic negative regulator of T-cell receptor (TCR) signal transduction, is able to negatively regulate CAR-T cell activation. Therefore, knockout of DGK can improve the anti-tumor cytotoxicity of CAR-T cells. In a mouse glioma model, EGFRvIII-CAR-T cells lacking DGK showed elevated effector function, with increased antitumor activity and tumor infiltration. Moreover, we recently reported that the co-administration of linsitinib – a dual IGF1R/IR inhibitor – is able to improve GD2-CAR-T antitumor activity and increase tumor cell death in primary cells of H3K27-altered diffuse midline glioma, both in vitro and in vivo. These results support the hypothesis that the use of combinatorial approaches might potentiate the efficacy of CAR-T cells for the treatment of pHGGs.
Another promising and sophisticated immunotherapy approach is represented by oncolytic viruses (OVs): genetically modified viral agents able to replicate in tumor cells with negligible replication ability in non-neoplastic cells. OV antitumoral activity is based on two mechanisms: i) induction of direct lysis of tumor cells through infection and replication; ii) stimulation of the effector function and antitumor activity of T cells in the TME. The latter mechanism has been shown to be extremely relevant, perhaps the most relevant, to the antitumor activity of OVs and represents the rationale for combining OVs and CAR-T cells for the treatment of solid tumors, with the aim of increasing the trafficking and antitumoral cytotoxicity of T cells in the TME. In particular, Huang et al. studied, in a preclinical model, the combination of anti-B7-H3 CAR-T cells with an IL-7-loaded oncolytic adenovirus (oAD-IL7) for the treatment of HGG. The combined strategy promoted T-cell proliferation and antitumoral activity in vitro and reduced mortality in the xenograft mouse models. Conversely, there are some controversial results showing the inefficacy of the combination of OVs and CAR-T cells for HGGs. Recently, it was observed that the pro-inflammatory activity of an OV (VSV-mIFNβ) can impair EGFRvIII-CAR-T cell cytotoxicity against HGG cells. The reason for these unexpected results may lie in the complexity of the inflammatory mechanisms occurring in the TME, which need to be better understood in order to develop precise, effective, and safe combination strategies involving CAR-T cells. Moreover, the group of Park et al. developed an interesting combined strategy exploiting OVs to induce the expression of the CAR target in infected tumor cells, hence increasing CAR-T cell antitumoral activity. In detail, in this approach an engineered OV delivers a transgene leading to the expression of a truncated form of CD19 (CD19t) on the neoplastic cells, promoting the cytotoxic activity of CD19-CAR-T cells. Lastly, reduced T-cell trafficking and homing in the TME has been highlighted as one of the major obstacles to CAR-T therapies. In particular, reduced production of chemokines and modification of the extracellular matrix (ECM) are involved in hindering the migration of T cells to the tumor site. Interestingly, Jin et al. developed CAR-T cells expressing chemokine receptors (CXCR1 and CXCR2) to improve intratumoral trafficking. The results observed in xenograft models of HGG confirmed the efficacy of this approach, unveiling the importance of increased T-cell homing for improving the efficacy of CAR-T therapies. Moreover, the CNS location adds a relevant and peculiar obstacle to the migration of CAR-T cells to the tumor: the presence of the BBB, a permeability barrier characterized by endothelial cells connected through tight junctions, with luminal and abluminal membranes, lining the capillaries of the brain. Despite the documented ability of i.v.-administered CAR T cells to cross the BBB, as already mentioned, targeted delivery of T cells at the level of the CNS is an attractive option to reduce systemic toxicity and increase CAR T-cell concentration at the tumor site. Indeed, as mentioned above, superior efficacy with reduced toxicity of intraventricular/intrathecal administration of CAR-T cells has already been shown, and this strategy represents a valid and promising approach to circumvent the BBB obstacle.
Although all the presented approaches show great potential to improve CAR-T cell function and safety in preclinical models, their use in the clinical setting is limited at present. The results of future clinical trials will shed new light on the potential and limitations of these highly innovative approaches. Conventional therapies, radiation and chemotherapy, are not sufficient to achieve sustained disease remission in patients affected by HGG, both adults and children, and new therapeutic strategies are necessary. Immunotherapy is a new therapeutic approach that harnesses the inherent activity of the immune system to control and eliminate malignant cells. To date, CAR-T cell therapy has shown promise in early clinical trials in HGG patients but could not achieve the same sustained success observed in hematological malignancies. Several hurdles, including the immunosuppressive TME, the heterogeneity in target antigen expression and the difficulty of accessing the tumor site, impair the antitumor efficacy of CAR-T cells. Several CAR-T antigenic targets have been considered so far, and recently GD2 and B7-H3 have looked very promising for pediatric tumors. However, heterogeneity of expression is a limiting factor for single antigen-redirected CAR-T cells in solid tumors. For this reason, innovative strategies such as next-generation CAR-T cells or combinatorial approaches with other immunotherapy agents (e.g. BiTEs, oncolytic viruses and ICIs) could improve tumor control. "Next-generation" or multivalent CAR-T cells have been developed and might have a large impact on the treatment of pHGG, improving the efficacy of T-cell therapies and overcoming the obstacles of the TME. LA, GC, and FB conceived the article. LA and GC compiled the review and prepared the draft of the manuscript. FB, AM, AC, and GB critically reviewed the manuscript. FB edited the manuscript. All authors contributed to the article and approved the submitted version. The work was partly supported by grants from: AIRC (Associazione Italiana Ricerca sul Cancro, My First AIRC – ID 20450 – to FB) and Ministero della Salute (Ricerca Corrente and 5x1000 to FB). The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Evaluation of Publication of Pediatric Drug Trials
0292ded7-d932-4a7d-bca8-4857e034fa8c
8047779
Pediatrics[mh]
Nonpublication of clinical trials compromises the integrity of scientific evidence and represents a breach in ethical obligations to trial participants. Timely publication of trials in the medical literature is especially important for pediatric trials, which are often challenging to conduct because of small participant pools and unique ethical and practical considerations. Pediatric trials frequently fill critical gaps in medical knowledge. Many medications used in pediatric populations, for example, have not been formally tested in children; therefore, essential data on pediatric safety and efficacy are not available. To increase our understanding of publication practices for pediatric studies, we assessed the rate of publication of pediatric drug trials. We also examined the information that was lost when trials were not published. We performed a cross-sectional evaluation of pediatric trials registered in ClinicalTrials.gov. Trials were included if they examined a drug intervention in children younger than 18 years; had a randomized trial design; were registered between January 1, 2014, and June 30, 2016; and were completed or discontinued by June 30, 2018. This date was selected to allow a minimum of 2 years from trial end to publication of trial findings. The study was deemed exempt from institutional review board approval by the institutional review board at Boston Children’s Hospital because it did not involve human participants. This study followed the Strengthening the Reporting of Observational Studies in Epidemiology ( STROBE ) reporting guideline. We considered a trial published if results were reported in a peer-reviewed medical journal. Publications were identified through searches in PubMed, GoogleScholar, Embase, and company websites. If no publication was found, we searched for trial reports in conference abstracts, press releases, thesis documents, trial registries, and preprint servers. We also contacted investigators to inquire about trial status and availability of trial results. Reports from unpublished studies were reviewed for findings on mortality, adverse events, and efficacy of the drug intervention. Data analysis was performed from August 28, 2020, to October 30, 2020. We used SciPy, version 1.5.0 (Python Software Foundation) to perform χ 2 tests and Mann-Whitney tests, with statistical significance prespecified at a 2-sided P < .05. Among 189 pediatric drug trials, academic institutions were the most common funding source (92 trials [48.7%]), and most (120 trials [63.5%]) included an international trial site . Seventy-nine trials (41.8%) remained unpublished after a median follow-up period of 3.6 years (interquartile range, 3.0-4.8 years). These studies accounted for 8395 of 24 338 pediatric participants (34.5%). Publication rates at 2 and 4 years were 33.3% (63 of 189 trials) and 71.7% (109 of 152 trials), respectively. Thirty trials (15.9%) were discontinued, and 6 of 30 discontinued trials (20.0%) and 104 of 159 completed trials (65.4%) were published ( P < .001). The most frequent reasons for discontinuation were insufficient patient enrollment (10 trials [33.3%]), scientific reasons (8 trials [26.7%]), and business decisions (5 trials [16.7%]). Trial reports were identified through online searches (n = 38) and email correspondence (n = 11) for 49 of the 79 unpublished trials (62.0%) . Of these trials, 2 (4.1%) included deaths, 14 (28.6%) included serious adverse events, and 31 (63.3%) included nonserious adverse events. 
Efficacy data for the investigational drug were available in unpublished reports for 43 trials (87.8%). For the entire cohort of 79 unpublished trials, safety or efficacy data were available for a total of 44 unpublished trials (55.7%). In this sample of pediatric drug trials, two-thirds remained unpublished 2 years after trial end, representing considerable loss of scientific information and inefficiency in research practices. More than half of unpublished trials generated safety and efficacy findings, which were largely inaccessible to clinicians and the scientific community. A limitation of our study is that trial reports were available for only 62% of unpublished trials, and results may not be representative of all unpublished studies. Nonetheless, our findings point to the need for additional efforts and incentive mechanisms to ensure that pediatric trials are published in a timely fashion and that participation by children in clinical trials contributes to advances in clinical care.
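As a rough, hedged illustration of the kind of statistical comparisons described in the Methods above (this is not the authors' analysis code; the counts in the first example are taken from the results reported here, while the follow-up times in the second example are invented purely for demonstration):

from scipy.stats import chi2_contingency, mannwhitneyu

# Publication by completion status, using the counts reported above:
# 6 of 30 discontinued trials and 104 of 159 completed trials were published.
table = [[6, 30 - 6],
         [104, 159 - 104]]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, p = {p_value:.4g}")  # p < .001, consistent with the reported comparison

# Mann-Whitney test on two hypothetical distributions of follow-up time (years),
# as one might compare non-normally distributed durations between trial groups.
group_a = [3.0, 3.4, 3.8, 4.2, 4.9]
group_b = [2.9, 3.1, 3.6, 4.0, 4.7]
stat, p = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"Mann-Whitney U = {stat}, p = {p:.3f}")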
Continuous renal replacement therapy in neonates and children: what does the pediatrician need to know? An overview from the Critical Care Nephrology Section of the European Society of Paediatric and Neonatal Intensive Care (ESPNIC)
a631b348-80dc-4e3c-be5b-70849a399d33
10912166
Internal Medicine[mh]
Continuous renal replacement therapy (CRRT) is the preferred method of renal support in critically ill children in the pediatric intensive care unit (PICU) as it allows for continuous and controlled fluid and solute clearance in hemodynamically unstable patients . In contrast, intermittent modalities like hemodialysis are applied in stable patients outside of ICU and in the outpatient setting . The use of CRRT in the PICU has been rising due to standardized definitions, and therefore earlier recognition of acute kidney injury (AKI) as well as fluid overload (FO); moreover, in recent years, a growing number of patients with sepsis, following cardiac surgery or respiratory failure , are at increased risk of AKI. Thus, CRRT has become an important and integral part of modern pediatric critical care. Moreover, in the last decades, technical refinements have made CRRT safer, even in small children, and new dedicated machines for the use in small children have found their way in daily practice . However, a recent survey of the Critical Care Nephrology section of the European Society of Paediatric and Neonatal Intensive Care (ESPNIC) demonstrated that wide practice variations exist in all aspects of CRRT, from timing of initiation, vascular access, modality and dose delivery, anticoagulation method, discontinuation of CRRT, among others, as well as follow-up of PICU survivors . The purpose of this review is to give an overview on the indications and key technical aspects, and discuss major controversies as wells as future research aspects in the management of CRRT. Indications for CRRT The most common indications for CRRT in critically ill children are AKI and FO. AKI is common and occurs in 10% of hospitalized children and 30–60% among critically ill children and is associated with adverse short- and long-term outcomes . Due to introduction of a standardized definition by KDIGO (Kidney Disease: improving global outcomes) and pRIFLE (Pediatric Risk, Injury, Failure, Loss, End Stage Renal Disease) criteria, AKI is more commonly and earlier diagnosed . The AWARE study recently evaluated the incidence of AKI among critically ill children demonstrating that one of four children admitted to 32 PICUs worldwide developed AKI and 12.6% developed severe AKI, defined as stage KDIGO 2 and 3. Moreover, this study highlighted that mortality rises with increasing severity of AKI (11% in severe AKI vs. 2.6% in AKI stage 1 or no AKI) and the need for CRRT . Besides AKI, FO is common in critically ill children . FO is most commonly defined according to the formula described by Goldstein et al.: % FO = Fluid in − fluid out / intensive care admission body weight in kg × 100 . Although FO might occur without AKI, more often, patients at risk of severe FO often are the same who are at risk for AKI and include patients with sepsis, those with cardiac surgery and respiratory failure. The association between the severity of FO and increased mortality has been demonstrated in several studies including a recent meta-analysis which showed a 6% increase in the odds of mortality for every 1% increase in FO . However, in clinical practice, fluid status is often difficult to assess and no absolute surrogate marker for FO exists. Therefore, fluid stewardship and fluid restriction in critically ill children after the initial resuscitation phase are very important to prevent severe FO. Acid–base and severe electrolyte abnormalities are often associated with AKI . 
Severe metabolic acidosis unresponsive to conventional medical therapy might trigger earlier initiation of CRRT, especially in patients with acute respiratory distress syndrome (ARDS) and lung-protective ventilation, as the combination of respiratory and metabolic acidosis may result in severe acidemia. Other electrolyte abnormalities such as hyperkalemia, hyponatremia, and hyperphosphatemia may accompany AKI and should be considered in the decision-making to initiate CRRT. Severe hyperkalemia may occur without AKI, for example in the case of tumor lysis syndrome, and initiation of CRRT is recommended when potassium levels rise above 6.5 mmol/L despite medical therapy. In addition, there are a variety of non-renal indications for CRRT. Elimination of toxins in patients with inborn errors of metabolism is well established, although CRRT is mainly indicated for ammonia removal, as well as in acute liver failure patients, where the early initiation of CRRT to eliminate ammonia and other water-soluble toxins has been associated with improved survival as a bridge-to-recovery or bridge-to-transplant strategy. The elimination of cytokines and inflammatory mediators in sepsis-induced multi-organ dysfunction syndrome (MODS) as an immunomodulatory approach has received increasing attention in recent years, and some adult studies have demonstrated positive effects on survival. For these purposes, the combination of CRRT with other extracorporeal techniques has been used, such as therapeutic plasma exchange (TPE) or the addition of specific filters like Cytosorb® or oXiris®, which add adsorption to the other CRRT mechanisms. Table summarizes the most common indications for CRRT in critically ill children. Timing of initiation Despite the increased use of CRRT, identifying the optimal timing of initiation remains a difficult decision in clinical practice. On the one hand, an early initiation strategy might result in improved outcomes, especially in patients with conditions at high risk for AKI and significant FO; several retrospective pediatric studies have shown an association between increased mortality and a higher degree of fluid overload at CRRT initiation. On the other hand, CRRT is an invasive therapy and may be associated with complications, especially in smaller children. Therefore, a more conservative "wait and watch" strategy may avoid overtreatment, as some patients might recover without needing any CRRT. Modem et al. showed a significant difference towards earlier initiation of CRRT in pediatric survivors when compared with non-survivors (2 vs. 3.4 days). Additionally, the study by Cortina et al. showed that the odds of mortality increased by 1% for every hour of delay in CRRT initiation. However, these were single-center retrospective studies and therefore the results may not be generalizable. The optimal timing of initiation of CRRT has been studied in several RCTs in adult patients, with mixed results. The most recent multicenter AKIKI2 trial showed that a more delayed CRRT strategy led to a reduction in CRRT use; however, the hazard of death at 60 days increased significantly in the more delayed strategy group. Taken together, given the currently available evidence, most experts would advise considering initiation of CRRT in critically ill children with FO > 10% when diuretics are unable to reverse or maintain fluid balance.
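As a brief worked illustration of the fluid-overload formula quoted above (a minimal sketch; the function name and the numbers are hypothetical, not taken from the cited studies):

def percent_fluid_overload(fluid_in_l, fluid_out_l, admission_weight_kg):
    # Goldstein formula: %FO = (fluid in [L] - fluid out [L]) / ICU admission weight [kg] x 100
    return (fluid_in_l - fluid_out_l) / admission_weight_kg * 100

# Example: a 10 kg infant with 4.3 L of cumulative intake and 3.3 L of output
# since PICU admission has (4.3 - 3.3) / 10 x 100 = 10% fluid overload,
# i.e. at the >10% threshold at which many experts would consider CRRT
# if diuretics fail to restore fluid balance.
print(percent_fluid_overload(4.3, 3.3, 10.0))  # 10.0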
Vascular access The performance and delivery of CRRT depend heavily on an efficient vascular access. Vascular access is essential in achieving adequate blood flow rates, which prolongs circuit lifetime (CL) and thus reduces interruptions while optimizing the delivered CRRT dose. However, vascular access may be challenging, especially in newborns and infants, and the availability of adequate dialysis catheters, especially for small children, remains problematic. In order to minimize complications, KDIGO recommends placement of an adequate central line using ultrasound guidance. The most important factors ensuring low resistance during high blood flow rates are the location of the catheter tip and its diameter. The catheter should be long enough so that the tip resides at the superior cavoatrial junction when using the upper-body approach, or in the inferior vena cava when using the femoral approach. The right internal jugular vein is recommended as the first choice because of its straight course into the right atrium, which leads to less contact with the vessel wall and thus better flow and a lower risk of catheter-associated central vein thrombosis. The femoral vein is considered second choice and a good option in pediatrics, as it is easily accessible in children. KDIGO recommends that the subclavian vein should be avoided due to the increased risk of insertion complications and of central vein thrombosis. However, recent literature suggests that cannulation of the left brachiocephalic vein, using the supra- or infraclavicular ultrasound-guided approach, is an excellent choice in neonates and small infants, due to the large caliber of this vessel and to the fact that it is non-collapsible.
Although renal function will recover in the majority of children with AKI, the long-term vascular health of a patient requiring CRRT should always be considered, and in order to prevent thromboembolic complications, a catheter-to-vessel ratio of 45% should not be exceeded . CRRT modality and dose Continuous modalities are most frequently used in critically ill children as it allows for gentle fluid removal and minimizes fluid shifts and therefore is preferred in critically ill children at risk of severe hypotension or cerebral edema . In contrast, intermittent renal replacement therapies like hemodialysis (iHD) are frequently used in stable patients with AKI outside the ICU and in the outpatient setting in chronic kidney disease (CKD). iHD may be useful or preferable in a few clinical scenarios when rapid elimination of small molecules like electrolytes or toxins is required such as in case of life-threatening hyperkalemia, as it allows for rapid solute clearance and ultrafiltration during relatively short treatment sessions . Peritoneal dialysis (PD) is another alternative to CRRT, according to local preferences and expertise . The advantage of PD is that it is possible in newborns, easy to perform without requiring complex technology, and is cheaper than CRRT. However, it is contra-indicated in children with abdominal pathology or surgery, requires a surgical intervention for the catheter insertion, and results in lower clearance and ultrafiltration volumes compared to extracorporeal therapies, as fluid and/or solute removal rates are dependent on the diffusion capacity of the peritoneum. Recently, continuous flow peritoneal dialysis (CFPD) has been proposed as effective technique to improve ultrafiltration in children with AKI and FO . Table summarizes the pro and cons of the different CRRT modalities, as well as its alternatives iHD and PD and Table illustrates the main characteristics of PD compared to extracorporeal therapies. Regarding CRRT, a variety of modalities which differ in their mode of solute clearance may be used . Figure demonstrates the physical principles of diffusion and convection, while Fig. illustrates the different modalities of CRRT. Continuous venovenous hemofiltration (CVVH) relies on the physical principle of convection, where ultrafiltration fluid is eliminated due to a hydrostatic gradient along the semi-permeable membrane and together with the fluid solutes is cleared, a mechanism called “solvent drag.” Convective techniques may provide enhanced elimination of middle molecular weight solutes like inflammatory mediators which might be beneficial in critically ill patients. As high ultrafiltrate rates are necessary to achieve sufficient solute clearance, a replacement fluid is administered pre- or postfilter (pre-dilution or post-dilution). On the other hand, continuous venovenous hemodialysis (CVVHD) relies on the principle of diffusion, where a dialysis fluid runs through the filter and molecules diffuse from blood to dialysate along a concentration gradient. Diffusive modalities allow for very effective clearance of small molecular weight solutes. Ultrafiltration rates are relatively low compared with convective modalities, which allows for fluid elimination without the need for replacement fluids. Continuous venovenous hemodiafiltration (CVVHDF) combines convection and diffusion to effectively clear both small and middle weight molecules and eliminate fluid. This method requires the administration of both a dialysis and a replacement fluid. 
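As a simplified, textbook-style numerical illustration of how these modalities differ (the approximations below are not taken from this review and ignore filter geometry and protein binding): for a small solute with a sieving coefficient SC close to 1, post-dilution convective clearance is approximately K_conv ≈ Q_UF × SC, so an ultrafiltration rate of 2 L/h provides roughly 2 L/h (about 33 ml/min) of small-solute clearance. When replacement fluid is given pre-filter, the blood entering the filter is diluted and clearance falls by roughly the factor Q_B / (Q_B + Q_rep); with a blood flow of 100 ml/min and 33 ml/min of pre-dilution replacement, the same 2 L/h of ultrafiltration therefore delivers only about 2 × 100/133 ≈ 1.5 L/h. For diffusion in CVVHD, the clearance of a small solute that fully saturates the dialysate approaches the dialysate flow rate, K_diff ≈ Q_D.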
Here again, the choice of the modality is dependent on the patient’s condition but also on the institutional preference and resources. Although CVVHDF seems the most effective choice as it combines filtration and dialysis and provides the broadest therapeutic options, it is also the most complex and resource-intense method as it requires frequent flow adjustments, monitoring of electrolytes, and fluid bag changes. Rarely performed is slow continuous ultrafiltration (SCUF), a method based on convection and used primarily for volume management without administration of a replacement fluid. It is used in a few clinical scenarios, for example, in a patient on ECMO to prevent or treat FO . The combined use of ECMO and CRRT is possible in different configurations. CRRT can be performed using a separate dialysis catheter or the CRRT device can be integrated into to the ECMO circuit by connecting the access and return line of CRRT before and after the oxygenator . The CRRT dose corresponds to the effluent flow rate, which consists of the sum of dialysate and total ultrafiltrate flow. KDIGO practical guidelines recommend a delivered dose of 20–25 ml/kg/h . Given the discrepancy between prescribed and delivered dose, a prescribed dose of 30–35 ml/kg/h is considered standard volume CRRT dose. High-volume CRRT (up to 80 ml/kg/h) may be necessary to increase ammonia clearance in newborns with inborn errors of metabolism or children with acute liver failure . Moreover, high volume CRRT has been shown to reduce vasopressor requirements and provide hemodynamic stability by removing inflammatory mediators in sepsis . However, several large adult RCT’s have failed to demonstrate a difference in survival rates in patients treated with high volume compared to standard volume CRRT . Method of anticoagulation The efficacy of CRRT is directly related to the longevity of the circuit as clotting of the circuit increases downtime, leads to blood loss of the patient, and may cause hemodynamic instability during de- and reconnection and increased costs . To increase CL, anticoagulation of the extracorporeal circuit is necessary. Table shows the most frequently used anticoagulation methods in pediatric CRRT and their advantages and disadvantages. Unlike adults, in children, due to small catheters and low blood flow rates, a strategy with no anticoagulation will result in short CL and thus ineffective CRRT . The recent survey of ESPNIC Critical Care Nephrology Section demonstrated that the most frequently used anticoagulation methods in Europe are heparin (41% of responders) and regional citrate anticoagulation (RCA, 35%) . While heparin is widely available, cheap, and easy to administer, it leads to systemic anticoagulation of the patient and thus may lead to bleeding complications. In children, heparin is mainly administered as unfractionated heparin (UFH) and rarely, unlike adults, as low molecular weight heparin (LMWH). In contrast, RCA is a strictly extracorporeal anticoagulation method and, in a recent review of all available pediatric studies, seems to reduce clotting events and prolong CL . In adults, RCA has been shown to prolong CL in several prospective RCTs, and therefore, KDIGO recommends RCA as first-line anticoagulation method in patients who do not have a contraindication for citrate . However, RCA is a complex method and requires strict protocols, well-trained personnel, and frequent monitoring . 
Whenever both heparin and RCA are contra-indicated or not available, prostacyclin can be used as an alternative. Although not frequently used, and therefore with limited experience, it seems a promising agent as it is easy to administer as a continuous infusion into the circuit, does not require monitoring, and has a positive safety profile, with systemic hypotension being the most serious adverse event. Deep et al. reported positive results using prostacyclin as an anticoagulation method in children with acute liver failure. More recently, a synthetic serine protease inhibitor, nafamostat, has been used as an alternative anticoagulant method with promising results among children receiving CRRT. Dedicated neonatal and pediatric machines Most commercially available CRRT machines are not designed and not licensed for smaller children. However, it should be mentioned that, due to technological refinements in modern machines resulting in circuits with smaller extracorporeal volumes and accurate ultrafiltration rates, a high safety level has been achieved. Nevertheless, some concerns remain in small children with a body weight of less than 8 kg. Recently, a dedicated neonatal and infant dialysis machine called CARPEDIEM® has become available and, according to the recent ESPNIC survey, is used in Europe by up to 15% of centers. This machine allows for safe administration of CRRT in newborns due to its accuracy and very small extracorporeal volumes; on the other hand, the diminutive design limits treatment options and does not allow RCA. Moreover, a second machine is needed for larger children and thus the team needs training on two different machines. Therefore, the use of dedicated neonatal machines might be limited to centers with a large neonatal or infant population, for example centers with a large congenital cardiac surgery program. Liberation from CRRT Discontinuation of CRRT, as per the KDIGO guidelines, should be considered "when CRRT is no longer required either because intrinsic kidney function has recovered to the point that is adequate to meet patient needs, or because CRRT is no longer consistent with the goal of care". These guidelines also state that the use of diuretics is not recommended to enhance kidney function recovery or to reduce the duration of CRRT. The clinical indicators for discontinuation of CRRT include increased urinary output, resolution of fluid overload, and the patient no longer receiving vasoactive medications. Clinicians should consider a "filter holiday" if spontaneous urine output is > 0.5 mL/kg/h and fluid status, acid–base status, and electrolytes are controlled. The ESPNIC survey confirms that, among the abovementioned, the increase in native urine output and the resolution of FO were the 2 factors most often associated with the decision to perform a trial of liberation from CRRT.
In hospital outcomes Several studies have reported on the outcomes of critically ill children requiring CRRT over the last 20 years. Overall reported PICU mortality in these studies was high, ranging from 35 to 64%, and thus rivals mortality in adults. However, most of these studies were single-center studies and the patient populations differed significantly between studies. Two larger multicenter studies from the North American prospective pediatric continuous renal replacement therapy registry (ppCRRT) found mortality rates of 42 and 43%, respectively. In more recent studies, however, mortality seems to have decreased slightly. This effect may be caused by the overall increased use of CRRT among less sick children and/or earlier initiation of CRRT due to better recognition of AKI in critically ill children. Critically ill children requiring CRRT are a heterogeneous group with different diagnoses, and several pediatric studies have shown that outcome is mainly related to the underlying disease, severity of illness, presence of MODS, and the degree of FO at CRRT initiation. The highest mortality rates have been described in children with onco-hematologic disease (50–80%), especially after stem cell transplantation, in patients with liver disease (50–69%), cardiac disease/surgery (35–62%), and sepsis (33–44%). On the other hand, excellent outcomes with low mortality rates have been reported in children with metabolic disease (10–27%) and primary renal disease (6–34%). In addition to increased mortality, CRRT has been associated with increased length of mechanical ventilation and ICU stay. Long-term outcomes Historically, it was believed that patients who recovered kidney function after AKI had benign long-term outcomes. There is growing evidence demonstrating that these children are also at risk for adverse long-term outcomes such as CKD, proteinuria, hypertension, increased healthcare utilization, and mortality. In a systematic review of pediatric AKI studies, the pooled long-term incidence of proteinuria was 13%, hypertension 7%, abnormal GFR (< 90 mL/min/1.73 m²) 28%, and end-stage kidney disease 0.4%. Together, these studies highlight the importance of kidney health surveillance after episodes of childhood AKI. Follow-up of AKI survivors Current AKI follow-up care is inadequate due to low rates of AKI recognition in hospitalized children, suboptimal documentation of AKI events and follow-up recommendations in discharge summaries, lack of awareness of AKI and its consequences at both the patient and provider level, lack of clear post-AKI follow-up care guidelines, and limited access to pediatric nephrology clinics.
In a population-based study in Ontario, Canada, from 1996 to 2017 involving ~1700 children who received dialysis for AKI, nephrology follow-up was suboptimal (19% by 1 year and 27% by 10 years). However, most dialysis-treated AKI survivors (97%) had at least one outpatient physician visit by 1 year. Similarly, < 25% of PICU patients with AKI were followed up by a pediatric nephrologist in a 5-year period after discharge, but > 95% had an outpatient physician visit within 1 year. These data suggest that general pediatricians and primary care providers should be targeted for knowledge translation strategies related to post-AKI follow-up care. Strategies to improve post-AKI follow-up must be initiated at the time of hospitalization and should include increased recognition (e.g., electronic health record alerts and provider education), improved documentation of the AKI episode in discharge summaries, effective communication to primary care providers regarding the care plan, and education of patients and their families about the AKI event, potential long-term risks, and the need for regular follow-up. However, the optimal timing and content of post-AKI follow-up care remain unclear. KDIGO guidelines suggest evaluating "patients 3 months after AKI for resolution, new onset, or worsening of pre-existing CKD" (ungraded recommendation). Generally, children with severe AKI (stage 3 or receiving dialysis), prolonged AKI duration (≥ 7 days), and/or incomplete recovery should be re-assessed soon after discharge, and nephrologist referral should also be considered for these children. General pediatricians or primary care providers can follow up those with less severe AKI.
This review, from the Critical Care Nephrology section of the European Society of Paediatric and Neonatal Intensive Care (ESPNIC), provides an overview of current recommendations regarding key aspects of CRRT delivery which might be of interest for general pediatricians. Additionally, we intended to stress the importance of adequate follow-up of PICU patients with AKI and CRRT as recent findings demonstrate that these children are at increased risk for adverse long-term outcomes.
The use of medicinal plants to prevent COVID-19 in Nepal
508be173-83af-4f10-aade-c97a1228efb1
8027983
Pharmacology[mh]
The new coronavirus disease (COVID-19) pandemic has caused global socioeconomic disturbances with a worrisome number of deaths and health issues, and the world has been struggling to find medicine to treat and prevent COVID-19 . A number of combinations and trials have been done, but so far, they have not produced promising results . The different types of misinformation related to COVID-19 have been spreading throughout the world through social media , including use of medicinal plant products to prevent or cure COVID-19. Due to this situation, ethnobiologists should collaborate with local people and document the medicinal plants used with caution to stop the inaccurate sharing of information . There is a strong inter-relationship between people and plants according to needs . People are dependent on plants for different purposes such as for food, medicine, and houses . Plant species have always been a fundamental source for the discovery of drugs . People had used medicinal plants to fight against pandemics in the past , and dependency of people on medicinal plants might have increased in these days around the world as medicinal plants can be an alternative option to prevent COVID-19 . Different researchers have suggested herbal medicine as a potential option to cure or prevent COVID-19 . Countries like China and India are integrating their use with western medicine to boost the immunity power of COVID-19 patients . In China, traditional medicine showed encouraging results in improving symptom management and reducing the deterioration, mortality, and recurrence rates . On the other hand, the World Health Organization (WHO) (2020) claims medicinal plants might be good for the health and in supporting the immune system, but not in preventing or curing COVID-19. The WHO Africa (2020) claims unscientific products to treat COVID-19 can be unsafe for people, as they may abandon self-hygienic practices, may increase self-medication, and may be a risk to patient safety. Lifestyle, diet, age, sex, medicinal conditions, and environmental factors have been playing an important role in the personal fate towards the severity of COVID-19 . The source of information, such as social media, plays an important role to combat pandemics . People receive information regarding COVID-19 and other diseases from different sources including the social media, local people, national health authorities, and the WHO, based on respondent characteristics such as age and gender as well as occupation, state of their living, and primary mode of disease treatment method . In Nepal, the medicinal plants are often used in the traditional medicine system, which includes Scholarly medical system (The Ayurveda, homeopathy, the Unani, and the Tibetan medicine), Folk medicine (ethnomedicine, community medicine, household medicine, and any other forms of local medicines), and Shamanistic (Dhami-jhankri, Jharphuke, Pundit-Lama-Pujari-Gurau, and Jyotish). Among them, folk medicine system is using more medicinal plants in Nepal . The first scientific research published in ethnobotany is dated back to 1955 . More than 80% of the people in Nepal have been using traditional medicine such as medicinal plants . Medicinal plants are the primary source of healthcare for the people in Nepal and are an integral part of their culture . Most of the people in Nepal have been using medicinal plants as the alternative to allopathic or western medicine . 
It has also been playing an important role in increasing the economic level of people as Nepal exports medicinal plants to different countries in the world . The elder people living in rural areas have more knowledge of traditional medicine . In Nepal, COVID-19 cases are increasing daily but the health care system is fragile and has a lack of infrastructure . In this context, home remedies, like the use of medicinal plants supported by the relevant authorities, can serve as an alternative option to combat COVID-19. The Nepal government has also valued medicinal plants as an immunity power booster used with prescriptions . But, there a considerable amount of false information spread in Nepal regarding the use of medicinal plants and people are randomly using plants which can go against the traditional methodology and make it difficult to combat COVID-19. The present study has attempted to reveal the status of medicinal plant use in Nepal during COVID-19. Specifically, this study is aimed to address the following objectives: (1) document the status and source of medicinal plants used to prevent COVID-19, (2) know the relationship between the number of plants reported and covariates, and (3) know the relationship between information sources respondents follow and respondent characteristics. Methods of data collection A set of questionnaire forms were prepared by Google Form developer. The Google Form was initially tested to validate and understand the response rate from respondents. We followed the code of ethics of the International Society of Ethnobiology . We wrote a consent message to all the people we reached with the form and also placed clearly written consent message at the top of the form. Additionally, we asked a consent question at the beginning of the form for written consent from each respondent. The Google Form was circulated through social media (such as Facebook) and emails in our friend circles asking them to circulate the form with consent message at first as much as possible and inform us whether the form has been sent to others. From our friend circles’ help and our efforts, we reached a total of 998 people throughout the online survey in June 09, 2020, to July 18, 2020, in which a total of 774 (77.55%) people filled the form in different parts of Nepal and provided information about the different variables (Table ) used for the study. Sample population A total of 774 respondents participated in the survey, of whom 407 (52.58%) were from the urban area and 367 (47.42%) were from the rural area. The age of the respondents varied from 16 to 76 years. Among them, 65.51% were below 30 years of age; all of the respondents were literate, and most of them (69.5%) had attended University. There were more male respondents (60.85%) than female (Table ). Data analysis The status of medicinal plants used during COVID-19 (increase, decrease, same, and never used) and recommendation of medicinal plants (strong, moderate, low, and never) was calculated and shown in the bar graph using Microsoft Excel 2013. The medicinal plants recorded were tabulated in the table with respective scientific, local, and English names with their family and parts (root, stem, leaves, rhizome, roots) used. The scientific names from local name identification followed the Dictionary of Nepalese plant nam e and ethnomedicine study from Nepal , and the family assignation in this paper followed the TROPICOS . 
Finally, plant species identifications were reaffirmed by taxonomic experts from Tribhuvan University, Nepal; the collected herbarium specimens were deposited in the National Herbarium and Plant Laboratories (KATH), Godawari, Lalitpur, Nepal, and specimen codes are presented in a table for each species. For all species, the frequency of citation (FC) and relative frequency of citation (RFC) were calculated following Tardio and Pardo-de-Santayana (2008) : $$\mathrm{RFC}=\frac{FC}{N}$$ where FC is the number of respondents who mentioned the use of a species and N is the total number of respondents who took part in the survey. The RFC results and the top 10 medicinal plants used are presented in a radar diagram prepared with Microsoft Excel 2013. The Shapiro test, Kruskal-Wallis test, Wilcoxon test, chi-square test, and related diagrams were produced using R . The Shapiro test was used to assess the normality of the data. As the number of plants reported was not normally distributed, the Kruskal-Wallis test was used to examine the relationship between the number of plants reported and occupation, education level, primary treatment mode, and age class. The Wilcoxon test was used to assess differences in the number of plants reported by gender and by place of living during the COVID-19 pandemic. The relationship between the information sources respondents follow and respondent characteristics was shown graphically and analyzed using the chi-square test.
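To make the computation explicit, the sketch below re-implements these descriptive and inferential steps in Python. This is not the authors' original analysis (they report using Microsoft Excel 2013 and R); the file names, column names, and grouping variables are illustrative assumptions.

```python
# Hypothetical Python re-implementation of the descriptive and inferential steps
# described above; the authors report using Microsoft Excel 2013 and R.
# File layout, column names, and grouping variables are illustrative assumptions.
import pandas as pd
from scipy import stats

# citations: one row per respondent, one 0/1 column per species (1 = species cited)
# covariates: one row per respondent (e.g., gender, education, residence)
citations = pd.read_csv("citations.csv")
covariates = pd.read_csv("covariates.csv")

N = len(citations)                    # total respondents (774 in this survey)
FC = citations.sum(axis=0)            # frequency of citation for each species
RFC = FC / N                          # relative frequency of citation
print(RFC.sort_values(ascending=False).head(10))   # ten most cited species

# Number of plant species reported by each respondent
n_plants = citations.sum(axis=1)

# Shapiro-Wilk test for normality of the per-respondent counts
print(stats.shapiro(n_plants))

# Kruskal-Wallis test: number of plants reported vs. education level
groups = [n_plants[covariates["education"] == lvl]
          for lvl in covariates["education"].unique()]
print(stats.kruskal(*groups))

# Wilcoxon rank-sum (Mann-Whitney U) test: number of plants reported vs. gender
male = n_plants[covariates["gender"] == "male"]
female = n_plants[covariates["gender"] == "female"]
print(stats.mannwhitneyu(male, female, alternative="two-sided"))
```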
Status of medicinal plant use Out of 774 respondents, 323 (42%) agreed that the use of medicinal plants had increased during COVID-19, whereas 313 (40.44%) agreed that use during COVID-19 was the same as under normal conditions (Fig. ). Most respondents, 349 (45.09%), believed that information/knowledge about medicinal plants had increased during COVID-19, 333 (43.02%) believed it was the same as usual, and 93 (11.89%) reported being confused about the use of medicinal plants (Fig. ). A total of 670 (86.5%) respondents recommended medicinal plants to prevent COVID-19, whereas 104 (13.4%) did not; most made a moderate recommendation (Fig. ). Medicinal plants recorded A total of 60 species of medicinal plants from 36 families and 54 genera were documented. Among them, the most common families were Apiaceae (6 species), Zingiberaceae (4 species), Amaryllidaceae (4 species), and Lamiaceae (4 species), and the most common genera were Allium (3 species), Terminalia (2 species), Mentha (2 species), Cinnamomum (2 species), and Syzygium. Likewise, the most frequently reported species was Zingiber officinale (39.79%), followed by Curcuma angustifolia (34.11%). Habit analysis showed that the medicinal plants were herbs (56.67%), trees (25%), shrubs (11.67%), and climbers (6.67%) (Table ). Leaves (33.68%) were the most commonly used parts, followed by seeds (23.33%), fruits (21.67%), roots (13.33%), rhizomes (11.67%), whole plant (8.33%), bark (6.67%), stem (1.67%), and bulb (1.67%) (Fig. ). The most common method of preparation was to grind the plant parts, boil them with water or milk, and drink the preparation. Relative frequency of citation The relative frequencies of citation ranged from 0.001 to 0.398, and for the ten most cited species the values ranged from 0.03 to 0.398. The most cited species was Zingiber officinale (cited 308 times; frequency of citation 0.398), followed by Curcuma angustifolia (cited 264 times; frequency of citation 0.341) (Fig. ).
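As a quick arithmetic check of the figures reported above, applying the RFC definition from the data analysis section to the two most cited species gives: $$\mathrm{RFC}_{Zingiber\ officinale}=\frac{308}{774}\approx 0.398,\qquad \mathrm{RFC}_{Curcuma\ angustifolia}=\frac{264}{774}\approx 0.341$$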
Source and cultivating conditions of medicinal plants The respondents had mentioned that they were getting medicinal plants from home gardens (45.61%), markets (32.03%), and jungles (10.73%), and the remaining respondents were getting medicinal plants from all of the above three sources. Most of the respondents were also cultivating (47%) more medicinal plants during COVID-19 than before, and few have just started (3%) (Fig. ). Number of plants reported and covariates The number of reported plants used by individual respondents ranged from 0 to 12 (Fig. ). In the occupational category, people who were engaged in agriculture and those with jobs used comparatively more medicinal plants than others, but the difference was not significant (Kruskal-Wallis, χ 2 = 7.921, df = 5, p = 0.1606). The people with university-level education were using more plant species compared to people with secondary-level and primary-level education, and the differences were statistically significant ( Kruskal-Wallis, χ 2 = 50.736, df = 2, p = < 0.0001 ). The people living in the city were using more plants than people living in the village, which was statistically significant ( W = 85818, p = 0.0002). The people whose primary method of treatment was allopathic were using a statistically significant low number of plants (Kruskal-Wallis, χ 2 = 32.524, df = 3, p = 0.0001) compared to the respondents whose primary methods of treatment were Ayurvedic and homeopathic. The female respondents were using more plants than males; the difference in the use of plants by males and females was statistically significant ( W = 77489, p = 0.03864). Age group of 20–29 and below (< 20) reported more number of species being used. The number of medicinal plant species reported was statistically significantly different among the age groups (Kruskal-Wallis, χ 2 = 25.484, df = 6, p = 0.0003). Information sources People are using different sources to prevent COVID-19, such as social media like Facebook Twitter, official information from the World Health Organization, the national health authorities, and local communities (Fig. ). The information adopted from social media is risky but in significant proportion, more than 25% of secondary education respondents and female respondents are using social media information, and there was a statistically significant relationship between information source and gender ( χ 2 = 8.0304, p = 0.0459). The relationship between information source and education was statistically significant ( χ 2 = 34.714, p = 0.0005). The jobless people were following the local community for obtaining information (more than 50%), and the relationship between the source of information and occupation was marginally significant (χ 2 = 23.863, p = 0.0699). The people living with their families were depending more on local communities and social media for plant use information (more than 50% and 25% respectively), and the relationship between the source of information and living with the family was statistically significant ( χ 2 = 7.9621, p = 0.0445). The people who using Ayurvedic as the primary treatment were mainly following information provided by the communities (more than 50%), and there was a statistically significant association between the information source and the primary treatment method ( χ 2 = 17.406, p = 0.0095). 
People living in the city and people living in the village followed similar sources of information during the COVID-19 lockdown, and there was no significant association between information source and place of living during lockdown (χ2 = 4.6375, p = 0.2054).
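The associations between information sources and respondent characteristics reported above are chi-square tests of independence on contingency tables. The snippet below is a minimal, hypothetical Python illustration of one such test (the authors performed these analyses in R; the data file and column names are assumptions):

```python
# Chi-square test of independence between information source and gender;
# an illustrative sketch only, not the authors' original R analysis.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("covariates.csv")           # assumed columns: info_source, gender
table = pd.crosstab(df["info_source"], df["gender"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.4f}, dof = {dof}, p = {p:.4f}")
```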
Status and sources of medicinal plant Medicinal plants have attracted the attention of several stakeholders around the world . They have chemical diversity and can play a significant role in new drug development . In this study, the majority of respondents in Nepal reported that the use of medicinal plants had increased during COVID-19, believed that information about medicinal plants had increased, and mostly recommended medicinal plants to prevent COVID-19. Researchers such as Rastogi et al. (2020) and Vellingiri et al. (2020) have claimed that medicinal plant-based treatments should be beneficial for treating and preventing COVID-19 . Yang et al. reported that plant species traditionally used as food can help to enhance the immune system of the body and help to prevent the manifestation of COVID-19 . In the past, medicinal plants were combined with western medicine to treat a similar disease, severe acute respiratory syndrome (SARS) . There is still no effective medicine available for the treatment of COVID-19, and medicinal plants are being used globally, which might have increased the demand for them .
Some plants are useful for treating viral diseases, but COVID-19 is a new disease, and the effectiveness of medicinal plants against it has not yet been tested. The excessive use of medicinal plants could therefore be problematic and is a matter of concern. Easy access to social media, which often carries unreliable advertisements, might play a role in the increasing use of medicinal plants. Moreover, the local availability of medicinal plants and the incorrect belief that medicinal plants have no side effects might also be responsible. All stakeholders, including ethnobotanists and community leaders, should come together to educate people about the proper use of medicinal plants. Medicinal plants recorded and frequency of citation We recorded a total of 60 plant species, and most of the species were similar to those in a study based on a preliminary survey of five heavily affected cities (Wuhan, Milan, Madrid, New York, and Rio de Janeiro) and twelve less-affected rural areas (Appalachia, Jamaica, Bolivia, Romania, Belarus, Lithuania, Poland, Georgia, Turkey, Pakistan, Cambodia, and South Africa), which recorded 193 plant taxa from 69 families . A study in Morocco recorded a total of 23 species, including some similar species, viz. Allium sativum, Allium cepa, and Zingiber officinale . A study from India recorded 15 species . A study from China screened 26 medicinal plants for the possible treatment of COVID-19 ; likewise, other studies from China have discussed medicinal plants similar to those in our study . A study from Bangladesh screened 149 plants from 71 families and found that they contain potential molecules for preparing a drug for the treatment of COVID-19 . Most of the species reported in this study are locally available home garden species used for daily food at home. Leaves were the most used plant parts, corroborating the findings of other related studies in Asia . The use of leaves is mainly due to the presence of active secondary metabolites . Underground parts, such as roots and rhizomes, are rich in bioactive constituents . However, indiscriminate use of underground parts might lead to conservation threats, particularly for wild species . Similarly, excessive use of bark and use of the whole plant might create problems for conservation. The citation of species might have been influenced by social media as well as by cultural, religious, and community leaders within Nepal and neighboring India. For instance, the famous Hindu Swami Ramdev of India has suggested that Tinospora cordifolia boiled in water, Curcuma angustifolia, Zanthoxylum armatum powder, and Ocimum tenuiflorum leaves can prevent COVID-19 (reported in India TV News of 14 March 2020). The most cited species in this study are also the most commonly used species in Nepal, such as Zingiber officinale, C. angustifolia, and Allium sativum .
These species are planted in almost every household in rural Nepal and are also listed by the Nepal Ministry of Health & Population, Department of Ayurveda & Alternative Medicine, Teku, Kathmandu, as alternative medicines to boost people's immunity . Plants such as Curcuma angustifolia, Cuminum cyminum, Allium sativum, Terminalia bellirica, Z. officinale, O. tenuiflorum, Cinnamomum species, Piper nigrum, Vitis vinifera, and Citrus spp. were also recommended by the Indian Government to boost immunity, although the Government does not claim that they cure or treat COVID-19 . Some of the medicinal plants used might show a placebo effect in the treatment of diseases like COVID-19, depending on multiple factors such as psychological factors . The medicinal plants reported in this study contain various chemical compounds and constituents that have proven useful in treating different diseases and ailments. T. bellirica, Cinnamomum species, Piper nigrum, dry Z. officinale, and raisin contain phytonutrients, chlorophyll, vitamins, minerals, eugenol, and other bioactive compounds; Z. officinale contains sesquiterpenes . The chemical constituents 8-gingerol and 10-gingerol from Z. officinale were active against COVID-19 . COVID-19 patients might experience a cytokine storm , and Curcuma species such as C. angustifolia and C. caesia have the capacity to block cytokine release . Allium sativum contains sulfoxides, proteins, and polyphenols, including bioactive sulfur-containing compounds that are antiviral with immunostimulatory potential . Tinospora cordifolia has alkaloids, glycosides, lactones, and steroids with immunomodulatory roles and can be used to treat fever, chronic diarrhea, and asthma . Citrus species contain polysaccharides and polyphenolic compounds that improve the immunity of the body . Ocimum species such as Ocimum tenuiflorum yield extracts containing Tulsinol (A, B, C, D, E, F, G) and dihydrodieugenol, which possess immunomodulatory and angiotensin-converting enzyme 2 (ACE2)-blocking properties that may inhibit replication of the coronavirus . Phyllanthus emblica is antioxidative and anti-inflammatory, and its extract Phyllaemblicin G7 has potential for treating COVID-19 . Azadirachta indica extracts Nimbolin A, Nimocin, and cycloartanols (24-methylenecycloartanol and 24-methylenecycloartan-3-one) have shown potential to inhibit COVID-19 . Mentha arvensis possesses eugenol, terpenes, and flavonoids, which are good antioxidants and modulators of xenobiotic enzymes that may help to inhibit COVID-19 . Cinnamomum species such as Cinnamomum verum contain antioxidant and antiviral compounds (eugenol, cinnamic acid, caryophyllene) that might help to inhibit COVID-19 . Species with a lower frequency of citation are also useful in some ways; Camellia sinensis has immunomodulatory properties due to the presence of epigallocatechin gallate, quercetin, and gallic acid in its leaves , and Euphorbia species such as Euphorbia thymifolia have antioxidant and antiviral activities . Functional foods such as Allium cepa, Nigella sativa, Carica papaya, and other species possess immunomodulatory properties in several ways and help in effective health management if taken in an adequate manner . However, there is no proper research or scientific evidence supporting the claim that medicinal plants can prevent or cure COVID-19. The use of medicinal plants is traditional and has a long history with its own theory, like traditional Chinese medicine, whose composition is typical and complicated.
A creative evaluation system should be developed before these remedies are used to prevent or treat COVID-19 . Some researchers have suggested that natural products obtained from plants might be an alternative option for treating COVID-19 . At present, however, various unproven medicines, including herbal medicines, are among the only options available to protect vulnerable patients; such medicines should not be overlooked, but neither should they be taken without a prescription from health personnel . The effectiveness of the above-mentioned medicinal plants should be tested scientifically and then incorporated into the discovery of drugs to treat COVID-19. Source and cultivating conditions of medicinal plants Most of the respondents obtained medicinal plants from home gardens or farms. It is interesting that people were cultivating more medicinal plants during COVID-19, which is a positive sign for the development of gardening and farming practices in the country. This type of activity will support the sustainable conservation of medicinal plants. However, collecting medicinal plants from the jungle will cause several issues for the conservation of plants . Different actions can be taken for the conservation and sustainable use of such species, including assessing the conditions of plant use and their occurrence, as well as policy formation . Some people have also just started to plant medicinal plants, which is a good sign for sustainable livelihoods in Nepal. Number of plants reported and covariates The use of medicinal plants depends on several covariates, such as occupation, education level, age class, living conditions, and the treatment methods that people usually follow. Sociocultural acceptance varies among different places and communities . People living in villages in Nepal mostly live with their families, and studies have found that knowledge of medicinal plant use usually comes from families . During COVID-19, well-educated people reported more medicinal plants in Nepal, contrary to the results of other studies, which found that well-educated people often rely on modern medicine for treatment . Females reported more medicinal plants than males, similar to other studies , probably because women are more involved in household work and invest more time in the kitchen, in caring for their family, in food and health, and in farm work such as cutting grass and collecting fodder. People working in agriculture reported a higher number of medicinal plants, which may be because they have easier access to them; in Nepal, people with agricultural occupations and living in rural areas used more traditional methods to stay healthy . Job holders also reported comparatively more plants. Interestingly, youths (age groups below 30) reported using more medicinal plants, probably because they lived with their families and learned about medicinal plants from their elders; this group is also the most active on social media. Most respondents also claimed that they became more aware of medicinal plants during COVID-19, which is a good sign, as Tiwari et al. (2020) have noted that young people are forgetting the use of medicinal plants. However, misunderstanding of medicinal plants is also dangerous, and stakeholders need to provide accurate information to young people . Young people should follow reliable sources to obtain information about medicinal plants. People who primarily use Ayurvedic and homeopathic remedies reported more medicinal plants.
Information sources and respondent characteristics The source of information is key to the use of medicinal plants, and it is not advisable to rely on social websites, as the usefulness and accuracy of messages regarding COVID-19 provided by social media such as YouTube have not been tested . However, in this study, a large number of respondents were found to be using social media to obtain information regarding COVID-19. Most of the people were not relying on the WHO and national health authorities, similar to the study of Bhagavathula et al. . Many well-educated people, females, job holders, people living with their families, people following allopathy as a primary treatment, and people living in villages follow social media to obtain knowledge of prevention methods and use medicinal plants based on these sources, which might be incorrect and thus harmful. This is because frequent use of social media, and the practice of consulting several social media sources, have caused information overload and increased people's concerns . This study recommends using the official websites of the WHO and national health authorities to obtain information regarding COVID-19. Many people also rely on their communities for guidance on the use of medicinal plants, which might distort traditional practice. It is therefore unwise to adopt unscientific sources of information and to use medicinal plants without guidance. The correct use of medicinal plants is passed from generation to generation, which usually applies to long-known diseases; no valid medicine has yet been developed to prevent or cure COVID-19. The COVID-19 pandemic has created a large crisis, and it requires large-scale behavior change . For instance, we need to change our behavior and follow valid information when adopting preventive measures against COVID-19. Collaboration between diverse stakeholders, such as the government, volunteers, the public, and other sectors, is necessary to transmit information and respond to the crisis by improving information flow . Further studies on herbal remedies are needed; these would help in preparing an antiviral drug against COVID-19 and in preventing departures from the traditional methodology related to the use of medicinal plants . There is an urgent need to disseminate a high level of public awareness to prevent misinformation regarding treatment and prevention measures for COVID-19 . Limitation of the study This is an online survey-based study. The questionnaire was mostly circulated among our educated social network colleagues, as they could read and understand the issues, provide their consent, and fill in the form, similar to other studies from around the globe. This might introduce some bias into the study, but during an extreme situation (such as the COVID-19 lockdown) this is one of the main practicable ways to collect information. Researchers have reported that well-educated people prefer modern medicine, but during COVID-19 educated people were aware of medicinal plants as an opportunistic medicine . This behavior of educated people helps to increase their interest in medicinal plants. Further, a field-based study might capture responses from all levels and classes of people, with quantification of uses.
This study found that the use of medicinal plants, and the beliefs related to them, increased during COVID-19. A total of 63 medicinal plant species used to prevent COVID-19 were investigated and recorded. Plants frequently used in the home were reported more often than other plants. The cultivation of medicinal plants increased during COVID-19. The use of medicinal plants was associated with social and demographic variables, and the sources of medicinal plants likewise varied with the social and demographic characteristics of the respondents. This study recommends further investigation of the medicinal plants used during COVID-19: their validity and reliability should be tested by phytochemical and pharmacological research, and invalid information should be monitored and controlled on social media platforms and in communities. It is recommended that people follow information from authentic sources related to the COVID-19 pandemic.
Optimal PSA density threshold for prostate biopsy in benign prostatic obstruction patients with elevated PSA levels but negative MRI findings
9c3d61f3-a08a-4124-8d6f-680eb8e2e748
11874838
Surgical Procedures, Operative[mh]
Prostate cancer (PCa) is the second most commonly diagnosed solid malignancy worldwide, with an ever-increasing incidence . The definitive diagnosis of prostate cancer relies on histopathological verification through prostate biopsy. Currently, the indication for prostate biopsy is determined by prostate specific antigen (PSA) levels, PSA density (PSAD), other biomarkers and/or suspicious digital rectal examination (DRE) and/or imaging. According to the EAU guidelines, multiparameter prostate MRI (mpMRI) should be performed before considering prostate biopsy in biopsy-naïve patients with clinical suspicion of PCa . If the mpMRI demonstrates lesion(s) with features suggesting PCa, both systematic and targeted biopsy should be performed. If the mpMRI is negative, the decision to proceed with prostate biopsy should be based on the degree of clinical suspicion of PCa, which comprises indications from PSA levels, PSAD and other biomarkers. In situations where the clinical suspicion of PCa is low, prostate biopsy could be omitted when a shared decision is made with the patient. In clinical practice, there is a large group of patients with benign prostate hyperplasia (BPH) who have indications for surgery to relieve bladder outlet obstruction. These patients present with elevated PSA levels but negative subsequent mpMRI results. Current research has proposed different PSAD thresholds (such as 0.10 ng/ml/cc, 0.15 ng/ml/cc, 0.20 ng/ml/cc, and even higher) as cutoffs to determine prostate biopsy results for patients with negative magnetic resonance results . However, no unanimous conclusion has been reached, and these studies do not specifically focus on BPH patients with bladder outlet obstruction. Since BPH itself can cause elevated PSA levels, the findings from patients without BPH may not be applicable to those with BPH. Therefore, there is currently a lack of evidence to guide prostate biopsy for BPH patients with elevated PSA levels and negative mpMRI results before proceeding to BPH surgery. Additionally, prostate biopsy is often associated with minor bleeding, urinary symptoms, and occasionally serious infectious complications , which increase patients’ economic, physical and psychological burdens. To propose the best biopsy strategy for BPH patients with elevated PSA levels and negative mpMRI results, we conducted a retrospective analysis of clinical data from our center over the past ten years. The data included information on prostate biopsy and prostate surgery for BPH patients with negative mpMRI results and elevated PSA levels. We performed a comprehensive analysis to identify a sensitive threshold for predicting prostate cancer and to improve the accuracy of biopsy. Our aim was to optimize the prostate biopsy strategy for BPH patients with bladder outlet obstruction. Patients After institutional review board approval (KY2021186) was obtained, we retrospectively analyzed the clinical data of patients who were diagnosed with BPH and admitted to the inpatient department for surgery. Surgical intervention was indicated for patients with moderate-to-severe voiding symptoms attributed to BPH refractory to medical therapy as well as for patients with clinically significant complications. We reviewed the consecutive clinical profiles of surgical candidates with BPH from January 2010 to September 2021 and enrolled patients with elevated PSA levels (PSA ≥ 4 ng/ml), negative DRE, and mpMRI PI-RADS scores ≤ 2 in our study. 
We excluded patients admitted for biopsy due to simply elevated PSA levels who did not present with voiding symptoms attributed to BPH. We also excluded patients whose PSA results were confounded by recent urinary obstruction or infection and who did not undergo repeated testing. Furthermore, patients without surgical or biopsy prostate pathological results were also excluded from our study. After patient selection, detailed clinical data were obtained from the patients’ electronic medical records at the hospital. Follow-up assessments were conducted during scheduled postoperative visits or through consultative phone calls for patients who failed to attend routine visits. The collected results were recorded. Clinical data Patient baseline data, including age, IPSS, medical treatment with 5 alpha reductase inhibitors, history of bladder stones and indwelling urinary catheters, PSA levels, F/T PSA ratio, DRE, TRUS and mpMRI findings, were collected. Specifically, since treatment with 5 alpha reductase inhibitors for more than 3 months reduces PSA levels by an average of approximately 50% , we doubled the PSA value of those patients in further analysis. For patients with prostate cancer (PCa), the Gleason score of the biopsy specimen was obtained. Prior to 2015, patients without PI-RADS V2 scoring information were individually scored by radiologists with over 5 years of experience in our hospital. The radiologists who scored the prostate with the PI-RADS V2 system were blinded to the pathological results. The prostate volume (PV) was calculated based on the mpMRI data (PV = 0.52 × anteroposterior diameter × transverse diameter × craniocaudal diameter). The PSAD is defined as the ratio of the PSA value to the PV. All prostate biopsies were performed using a transrectal systematic 12-core procedure in the left lateral decubitus position under local anesthesia. BPH surgery was performed using either monopolar transurethral resection of the prostate or greenlight laser enucleation of the prostate. The histopathology of all biopsies was reported separately and analyzed by an experienced uropathologist using the 2014 International Society of Urological Pathology (ISUP) modified classification. For BPH surgery, all the removed prostatic tissues were subjected to pathological examination performed by the same uropathologist. Any PCa with an ISUP grade ≥ 2 (Gleason score ≥ 3 + 4) was defined as clinically significant PCa (csPCa). Statistical analysis SPSS (version 23) and R software (version 3.5.3) were used for all the statistical analyses. Continuous variables are expressed as medians and interquartile ranges. The Mann‒Whitney U test was used for 2 continuous variables, and the Chi‒square test or Fisher’s exact test was used for categorical variables. Multivariate logistic regression analysis was performed with clinically relevant parameters and statistically significant variables identified by univariate analysis. The predictive ability was evaluated using a receiver operating characteristic (ROC) curve. Decision curve analysis (DCA) was used to identify the strategy with the greatest net clinical benefit.
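As a concrete illustration of the derived quantities defined above (the 5 alpha reductase inhibitor PSA adjustment, the ellipsoid prostate volume, and PSAD), the following minimal sketch uses hypothetical field names and example values rather than the study data:

```python
from dataclasses import dataclass

@dataclass
class PatientRecord:                 # hypothetical structure, not the study database schema
    psa_ng_ml: float                 # measured serum total PSA (ng/ml)
    on_5ari: bool                    # >3 months of 5 alpha reductase inhibitor therapy
    ap_cm: float                     # anteroposterior diameter on mpMRI (cm)
    tr_cm: float                     # transverse diameter (cm)
    cc_cm: float                     # craniocaudal diameter (cm)

def adjusted_psa(p: PatientRecord) -> float:
    """Double the measured PSA for long-term 5-ARI users, as described in the Methods."""
    return p.psa_ng_ml * 2 if p.on_5ari else p.psa_ng_ml

def prostate_volume(p: PatientRecord) -> float:
    """Ellipsoid formula used in the study: PV = 0.52 x AP x transverse x craniocaudal."""
    return 0.52 * p.ap_cm * p.tr_cm * p.cc_cm

def psad(p: PatientRecord) -> float:
    """PSA density = adjusted PSA / prostate volume (ng/ml/cc)."""
    return adjusted_psa(p) / prostate_volume(p)

# Example: PSA 8 ng/ml, no 5-ARI, prostate 5.0 x 4.5 x 5.5 cm -> PV ~64 cc, PSAD ~0.12
example = PatientRecord(psa_ng_ml=8.0, on_5ari=False, ap_cm=5.0, tr_cm=4.5, cc_cm=5.5)
print(round(prostate_volume(example), 1), round(psad(example), 3))
```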
Descriptive characteristics of the overall population This study included a total of 1347 patients admitted to our department from January 1, 2010, to September 1, 2020, who required prostate surgery and had a PSA level ≥ 4 ng/ml. After careful selection, 318 patients were enrolled in the study for the final analysis (Fig. ). Among these 318 BPH patients, the median IPSS was 21, indicating that patients experienced moderate to severe voiding symptoms. Of the 318 patients, 15.7% (50/318) presented with urinary retention, 10.8% (34/318) presented with bladder stones, and 27.7% (88/318) were receiving long-term treatment with 5 alpha reductase inhibitors. Based on the current guidelines, all 318 patients were recommended for prostate biopsy. Among them, 68 patients underwent systematic transrectal prostate biopsy first and later received BPH surgery or radical prostatectomy according to the pathological results, while the remaining 250 patients refused biopsy and opted for transurethral surgery solely to relieve obstruction. Overall, 8.2% (26/318) of the patients were histologically diagnosed with PCa. Specifically, 61.5% (16/26) of the prostate cancer patients were classified as having insignificant PCa (insPCa), and 38.5% (10/26) were classified as having csPCa. Table presents a comparison of the clinical characteristics between BPH patients (non-PCa) and prostate cancer patients (PCa) in the study cohort. Clinical parameters such as IPSS, percentage of patients with urinary retention, bladder stones and receiving 5 alpha reductase inhibitor treatment did not significantly differ between the BPH and PCa groups. However, there were significant differences in the PSA level, prostate volume, and PSAD between the two groups (all P < 0.05). Patients with PCa exhibited higher levels of PSA and PSAD but lower prostate volume. A total of 251 patients were successfully followed up. After a median postoperative follow-up of 33 months, all BPH patients showed a reduction in their latest PSA value compared with the preoperative value. Specifically, no patients with prostate cancer experienced recurrence after radical prostatectomy. Risk factors for prostate cancer To explore the potential risk factors for prostate cancer, we conducted univariate and multivariate logistic regression comparing 292 non-PCa patients and 26 PCa patients. Univariate analysis revealed that greater PSA levels (odds ratio [OR] 1.16, 95% confidence interval [CI] 1.09–1.24; p < 0.001), lower F/T ratios (OR 0.01, 95% CI 0-0.35; p = 0.023) and greater PSAD levels (OR 207738.27, 95% CI 3990.48-10814523.29; p < 0.001) were predictors of PCa detection at biopsy in patients with negative MRI findings. Multivariate analysis excluded the PSA value and F/T ratio and confirmed that PSAD (OR 1638230, 95% CI 9803-525887770; p < 0.001) was the only independent predictor of PCa detection at biopsy (Table ). Based on the logistic regression model, a predictive model for PCa was created (see Supplement). Diagnostic variables for prostate cancer PSAD was proven by a multivariate logistic regression model to be an independent predictor of PCa. The distribution of prostate cancer between different PSAD thresholds showed that the number of PCa cases detected increased with increasing PSAD (Fig. ).
To explore the best clinical parameters for discriminating non-PCa from PCa, we performed ROC analysis of four candidate parameters: PSAD, PSA, the F/T ratio, and the logistic model created above. ROC curve analysis of PCa detection revealed a larger AUC for the predictive model (AUC = 0.855) and PSAD (AUC = 0.848) than for PSA (AUC = 0.722) or the F/T ratio (AUC = 0.635) (Fig. a). The best threshold for PSAD was 0.30 ng/ml/cc, which yielded the maximal Youden’s J index. Next, we examined the ability of various PSAD thresholds and conventional clinical parameters, such as a PSA concentration of 10 ng/ml and an F/T ratio of 0.16, to predict PCa (Table ). The specificity for PCa was 60% for 10 ng/ml PSA and 57% for an F/T ratio of 0.16, with corresponding sensitivities of 65% and 62%, respectively. The specificity was 93% for a PSAD of 0.30 ng/ml/cm3, 76% for a PSAD of 0.20 ng/ml/cm3, and 57% for a PSAD of 0.15 ng/ml/cm3, with corresponding sensitivities of 65%, 69% and 92%, respectively. The highest Youden’s J index among the various clinical parameters was 0.58 for a PSAD of 0.30 ng/ml/cm3. A PSAD cutoff of 0.30 therefore outperformed the other thresholds and was the most suitable predictive marker. We also performed DCA to validate the diagnostic value of different biopsy strategies. According to the DCA for the different biopsy strategies (Fig. b), biopsy at PSAD ≥ 0.30 and the predictive model showed the greatest net benefit at each probability threshold compared to the other strategies. At a threshold probability of 0.05–0.15, the net benefit of biopsy at PSAD ≥ 0.30 surpassed that of the predictive model. In accordance with these findings, the optimal strategy would be to restrict biopsies to men with a PSAD ≥ 0.30 ng/ml/cm3.
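To make the threshold evaluation above concrete, the following is a small illustrative sketch, not the study code: the PSAD values, prevalence, and labels are synthetic assumptions, and it only shows how sensitivity, specificity, Youden’s J, and the decision-curve net benefit of a "biopsy if PSAD ≥ cutoff" strategy can be computed.

```python
import numpy as np

def confusion_at_cutoff(psad, cancer, cutoff):
    """Classify 'biopsy' when PSAD >= cutoff and tabulate against biopsy-proven PCa."""
    pred = psad >= cutoff
    tp = np.sum(pred & cancer)
    fp = np.sum(pred & ~cancer)
    fn = np.sum(~pred & cancer)
    tn = np.sum(~pred & ~cancer)
    return tp, fp, fn, tn

def sens_spec_youden(psad, cancer, cutoff):
    tp, fp, fn, tn = confusion_at_cutoff(psad, cancer, cutoff)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens, spec, sens + spec - 1          # Youden's J

def net_benefit(psad, cancer, cutoff, pt):
    """Decision-curve net benefit of 'biopsy if PSAD >= cutoff' at threshold
    probability pt: TP/N - (FP/N) * pt / (1 - pt)."""
    tp, fp, _, _ = confusion_at_cutoff(psad, cancer, cutoff)
    n = len(psad)
    return tp / n - fp / n * pt / (1 - pt)

# Synthetic illustration only -- not the study data.
rng = np.random.default_rng(0)
cancer = rng.random(318) < 0.08                  # ~8% prevalence, as in the cohort
psad = np.where(cancer, rng.lognormal(-1.1, 0.5, 318), rng.lognormal(-1.9, 0.5, 318))
for cutoff in (0.15, 0.20, 0.30):
    s, sp, j = sens_spec_youden(psad, cancer, cutoff)
    print(f"PSAD >= {cutoff:.2f}: sens {s:.2f}, spec {sp:.2f}, J {j:.2f}, "
          f"NB@0.10 {net_benefit(psad, cancer, cutoff, 0.10):.3f}")
```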
In this study, we focused on a specific group of patients: those with benign prostatic hyperplasia (BPH) who were admitted to our center and required surgery to relieve bladder outlet obstruction (with a median IPSS of 21 and a median prostate volume of 63.92 ml).
Preoperative assessment revealed elevated PSA levels but negative mpMRI results. While men with negative mpMRI findings have a low risk of high-grade prostate cancer, there is still a small chance of missed csPCa, ranging from 5 to 15% in one recent systematic review. Therefore, identifying patients with prostate cancer is still necessary . Currently, evidence on the management of these patients is lacking. In our study, we proposed that a PSAD cutoff of 0.30 could be used for initiating prostate biopsy in patients with negative MRI by retrospectively analyzing clinical data and follow-up data. To the best of our knowledge, our study is the first to focus on this specific subset of patients who face the dilemma of whether to undergo prostate biopsy. We also developed a predictive model based on logistic regression analysis that incorporates the PSA concentration, F/T ratio, and PSAD. However, this model, via ROC analysis, only showed slightly better performance than PSAD alone (AUC 0.855 vs. 0.848). Furthermore, according to the decision curve analysis, the net benefit of the predictive model was lower than that of PSAD alone for a clinically relevant threshold probability of 0.05–0.15. Therefore, we conclude that applying a PSAD cutoff of 0.30 is a better risk stratification option than using the predictive model. PCa and BPH are both common diseases among older men, and often coexist, affecting each other’s management strategies . Ideally, urologists should accurately diagnose prostate cancer before performing surgery for BPH to reduce the incidence of incidental prostate cancer and guide men toward the appropriate initial treatment option. Incidental prostate cancer refers to the discovery of PCa after prostate surgery for benign prostate hyperplasia and is found in 5–11% of BPH/LUTS patients who undergo appropriate diagnostic evaluation . Previous studies have identified several clinical parameters, such as patient age , PSA , PSAD , preoperative prostate biopsy , and the presence of suspicious lesions on prostate MRI or ultrasound , as predictive variables for incidental PCa. However, most of these studies were retrospective and did not specifically focus on patients with negative prostate MRI results. Furthermore, research on incidental PCa has not provided a practical prostate biopsy strategy to target patients with PCa and avoid unnecessary biopsies in patients without PCa. Elevated PSA levels should prompt a PCa screening process for individuals who meet specific criteria, such as being over 55 years of age or having a strong family history, as outlined in current guidelines. In China, as well as the United States, a PSA level of 4.0 ng/ml is the generally accepted threshold for prostate biopsy . Prostate biopsy is an invasive urological procedure associated with complications such as infection, bleeding, and substantial discomfort to patients. Therefore, after a positive PSA screening is confirmed, further assessments should be conducted to minimize unnecessary biopsies. Systematic analysis has shown that compared with systematic biopsy, mpMRI, which has an average sensitivity of 91% and specificity of 37%, can reduce the number of biopsies by 30% while maintaining the detection of significant cancers . However, recent systematic reviews have reported false-negative rates of MRI results ranging from 5 to 15%, highlighting the need for risk-adapted strategies in biopsy selection . 
68Ga-PSMA PET/CT, another emerging diagnostic imaging modality for PCa, demonstrated a diagnostic accuracy of 92% in the diagnosis of csPCa in men at high risk for cancer ; however, it is not yet routinely utilized in our clinical practice due to its relatively high cost. PSA derivatives (F/T ratio, PSA velocity, and PSAD) and urine- or blood-based molecular markers (Stockholm-3 model, the Prostate Health Index, the 4 K score Test, PCA3 test, and ExoDx test) have all been reported to effectively stratify high risk patients for biopsy and improve the diagnostic accuracy of PCa . The F/T ratio can be used to indicate prostate cancer risk in patients with a PSA level below 10 ng/ml. In a study of a case-finding protocol involving 14,453 patients, PCa prevalence for tPSA ≤ 2.5 ng/ml (F/T ratio < 0.15), 2.6–4 ng/ml (F/T ratio < 0.20) and 4.1–10 ng/ml (F/T ratio < 0.25) was 29.1%, 37.4% and 28.8%, respectively . The PCA3 test, which measures the expression of the prostate cancer antigen 3 gene in post-DRE urine, is a well-established urine-based biomarker. Pepe et al. reported AUCs of 0.678 and 0.634 for PCA3 cutoffs of 25 and 35, respectively, for detecting PCa at repeated saturation prostate biopsy . However, according to guidelines, these biomarkers are not yet recommended as first-line screening tests in combination with serum PSA. These methods are mainly recommended for individuals who have previously undergone a negative prostate biopsy or are listed as options with weak evidence to guide biopsy decision making in those with PSA levels between 2 and 10 ng/ml . Furthermore, the additional cost of urine- and blood-based biomarkers still hinders their widespread application. On the other hand, PSA derivatives are easily accessible and commonly used in clinical practice. Among the PSA derivatives, PSAD has been identified as a strong predictive variable for incidental PCa , and multiple studies suggest combining MRI findings and PSAD to define patients who can safely avoid biopsy . However, the exact cutoff for selecting patients at high risk of harboring PCa despite negative MRI results is still under debate . Based on a recent meta-analysis, the EAU guidelines recommend using a cutoff of 0.20 ng/ml/cc in patients with negative MRI results , while other studies propose different thresholds. Distler et al. reported that obtaining a biopsy in patients with negative MRI results and a PSAD ≥ 0.15 increased the detection of csPCa by 10% compared to MRI alone, and this approach could avoid approximately 20% of unnecessary biopsies. Other researchers have recommended alternative thresholds for PSAD. Hansen et al. categorized patients into three groups based on PSAD (≤ 0.10, 0.10–0.20, and > 0.20) and found that a PSAD of ≤ 0.20 was associated with low detection of csPCa in patients undergoing repeated biopsy with negative MRI. By incorporating PSAD, the NPV of negative MRI increased from 0.71 to 0.91. Pellegrino et al. suggested that a PSAD cutoff of 0.15 is appropriate only when MRI accuracy is very low; for average MRI accuracy, a higher cutoff of at least 0.20 should be used, assuming that MRI accuracy has improved over the years. In our study, we proposed a cutoff of 0.30 to detect PCa, which is higher than the thresholds proposed in the literature above. The difference may reflect the different patient characteristics between our study and the literature.
Our study specifically targeted patients with bladder outlet obstruction due to benign prostatic hyperplasia while the literature primarily focused on the general population. There are several limitations to this study. First, our study was a single-center retrospective study and was limited by the inherent flaws of its retrospective design. Second, the pathological results in the study were obtained from biopsies and transurethral prostate enucleation specimens. The latter lacked a peripheral zone of prostate tissue, which may have led to a reduced proportion of PCa patients. However, this limitation is mitigated by the follow-up results. During a minimum follow-up time of 12 months (median of 33 months), patients with negative pathological findings showed no progression of PSA, and no patients were diagnosed with PCa after BPH surgery. Thus, we can safely conclude that these patients did not have PCa before management. Third, our study analyzed the whole population of PCa patients rather than focusing solely on patients with conventional csPCa. Since csPCa accounts for only 3.1% of the entire population, this small proportion could substantially influence the results of the univariate and multivariate analyses, making the identified risk factors unreliable. Finally, further multicenter clinical trials are needed to validate this conclusion. Our study suggested that for BPH patients with surgical indications, in the case of PSA abnormalities and negative imaging findings, using a PSAD threshold of 0.30 could be a useful tool for making personalized biopsy decisions. This approach can help reduce the complications and length of hospital stay associated with biopsies as well as reduce hospital costs. Below is the link to the electronic supplementary material. Supplementary Material 1
The Presence of Two Distinct Lineages of the Foot-And-Mouth Disease Virus Type A in Russia in 2013–2014 Has Significant Implications for the Epidemiology of the Virus in the Region
5d658082-adb4-434d-806c-6387fa480085
11769220
Biochemistry[mh]
The foot-and-mouth disease virus (FMDV) is a member of the Picornaviridae family, within the genus Aphthovirus . The susceptible host range includes buffaloes, cattle, goats, sheep, and pigs . The FMDV genome is a single-stranded positive-sense RNA of approximately 8500 nucleotides in length. The viral capsid includes the structural proteins VP1, VP2, VP3, and VP4, while non-structural proteins are mainly involved in viral replication and pathogenesis . The VP1 protein is the primary immunogenic component of the virion, and its coding region serves as the principal phylogenetic target for genome-based epidemiology of the virus . The virus is transmitted primarily via the alimentary and airborne routes, for example when susceptible animals share pastures with infected or convalescent wild animals. The bodily secretions and excretions of infected animals, including saliva, nasal discharge, milk, semen, and even exhaled air, serve as a source of infection, infecting other animals in contact with the source and contaminating feed, litter, and water troughs. The pathogen has the capacity to disseminate over considerable distances via a variety of vectors, including wind and fomites such as staff clothing and footwear, vehicles, and other materials . Owing to its error-prone replication and high mutation rate, FMDV exists as seven immunological serotypes, which are distinguished for classification purposes by differences in the nucleotide sequence of the VP1-coding region. The seven immunological serotypes of FMDV, designated A, O, C, Asia-1, SAT-1, SAT-2, and SAT-3, exhibit a high degree of diversity, encompassing a multitude of topotypes, genetic lineages, and strains . The distribution of FMD serotypes in endemic regions is uneven, which makes FMD molecular epidemiology a crucial element of disease study and control programs . It should be noted that there is no cross-protection between different serotypes. Consequently, infection or vaccination with one FMD serotype does not provide protection against infection with other serotypes of the pathogen . A review of the most recent data from the World Organization for Animal Health (WOAH) and media reports suggests that, despite the implementation of control measures, the epizootic situation of foot-and-mouth disease (FMD) remains a significant global concern. As indicated by official data, the disease affected 65 countries between 2000 and 2022. Of these, 25 were in Asia, 37 in Africa, 2 in Europe, and 1 in South America. Furthermore, five known types of FMD were identified: type O in 44 countries, type A in 25, type SAT-1 in 10, type SAT-2 in 14, and type Asia-1 in 5. However, the pathogen type was not determined in 14 African countries. In some countries, the circulation of two to four FMDV types has been documented (Afghanistan, Vietnam, the Democratic Republic of the Congo, Egypt, Iran, Kenya, China, Thailand, Tanzania, Turkey, etc.). The active circulation of diverse virus strains and the occurrence of successive waves of infection serve to accelerate the spread of the virus into FMD-free regions. In this context, genomic analysis tools are of particular importance for analyzing the molecular aspects of FMD virus evolution and spread. Serotype A is classified into three topotypic profiles: Asia, Europe–South America (Euro-SA), and Africa. The Asia topotype is the most prevalent in the Middle East and South Asian and West Eurasian countries .
In its turn, each topotype is further subdivided into genetic lineages, named after the country where and when it was recovered, for example, A-IRN99, A-Iran05, A-IRQ24,46, and A-TUR-2006 . Egypt is affected by the African topotype . Russia borders on countries with a history of FMD circulation, which heightens risks of FMDV transboundary incursion into the country . A total of 94 outbreaks of foot-and-mouth disease (FMD) have been reported in Russia between 2005 and 2024. However, until 2013, the cases registered in the Russian Federation were of serotype O, and serotype A had not been detected for more than 20 years . In 2013, however, the situation became dramatically more challenging, with 21 outbreaks of FMD type A detected in the Far Eastern, Siberian, and North Caucasian Federal Districts. In particular, nine and six outbreaks were identified in 2013 in Zabaikalsky Krai and Amur Oblast, respectively, with the epizootic continuing in 2014–2015. Additionally, two outbreaks were identified in Karachay-Cherkessia, three in Krasnodar Krai, and one in Kabardino-Balkaria, marking the first occurrence of such incidents in decades in the North Caucasus region. Outbreaks of this serotype of FMD have also been reported in Mongolia and eastern Kazakhstan . Since 2013, Russia has had a buffer zone along its southern borders where prophylactic vaccination of cattle and livestock with trivalent FMD vaccine types A, O, and Asia-1 has been carried out twice a year. Currently, according to WOAH, more than 50 regions of Russia have the status of FMDV-free zones without vaccination, and four zones along the southern borders (South, East Siberia, Far East, and Ural-West Siberia) have the status of FMDV-free zones with vaccination ( https://www.woah.org/app/uploads/2023/05/russia-eng.png ) (accessed on 20 May 2023). In the 2013–2014 period, among the FMDV outbreaks affecting pigs, cattle, and small ruminants, 28% were attributed to serotype A. However, no detailed molecular epidemiologic studies have been conducted to elucidate the circulation pattern of this serotype . The present study reports the detection and characterization of the molecular epidemiology of the FMDV type A in Russia during 2013–2014, employing a range of methods including serological, antigenic, and phylogenomic analyses. 2.1. Sampling and FMD Data This study was conducted using aphthous epithelium (vesicle walls) collected from cattle suspected of being infected with FMD. The biological samples were submitted to the FGBI “ARRIAH” (OIE Regional Reference Laboratory and FAO Reference Center for FMD) from the Far East and North Caucasus regions of the Russian Federation during the period 2013–2014. The data on FMD outbreaks in the Russian Federation for the period 2013–2014 were retrieved from the WOAH World Animal Health Information System ( https://wahis.woah.org/#/event-management , accessed on 19 December 2024). In this study, an outbreak was defined as a geographically distinct occurrence of laboratory-confirmed FMD in one or several animals officially reported to the WOAH. Laboratory confirmation was conducted by the Federal Center for Animal Health (FGBI “ARRIAH”). 2.2. Virus Isolation For the isolation and propagation of the FMDV, continuous monolayer cell cultures of Siberian ibex kidney cells (PSGK-30) and pig kidney cells from Instituto Biologico-Rim Suino-2 (IB-RS-2) were utilized. The infected cell cultures (CCSs) were incubated at 37 °C until the full cytopathic effect (CPE) was observed. 
A maximum of three passages were performed in the cell culture for the purpose of virus adaptation. The virus was deemed to have been adapted if it resulted in 90–100% CPE within 18–24 h ( https://www.woah.org/fileadmin/Home/eng/Health_standards/tahm/3.01.08_FMD.pdf , accessed on 19 December 2024). Viral infectivity titration by the micromethod was conducted in 96-well culture plates on a continuous IB-RS-2 cell culture with a concentration of 0.8–1.0 × 10 6 cells/mL at 37 °C and 5% CO 2 for 48 h. The viral titer was determined by observing the number of wells exhibiting a characteristic CPE under an inverted microscope, calculated as described by Karber, and expressed as log TCD50/mL (TCD, tissue cytopathic dose). 2.3. Antigenic Matching of FMD Virus Isolates The reference sera were provided by the FGBI “ARRIAH” from cattle vaccinated with FMD monovalent inactivated vaccines from the following type A strains at 21–30 days post-vaccination: A22 No. 550/Azerbaijan/64, A22/Iraq/64, A/Iran/97, A No. 2029/Turkey/06, A No. 2045/Kyrgyzstan/07, A No. 2155/Zabaikalsky/2013, and A No. 2166/Krasnodarsky/2013. FMD viral isolates were matched with the production strain in the microneutralization (MN) reaction in the IB-RS-2 cell culture. The titers of reference sera of cattle vaccinated with FMD vaccines developed from homologous and heterologous FMD strains were determined by the checkerboard method using five doses of virus. The serum titer against 100 TCD 50 was calculated using linear regression and expressed as log10. The r 1 value was defined as the ratio of the reference serum titer against the heterologous (field) virus to its titer against the homologous (vaccine) virus, calculated from the log10 titers. The value was interpreted in accordance with the methodology proposed by M. Rweyemamu (1984), whereby an r 1 value of ≥0.3 indicates a close antigenic relationship among the strains; therefore, the use of a vaccine based on this production strain is likely to confer protection against challenges with the field isolate . Conversely, r 1 < 0.3 indicates no antigenic relationship among the strains, and the production strain does not confer protection against the field isolate. The cut-off value range of r 1 was 0.28–0.32. 2.4. Sequencing of FMDV Type A Strains RNA was extracted from cell culture supernatants using Trizol™ reagent (Thermo Fisher Scientific, Waltham, MA, USA) in accordance with the manufacturer’s instructions. Due to the significant contamination of RNA preparations with DNA, as revealed by fluorometry on the Qubit™ (Thermo Fisher Scientific, USA) with DNA- and RNA-specific dyes, samples were subjected to an additional round of treatment with the RNase-Free DNase Set and repurified with the RNeasy™ Mini Kit (both Qiagen, Germany), according to the supplied protocols.
Seven libraries were sequenced in total, resulting in 9.06–23.02 million raw read pairs per library. 2.5. De Novo Assembly of FMDV Type A Strain Genomes The raw reads were curated in three stages. Initially, the reads were filtered by quality using the positional quality over a flowcell with the filterbytile.sh script from the BBTools package . Subsequently, the adapter sequences were removed, and the reads were trimmed by quality with the fastp package . Finally, the read coverage was normalized to 120×, and low-covered contamination from the cell line RNA was removed with the bbnorm script . De novo assembly was conducted with the SPAdes v. 3.15.0 assembler in the rnaviral mode . The assembled viral contigs were then manually curated in CLC Genomics Workbench v. 24.0.1 (Qiagen, Germany) to ensure the absence of misassemblies. The final consensus was obtained by mapping the filtered and trimmed, but not normalized, sequencing reads to the viral contigs and extracting the consensus. Viral genomes were annotated using the VADR annotation suite, with RefSeq and high-quality, complete FMDV genomes employed as reference models . 2.6. Reconstruction of the Phylogenetic Trees The genome dataset utilized for the reconstruction of the whole genome phylogenetic tree consisted of seven genomes obtained in the present study, complete FMDV serotype A genomes from the World Reference Laboratory for Foot-and-Mouth Disease (WRLFMD) database ( https://www.wrlfmd.org/fmdv-genome/fmd-prototype-strains , accessed on 19 December 2024), and complete representative genomes of FMDV type A outbreaks. In total, 52 genomes were used for the alignment (see ). For the phylogenetic tree on the VP1 gene, the sequences of this gene were extracted from the full-genome sequences prepared above, and 29 VP1 sequences from the WRLFMD database were incorporated into the resulting dataset. Thus, 79 sequences were utilized for VP1 gene alignment . Sequence alignment was conducted using the MAFFT v7.520 package with the default parameters . Alignment trimming was performed using Trimal v1.4 with the following parameters: all columns with gaps in more than 20% of sequences or similarity scores below 0.001 were removed . Final alignments consisted of 8010 and 639 columns for whole FMDV genomes and VP1 gene sequences, respectively. To identify the most suitable nucleotide substitution model, the modeltest-ng package was employed using maximum likelihood tree reconstruction . The analysis yielded the GTR + I + G4 model as the optimal choice for both the full-genome data and the VP1 gene data, with substantial support from the Bayesian Information Criterion (BIC). A phylogenetic reconstruction was performed with the aid of the RAxML-NG package, with 1000 bootstrap iterations . The resulting phylogenetic tree was subsequently visualized using the Interactive Tree of Life (iTOL) web server . A comparative analysis of nucleotide identities and sequence differences was conducted using the “Create Pairwise Comparison” tool of CLC Genomics Workbench v.24.0.1 (Qiagen, Germany). This analysis employed the alignments utilized for the tree reconstructions.
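For readers unfamiliar with the Kärber (Spearman–Kärber) endpoint calculation referenced in Section 2.2, the following is a minimal sketch; the dilution scheme, inoculum volume, and CPE readings are hypothetical and are not taken from this study.

```python
import math

def karber_log10_tcid50_per_ml(
    positive_fractions,      # proportion of wells with CPE at each dilution, most concentrated first
    first_dilution_exp=1.0,  # first dilution tested is 10**-first_dilution_exp
    log_step=1.0,            # log10 spacing between successive dilutions
    inoculum_ml=0.1,         # volume inoculated per well (ml)
):
    """Spearman-Karber estimate of the 50% endpoint, expressed as log10 TCD50/ml.

    Assumes the dilution series brackets the endpoint, i.e. the first fraction
    is 1.0 and the last is 0.0 (standard requirement of the method).
    """
    s = sum(positive_fractions)
    log10_tcid50_per_inoculum = first_dilution_exp - log_step / 2 + log_step * s
    return log10_tcid50_per_inoculum - math.log10(inoculum_ml)

# Worked example: 10-fold dilutions 10^-1 ... 10^-8, CPE fractions read per dilution
fractions = [1.0, 1.0, 1.0, 1.0, 0.75, 0.25, 0.0, 0.0]
print(karber_log10_tcid50_per_ml(fractions))   # -> 6.5 log TCD50/ml with a 0.1 ml inoculum
```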
3.1. FMD Outbreaks in the Russian Federation in 2013–2014 A total of 33 outbreaks were recorded in the Russian Federation over the period 2013–2014 in two regions, namely the North Caucasian and Far Eastern Federal Districts, situated in close proximity to the southern border of the country . Of the outbreaks, 24 (77%) were caused by FMDV serotype A, which had affected Russia for the first time in 2013. The remaining nine outbreaks were attributed to FMDV serotype O. All outbreaks of the O type emerged in Primorsky Krai in 2014, while the outbreaks of the A type affected both study areas in 2013 and 2014. 3.2. Virus Characterization The main characteristics of the viral isolates described in this paper are summarized in . Prior to molecular biological studies, viral infectivity titers were determined using the microtiter method in 96-well plates on IB-RS-2 culture cells and expressed as log TCD50/mL. The titration results of the FMDV type A isolates are presented in . 3.3. Viral Genome Reconstruction and Phylogenetic Analysis A total of seven viral genomic libraries were sequenced on the Novaseq™ 6000 instrument (Illumina, San Diego, CA, USA), resulting in 9.06–23.02 million paired-end reads per sample. After de novo assembly of trimmed and normalized reads, seven full-genome viral contigs of length 8126–8259 bp were obtained. The coverage of viral contigs varied in the range of 3661×–35142× . Phylogenetic analysis demonstrated that FMDV strains isolated in the Russian Federation in 2013–2014 are part of the genetically diverse topotype Asia, which comprises multiple genetic sublineages. It is noteworthy that this topotype encompasses a genetically distinct line, SEA-97, which is genetically disparate from other representatives of the topotype. This observation may suggest the necessity for further reclassification of FMD virus genetic lines and topotypes. Isolates A-2167, A-2171, and A-2169 constitute a distinct and well-defined cluster within the Indo-Pakistani branch ( and ), which has been designated as the genetic lineage Iran-05. The most closely related isolate from our dataset was identified as the FMDV strain A/TUR/11/2013, which was isolated in the Black Sea region of Turkey . The pairwise analysis of nucleotide identities within the Iran-05 sublineage revealed that all analyzed strains exhibited 84.8% identities with Turkish strains, which displayed 397–398 differences (SNP and small indels) between their genomes. The Kabardino-Balkarian strain exhibited 32 polymorphisms from the A-2167 and A-2169 strains, which were almost identical within the cluster (zero differences in the trimmed alignment) ( B). A comparative genomic analysis of the VP1 dataset revealed the existence of a highly homologous strain, designated A/IRN/125/2010, that exhibited a nucleotide divergence of 22–25 nucleotides from the Russian strains, in addition to the previously identified A/TUR/11/2013 strain . Isolates A-2170, A-2182, A-2203, and A-2225 exhibited a close phylogenetic relationship with strains isolated in Thailand, Vietnam, Malaysia, and China between 2010 and 2015 ( and ). These strains belong to the diverse SEA-97 (South-East Asia) genetic lineage .
Pairwise comparisons demonstrated that the Far East isolates formed a distinctive cluster, exhibiting a high level of similarity to the Chinese A/CHA/HY/2013 and Thai (A/TAI/46-1/2015) strains. These strains displayed an identity of 97.62–98.86 and 98.41–99.17 percent in comparison to the Russian strains, respectively . It is noteworthy that the number of differences between the Russian Far East isolates was occasionally higher than that between the isolates and foreign ones, which suggests the possibility of multiple transboundary introgressions. The VP1 gene sequence pairwise comparisons confirm this observation . 3.4. Vaccine Matching To identify the most optimal and effective vaccine strains, an analysis of the antigenic characteristics of the isolated FMDV type A strains was performed. A cross-microneutralization reaction was conducted to assess the antigenic compatibility of FMDV type A isolates from the Far East and North Caucasus regions of the Russian Federation with the vaccine strain A 22 . The antigenic matching of FMDV type A isolates from the Far East and North Caucasus regions of the Russian Federation with vaccine strains A 22 /550, A/Iran/97, A/Turkey/06, and A/Kyrgyzstan/07 was conducted using reference serum samples from cattle immunized with monovalent vaccines. The results of these matching studies are presented in . As illustrated by the data presented in , the isolates from outbreaks in the North Caucasus demonstrated antigenic variations from the type A vaccine strains utilized for comparison: A 22 /550, A 22 /Iraq/64, A/Iran/97, A/Turkey/06, and A/Kyrgyzstan/07 (r 1 < 0.3). A similar antigenic dissimilarity was observed between the Far Eastern isolates and the vaccine strains of type A, which included the strains A 22 /550, A 22 /Iraq/64, A/Iran/97, and A/Kyrgyzstan/07 (r 1 < 0.3). However, the FMDV type A isolates No. 2155, 2156, 2175, and 2177, isolated in 2013 in the Zabaykalsky Krai and Amur Oblast, demonstrated antigenic relatedness (r 1 0.35–0.48) to the A/Turkey/06 strain. The obtained results in indicated the presence of close antigenic relatedness between the production strain A/2155/Zabaikalsky/2013 and the FMDV isolates A/2177/Amursky/2013, A/2203/Zabaikalsky/2014, A/2225/Zabaikalsky/2014, and A/2233/Zabaikalsky/2014 (r 1 0.46–0.74). Moreover, a close antigenic relatedness between the production strain A/2166/Krasnodarsky/2013 and the isolate A/2171/Kabardino-Balkarian/2013 (r 1 = 0.69) was observed.
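As an illustration of the r 1 vaccine-matching calculation described in Section 2.3 and applied above, the following sketch uses hypothetical titers rather than values from this study; the 0.3 cutoff follows the interpretation given in the Methods.

```python
def r1_value(titer_vs_field: float, titer_vs_homologous: float) -> float:
    """r1 = serum titer against the heterologous field isolate
            / serum titer against the homologous vaccine strain
    (titers as reciprocal arithmetic values, not log10)."""
    return titer_vs_field / titer_vs_homologous

def r1_from_log10(log10_field: float, log10_homologous: float) -> float:
    """Convenience form when microneutralization titers are recorded as log10."""
    return 10 ** (log10_field - log10_homologous)

def interpret(r1: float, cutoff: float = 0.3) -> str:
    """Interpretation used in the paper: r1 >= 0.3 suggests the vaccine strain
    is antigenically related to the field isolate and likely to protect."""
    return "antigenic match (vaccine expected to protect)" if r1 >= cutoff else "poor match"

# Hypothetical example: serum log10 titer of 2.1 vs the field isolate and 2.4 vs the vaccine strain
r1 = r1_from_log10(2.1, 2.4)
print(round(r1, 2), interpret(r1))   # ~0.50 -> antigenic match
```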
As illustrated by the data presented in , the isolates from outbreaks in the North Caucasus demonstrated antigenic variations from the type A vaccine strains utilized for comparison: A 22 /550, A 22 /Iraq/64, A/Iran/97, A/Turkey/06, and A/Kyrgyzstan/07 (r 1 < 0.3). A similar antigenic dissimilarity was observed between the Far Eastern isolates and the vaccine strains of type A, which included the strains A 22 /550, A 22 /Iraq/64, A/Iran/97, and A/Kyrgyzstan/07 (r 1 < 0.3). However, the FMDV type A isolates No. 2155, 2156, 2175, and 2177, isolated in 2013 in the Zabaykalsky Krai and Amur Oblast, demonstrated antigenic relatedness (r 1 0.35–0.48) to the A/Turkey/06 strain. The obtained results in indicated the presence of close antigenic relatedness between the production strain A/2155/Zabaikalsky/2013 and the FMDV isolates A/2177/Amursky/2013, A/2203/Zabaikalsky/2014, A/2225/Zabaikalsky/2014, and A/2233/Zabaikalsky/2014 (r 1 0.46–0.74). Moreover, a close antigenic relatedness between the production strain A/2166/Krasnodarsky/2013 and the isolate A/2171/Kabardino-Balkarian/2013 (r 1 = 0.69) was observed. The current FMD prevention and control policy in the Russian Federation is based on three main strategies . The first strategy is the implementation of export and import controls in accordance with the recommendations set forth in the World Organization for Animal Health (WOAH) Terrestrial Animal Health Code. The second strategy is the implementation of a zoning plan for the country’s territory. The third strategy is the application of a preventive vaccination program for cattle and small ruminants in regions bordering affected countries. In order to vaccinate animals against FMD, the Russian Federation successfully employs domestically produced inactivated vaccines with high protection profiles. These vaccines contain FMDV antigens of the A, O, and Asia-1 serotypes, depending on the epidemiological context . Foot-and-mouth disease is not endemic to Russia , yet the potential for a spillover from neighboring countries in Asia persists. This is due to the constant threat of the disease being transmitted from affected regions through cattle or wild animals . Outbreaks of foot-and-mouth disease caused by incursions from outside have been reported in Russia since 1995, with subsequent outbreaks occurring in 2000, 2004–2006, and 2010–2014 . It is noteworthy that prior to 2013, waves of FMD outbreaks were attributed to type O and Asia 1 lineages, which exhibited similarities in the VP1 region to FMD viruses from China, Kazakhstan, and Mongolia . The 2013–2014 outbreaks of FMD resulted in the emergence of a novel epidemiological situation in geographically distant regions, including the Far East and southern Russia. A considerable number of outbreaks were recorded in Zabaykalsky Krai (nine outbreaks) and the Amur Oblast (six outbreaks), as well as, for the first time in many decades, in the Karachaevo-Cherkesian Republic (two outbreaks), Krasnodarsky Krai (three outbreaks), and the Kabardino-Balkarian Republic (one outbreak) . In this study, we employed full-genome sequencing and phylogenetic reconstruction across the entire genome and the VP1 gene locus to conduct a comprehensive analysis of the molecular epidemiology of FMD in Russia during the 2013–2014 period. Representative isolates were selected based on information from the FMD reference laboratory and data from the literature regarding outbreaks that occurred in geographically proximate regions.
The phylogenetic analysis demonstrated that the sequenced strains were representative of two distinct genetic lineages of serotype A: SEA-97 in the Far East and Iran-05 in the North Caucasus. The topology of the phylogenetic trees obtained for the complete genome and the VP1 gene is similar, indicating both the reliability of the analysis and the unlikelihood of earlier recombination events in the isolates under study ( and ). All outbreaks in the Far East in 2013 were caused by a virus belonging to the genetic lineage A/Sea-97 (A/Southeast Asia-97). The Sea-97 lineage is divided into five major groups, designated G1-G5, of which the isolates studied belong to the G2 group along with isolates from China, Thailand, and Vietnam . This group is endemic to the countries of Southeast Asia: Vietnam, Thailand, Malaysia, Laos, and Cambodia . Literature analysis indicates that since 2013, this virus has gone beyond its natural range and spread to China, Kazakhstan, and Mongolia, from where it entered Russia. The high level of genetic similarity between the Russian and Chinese isolates ( , and ) supports the relatedness of these strains and, taking into account the published reports of the Sea-97 lineage in China as early as 2009 ( https://www.wrlfmd.org/country-reports/country-reports-2009 , accessed on 19 December 2024), suggests the most likely route of transmission from south to north . The observed expansion events indicate a new twist in the ever-changing epidemiology of FMD in East Asia and increase the risk of further range expansion to more distant countries, including FMD-free countries. A/ASIA/Sea-97 was the most common lineage among serotype A viruses in sporadic or endemic outbreaks in Southeast, Central, and East Asia between 2009 and 2020. By contrast, outbreaks reported in the southern regions of Russia were attributed to the Iran-05 lineage of the Asia topotype, which was initially identified in Iran in 2003. During the 2005–2006 epidemic, it subsequently disseminated throughout the Middle East, becoming established in this region . Since the onset of the Middle East epidemic, the lineage has undergone evolutionary changes, giving rise to numerous sublineages within the genetic lineage . Many of the sublineages were relatively short-lived, disappearing shortly after their initial appearance. However, some were able to persist and dominate for a considerable period of time . The available genetic and epidemiological data since 2011 indicate that the SIS-10 sublineage of the A/Iran-05 lineage, to which the Russian isolates belong (see and ), has been the dominant strain across Turkey . According to the reports from the World Reference Laboratory for Foot-and-Mouth Disease (WRLFMD, in Pirbright, UK), isolates of the genetic sublineage A/ASIA/Iran-05/SIS-10 served as the causative agents of FMD outbreaks in Middle Eastern countries between 2011 and 2013. This raises the question of the origin of these spillover events. While the introduction of FMD from Turkey into the North Caucasus is unlikely, the most probable scenario is that the A/Iran-05 SIS10 sublineage spread across the Transcaucasian countries, subsequently spilling over into the North Caucasus. The incursions of two different lineages necessitated a re-examination of the control and eradication policies that were in place at that time.
The isolates under study exhibited antigenic differences from the type A vaccine strains used for comparison, including A 22 /550, A 22 /Iraq/64, A/Iran/97, A/Turkey/06, and A/Kyrgyzstan/07 (r 1 < 0.3). These findings are consistent with the results of the phylogenetic and antigenic relatedness study conducted by Mahapatra and colleagues , which indicated that the virus isolates circulating in the Middle East in 2012–2013 belonged to the SIS-10 sublineage. The A-Iran-05 genetic lineage exhibited antigenic drift and a low antigenic match with the vaccine strains A 22 /IRQ/64 and, in particular, A/Turkey/06, which have been the predominant strains utilized for FMD control in this region since 2006 . Isolates from the Zabaikalsky Krai and Amur Oblast in 2013–2014, belonging to the “Southeast Asia-Sea-97” genetic lineage of the “Asia” topotype, also exhibited antigenic divergence from strains A 22 /550, A 22 /Iraq/64, A/Iran/97, and A/Kyrgyzstan/07 (r 1 < 0.3). Moreover, the FMDV isolates A-2155, A-2156, A-2175, and A-2177, registered in 2013 in the Zabaikalsky Krai and Amur Oblast, showed antigenic relatedness (r 1 0.35–0.48) to the A/Turkey/06 strain. As a consequence of the antigenic dissimilarities observed between the isolates and the available vaccine strains, the newly obtained strains of FMDV A/2155/Zabaykalsky/2013 and A/2166/Krasnodarsky/2013 were officially registered and deposited in the All-Russian State Collection of Exotic Types of Foot-and-Mouth Disease Virus and Other Animal Pathogens of the Federal Center for Animal Health “ARRIAH”. These strains were also recommended for inclusion in the trivalent vaccine in the FMD-free regions of the Russian Federation. In conclusion, the genetic analysis of FMDV isolates from various regions of the Russian Federation in 2013–2014 revealed that these isolates belonged to two different lineages: SEA-97 and Iran-05. The present study underscores the necessity for a more rigorous FMD surveillance program in the highlighted regions and the importance of prompt intervention strategies, continuous surveillance, and appropriate vaccine strain selection for the effective management and control of FMDV.
Health-related quality of life after radical cystectomy for bladder cancer in elderly patients with ileal orthotopic neobladder, ureterocutaneostomy or ileal conduit: cross-sectional study using validated questionnaires
4fbc7a03-5c29-47ce-9cf1-5bcfaedb730d
11892295
Surgical Procedures, Operative[mh]
Radical cystectomy (RC) and urinary diversion (UD) are the gold standard procedures for localized muscle-invasive and high-risk nonmuscle-invasive bladder cancer that is unresponsive to endoscopic treatment or intravesical BCG . The choice of UD technique depends on factors such as patient condition, age, life expectancy, comorbidities, continence status, renal function, and surgeon experience. UD surgery impacts various aspects of health-related quality of life, including physical, sexual, and psychosocial well-being; daily activities; and body image-related problems . This study aims to compare perioperative data and postoperative quality of life outcomes, which are mentioned less frequently in the literature, for three different urinary diversion techniques (orthotopic neobladder (ONB), ileal conduit (IC), and ureterocutaneostomy (UC)) via the SF-36 and Barthel indices. Study design and selection criteria We conducted a single-center cross-sectional study at the University of Health Sciences, Adana City Teaching and Research Hospital in Turkey. This study was conducted in accordance with the principles of the Declaration of Helsinki, and the study protocol was approved by the ethics committee of the University of Health Sciences, Adana City Training and Research Hospital Scientific Research (Date/number: 07.12.2023–141/2997). Informed consent was obtained from all patients. Data were collected from records of patients who underwent RC and UD between February 2018 and January 2024, with quality-of-life surveys administered in person and by phone. Surgery, age, sex, BMI (kg/m²), complications, operation time, hospital stay, drain withdrawal time, hemoglobin changes, follow-up period, and creatinine changes were recorded. Patients were divided into three groups: UC, IC, and ONB. The Barthel index and SF-36 questionnaires were administered to each group. We excluded individuals who died from noncancer causes within the first three postoperative months, had no follow-up data, had local recurrence or metastasis, or had other oncological diseases. UC was performed for those with positive lymph nodes and positive urethral surgical margins, those with serious neurological/psychiatric diseases and limited life expectancy, those with impaired liver/kidney functions, those who received high-dose preoperative radiotherapy (RT), those with complex urethral stenosis and incontinence, those with inflammatory bowel disease, and those who underwent salvage cystectomy. Surgical technique All patients underwent open RC via standard techniques. For IC, a 15 cm ileal segment was isolated 20 cm proximal to the ileocecal valve; the ureters were spatulated, and the anastomosis was performed using the Wallace technique in 13 patients and the Bricker technique in 29 patients. For ONB, a 50 cm ileal segment was shaped in a W configuration, and ureters were implanted via the Hautmann technique. For UC, a V- or U-shaped skin flap was created, the ureter was extraperitonealized, and a double J stent was inserted before the skin was closed with 4–0 polyglactin sutures . Follow-up data collection In this study, survey data were recorded from 42 patients with IC, 11 patients with ONB, and 39 patients with UC who survived and completed the questionnaire, out of a total of 65 patients treated with IC, 15 treated with ONB, and 59 treated with UC.
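As a quick arithmetic check on the counts above, the per-group questionnaire completion rates can be reproduced directly; this is a purely illustrative calculation and not part of the original analysis:

```python
# Illustrative arithmetic only: survey completion rates implied by the counts reported above.
treated = {"IC": 65, "ONB": 15, "UC": 59}
responded = {"IC": 42, "ONB": 11, "UC": 39}

for group, total in treated.items():
    rate = 100.0 * responded[group] / total
    print(f"{group}: {responded[group]}/{total} = {rate:.1f}%")  # IC 64.6%, ONB 73.3%, UC 66.1%
```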
The SF-36 is a validated 36-item questionnaire assessing eight health-related quality of life domains: physical, social, role (physical and emotional), mental, energy, bodily pain, and general health perception. It also includes a single item about health status changes. The scores for each domain range from 0 (worst) to 100 (best) . The Barthel Index (BI) is a validated questionnaire assessing the independence of geriatric patients in self-care (feeding, bathing, dressing, toileting) and mobility (transfers, stairs) related to daily activities. Each of the ten items has two to four response categories, with total scores ranging from 0 (fully dependent) to 100 (fully independent) . Few studies have reported the use of the BI in urological contexts, particularly among candidates for radical cystectomy . Statistical analysis Categorical measurements are summarized with numbers and percentages, and numerical measurements are summarized with means and standard deviations. The chi square test was used for categorical comparisons, the Pearson chi square test was used in cases where there was no expected value problem, and Fisher’s exact test was used in cases where there was an expected value problem. The Shapiro‒Wilk test was used to check the assumption of a normal distribution. When comparing numerical measurements, one-way analysis of variance was chosen if the assumptions were met, and if not, the Kruskal‒Wallis test was chosen. For significant findings, Tukey’s test or the Games–Howell test was used for pairwise comparisons, and if the assumption was not met, Bonferroni correction was performed with the Mann‒Whitney U test. IBM SPSS Statistics 20.0 was used for statistical analysis, and the significance level was set at 0.05 for all tests.
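The group-comparison strategy described above (normality check, then a parametric or non-parametric omnibus test, then corrected pairwise comparisons) can be sketched roughly as follows. This is an illustrative outline using SciPy rather than the authors' SPSS workflow, and the function name and example data are invented:

```python
# Rough sketch of the test-selection workflow described above (not the authors' SPSS code).
from itertools import combinations
from scipy import stats

def compare_groups(groups, alpha=0.05):
    """groups maps a diversion type (e.g. 'UC', 'IC', 'ONB') to a list of measurements."""
    # 1) Shapiro-Wilk normality check in each group
    normal = all(stats.shapiro(values).pvalue > alpha for values in groups.values())
    # 2) Omnibus test: one-way ANOVA if the normality assumption holds, otherwise Kruskal-Wallis
    omnibus = stats.f_oneway(*groups.values()) if normal else stats.kruskal(*groups.values())
    results = {"normal": normal, "omnibus_p": omnibus.pvalue, "pairwise": {}}
    # 3) If the omnibus test is significant, pairwise Mann-Whitney U tests with Bonferroni
    #    correction (the paper additionally uses Tukey or Games-Howell when ANOVA assumptions hold)
    if omnibus.pvalue < alpha:
        pairs = list(combinations(groups, 2))
        for a, b in pairs:
            p = stats.mannwhitneyu(groups[a], groups[b]).pvalue
            results["pairwise"][(a, b)] = min(1.0, p * len(pairs))  # Bonferroni-adjusted p value
    return results

# Invented example data (e.g. operation times in minutes)
print(compare_groups({"UC": [180, 200, 190], "IC": [250, 260, 240], "ONB": [320, 330, 310]}))
```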
Demographic and perioperative data from 139 patients were compared, revealing similar mean ages, sex distributions, BMIs, and hemoglobin changes across the groups ( P = 0.282, P = 0.506, P = 0.697, P = 0.128). The UC group had the shortest operation time, whereas the ONB group had the longest operation time ( P < 0.001). Hospital stays were shorter in the UC group, with no difference between the IC and ONB groups ( P = 0.002). The drain withdrawal and follow-up periods were also shorter in the UC group ( P = 0.002, P < 0.001), with no significant differences between the IC and ONB groups. Preoperative and postoperative creatinine changes are detailed in Table . Among 92 patients who completed the quality of life survey, significant differences were noted only for emotional function and fatigue scores ( P = 0.016, P = 0.001), with higher emotional scores in the ONB group and higher fatigue scores in the UC group.
A relationship was observed between the occurrence of complications in patients and both the length of hospital stay and ASA score. Accordingly, the presence of complications increases the length of hospital stay, while an ASA score of 4 increases the risk of complications. A relationship was observed between the occurrence of complications and the follow-up duration in patients. Accordingly, the presence of complications reduces the follow-up duration. A relationship was observed between mortality in patients and surgery duration, preoperative Hgb levels, and Charlson Comorbidity Index (CCI) score. Accordingly, patients who died had shorter surgery duration, lower preoperative Hgb levels, and higher CCI scores compared to patients who survived. A relationship was observed between mortality in patients and follow-up duration, as well as preoperative and postoperative creatinine levels. Accordingly, patients who died had shorter follow-up durations and consistently higher creatinine levels during both preoperative and postoperative periods compared to patients who survived (Table ). No relationship was found between the occurrence of complications in patients and the scale scores. The multivariate analysis for complications identified age and ASA score as the strongest predictors of complications. The following conclusions were derived from the analysis: A one-year decrease in age was associated with a 1.11-fold increase in the risk of complications (95% CI: 1.02–1.21). An ASA score of 4 was associated with a 31.98-fold increase in the risk of complications (95% CI: 3.97–257.64) compared to scores of 2 or 3 (Table ). The multivariate analysis for mortality identified CCI score and preoperative hemoglobin as the strongest predictors of mortality. The following conclusions were derived from the analysis: A one-unit increase in CCI score was associated with a 1.84-fold increase in the risk of mortality (95% CI: 1.31–2.58). A one-unit decrease in pre-op Hgb value was associated with a 1.32-fold increase in the risk of mortality (95% CI: 1.04–1.67) (Table ). Despite the finding that pre-operative albumin levels exhibited significant disparities between the UC and IC groups, these differences were not deemed to be of clinical significance, as both measurements fell within the clinical normal range. In the IC group, one patient required additional intervention for colostomy and died 11 months later. In the ONB group, one patient developed an anastomotic leak and died 6 months later. In the UC group, one patient died from rectal perforation; two patients underwent ileum resections, two had colostomies, and one had a splenectomy and survived. Urinary diversions are chosen on the basis of patient preference, performance status, life expectancy, and oncological control. The most common types are IC and ONB in clinical practice . While ONB benefits quality of life, including social function and body image, it is technically more challenging and has a higher reoperation rate. IC is easier and quicker to perform . The simplest form, UC, is suitable for severely ill patients (ASA score 3), as it reduces complications and metabolic issues by not using intestinal segments. This method is also appropriate for patients who need anticoagulation, those with inflammatory bowel disease, or those with a history of multiple intestinal surgeries . Vladimir et al.
reported that they aimed to reduce operation time by applying UC to 35% of patients with poor health status . Deliveliotis, Longo, Suzuki, and Kilciler reported that UC had an approximately 80-min shorter operative time than IC did ( p < 0.001), whereas Knap et al. reported an operative time that favored IC, although the difference was not statistically significant (IC: 280 min, UC: 337 min) . Deliveliotis and Longo also reported shorter hospital stays in the UC group than in the IC group . Kilciler et al. reported no increased risk of reoperation, complications, or longer hospital stays with UC than with IC, indicating that UC is a safe alternative with low complication rates . However, some studies reported contrary findings . In our study, the operative time was shortest in the UC group, longer in the IC group, and longest in the ONB group ( p < 0.001) (Table ). A shorter operation time is crucial, as it lowers anesthesia risk, particularly for high-risk patients. These findings align with the literature. Similarly, studies by Mete Kilciler and Sumit Sainin reported comparable blood losses in patients undergoing IC and UC . Moeen et al. reported no significant difference in the number or rates of blood transfusions between continent (ONB and ureterosigmoidostomy) and incontinent (UC and IC) groups . In our study, the changes in hemoglobin among the UC, IC, and ONB groups were statistically similar ( P = 0.128) (Table ). We believe that being a high-volume center with increased experience reduces the need for blood transfusions. In UC, the risk of renal function deterioration is greater than that in IC because of recurrent pyelonephritis and hydronephrosis caused by stomal stenosis . A study comparing four urinary diversion methods (UC, IC, an ileal neobladder, and an ileocecal neobladder) revealed no significant differences in renal function. However, recurrent acute pyelonephritis and chemotherapy may contribute to renal function deterioration . In our study, the preoperative and postoperative creatinine values did not significantly differ between the UC and ONB groups, whereas the IC group exhibited a statistically significant increase ( p = 0.004) (Table ). Although creatinine increased in all groups compared with the preoperative values, the increases in the UC and ONB groups were less apparent than in the IC group (Fig. ). This may be due to regular urine flow in UC and the low-pressure reservoir function in ONB. The increase in creatinine in the IC group may be linked to factors such as small reservoir size, reflux, anastomotic stenosis, and lower overall kidney function. In our study, the mean follow-up duration in the UC group was determined to be 11.7 ± 13.5 months. Kulaksızoğlu et al. reported a follow-up duration of 12 months and stated that health-related quality of life returned to baseline values within this period . The study by Okada and colleagues, however, included patients with a minimum follow-up period of 11 months . In another study, although the average follow-up period was more than 1 year, patients with a minimum follow-up of 6 months were also reported . Although our findings are generally consistent with the literature, the large standard deviation indicates substantial individual variability. Quality of life after urinary diversion (UD) is influenced by factors such as age, comorbidities, UD type, complications, patient expectations, oncological failure, and surgeon experience. It is recommended that UD be performed by experienced surgeons in high-volume centers .
Our clinic in southern Turkey is a high-volume oncology center, where radical cystectomy and urinary diversion are routinely performed by expert surgeons. A few studies have compared the quality of life among three different diversion methods (UC, IC, ONB) . In our study, we assessed the groups via the SF-36 and the Barthel index , both of which are quality-of-life surveys. This is notable, as this is the first study to utilize these questionnaires within the same patient groups. Erber et al. administered the EORTC QLQ-C30 and the QLQ-BLM30 quality-of-life questionnaires to patients with IC and ONB. They reported that ONB was associated with significantly better physical function and global health status, although the other functions (role, emotional, cognitive, social) were not significantly different, despite ONB scoring better. Diarrhea was significantly more common in ONB patients, and ileus was also more frequently detected in ONB patients; however, Parekh et al. reported higher ileus rates in IC patients. Nieuwenhuijzen et al. reported similar ileus rates in both IC and ONB, indicating that intestinal issues such as ileus can be multifactorial . In our study, bowel issues (diarrhea, constipation, ileus) were more common in the UC group, although this difference was not statistically significant ( p = 0.769). Although fewer complaints are expected since the bowel segment is not used, the increased prevalence of bowel problems in this patient group may be attributed to their greater debilitation and vulnerability to anesthesia complications, as well as their additional comorbidities. In another study, the EORTC QLQ-C30 and QLQ-BLM30 questionnaires were administered to elderly patients who underwent IC and ONB. These findings suggest that ONB may offer advantages over IC for elderly patients, particularly with respect to cognitive and bowel function. However, both types of urinary diversion can provide satisfactory quality of life if there are no long-term complications and if the reservoir functions well . Elbadry et al. administered the Functional Assessment of Cancer Therapy (FACT-BL) questionnaire to patients with IC and ONB. IC patients outperformed ONB patients in all domains, with physical and functional well-being and general FACT-G scores showing statistical significance. However, ONB patients reported significantly fewer urinary control issues and better perceptions of their body appearance . Saika et al. evaluated the quality of life associated with ONB, IC, and UC diversion methods in elderly patients via the EORTC QLQ-C30. They reported that ONB demonstrated better physical function results, attributing this to challenges with stoma management and body image concerns among the elderly . Mucciardi et al. reported that UC was well tolerated in high-risk patients because of its short operation time and hospital stay, low complication rates, and significant improvements in quality of life for those who underwent the procedure . In another quality of life study, patients were divided into two groups: Group I included ONB patients and ureterosigmoidostomy patients, whereas Group II included UC patients and IC patients. They utilized the FACT-BL and the Sexual Health Inventory for Men (SHIM) to assess erectile function. The results indicated that Egyptian patients preferred continent urinary diversion to minimize daily life restrictions, even ureterosigmoidostomy, which is the least preferred method.
Additionally, active patient participation has been shown to enhance postoperative quality of life . Ali A. S. et al. analyzed 2,285 patients across 21 studies and found that a variety of quality of life questionnaires were used. In 16 of the studies, there was no significant difference, while 4 studies favored ONB, and urinary leakage was reported in 1 study involving ONB patients. Overall, ONB patients demonstrated slightly better quality of life scores than IC patients did, particularly in younger and fitter patients . Thulin et al. reported increased nocturnal urinary incontinence in ONB patients, which negatively impacted their sleep quality and reduced their health-related quality of life (HrQoL) compared with other diversion methods. Consequently, lower HrQoL was linked to poorer physical health and energy levels . In our study, among the general and subscale quality of life scores, we found statistically significant between-group differences only in emotional function and fatigue. Compared with the other two groups, the ONB group had higher emotional function scores, whereas the UC group reported greater fatigue ( P = 0.016; P = 0.001) (Table ). This may be due to poor sleep quality related to the debilitation and comorbidities of UC patients, as well as technical issues with the urostomy bag at night. The higher emotional function score in the ONB group was linked to better urinary control and improved body image, which helped reduce social issues, and patients felt better outdoors. One study reported that age, postoperative complications, BMI, and tumor-related factors did not influence postoperative quality of life. Yannic Volz et al. reported that while bladder-preserving approaches gained importance, the choice of urinary diversion affected quality of life. However, they did not find significant long-term differences in symptoms, functionality, or overall quality of life . The limitations of our study include the short follow-up period, its single-center design, the small number of patients in the ONB group, and the low response rates among patients and their relatives who participated in the survey. Although it is known that there should be at least a one-year period between surgery and the interview, another limitation is that patients were included in the study at different time points, based on their similar postoperative recovery times. In conclusion, our results concerning the quality of life scales for the three urinary diversion methods revealed that many subscores were similar; however, emotional function was greater in the ONB group, whereas fatigue was greater in UC patients. According to the results of our study, ONB may be more appropriate for young and fit patients, while UC may be more appropriate for high-risk elderly patients. Although individual factors and the physician's experience are taken into account in the selection of urinary diversion, quality of life questionnaires and patient counseling are essential. The advantages and disadvantages of the operation should be discussed in detail, and the patient should be fully informed.
Minimally invasive lateral, posterior, and posterolateral sacroiliac joint fusion for low back pain: a systematic review and meta-analysis
f017a0c8-766b-4969-906d-056682b408a2
11806475
Surgical Procedures, Operative[mh]
Low back pain is very common in today’s society and often seriously affects patients’ quality of life. Some causes of low back pain are related to sacroiliac joint diseases, which need to be confirmed by physical examinations, such as a pelvic compression separation test and Patrick’s test. Sacroiliac joint diseases may be treated conservatively and/or surgically. Conservative treatment includes medication, physiotherapy, acupuncture, and local blocking. If conservative treatment is ineffective, sacroiliac joint fusion may be considered. – Minimally invasive sacroiliac joint fusion has the advantages of less blood loss and a shorter operation time than open fusion, and includes three surgical approaches: lateral, posterior and posterolateral. There are currently few comparative studies of the three surgical approaches in minimally invasive sacroiliac joint fusion, and most meta-analyses have not provided separate analyses for each. The purpose of the present study was to conduct a systematic review and meta-analysis evaluating the therapeutic effects of minimally invasive lateral, posterior, and posterolateral sacroiliac joint fusion on low back pain of sacroiliac joint origin. This study followed the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) 2020 guidelines, and was registered on PROSPERO under registration No. CRD42023451047. Search strategy The PubMed, Web of Science, Embase, Cochrane Library, and ClinicalTrials.gov databases were searched for articles related to minimally invasive sacroiliac joint fusion for low back pain of sacroiliac joint origin, published up to 31 August 2024. The Medical Subject Heading (MeSH) search terms included sacroiliac joint/surgery and minimally invasive surgical procedures. These keywords and their corresponding free words, together with Boolean operators, were used to create search formulas. The detailed search strategy is provided in Supplemental materials. After removal of duplicates, titles and abstracts were independently screened by two independent reviewers (KX and YLL), with any disagreements resolved by a third reviewer (SHX), followed by full-text review of remaining articles by two independent reviewers (KX and YLL), according to the inclusion and exclusion criteria. In addition, the references of the screened articles were read as an alternative way to identify articles suitable for this meta-analysis. The final selected articles were classified according to the surgical approaches: lateral, posterior, and posterolateral. Inclusion and exclusion criteria The inclusion criteria comprised: (1) studies on low back pain of sacroiliac joint origin; (2) studies on minimally invasive sacroiliac joint fusion for low back pain; (3) studies with a follow-up period of more than 6 months; (4) studies published in English; and (5) quantitative studies. The exclusion criteria comprised: (1) studies including patients with neoplastic disease; (2) studies including patients with acute traumatic diseases; (3) review articles; (4) animal studies; (5) studies with incomplete data; (6) studies on revisional surgery of the sacroiliac joint; (7) studies including patients with infectious diseases; (8) studies with a sample size less than 10; and/or (9) studies with overlapping data. Assessment of studies The methodological quality of included studies was assessed independently by two authors (SHX and YLL). 
Cohort studies, a type of observational study design for evaluating the association between exposure and outcomes, with participants divided into exposed and non-exposed groups, were evaluated for quality using the Newcastle-Ottawa Scale, with a maximum of 9 points. Each cohort study was assessed based on eight items, which were divided into three categories: selection, comparability and outcome. All cohort studies mentioned in the present manuscript were comparative studies, not single-arm studies. The Cochrane Collaboration’s tool was used to assess the quality of randomized controlled trials (RCTs), focusing on seven key evaluation criteria, including random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, completeness of outcome data, selective reporting, and identification of potential sources of bias. For case series and prospective multicenter single-arm studies, quality evaluation was conducted using the Joanna Briggs Institute critical appraisal tool, which comprises 10 questions assessing the internal validity and risk of bias. A case series was defined as a descriptive study design that focuses on the characteristics, clinical presentations, treatment responses, and outcomes of patients, without including a control group. Data extraction The following data were extracted from the included articles: (1) demographic parameters, including study type, approach, sample size (minimally invasive sacroiliac joint fusion, MISIJF), sex, age, internal implant name, comparison group, follow-up time and prior lumbar fusion proportion; (2) visual analog scale (VAS) score for low back pain preoperatively, and at 6 and 12 months postoperatively; and (3) the total complication rate, revision rate, and fusion rate during the follow-up period. The total complication rate included any intraoperative and postoperative complication. Fusion was defined as bone bridging within the sacroiliac joint and absence of screw loosening on the radiographic image. For both RCTs and cohort studies, only participants who received minimally invasive sacroiliac joint fusion were analyzed. All the parameters were confirmed by two authors (KX and YLL), and in case of disagreement, the two authors negotiated to reach a consensus. Outcomes Improvement of VAS score was defined as the mean difference in low back pain scores before and after surgery. The VAS score improvements were calculated at 6 and 12 months postoperatively and pooled estimates for different surgical approaches are presented. Additionally, the pooled total complication rate, pooled revision rate, and pooled fusion rate were calculated and reported for the various surgical techniques. Certainty of evidence The Grading of Recommendations Assessment, Development and Evaluation (GRADE) was used to evaluate the certainty of evidence for the meta-analysis. Using the GRADE approach, the study design was first considered, followed by examination of various factors for potentially downgrading or upgrading the quality of a body of evidence, which was finally classified into one of four levels: very low, low, moderate or high. GRADE was used to assess the quality of evidence for each pooled effect estimate. Statistical analyses Meta-analysis, sensitivity analysis and publication bias testing (Egger’s test) were performed using Stata 14 (2015) statistical software (StataCorp LLC, College Station, TX, USA). 
If the number of studies included in the meta-analysis exceeded 10, a publication bias assessment was conducted. Data are presented as mean and SD, with results of individual studies and syntheses displayed utilizing forest plots. When calculating the pooled parameter, P (Q test) <0.1 or I 2 > 50% was considered to indicate heterogeneity. A leave-one-out sensitivity analysis was conducted to identify potential sources of heterogeneity. Specifically, one study was removed at a time and the meta-analysis was re-run to assess the impact of each study on the overall results. After exploring potential factors contributing to the heterogeneity in the identified study, the study may be excluded. However, caution should be exercised when excluding studies in meta-analyses with a small number of included studies. If no heterogeneity was present, a fixed-effect model was used to combine effect sizes; if heterogeneity existed, a random-effect model was applied.
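To make the pooling and sensitivity logic above concrete, below is a minimal illustrative sketch of inverse-variance fixed-effect pooling, Cochran's Q, I², a DerSimonian–Laird random-effects estimate, and a leave-one-out loop. It is not the authors' Stata code, the model-selection rule is simplified to the I² criterion alone (the paper also considers the Q-test P < 0.1 criterion), and the per-study effects and variances are invented:

```python
# Minimal sketch of the pooling and leave-one-out logic described above (not the authors' Stata code).
import numpy as np

def pool(effects, variances, z=1.96):
    e = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * e) / np.sum(w)
    q = np.sum(w * (e - fixed) ** 2)              # Cochran's Q
    df = len(e) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)    # DerSimonian-Laird between-study variance (tau^2)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    if i2 > 50.0:                                 # heterogeneity -> random-effects weights
        w_used = 1.0 / (v + tau2)
    else:                                         # no heterogeneity -> fixed-effect weights
        w_used = w
    pooled = np.sum(w_used * e) / np.sum(w_used)
    se = np.sqrt(1.0 / np.sum(w_used))
    return {"pooled": pooled, "ci": (pooled - z * se, pooled + z * se), "Q": q, "I2": i2}

# Invented per-study mean VAS improvements and variances of those means
effects, variances = [4.1, 5.2, 4.6, 3.8], [0.09, 0.16, 0.12, 0.25]
print("all studies:", pool(effects, variances))

# Leave-one-out sensitivity analysis: drop one study at a time and re-pool
for i in range(len(effects)):
    loo_e = effects[:i] + effects[i + 1:]
    loo_v = variances[:i] + variances[i + 1:]
    print(f"without study {i + 1}:", pool(loo_e, loo_v))
```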
Search results A total of 679 articles were identified using the search strategy. After screening and assessing eligible articles, 48 articles were ultimately included in the meta-analysis. , , – The selection process is detailed in . Characteristics of included studies A total of 32 studies investigated the lateral approach, 10 investigated the posterior approach, four investigated the posterolateral approach, and two compared the lateral and posterolateral approaches (the studies by Claus et al. and Cahueque et al. ). The included literature contained three RCTs, , , all of which focused on the lateral approach. In two of the RCTs, , the control group received conservative treatment, and in the RCT by Randers et al., the control group underwent sham surgery. The remaining study types included six cohort studies, , , , , , a total of 33 case series, – , , , , – , , – , , , , , , – and six prospective multicenter single-arm studies. , , , , , All included studies are summarized in . Methodological quality of included studies and GRADE assessment Bias assessments for the case series and prospective multicenter single-arm studies, cohort studies, and RCTs are provided in Supplemental Tables S1, S2, and S3, respectively. Each pooled effect estimate included case series studies, which are considered very low-quality evidence in GRADE assessment, with no potential for upgrading the quality of evidence of case series studies. Therefore, the quality of all pooled effect estimates was rated as very low. Tables summarizing the findings for the lateral approach, posterior approach, and posterolateral approach are provided in Supplemental Tables S4, S5, and S6, respectively. Outcomes of the meta-analysis VAS score improvement at 6 months postoperatively Nine studies on the lateral approach, , , , , , , , , three studies on the posterior approach, , , and three studies on the posterolateral approach, , , reported preoperative and 6-month postoperative VAS scores for low back pain, with pooled mean differences of 4.3 (95% confidence interval [CI] 3.6, 5.0; I 2 = 92.2%, Q test P value <0.001), 4.8 (95% CI 3.6, 6.0; I 2 = 86.1%, Q test P value = 0.001), and 3.0 (95% CI 1.6, 4.4; I 2 = 89.9%, Q test P value <0.001), respectively . Following the leave-one-out analysis, no studies were excluded. In these studies, significant pain reduction was consistently observed. VAS score improvement at 12 months postoperatively Fifteen studies on the lateral approach, , , , , , , , , , , , – two on the posterior approach, , and four on the posterolateral approach, , , , reported preoperative and 12-month postoperative VAS scores for low back pain, with pooled mean differences of 5.0 (95% CI 4.5, 5.4; I 2 = 85.3%, Q test P value <0.001), 4.9 (95% CI 3.6, 6.2; I 2 = 82.5%, Q test P value = 0.017), and 3.8 (95% CI 1.9, 5.7; I 2 = 96.5%, Q test P value = 0.001), respectively (Supplemental Figure S1). Leave-one-out analysis revealed no source of heterogeneity for all approaches. No publication bias was identified in the lateral approach studies. Significant postoperative reduction in pain was consistently observed in these studies.
Thus, the impact of preoperative pain scores on VAS score improvement at 12 months postoperatively was analyzed. For studies with preoperative VAS scores ≥8, the pooled mean difference in postoperative scores was 5.18 (95% CI 4.72, 5.64), which was numerically higher than that for studies with preoperative VAS scores <8, at 3.94 (95% CI 2.88, 5.00; Supplemental Figure S2). Total complication rate Sixteen studies on the lateral approach, , , , , ,, , , , , , , , – six on the posterior approach, , – , and three on the posterolateral approach, , , reported total complication rates, with pooled total complication rates of 9.2% (95% CI 4.4%, 15.2%; I 2 = 83.0%, Q test P value <0.001), 1% (95% CI 0.1%, 2.6%; I 2 = 22.1%, Q test P value = 0.267), and 3.7% (95% CI 0.0%, 21.0%; I 2 = 77.9%, Q test P value = 0.011), respectively . Leave-one-out analysis revealed no source of heterogeneity for the lateral and posterolateral approaches. No publication bias was detected in the lateral approach studies. Revision rate Twenty-eight studies on the lateral approach, , , – , – , – , , – , , – seven on the posterior approach, , – , , and five on the posterolateral approach, , – reported revision rates, with pooled revision rates of 2.4% (95% CI 1.3%, 3.9%; I 2 = 49.6%, Q test P value = 0.002), 0.6% (95% CI 0.0%, 1.8%; I 2 = 42.4%, Q test P value = 0.108), and 0.9% (95% CI 0.0%, 2.9%; I 2 = 0.0%, Q test P value = 0.875), respectively . Leave-one-out analysis revealed no source of heterogeneity in the lateral approach. No publication bias was detected in the lateral approach studies. Analyses of the impact of different implant types on the revision rate for the lateral approach revealed that the pooled revision rate for iFuse implants was 3.3% (95% CI 2.1%, 4.7%), which was numerically higher than that for non-iFuse implants at 1.1% (95% CI 0.2%, 2.4%; Supplemental Figure S3). Fusion rate Seven studies on the lateral approach and four studies on the posterolateral approach reported fusion rates, , , , , – , – with pooled fusion rates of 88.1% (95% CI 76.7%, 96.4%; I 2 = 83.0%, Q test P value <0.001) and 95.2% (95% CI 84.7%, 100.0%; I 2 = 76.0%, Q test P value = 0.006), respectively. Following the leave-one-out analysis, no studies were excluded. The pooled fusion rate for four studies on the posterior approach was 66.9% (95% CI 29.9%, 95.5%; I 2 = 91.5%, Q test P value <0.001). – , Through leave-one-out analysis, the study by Fuchs et al. was identified as the source of heterogeneity, with a fusion rate of 31%, significantly lower than other studies, possibly due to suboptimal implant positioning and shorter follow-up duration. After excluding this study, heterogeneity decreased, and the pooled fusion rate was recalculated using a fixed-effects model to be 83.1% (95% CI 69.5%, 93.8%; I 2 = 0.0%, Q test P value = 0.427; Supplemental Figure S4). Summary of the three approaches The outcomes of the three surgical approaches are summarized in . The pooled complication rate for the lateral approach was 9.2% (95% CI 4.4%, 15.2%), numerically higher than 1% (95% CI 0.1%, 2.6%) for the posterior approach. The pooled revision rate for the lateral approach was 2.4% (95% CI 1.3%, 3.9%), also numerically higher than 0.6% (95% CI 0%, 1.8%) for the posterior approach. The remaining indicators were numerically similar.
, , – The selection process is detailed in . A total of 32 studies investigated the lateral approach, 10 investigated the posterior approach, four investigated the posterolateral approach, and two compared the lateral and posterolateral approaches (the studies by Claus et al. and Cahueque et al. ). The included literature contained three RCTs, , , all of which focused on the lateral approach. In two of the RCTs, , the control group received conservative treatment, and in the RCT by Randers et al., the control group underwent sham surgery. The remaining study types included six cohort studies, , , , , , a total of 33 case series, – , , , , – , , – , , , , , , – and six prospective multicenter single-arm studies. , , , , , All included studies are summarized in . Bias assessments for the case series and prospective multicenter single-arm studies, cohort studies, and RCTs are provided in Supplemental Tables S1, S2, and S3, respectively. Each pooled effect estimate included case series studies, which are considered very low-quality evidence in GRADE assessment, with no potential for upgrading the quality of evidence of case series studies. Therefore, the quality of all pooled effect estimates was rated as very low. Tables summarizing the findings for the lateral approach, posterior approach, and posterolateral approach are provided in Supplemental Tables S4, S5, and S6, respectively. VAS score improvement at 6 months postoperatively Nine studies on the lateral approach, , , , , , , , , three studies on the posterior approach, , , and three studies on the posterolateral approach, , , reported preoperative and 6-month postoperative VAS scores for low back pain, with pooled mean differences of 4.3 (95% confidence interval [CI] 3.6, 5.0; I 2 = 92.2%, Q test P value <0.001), 4.8 (95% CI 3.6, 6.0; I 2 = 86.1%, Q test P value = 0.001), and 3.0 (95% CI 1.6, 4.4; I 2 = 89.9%, Q test P value <0.001), respectively . Following the leave-one-out analysis, no studies were excluded. In these studies, significant pain reduction was consistently observed. VAS score improvement at 12 months postoperatively Fifteen studies on the lateral approach, , , , , , , , , , , , – two on the posterior approach, , and four on the posterolateral approach, , , , reported preoperative and 12-month postoperative VAS scores for low back pain, with pooled mean differences of 5.0 (95% CI 4.5, 5.4; I 2 = 85.3%, Q test P value <0.001), 4.9 (95% CI 3.6, 6.2; I 2 = 82.5%, Q test P value = 0.017), and 3.8 (95% CI 1.9, 5.7; I 2 = 96.5%, Q test P value = 0.001), respectively (Supplemental Figure S1). Leave-one-out analysis revealed no source of heterogeneity for all approaches. No publication bias was identified in the lateral approach studies. Significant postoperative reduction in pain was consistently observed in these studies. Thus, the impact of preoperative pain scores on VAS score improvement at 12 months postoperatively was analyzed. For studies with preoperative VAS scores ≥8, the pooled mean difference in postoperative scores was 5.18 (95% CI 4.72, 5.64), which was numerically higher than that for studies with preoperative VAS scores <8, at 3.94 (95% CI 2.88, 5.00; Supplemental Figure S2). 
Total complication rate
Sixteen studies on the lateral approach, six on the posterior approach, and three on the posterolateral approach reported total complication rates, with pooled total complication rates of 9.2% (95% CI 4.4%, 15.2%; I² = 83.0%, Q test P value <0.001), 1% (95% CI 0.1%, 2.6%; I² = 22.1%, Q test P value = 0.267), and 3.7% (95% CI 0.0%, 21.0%; I² = 77.9%, Q test P value = 0.011), respectively. Leave-one-out analysis revealed no source of heterogeneity for the lateral and posterolateral approaches. No publication bias was detected in the lateral approach studies.

Revision rate
Twenty-eight studies on the lateral approach, seven on the posterior approach, and five on the posterolateral approach reported revision rates, with pooled revision rates of 2.4% (95% CI 1.3%, 3.9%; I² = 49.6%, Q test P value = 0.002), 0.6% (95% CI 0.0%, 1.8%; I² = 42.4%, Q test P value = 0.108), and 0.9% (95% CI 0.0%, 2.9%; I² = 0.0%, Q test P value = 0.875), respectively. Leave-one-out analysis revealed no source of heterogeneity in the lateral approach. No publication bias was detected in the lateral approach studies. Analyses of the impact of different implant types on the revision rate for the lateral approach revealed that the pooled revision rate for iFuse implants was 3.3% (95% CI 2.1%, 4.7%), which was numerically higher than that for non-iFuse implants at 1.1% (95% CI 0.2%, 2.4%; Supplemental Figure S3).

Fusion rate
Seven studies on the lateral approach and four studies on the posterolateral approach reported fusion rates, with pooled fusion rates of 88.1% (95% CI 76.7%, 96.4%; I² = 83.0%, Q test P value <0.001) and 95.2% (95% CI 84.7%, 100.0%; I² = 76.0%, Q test P value = 0.006), respectively. Following the leave-one-out analysis, no studies were excluded. The pooled fusion rate for four studies on the posterior approach was 66.9% (95% CI 29.9%, 95.5%; I² = 91.5%, Q test P value <0.001). Through leave-one-out analysis, the study by Fuchs et al. was identified as the source of heterogeneity, with a fusion rate of 31%, significantly lower than other studies, possibly due to suboptimal implant positioning and shorter follow-up duration. After excluding this study, heterogeneity decreased, and the pooled fusion rate was recalculated using a fixed-effects model to be 83.1% (95% CI 69.5%, 93.8%; I² = 0.0%, Q test P value = 0.427; Supplemental Figure S4).
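To illustrate how pooled proportions and heterogeneity statistics of the kind reported above are typically derived, the following is a minimal sketch of DerSimonian–Laird random-effects pooling of raw (untransformed) event proportions in Python, using made-up study counts rather than the actual data from the included studies; published analyses often apply a variance-stabilizing (e.g., double-arcsine) transformation, which is omitted here for brevity.

```python
import numpy as np

# Hypothetical per-study complication counts (events) and sample sizes (n); not the real data.
events = np.array([4, 2, 7, 1, 5])
n = np.array([45, 30, 80, 25, 60])

p = events / n                       # per-study proportions
var = p * (1 - p) / n                # binomial variance of each proportion
var = np.where(var == 0, 1e-6, var)  # guard against zero variance when events == 0

# Fixed-effect (inverse-variance) pooling.
w = 1 / var
p_fixed = np.sum(w * p) / np.sum(w)

# Cochran's Q, between-study variance tau^2 (DerSimonian-Laird), and I^2.
Q = np.sum(w * (p - p_fixed) ** 2)
df = len(p) - 1
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

# Random-effects pooled proportion and 95% CI.
w_re = 1 / (var + tau2)
p_re = np.sum(w_re * p) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
ci = (p_re - 1.96 * se_re, p_re + 1.96 * se_re)

print(f"pooled proportion = {p_re:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), I^2 = {I2:.1f}%")

# A simple leave-one-out sensitivity check: re-pool after dropping each study in turn.
for i in range(len(p)):
    keep = np.arange(len(p)) != i
    wi = 1 / (var[keep] + tau2)
    print(f"without study {i}: pooled = {np.sum(wi * p[keep]) / np.sum(wi):.3f}")
```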
Summary of the three approaches
The outcomes of the three surgical approaches are summarized in . The pooled complication rate for the lateral approach was 9.2% (95% CI 4.4%, 15.2%), numerically higher than 1% (95% CI 0.1%, 2.6%) for the posterior approach. The pooled revision rate for the lateral approach was 2.4% (95% CI 1.3%, 3.9%), also numerically higher than 0.6% (95% CI 0%, 1.8%) for the posterior approach. The remaining indicators were numerically similar.

Diagnosing sacroiliac joint-related pain requires imaging studies to exclude other causes of low back pain, such as lumbar spine disorders and peripheral plexopathies. Peripheral plexopathies often affect multiple nerve roots, while lumbar spine disorders typically involve a single nerve root. Currently, there are two ways to achieve sacroiliac joint fusion: minimally invasive surgery and open surgery. According to the literature, minimally invasive sacroiliac joint fusion is generally believed to be superior to open surgery. For example, minimally invasive sacroiliac joint fusion has been associated with less blood loss and a shorter operation time than open fusion, but with similar Oswestry Disability Index scores. In another study, open sacroiliac joint fusion was associated with greater hospitalization costs than minimally invasive fusion, and minimally invasive sacroiliac joint fusion has been associated with better patient-reported outcomes than open fusion. To the best of our knowledge, there are very few studies comparing different surgical approaches for minimally invasive sacroiliac joint fusion. Claus et al. reported similar pain relief at 6 and 12 months postoperatively between the lateral approach and the posterolateral approach, which is consistent with the present conclusions. Cahueque et al. reported a case of nerve compression following lateral approach surgery, requiring revision surgery. However, no such cases were observed with the posterolateral approach, which aligns with the present pooled findings of a higher complication and revision rate with the lateral approach. In the present study, literature on the treatment of low back pain of sacroiliac joint origin with minimally invasive sacroiliac joint fusion was analyzed. Although pain relief and fusion rates were similar across all approaches, the lateral approach might be associated with a higher risk of total complications and revision surgery. In 2024, a meta-analysis by Ghaddaf et al. concluded that minimally invasive sacroiliac joint fusion using triangular titanium implants is superior to non-surgical treatments in terms of pain relief, functional improvement, and enhanced quality of life. A review by Mehkri et al. suggested that VAS scores significantly decreased during follow-up after minimally invasive sacroiliac joint fusion, with an average reduction of 50.33% at 6 months postoperatively and 61.94% at 12 months postoperatively. The mean fusion rate was 84.92%. These findings are similar to those in the present meta-analysis; however, previous meta-analyses have generally not differentiated between the surgical approaches. Of note, the 2023 study by Whang et al. compared pain relief across three different surgical approaches for sacroiliac joint fusion but did not evaluate pain relief at specific time points, whereas the present review included updated studies and pain improvement was calculated at 6 and 12 months postoperatively.
The studies included in the present investigation displayed a wide range of demographic characteristics, with mean ages spanning between 32 and 69.8 years, and the proportions of female patients ranging from 21% to 100% (see ). These demographic factors may have influenced the differences in therapeutic outcomes among the different surgical approaches. Low back pain may be caused by various conditions located in the lumbar and pelvic regions. Prior lumbar fusion is a common comorbidity, and has been reported in several studies (see ). However, the vast majority of studies did not perform subgroup analyses or provide specific data for such analyses, making it impossible to merge this information. The impact of comorbidities on the treatment outcomes of minimally invasive sacroiliac joint fusion warrants further investigation. At present, there are many studies on minimally invasive sacroiliac joint fusion via the lateral approach, and iFuse is the main implant type. However, there are limited studies involving the posterior and posterolateral approaches. The current meta-analysis indicated that surgery via the posterior approach might offer the advantage of fewer complications. Therefore, minimally invasive sacroiliac joint fusion via the posterior approach has potential for development and deserves additional research. The results of the present meta-analysis may be limited by several factors: (1) few studies on the posterior and posterolateral approaches were included; (2) most of the studies included in this meta-analysis were case series, providing very low-quality evidence. There were no RCTs on the posterior and posterolateral approach; (3) the search strategy required the inclusion of the terms ‘minimally’ or ‘minimal’ in the title or abstract to ensure a focused selection of studies. However, this approach may have constrained the comprehensiveness of the search, potentially omitting studies that did not use these specific keywords; and (4) through the bias assessment, some articles were found to have a high risk of bias. However, since sensitivity analysis indicated that these high-bias articles were not a source of significant heterogeneity, and the number of studies on the posterior and posterolateral approaches was relatively small, exclusion was considered inappropriate. Therefore, the present findings should be interpreted with caution. More high-quality studies on minimally invasive sacroiliac joint fusion are required in the future to obtain more convincing results. Pain relief and fusion rates were similar across all approaches to minimally invasive sacroiliac joint fusion for low back pain of sacroiliac joint origin. However, it is important to note that the lateral approach might be associated with a higher risk of total complications and revision surgery.
Percutaneous vertebroplasty by two-step fluoroscopy: a treatment for osteoporotic compression fractures of thoracic vertebrae in older adults
bf10136c-46b1-4515-822e-ab43487d4c9b
11809072
Surgical Procedures, Operative[mh]
The progressive aging of China’s population has led to an increase in osteoporosis-associated thoracic vertebral fractures . Patients with these fractures often experience severe chest and back pain, along with radiating intercostal pain that can extend to the chest. In some cases, patients may even seek respiratory treatment due to the pain. When left untreated, these fractures can lead to complications such as chronic chest and back pain or thoracic kyphosis, significantly impacting the patient’s quality of life . Percutaneous vertebroplasty (PVP) has become a widely used treatment for osteoporotic vertebral compression fractures (OVCFs) . However, identifying the affected vertebrae in the thoracic region, which consists of 12 vertebrae, can be challenging due to the narrower transverse diameter of the pedicle compared to lumbar vertebrae, as well as the presence of the scapular block . This complexity makes it more difficult to identify affected thoracic vertebrae through lateral X-ray imaging, posing challenges and potential risks during vertebral puncture . The appropriateness of using imaging strategies that expose patients to radiation in this context is uncertain, and there have been relatively few studies focused on this issue . Some scholars have reported using computed tomography (CT) guidance for puncture procedures . However, compared to CT, most operating rooms are equipped with a C-arm machine, which offers simpler programming and more convenient operation. To minimize the number of surgical fluoroscopy sessions and shorten the surgery duration, we employed a two-step fluoroscopy method using a C-arm machine for PVP. During the study period from January 2019 to January 2022, a total of 48 patients with OVCFs of the thoracic vertebrae underwent percutaneous vertebroplasty using this technique. This approach resulted in reduced intraoperative fluoroscopy exposure and a shortened duration of surgery, leading to positive outcomes.

Patients
This study retrospectively analyzed clinical and imaging data from 48 patients (32 female, 16 male) with OVCFs of the thoracic vertebrae who were treated at Yangquan First People’s Hospital from January 2019 to January 2022. The patients’ ages ranged from 60 to 86 years (mean: 69.7 ± 6.8 years), and the duration of their illness ranged from 2 to 35 days (mean: 10.2 ± 3.5 days). All patients reported persistent pain in the chest, waist, and back, which limited their daily activities. X-ray, CT, and magnetic resonance imaging (MRI) scans confirmed single-segment OVCFs in all patients, with 2 cases involving upper segment fractures (T1-4), 26 cases involving middle segment fractures (T5-8), and 20 cases involving lower segment fractures (T9-12). Inclusion criteria: (1) patients with MRI-diagnosed thoracic vertebral compression fractures with pain exhibiting consistent localization to the fracture site detected via plain radiography; (2) patients ≥ 60 years of age in good general condition without any serious cardiovascular or cerebrovascular diseases; (3) patients with severe pain but without any nerve compression or injury-related symptoms; and (4) patients capable of lying in a prone position for 1–2 h and tolerating the surgical procedure.
Exclusion criteria: (1) vertebral body fractures or dislocation complicated by injuries to the nerves or spinal cord; (2) patients with bone cement allergies or coagulatory dysfunction; (3) patients with severe heart and/or lung diseases who were unable to tolerate surgery; or (4) patients with pain found to be caused by disc herniation and vertebral body or paravertebral tumors. The medical ethics committee of Yangquan First People’s Hospital approved this study. All participating patients provided written informed consent and agreed to the publication of these data.

Surgical approach
Surgical instrumentation and reagents
The affected vertebrae were located preoperatively using a C-arm machine, which also monitored the intraoperative puncture process and dynamically tracked the dispersion of bone cement in the vertebral body. The procedure involved using 40 g of polymethyl methacrylate (PMMA) granule powder and 20 mL of water agent for the bone cement.

Surgical procedure
Routine radiography and MRI scans were used for the preoperative examination of the thoracic vertebrae. After the injury of the affected vertebral body was confirmed with MRI, which showed a mild wedge deformation along with diffuse high signal intensity (Fig. ), the affected vertebrae were marked on the anteroposterior and lateral views on X-ray film. Accurately determining the shape of the injured vertebrae and assessing the presence of osteophyte hyperplasia in adjacent vertebrae were crucial for facilitating easier fluoroscopic detection. The patient was positioned in a prone-decubitus position. A two-step procedure was used for C-arm machine monitoring. First, anteroposterior fluoroscopy was adjusted so that the endplate of the lower edge of the injured vertebra appeared flat, with symmetrical bilateral pedicles and a centrally located spinous process. Subsequently, the surface projections of the pedicles were marked bilaterally. After routine disinfection and the placing of aseptic towels, local infiltration anesthesia was administered. The puncture needle was placed in the skin about 2–3 cm on both sides of the target vertebra such that the tip of the needle was located at the posterolateral border of the pedicle. If there were deviations in needle positioning, it could then be slightly adjusted, with 2–3 adjustments being typical. Fluoroscopy was then used to determine whether the puncture needle was properly fixed on the lateral edge of the pedicle, to assess puncture needle depth, and to confirm the scale (typically 3 cm or less). The needle was then further advanced at 5–10° of mild abduction with anteroposterior fluoroscopic monitoring such that the direction of the puncture needle was between the upper and lower edges of the vertebral body. Before surgery, the distance between the entry point of the pedicle and the posterior edge of the vertebral body was measured (generally ~ 2 cm). When the puncture needle reached the medial edge of the pedicle, the entry of the needle into the bone above 2 cm was confirmed. Then, the machine was adjusted to the lateral position to allow clinicians to determine whether the two puncture needles had reached or entered the posterior edge of the vertebral body, after which the puncture needle was advanced into the middle 1/3 of the vertebral body. When the bone cement was prepared to a toothpaste-like consistency, it was injected under fluoroscopic guidance.
Intraoperative fluoroscopy monitoring (Fig. ): (a) needle inserted at the posterior margin of the T7 vertebra; (b) a lateral view of the T7 vertebra with bone cement; (c) an anteroposterior view showing the vertebral body injected with bone cement.

Bone cement injection
Forty grams of PMMA granule powder were thoroughly mixed with 20 ml of a water agent to form a paste, which was then carefully loaded into a syringe attached to a push rod. Once the bone cement reached a toothpaste-like consistency, it was injected under fluoroscopic guidance. Injection was conducted slowly and continuously, and was immediately halted if any cement entered a vein or protruded beyond the vertebral body. The needle core was positioned within the needle path to prevent blockage. Following solidification of the bone cement, the needle sheath was rotated to prevent trailing. Upon completion of the procedure, the patient was allowed to reposition themselves in bed independently. After 24 h post-surgery, they were permitted to wear a brace for ground movement.

Analyses of clinical efficacy
Pain levels were evaluated utilizing visual analog scale (VAS) scores, whereas the anterior edge height, posterior edge height, and Cobb angle of kyphosis were measured preoperatively, and at 2 days, 3 months, 6 months, and 12 months post-procedure. The Cobb angle was defined based on a vertical line between the upper edge of the upper vertebral body of the injured vertebrae and the lower edge of the next vertebral body (Fig. ). All results were independently measured by three clinicians, with the average value being reported and used for all analyses.

Statistical analyses
SPSS 19.0 was used to analyze the results of this study. Normally distributed data are reported as means ± standard error and were compared via one-way analysis of variance (ANOVA), with data within groups being compared using paired t-tests. P < 0.05 served as the cut-off to define statistical significance.
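The analyses above were run in SPSS; purely as an illustration of the same comparisons, a minimal Python sketch using synthetic VAS values (not the actual patient data) might look like the following.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic VAS scores for 48 hypothetical patients at three time points.
vas_preop = np.clip(rng.normal(7.5, 0.6, 48), 0, 10)
vas_2d = np.clip(rng.normal(2.3, 0.6, 48), 0, 10)
vas_3m = np.clip(rng.normal(2.2, 0.5, 48), 0, 10)

# Paired t-test: within-group comparison of preoperative vs postoperative scores.
t_stat, p_paired = stats.ttest_rel(vas_preop, vas_2d)
print(f"preop vs 2 days: t = {t_stat:.2f}, p = {p_paired:.4f}")

# One-way ANOVA across the repeated measurement time points (simplified; a
# repeated-measures ANOVA would model the within-patient correlation explicitly).
f_stat, p_anova = stats.f_oneway(vas_preop, vas_2d, vas_3m)
print(f"ANOVA across time points: F = {f_stat:.2f}, p = {p_anova:.4f}")
```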
In this study, 41 patients underwent bilateral puncture treatment, while 7 underwent unilateral puncture-based treatment, including 5 cases with poor cardiopulmonary function and 2 with more than 2/3 vertebral height loss. All patients reported significant relief from postoperative pain, with 6 receiving a local nerve block for additional pain management. The shortest and longest operative durations recorded were 30 and 51 min, respectively, with a mean duration of 40.0 ± 2.5 min. Hospital stays ranged from 3 to 5 days. Although 9 patients experienced extravertebral leakage, none displayed clinical symptoms or evidence of bone cement overflow into the spinal canal. Paraspinal leakage occurred in 3 cases, intervertebral disc leakage in 4 cases, and leakage into the anterior vertebral vein in 2 cases. Before treatment, the mean VAS score of patients was 7.5 ± 0.6. Subsequently, at 2 days, 3 months, 6 months, and 12 months after the procedure, these mean scores decreased to 2.3 ± 0.6, 2.2 ± 0.5, 2.2 ± 0.4, and 2.0 ± 0.3, respectively. This decline was statistically significant (P < 0.05) compared to the preoperative VAS score.
The preoperative Cobb angle was 12.1° ± 0.9°, and the Cobb angle values at the corresponding time points were 12.2° ± 0.8°, 12.3° ± 1.1°, 12.3° ± 1.0°, and 12.2° ± 0.9°. Initially, the mean height of the vertebral body in these patients was 17.38 ± 1.56 mm. Postoperatively, at 2 days, 3 months, 6 months, and 12 months, these values were 19.30 ± 1.81 mm, 19.12 ± 1.60 mm, 19.00 ± 1.45 mm, and 19.00 ± 1.20 mm, respectively. No significant difference was observed between postoperative and preoperative Cobb angle and vertebral height (P > 0.05) (Fig. ).

Due to the smaller dimensions of the pedicles in thoracic vertebrae compared to lumbar vertebrae, their forward and downward inclination, their connection to the ribs on both sides, and the scapula above, fluoroscopic interference is a common issue. In certain studies, researchers have utilized CT scans to monitor these vertebrae effectively . In the present study, the utilization of a C-arm machine offered straightforward implementation, presenting several key advantages for performing the PVP procedure: (1) in most primary hospital operating rooms, a C-arm machine is readily available, offering convenient access to appropriate rescue and anesthesia equipment; this setup contrasts with a CT room, thereby ensuring greater patient safety; (2) CT scans necessitate repeated scanning and movement of the operating bed, thereby heightening the risk of contamination; and (3) CT scans may not comprehensively reveal the puncture path and lack the intuitive real-time guidance provided by a C-arm machine. In an effort to reduce radiation-related harm to patients, a two-step approach was herein employed to simplify the C-arm machine-related procedures. Prior to surgery, the distance between the pedicle entry point and the posterior edge of the vertebral body (~ 2 cm) was measured via CT. In the initial stage of fluoroscopic assessment, the inferior margin of the affected vertebra was aligned flush under anteroposterior fluoroscopy, with the spinous process positioned centrally within the vertebral body. The puncture needle’s depth from the lateral edge of the pedicle to the medial edge (the inner wall of the spinal canal) should measure at least 2 cm, with the puncture needle extending to the posterior edge of the vertebral body according to preoperative CT imaging. During the second stage of fluoroscopic assessment, under lateral observation, the puncture needle tip was positioned at the posterior edge of the vertebral body before being advanced into the center of the vertebral body. This method offers the advantage of utilizing preoperative CT results to measure the distance from the puncture point to the posterior edge of the vertebral body. Moreover, it requires only two rotational positions for the C-arm machine, thereby minimizing the need for repetitive adjustments. Confirming the identity of the affected vertebrae is essential prior to treatment. Some patients with lower thoracic vertebral fractures only exhibit lower back pain without associated tenderness, and these patients generally exhibit no clear radiographic features, such that MRI-based diagnosis is often required. Preoperative MRI is thus necessary before the PVP procedure in order to accurately identify the affected vertebrae.
Moreover, given that there are many thoracic vertebrae, it can be difficult to determine the locations of the middle thoracic vertebrae, and MRI scans can enable the direct observation of the shape and characteristics of the affected vertebrae, including whether wedge degeneration or anterior osteophytes were present. In certain scenarios, the ‘relay method’ becomes necessary for identification. This method involves positioning a Kirschner needle between T12 and the affected vertebra to confirm the vertebral body where the needle is located. Subsequently, the Kirschner needle and the affected vertebra are placed in a fluoroscopic image to facilitate identification. In general, a transpedicular approach is utilized; however, in cases where the pedicle width is narrow, a thin thoracic puncture needle may be employed for puncture via the parapedicular approach. PVP represents a revolutionary approach to treating vertebral fractures . Recently, an increasing number of reports have documented the utilization of this procedure as a treatment method for thoracic vertebral fractures, emphasizing its therapeutic efficacy . Balloon kyphoplasty was developed as an alternative to PVP to reduce the chances of cement leakage and improve the vertebral height. Meta-analytic studies have shown that PVP and kyphoplasty reduce pain comparably, and patient functional outcomes have been similar in most series . However, balloon kyphoplasty is 10–20 times more expensive than a PVP procedure . Moreover, conflicting conclusions seem to arise in the literature regarding whether balloon kyphoplasty is more effective than PVP in preventing collapse of augmented vertebrae. Sahinturk et al. reported on a study involving 351 patients who underwent both balloon kyphoplasty and PVP. They concluded that balloon kyphoplasty did not prevent the loss of height in augmented vertebral bodies over mid- and long-term follow-up periods compared to PVP. Cerny et al. conducted a study involving 280 patients who underwent both treatments as well. Their findings suggest that kyphoplasty procedures are more effective in preventing further collapse of vertebral bodies compared to PVP. However, they noted that for precise accuracy in measuring vertebral compression ratio, regular postoperative CT scans would have been necessary . In this study, patients showed significant improvements in VAS scores for both back and chest pain compared to preoperative values (P < 0.05); however, there were no significant corresponding improvements in vertebral height or Cobb angle values. These results suggest that PVP has the potential to effectively alleviate pain associated with OVCFs in older adults. Several factors may account for this. For one, the use of bone cement filler enabled the restoration of vertebral body stability and prevented further movement of the fracture, thus preventing additional vertebral body compression . This also led to a reduction in the stimulation of the nerve endings in the vertebral body. Bone cement injection results in a transient exothermic reaction that can destroy the surrounding tissue and nerve endings . Chiras et al. reported that among a cohort of 274 patients who underwent vertebroplasty, the complication rate was 1.3% for those with osteoporotic vertebral fractures and as high as 10% for those with metastatic fractures .
In general, the procedure is relatively safe, and complications from either procedure appear to be primarily caused by improper needle placement or inattention to fluoroscopic patterns of cement flow during the injection process. Leakage of cement into the epidural or paravertebral areas has been reported in 30% to 70% of vertebroplasties, but usually it has been minor and has not resulted in adverse events . In the present study, the most prevalent complication observed was cement leakage. Among the patients in the present cohort, nine experienced slight bone cement leakage in the anterior, superior, inferior, and lateral aspects of the vertebral body. However, none of them encountered bone cement infiltration into the spinal canal. To prevent this complication, it is essential to position the leading end of the bone cement push rod at the junction of the anterior and middle thirds of the vertebral body. This positioning prevents the backward leakage of bone cement. The bone cement should be injected when it reaches a toothpaste-like consistency, which helps prevent its entry into the vertebral venous sinus or nearby blood vessels, thereby minimizing the risk of migration. Additionally, monitoring the bone cement injection through lateral fluoroscopy of the vertebral body is essential. This ensures the proper directionality of bone cement filling and helps prevent any leakage into the spinal canal . The selection of unilateral or bilateral bone cement injection approaches was primarily dependent on the physical condition of the patient and the distribution of the bone cement in the target vertebra following unilateral injection . In this study, the bilateral pedicle approach was commonly utilized for bone cement injection. The unilateral approach was only used when patients were in poor overall condition and could not tolerate longer surgeries. To ensure uniform distribution of the bone cement within the affected vertebra, during unilateral injections, the puncture needle was inserted through the midline of the vertebra to reach the contralateral side whenever possible. Wang et al. previously reported a higher incidence of nerve root stimulation following unilateral PVP procedures compared to bilateral punctures. However, this was not observed in the present study cohort. Cost-effectiveness analysis of PVP compared to conservative treatment and open surgery is an important consideration. Compared to conservative treatment, PVP may have higher direct medical costs in the short term because it involves surgery and the use of specialized equipment. However, PVP typically provides faster pain relief and functional improvement, potentially reducing patients’ long-term medical costs such as prolonged use of pain medications and rehabilitation therapy. Compared to open surgery, PVP generally incurs lower direct medical costs due to its minimally invasive nature, which avoids extensive incisions in the skin and soft tissues, leading to shorter surgery times and faster patient discharge. However, operative intervention is necessary in a very small subset of patients with vertebral compression fractures in whom a progressive neurologic deficit or intractable pain develops from the fracture deformity. These operations are extensive, involving prolonged duration of anesthesia, blood transfusion, and associated complications .
Therefore, PVP may also shorten patients’ recovery time and hospital stay, consequently decreasing indirect costs for both patients and the healthcare system. There are several limitations to this study. Firstly, the current sample size may not be adequate to draw definitive conclusions. Larger and more diverse samples are necessary for future studies to validate the findings and improve generalizability. Secondly, this study involved a small patient cohort with a short follow-up duration, making it challenging to rule out the potential for an increased risk of new vertebral body fractures at later time points after PVP. Therefore, further research is necessary to explore long-term outcomes and optimal patient selection strategies for PVP. Thirdly, PVP was not associated with any pronounced Cobb angle improvements, such that this procedure was not beneficial to patients with severe vertebral compression and associated kyphosis. Lastly, the study lacked a control group undergoing conservative treatment for comparison.

Percutaneous vertebroplasty by two-step fluoroscopy not only alleviates pain associated with thoracic OVCFs in elderly individuals, but also promotes corresponding improvements in motor ability and quality of life. Furthermore, it reduces intraoperative fluoroscopy exposure and shortens the duration of surgery. This technique holds considerable reference value for treating thoracic OVCFs in elderly individuals. However, due to the larger number of thoracic vertebrae and the smaller diameter of the pedicles, along with interference from the scapula during fluoroscopy, this surgical method presents a learning curve in terms of positioning and puncture. Integrating navigation or robotics in the future may further reduce the surgical duration and shorten the learning curve . Additionally, due to the limited number of patients and short follow-up duration in this study, it is necessary to conduct multi-center, large-sample, long-term follow-up studies in the future to obtain more accurate results.
Advancing precision rheumatology: applications of machine learning for rheumatoid arthritis management
144dd543-6c7d-4a9b-bd80-cf7002d54eb8
11194317
Internal Medicine[mh]
Introduction Rheumatoid arthritis (RA) is a prevalent autoimmune disorder characterized by inflammation and discomfort in numerous small joints, potentially leading to joint deformity and impaired functionality. Furthermore, it ranks among the primary contributors to chronic disability . Furthermore, RA not only impacts the joints but also has implications for other bodily systems, including the cardiovascular and respiratory systems, leading to an elevated susceptibility to conditions such as myocardial infarction, stroke, and pulmonary fibrosis . Chronic illnesses and persistent pain can result in psychological distress for patients, manifesting as symptoms of depression and anxiety . Hence, it is imperative to promptly identify individuals with a high susceptibility to RA in order to facilitate early diagnosis and anticipate the potential severity of disease progression. Furthermore, the timely administration of efficacious medications is essential in impeding the advancement of the disease. The phrase “machine learning (ML)” surged in popularity in the late 1990s in the field of artificial intelligence . In the past decade, ML has made significant advancements as a result of the increased availability of data and improvements in algorithms, enabling the identification of complex patterns and correlations within datasets . The biomedical field has experienced a significant increase in data volume, ranging from molecular details to comprehensive information on the human body system, due to advancements in high-throughput sequencing technologies, electronic health records, and medical imaging . Healthcare providers and researchers are currently facing a growing number of clinical challenges, leading them to explore ways to enhance decision-making effectiveness, refine personalized treatment strategies, and optimize resource allocation methods. ML is uniquely positioned to extract valuable patterns and insights from large datasets, potentially automating and enhancing the efficiency of healthcare decision-making and services. The incremental incorporation of biomedicine with various disciplines, including computational science, mathematics, and statistics, has spurred interdisciplinary partnerships, leading to accelerated progress in the application of ML in the field of biomedicine . In the clinical practice of RA, Rheumatoid Factor (RF) and Anti-Citrullinated Protein Antibody (ACPA) serve as crucial diagnostic biomarkers for RA, playing key roles in its diagnosis. However, approximately 20-25% of RA patients are seronegative, posing challenges to early diagnosis and potentially leading to delayed diagnosis and treatment . With the advent and development of biologics, significant progress has been made in the treatment of RA. Nevertheless, many RA patients exhibit poor responses to drug treatments, failing to achieve sustained remission , and currently, it is not possible to predict which treatment drugs will have the best therapeutic effect on individual patients. The accumulation of biomedical big data may provide new insights into better understanding the heterogeneity of RA . With the increase in data volume and complexity, traditional statistical analysis methods have become insufficient, especially when dealing with nonlinear relationships and complex interactions between variables . These unmet needs pose challenges to the precision medicine of RA. 
Using ML techniques for data processing and pattern recognition to build predictive models for RA can assist clinicians in making more accurate data-driven decisions . Therefore, understanding the prevalent ML algorithms in RA, their effectiveness, and potential applications is crucial. Our study is dedicated to evaluating recent literature on applications of ML in RA classification and outcome prediction, with the goal of offering a dependable benchmark for reference and guiding future research endeavors. By enhancing the utilization of sophisticated modeling in RA and advocating for precision medicine in the field, our work aims to propel advancements in RA treatment and management. ML algorithms to enhance precision rheumatology ML, a crucial component of artificial intelligence, is divided into two main categories: supervised and unsupervised learning. Supervised learning employs labeled training datasets to identify patterns and relationships. Upon training, the model can predict or classify new data inputs, yielding corresponding results. This method utilizes a range of algorithms, such as logistic regression, random forests, gradient boosting, and decision trees. Each algorithm contributes uniquely to the robustness and accuracy of predictive outcomes, making supervised learning integral to advancements in data-driven research methodologies . Supervised learning is divided into two principal methodologies: classification and regression . Classification methodologies segregate patients according to distinct characteristics . By employing datasets comprising genetic information, gene expression profiles, and clinical indicators from patients with RA, algorithms can be trained to identify RA patients within populations, as well as to ascertain which patients exhibit optimal responses to specific treatments. Regression models, on the other hand, are designed to predict continuous outcomes , such as disease activity scores and response rates to treatments in RA patients, thus facilitating personalized monitoring and management to optimize treatment efficacy. In contrast, unsupervised learning explores inherent patterns and relationships in datasets without predetermined labels . Clustering algorithms, an exemplary application of unsupervised learning, automatically group data into multiple clusters to maximize intra-cluster similarity and minimize inter-cluster similarity, aiding significantly in RA research by identifying potential patient subgroups who may exhibit favorable responses to specific treatments or distinct disease progression patterns. Deep learning, employing Artificial Neural Network (ANN) technologies, enhances the analysis and prediction of complex data through sophisticated non-linear mapping relationships . Particularly, Convolutional Neural Networks (CNNs) in deep learning architectures are adept in processing image data , enabling automatic feature learning from multiple convolutional layers which assist physicians in identifying early signs of arthritis or disease progression in X-ray or Magnetic Resonance Imaging (MRI) images of RA patients. In summary, supervised and unsupervised learning each serve specific roles, while deep learning technologies enhance the capability of these methods to process complex data, thereby effectively advancing the field of precision rheumatology. In the preprocessing phase, data cleaning and organization are paramount, involving the removal of duplicates and correction of anomalies . 
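Before turning to feature engineering, the supervised workflow outlined above can be made concrete with a minimal example. The snippet below is purely illustrative: the feature names are hypothetical and the data are synthetic, not drawn from any RA cohort discussed in this review.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["ACPA_titre", "CRP", "swollen_joint_count", "age", "smoking_pack_years"]

# Synthetic cohort: 500 subjects, binary label 1 = RA, 0 = non-RA.
X = rng.normal(size=(500, len(feature_names)))
y = (0.9 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(scale=1.0, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)

# Discrimination on held-out data, reported as AUC as in the studies reviewed here.
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"held-out AUC = {auc:.2f}")

# Impurity-based feature importances: one simple (embedded) way to rank candidate
# predictors, previewing the feature-selection step discussed in the next paragraph.
for name, imp in sorted(zip(feature_names, clf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```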
Furthermore, feature engineering plays a critical role in identifying predictors (x) that significantly influence the target variable (y) through strategic selection and transformation of data, a crucial task in supervised learning. Accurate feature selection not only enhances the precision of the model but also its interpretability. When constructing predictive models, addressing the challenge of managing a large volume of available features is commonplace. While the use of advanced and efficient algorithms is vital, ineffective predictive information derived from these features, or the presence of numerous irrelevant variables, can impair model performance. Implementing key feature selection strategies is crucial, including statistical filtering, wrapper methods, and advanced embedded techniques . For instance, Random Forest assesses feature importance by calculating their contribution to model accuracy , whereas Logistic Regression identifies key influencing factors by analyzing the magnitude and direction of coefficients . Through rigorous feature selection, the dimensionality and complexity of the dataset are effectively reduced, thereby enhancing the interpretability and practical application of the predictive model in clinical decision-making . For example, identifying RA patients with specific genetic mutations through feature selection has indicated that these individuals respond more positively to methotrexate, a principal drug for RA treatment. This insight assists physicians in devising targeted treatment plans, thereby improving therapeutic outcomes. ML algorithms are increasingly recognized as powerful analytical tools in the field of RA research. As depicted in , they provide assistance across multiple domains, including diagnosis, disease progression forecasting, prediction of treatment responses, and identification of potential complications. These computational tools are guiding the field towards a more refined and individualized approach, allowing clinicians and researchers to explore the complexities of RA with greater accuracy. ML models in precision diagnosis and therapeutics for RA A variety of predictive models have been built using ML algorithms in RA research. Presented in is the appraisal of performance when these ML models serve as classifiers across a multitude of data types from various sources. The functionalities of these classifiers include identification of individuals at risk for RA, diagnosis and differentiation of subtypes, discrimination of disease activity levels, forecasting of treatment outcomes as effective or ineffective, and predicting the presence or absence of comorbidities. 3.1 Stratification of RA risk cohorts Identifying individuals at risk for RA is crucial for early intervention, which has been shown to yield substantially better outcomes when applied during the preclinical stages rather than after the overt development of clinically significant arthritis . Specifically, by identifying individuals at high risk and conducting regular medical examinations and monitoring RA-related biomarkers, such as inflammation levels and autoantibodies, early detection of the disease can utilize the ‘window of opportunity’ for therapeutic intervention. Early interventions can help prevent severe radiographic damage and disability, thus significantly improving patient prognosis . 
The exact etiology of RA remains not fully understood; however, it is known that genetic and environmental factors, as well as their interactions, influence the onset and progression of RA . ML, as an effective data analysis tool, is capable of processing and interpreting large volumes of diverse data, ranging from genetic factors to lifestyle choices. ML can uncover potential risk patterns within complex genetic and environmental datasets, assisting clinicians in making more accurate disease predictions and risk assessments. Predictive modeling harnessing ML techniques to pinpoint individuals at an elevated risk for RA can be principally segregated into two domains: forecasting the incident risk in asymptomatic persons and assessing the progression likelihood in symptomatic patients with undifferentiated arthritis towards RA. The detection of RA susceptibility in the broad population leans on the analysis of genetic variants alongside common clinical risk indicators such as family history, age, and gender. A study found nine single nucleotide polymorphisms (SNPs) linked to RA; by combining these variations into a risk score and using ML algorithms, researchers were able to accurately distinguish RA patients from those without the condition, exhibiting five-fold cross-validated AUCs surpassing the 0.9 threshold . Eleven risk factors for RA were identified from National Health and Nutrition Examination Survey (NHANES) data and used to create a Bayesian logistic regression model, which was refined using a Genetic Algorithm. The model showed high predictive accuracy with an AUC of 0.826 on the validation set . These findings highlight the potential of machine learning strategies in predicting risk populations for RA. Genetic risk scores derived from SNPs can help identify an individual’s potential genetic risks, thereby providing a crucial foundation for personalized medicine . However, translating these studies into clinical decision support tools faces obstacles, primarily ensuring the equal applicability of polygenic risk scores (PRS) across populations . In reality, PRS exhibits limited transferability among populations, and its clinical utility in RA remains undetermined, necessitating substantial investment in extensive data collection across diverse ethnic groups and methodological research to enhance genetic prediction in admixed individuals . Another critical issue is the interpretability of genetic findings in participants, requiring clinicians to possess the capacity to comprehend and interpret such data . Furthermore, the privacy and security of the involved genetic data must be adequately ensured. Federated learning, as a distributed machine learning technique, aims to achieve collaborative modeling while ensuring data privacy, security, and legal compliance . Participants can train their local models using their proprietary data, and through iterative training, each participant contributes to the construction of a global model without sharing their data externally . This approach fosters collaboration among multiple medical institutions, facilitating the sharing of model learning outcomes .
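As a purely illustrative sketch of the SNP-based risk scoring described above, the snippet below builds a weighted genetic risk score from synthetic allele dosages and hypothetical effect sizes, then evaluates a simple classifier with five-fold cross-validated AUC; it is not a reconstruction of any published model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_subjects, n_snps = 1000, 9

# Synthetic allele dosages (0, 1, or 2 risk alleles per SNP) and hypothetical log-odds weights.
dosages = rng.integers(0, 3, size=(n_subjects, n_snps))
log_odds_weights = rng.normal(loc=0.3, scale=0.1, size=n_snps)

# Weighted genetic risk score per subject, plus two hypothetical clinical covariates.
grs = dosages @ log_odds_weights
age = rng.normal(55, 12, n_subjects)
female = rng.integers(0, 2, n_subjects)

# Simulated case/control labels influenced by the risk score (for illustration only).
y = (grs + rng.normal(scale=1.5, size=n_subjects) > np.median(grs)).astype(int)

X = np.column_stack([grs, age, female])
model = LogisticRegression(max_iter=1000)

# Five-fold cross-validated AUC, mirroring how the reviewed studies report discrimination.
aucs = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC = {aucs.mean():.2f} ± {aucs.std():.2f}")
```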
Accurate prediction of this progression can facilitate early diagnosis and intervention for those at risk, while concurrently preventing overtreatment and diminishing both the health repercussions and superfluous healthcare expenditures for those unlikely to develop RA . Models are increasingly geared towards the evaluation of dynamic variables, reflecting shifts correlated with disease activity, such as gene expression profiles, epigenetic modifications, and a spectrum of detailed symptomatic and clinical markers. A notable investigation sought to unearth clinically pertinent predictive biomarkers from peripheral blood CD4 T cells in UA patients, employing a support vector machine (SVM) classification model. This approach demonstrated that integrating the pre-established Leiden predictive rule with a 12-gene risk indicator notably enhanced prognostic capability for seronegative UA patients, from an AUC of 0.74 to 0.84 . A comparative analysis of three distinct ML algorithms revealed that an SVM model, which integrated DNA methylation profiles from 40 CpG sites with clinical parameters including disease activity score (DAS) and RF, effectively distinguished individuals with UA who were predisposed to developing RA within one year, achieving an AUC range of 0.85 to 1 . Contemporary studies report promising predictive performance both in identifying at-risk individuals within the general population and in forecasting RA development in patients with UA; in these studies, the features with the greatest impact on predictive outcomes were, as far as possible, identified and selected during model training to simplify the model and potentially improve performance and generalizability. More important than raw performance, however, is the potential for practical clinical application: future studies will need to examine generalizability by testing models in populations of multiple ethnicities and regions, and by tracking progression to RA in larger prospective cohorts to verify model accuracy. 3.2 Diagnosis and subtype classification of RA The diagnostic framework for RA, especially in the context of seronegative RA, is intricate and often obstructed by the absence of potent biomarkers, impeding early detection and management . Investigations are thus aimed at the identification of new biomarkers to bridge this gap. Non-invasive imaging techniques are pivotal in elucidating inflammatory activity and its effects on joint morphology, especially when serological markers are indistinct or inconclusive. These tools are indispensable for both diagnostic purposes and for monitoring treatment efficacy . Furthermore, the application of ML algorithms in the analysis of imaging data presents a sophisticated approach to patient classification . Üreten K et al. presented a Visual Geometry Group-16 (VGG-16) neural network for hand radiographs, augmented by transfer learning, to distinguish RA patients from non-RA patients, which achieved an AUC of 0.97 . Ultrasound images of the metacarpophalangeal joints in RA patients have been classified using a DenseNet-based deep learning model applied to several regions of interest; significant efficacy was demonstrated in distinguishing between synovial proliferation and healthy and diseased synovium, as evidenced by AUCs exceeding 0.8 .
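The transfer-learning recipe behind image classifiers such as the VGG-16 model above can be sketched as follows. This is a generic, hedged illustration in Python (TensorFlow/Keras) with random tensors standing in for labelled radiographs; it does not reproduce the architecture, weights, or data of the cited studies.

```python
# Hedged sketch: a frozen VGG-16 backbone with a new binary head (RA vs non-RA).
import numpy as np
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # keep ImageNet features, train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # estimated P(RA)
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])

# Stand-in data: 32 random 224x224 RGB "radiographs" with random labels.
# Real images would be passed through tf.keras.applications.vgg16.preprocess_input.
x = np.random.rand(32, 224, 224, 3).astype("float32")
y = np.random.randint(0, 2, size=(32, 1))
model.fit(x, y, epochs=1, batch_size=8, verbose=0)
print(model.evaluate(x, y, verbose=0))  # [loss, auc] on the toy data
```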
Additionally, research has been conducted utilizing hand RGB images and grip force as features to develop a random forest model with an AUC of 0.97 for distinguishing between individuals with RA and control subjects, thereby offering a supplementary diagnostic tool for RA . Image-based predictive models have shown notable performance in research settings, accurately differentiating RA patients from others in various cohorts, thereby contributing to the precision and efficiency of RA diagnosis. These models facilitate the early detection of abnormal changes within the joints, enabling timely intervention and ultimately delaying the progression of RA. However, their clinical application still faces significant challenges. A primary obstacle is the interpretability of the models. Owing to the ‘black box’ nature of deep learning models, the decision-making processes are opaque and difficult to comprehend, which may affect both physician and patient trust and understanding of model predictions . To address this limitation, several well-known methods can be applied: the Class Activation Mapping (CAM) technique highlights the regions of an image the model attends to ; Shapley Additive exPlanations (SHAP) quantify the impact of each feature on the model’s output ; and Local Interpretable Model-agnostic Explanations (LIME) explain the local prediction process for individual samples . Collectively, these tools enhance comprehension of the model’s decision-making process. Future studies should also involve multi-center collaborations to expand image collection, with the aim of further refining and generalizing these diagnostic models. In RA, both individual analyses and integrative omics studies have accumulated a vast amount of data, providing insights into the mechanisms of RA from multiple perspectives. Genomics identifies genetic variations associated with RA, revealing potential genetic mechanisms influencing gene expression . Epigenetic modifications, including DNA methylation, histone modifications, chromatin remodeling, and non-coding RNA, play crucial roles in maintaining normal gene expression patterns. Epigenomics studies these modifications to reveal gene expression and regulatory mechanisms in RA, offering insights into the diverse molecular processes involved . Transcriptomics, by analyzing the variations in gene expression under different conditions, provides a detailed elucidation of which genes are upregulated or downregulated in RA. This process not only involves regulation at the genetic level but also directly affects the production and function of the corresponding proteins . Proteomics provides a comprehensive analysis of protein composition, expression levels, and modification states, elucidating the interactions and connections among proteins that may play key roles in RA inflammation and immune response processes . Metabolomics provides insights into the shifts in metabolic states and pathways during the progression of RA. These changes are potentially influenced by alterations in gene and protein activities. Furthermore, metabolites themselves can play a modulatory role, affecting gene transcription and protein expression, thereby forming a complex interplay that influences disease dynamics .
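CAM, SHAP, and LIME are available as dedicated packages; as a dependency-light illustration of the same goal, attributing a classifier's behaviour to individual features, the hypothetical sketch below uses scikit-learn's permutation importance (explicitly not SHAP itself) on synthetic data with invented feature names.

```python
# Global, model-agnostic explanation via permutation importance (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = [f"feature_{i}" for i in range(20)]  # hypothetical imaging/clinical features
X, y = make_classification(n_samples=600, n_features=20, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

# How much does shuffling each feature degrade held-out AUC? Large drops mark
# features the model genuinely relies on.
result = permutation_importance(clf, X_test, y_test, scoring="roc_auc",
                                n_repeats=20, random_state=0)
for i in np.argsort(result.importances_mean)[::-1][:5]:
    print(f"{feature_names[i]}: mean AUC drop {result.importances_mean[i]:.3f}")
```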
Host genomic variations significantly influence the composition of the gut microbiota, which can synthesize, regulate, or degrade endogenous small molecules or macromolecules, resulting in metabolic changes. Utilizing metagenomics and related techniques reveals the role of gut microbiota in the development of RA by influencing metabolic pathways and modulating the host immune system . Omic studies are characterized by the generation of vast, high-dimensional datasets. ML algorithms are critically employed for visualization and processing such information—finding patterns, crafting predictive models, and examining large-scale, multi-omic data to identify biomarkers and pathways implicated in disease progression . Existing research has integrated multimodal data and employed various machine learning algorithms to develop high-performance diagnostic models for RA. Key genes highly correlated with RA phenotypes have been identified through the application of weighted gene co-expression network analysis (WGCNA) and differential gene expression (DEG) analysis on RA blood sample microarray datasets. These genes have been deployed as features to assess the performance of six ML models, with five demonstrating commendable efficacy (AUC > 0.85) . Through the sourcing of RA patient peripheral blood sample microarray datasets from the GEO database, a platelet-related signature risk score model was formulated, comprised of six genes, using the Least Absolute Shrinkage and Selection Operator (LASSO) algorithm. The model exhibited AUCs of 0.801 and 0.979 across the training and validation sets, respectively . Employing the Generalized Matrix Learning Vector Quantization (GMLVQ) method, mRNA expression profiles of cytokines and chemokines from synovial biopsies were analyzed, leading to the identification of two gene sets. These sets were instrumental in generating a model capable of differentiating between various arthritis types, with AUC scores reaching 0.996 and 0.764 for distinguishing diagnosed RA from non-inflammatory cases and early-stage RA from self-remitting arthritis, respectively . By focusing on the expression of 19 N6-methyladenosine (m6A) methylation regulators, diagnostic models have been established to separate RA from non-RA conditions. A subset of these regulators, particularly IGF2BP3 and YTHDC2, demonstrated accuracies and AUCs exceeding 0.8 across most ML models, indicating the potential diagnostic importance of m6A methylation profiles . A multi-variable classification model, incorporating 26 metabolites and lipids, was devised utilizing three ML algorithms. The logistic regression model, in particular, stood out for its ability to differentiate seropositive and seronegative RA from normal controls within an independent validation cohort, securing an AUC of 0.91, thus showcasing that a holistic metabolomic and lipidomic approach grounded in Liquid Chromatography-Mass Spectrometry (LC-MS) can effectively segregate RA cases . Serum antigens were analyzed in patient cohorts with RA, osteoarthritis (OA), and healthy controls. Subsequently, distinct biomarker sets were identified for the differentiation of RA, ACPA-positive RA, and ACPA-negative RA using feature selection through the Random Forest algorithm. The model demonstrated exceptional performance with AUC values of 0.9949, 0.9913, and 1.0, respectively, establishing a proteomics-based diagnostic model for RA . 
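Several of the signatures above were derived with LASSO-style penalized regression. The sketch below shows, on simulated expression data with invented dimensions, how an L1-penalized logistic regression drives most gene coefficients to exactly zero, leaving a small panel whose validation AUC can then be reported; it does not reproduce the cited six-gene platelet signature or any other published model.

```python
# Hypothetical LASSO-style gene-signature construction on simulated expression data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# 300 samples x 200 "genes"; only a handful carry signal.
X, y = make_classification(n_samples=300, n_features=200, n_informative=6,
                           n_redundant=10, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3,
                                                  stratify=y, random_state=0)
scaler = StandardScaler().fit(X_train)

# The L1 penalty zeroes most coefficients; the survivors form the signature.
lasso = LogisticRegressionCV(Cs=10, penalty="l1", solver="liblinear", cv=5,
                             scoring="roc_auc", random_state=0)
lasso.fit(scaler.transform(X_train), y_train)

signature = np.flatnonzero(lasso.coef_.ravel())
val_auc = roc_auc_score(y_val, lasso.predict_proba(scaler.transform(X_val))[:, 1])
print(f"{signature.size} genes retained; validation AUC = {val_auc:.2f}")
```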
Furthermore, leveraging metagenomic data to predict the microbiomic characteristics of the gut in autoimmune diseases has been demonstrated to discriminate between various types of autoimmune disorders . Histopathology, as a fundamental pillar in confirming disease diagnosis, stands as the definitive standard for the verification of numerous ailments . Overlap of symptoms in certain pathologies may obscure the principal etiology responsible for articular manifestations; in such instances, tissue biopsy, particularly of synovial tissue, proves invaluable. Following Total Knee Arthroplasty (TKA), synovial samples from 147 OA and 60 RA individuals were subjected to hematoxylin and eosin (H&E) staining. Utilization of a Random Forest Algorithm, integrating pathologist-derived scores with computer vision-generated cellular density measures, led to the construction of an optimal discriminative model for OA and RA, achieving a model AUC of 0.91 . This serves as a potent discriminative tool for RA assessment. Orange et al. utilized consensus clustering of gene expression data from synovial tissues of patients with RA to identify three distinct synovial subtypes: high-inflammatory, low-inflammatory, and mixed. They subsequently employed a support vector ML algorithm to distinguish between these subtypes based on histological features, achieving area under the curve values of 0.88, 0.71, and 0.59, respectively . Despite the high performance of ML-derived predictive models for RA diagnosis, concerns on potential model overfitting due to limited sample sizes, which may exaggerate effect sizes, cannot be overlooked. Additionally, independent evaluation of the research methodology, data processing, and outcomes by an external party ensures the accuracy and reliability of the research findings. Validation of these models in diverse datasets, supplemented by molecular biology experimentation, is imperative for evaluating true diagnostic merit. Predictive models relying on histopathological data encounter additional challenges, including the necessity for manual feature annotation by pathologists and the invasiveness of the procedure, compounded by technical and sample handling issues. External validation is a critical quality control measure, ensuring that model utility and accuracy in diagnosing RA reflect true clinical relevance and potential for widespread application. The diagnosis of RA extends beyond segregating RA from healthy subjects or OA patients. Future investigations must address the diagnostic capacity of predictive model-derived markers in distinguishing seronegative RA from other inflammatory arthritides, such as psoriatic arthritis, reactive arthritis, or spondyloarthritis. Concomitantly, safeguarding against confounding variables and maintaining diversity within patient cohorts are essential to render the model universally applicable. 3.3 Prediction of disease activity and imaging progression in RA Radiographic deterioration in RA is characterized by the degree of articular damage and the presence of distinct lesions such as joint space narrowing, bone erosion, and osteoporosis, as revealed through diagnostic imaging modalities including X-rays, magnetic resonance imaging, or computed tomography scans . The quantification and prognostication of structural joint impairment traditionally hinge on clinical expertise, underscoring the necessity for an automated, bias-free evaluation method. 
A study utilizing SVM modeling on cohorts comprising 374 Korean and 399 North American patients with incipient RA identified SNPs correlated with radiographic progression. An integrated model encompassing SNPs with clinical parameters exhibited optimal performance, yielding a mean ten-fold cross-validation AUC of 0.78, providing a more satisfactory distinction between severe and non-severe progression . Radiological damage bears a significant association with disease activity in RA, with heightened activity posing an increased risk for osseous impairment. CNNs trained on ultrasound imagery of RA joints have facilitated the automatic grading of disease activity, achieving an overall classification accuracy of 83.9% . Vodencarevic et al. used data from 135 consultations with 41 RA patients to predict flare incidents during tapering of biologic disease-modifying antirheumatic drugs (DMARDs) in remission. They combined multiple ML models to achieve an AUC of 0.81 . Furthermore, baseline serum proteomics from 130 stable RA patients in clinical remission was analyzed for biomarkers predictive of future disease flares, employing LASSO and eXtreme Gradient Boosting (XGBoost) algorithms to construct predictive models. The XGBoost model exhibited superior performance in differentiating between relapsed and non-relapsed patients with an AUC of 0.80 . The expansive volume of patient and clinical information harbored in electronic medical records (EMR) and electronic health records (EHR) constitutes a substantial body of data ripe for investigation . Nonetheless, hindrances such as imbalances in data record quantities across patients, omissions of pivotal information, and the variability in patient conditions and therapeutic outcomes over time contribute to the complex temporal nature of the data . Conventional ML techniques encounter constraints concerning data pre-processing, time-series analysis capacity, and the simplification of intricate relational processing . Deep learning integrated with structured EHR data has been deployed to prognosticate disease activity at subsequent outpatient rheumatology consultations; the model trained on the UH cohort achieved an AUC of 0.91 for internal validation and 0.74 for external cohort testing . Feldman et al. endeavored to enhance the precision of RA disease activity evaluation by integrating electronic medical records and claims data, achieving an AUC of 0.76 in discriminating high/moderate from low disease activity/remission . Chandran et al. employed the use of biologic agents or tofacitinib as a surrogate for disease severity, with the model accurately predicting both current and future disease activity, validated across various databases with AUCs exceeding 0.7 . The aforementioned results substantiate the viability of employing routinely documented clinical and laboratory data to assess and forecast disease activity in RA. With the progressive advancements in information technology, an extensive array of data has become accessible, prompting researchers to explore ML methodologies for the extraction of RA patient records from electronic health record data, thereby enabling the study of substantial populations at minimal expense. Algorithms trained via ML are progressively leveraged with EMR for clinical investigations. These algorithms function by detecting specifiable patterns in the data associated with RA, yet systematic disparities in EMR data quality present hurdles for model generalizability.
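In the spirit of the XGBoost and LASSO flare-prediction studies above, the hedged sketch below trains a gradient-boosted model on simulated remission-cohort data; scikit-learn's HistGradientBoostingClassifier stands in for XGBoost, and every variable (DAS28, CRP, tapering status) and value is invented rather than taken from patient records. One practical point it illustrates is that histogram-based boosting tolerates the missing values that are ubiquitous in EHR-derived data.

```python
# Hedged sketch: flare prediction from routinely collected variables (all simulated).
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
das28 = rng.normal(2.4, 0.6, n)            # baseline disease activity (remission range)
crp = rng.lognormal(1.0, 0.6, n)           # C-reactive protein, mg/L
tapering = rng.binomial(1, 0.5, n)         # is the DMARD dose being tapered?
crp[rng.random(n) < 0.15] = np.nan         # EHR-style missingness, handled natively

# Simulate flares as more likely with higher activity, higher CRP, and tapering.
logit = -4 + 1.0 * das28 + 0.03 * np.nan_to_num(crp) + 0.8 * tapering
flare = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([das28, crp, tapering])
X_train, X_test, y_train, y_test = train_test_split(X, flare, stratify=flare,
                                                    random_state=0)
model = HistGradientBoostingClassifier(max_iter=200, learning_rate=0.1,
                                       random_state=0).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out AUC for flare prediction on simulated data: {auc:.2f}")
```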
Despite these challenges, high-caliber investigations are somewhat limited and the dependability and transferability of pertinent ML methods remain largely undetermined, rendering periodic evaluation of algorithm performance imperative. The current research trend involves the utilization of thousands of digitally annotated images obtained from large-scale observational studies, clinical trials, and electronic medical records, along with clinical data, to automatically classify and quantify the extent of joint damage and activity scores in RA using ML algorithms . 3.4 Prediction of RA treatment response In the realm of RA therapeutics, a plethora of options including nonsteroidal anti-inflammatory drugs (NSAIDs), glucocorticoids, conventional synthetic DMARDs, biologic DMARDs, and oral small molecules have been made available . The selection of appropriate treatments continues to challenge clinicians owing to the vast range of alternatives and the prevalent trial-and-error approach in therapeutic prescription, exacerbated by a lack of comprehensive knowledge regarding drug efficacy and safety across distinct patient demographics . Methotrexate (MTX) stands as the quintessential first-line therapy in RA treatment strategies . Investigation into whether disparities in the gut microbiome across individuals could serve as predictive markers for MTX efficacy in newly onset RA was conducted by Artacho et al. Fecal samples from 26 new-onset RA patients, procured prior to MTX treatment, were analyzed using 16S ribosomal RNA (16S rRNA) and shotgun sequencing. Subsequent construction of a predictive model via random forests revealed that a response to MTX treatment at 4 months could be anticipated, with an AUC of 0.84, based on colony characterization . Additional research involving ML algorithms applied to clinical and biological data from 493 and 239 patients across two cohorts, aimed to predict MTX treatment response at 9 months. Notably, the Light Gradient Boosting Machine (LightGBM) model acquired AUCs of 0.73 and 0.72 in training and external validation sets, respectively . Lim et al. analyzed exome sequencing data from 349 RA patients and predicted treatment response to MTX using six ML algorithms. They identified 95 genetic factors and 5 non-genetic factors that influenced response. The predictions had strong performance with AUCs between 0.776 and 0.828 in the test set . Plant et al. utilized whole blood samples from RA patients initiating MTX treatment, both before and 4 weeks after commencement, conducting gene expression profiling to foretell treatment response at 6 months. Application of an L2 regularized logistic regression yielded an AUC of 0.78 . The development of these predictive models has contributed significantly towards identifying patients who are more likely to respond favorably to, or may not derive benefit from, MTX treatment. Anti-tumor necrosis factor (anti-TNF) agents have been established as pivotal second-line therapeutic agents following methotrexate. A prospective multicenter study recruited 104 RA patients and 29 healthy donors to discover predictive biomarkers for anti-TNF treatment using ML. A hybrid model combining clinical and molecular variables achieved a high AUC value of 0.91 . The DREAM RA Responder Challenge introduced a novel approach to predicting anti-TNF treatment response by proposing an optimal model that incorporates Gaussian Process Regression (GPR) and integrates demographic, clinical, and genetic markers. 
This model accurately predicts the Disease Activity Score in patients 24 months post-baseline assessment and categorizes treatment response according to the EULAR response criteria, effectively identifying non-responders to anti-TNF therapy with an AUC of 0.6 in cross-validation data . Kim et al. utilized 11 datasets containing 256 synovial tissue samples, integrating RA-associated pathway activation scores and four types of ML algorithm, and found that the SVM model performed best, with an AUC of 0.87 using the pathway-driven model and an AUC of 0.9 using the DEG-driven model . Recent research has emphasized the potential benefits of integrating diverse datasets for the purpose of treatment decision-making. ML algorithms have demonstrated efficacy in enhancing the precision of response prediction for TNF inhibitors and MTX. Furthermore, ML methodologies are being increasingly utilized in forecasting treatment responses to a range of other biologic therapies . Clinical data may be limited by trial design, including inclusion and exclusion criteria. Using deep learning for cluster analysis of RA patients has revealed connections between patient characteristics and treatment response . Advancements in spatial omics technologies enable a comprehensive and spatially intact analysis of synovial tissue in RA patients. This approach allows for precise localization of cells, exploration of cellular interactions, assessment of cell type distributions, and identification of disease-associated molecular markers . Integrating traditional multi-omics with spatial data, spatial multi-omics elucidates the complexity and dynamics of biological processes across various levels, including their interactions and influences on each other. This approach deepens our understanding of the pathological mechanisms of RA and enhances our knowledge of its spatial heterogeneity . The biopsy-driven RA randomized clinical trial (R4RA), which utilizes spatial omics to create synovial biopsy gene maps, provides a paradigm for predicting drug treatment responses and refining therapeutic strategies. This is crucial for achieving personalized medicine and optimizing treatment outcomes. Despite some progress, spatial omics in RA research is still in its early stages. Numerous challenges remain, such as high costs, high demands on sample handling, patient acceptance, ethical issues, and the need for advanced computational tools for data integration . Overcoming these challenges will be crucial for developing accurate, interpretable, and clinically applicable predictive models. In summary, while opportunities exist for refining the accuracy of these predictions, progress is evident in this area of study. In the future, using larger and more comprehensive datasets, appropriate algorithms and parameter optimization methods, improving model features, and validating against independent cohorts may further improve the discriminative power of predictive models. 3.5 Prediction of comorbidities related to RA ML is also gaining attention in the prediction of comorbidities associated with RA. Focus within extant research has primarily been oriented towards the identification of risk factors for osteoporosis , assessment of cardiovascular risk , and the prediction of interstitial lung disease development in individuals with RA.
Current models pertaining to comorbidities are limited in both quantity and accuracy, with constraints stemming from various sources, notably the scarcity of comprehensive comorbidity data within RA patient cohort datasets. Furthermore, there is significant variability in data quality across different cohorts. To overcome these obstacles, future research should prioritize the accumulation of larger, more robust datasets and improve integration among diverse data sources. Simultaneously, there is a necessity for the advancement of algorithms with broader applicability, thereby enabling the utilization of ML in the prediction of complications associated with RA.
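Comorbidity labels in RA cohorts are typically scarce, as noted above. Purely as an illustration of one common mitigation, the sketch below applies class weighting in a simple classifier for a rare, hypothetical comorbidity (for example, interstitial lung disease) on synthetic data, reporting both AUC and average precision, the latter being more informative when positives are rare. It is not drawn from any cited study.

```python
# Hypothetical rare-comorbidity prediction with class weighting (synthetic data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

# 2000 simulated patients, roughly 5% with the comorbidity.
X, y = make_classification(n_samples=2000, n_features=15, n_informative=6,
                           weights=[0.95], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]
print(f"AUC = {roc_auc_score(y_test, scores):.2f}, "
      f"average precision = {average_precision_score(y_test, scores):.2f}")
```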
Conclusion and outlook Integrating data from diverse sources allows ML models to yield more comprehensive and precise predictions for the diagnosis and treatment outcomes of RA. However, more focus and effort are needed to create predictive models for comorbidities related to RA. Recent research has demonstrated the potential of multimodal learning to improve clinical prediction accuracy. Identifying the optimal model for a given setting often requires extensive comparative analysis. Such comparisons should weigh not only frequently used metrics such as AUC, accuracy, sensitivity, specificity, and F1 score, but also the use of cross-validation, the statistical tests applied, the model’s computational cost, and its data requirements and accessibility; in parallel, the adoption of multimodal learning approaches aims to refine clinical predictions. Efforts should be made to improve the clinical operability of models, utilize external datasets from diverse origins for validation, assess the model’s generalizability, monitor its long-term performance, and evaluate its strengths and weaknesses through multidimensional approaches rather than relying on a single performance metric. Although ML models have demonstrated impressive predictive prowess in research settings, it is imperative to establish their practicality and effectiveness in real-world clinical scenarios. To cultivate trust and acceptance among medical practitioners, it is essential to enhance the interpretability of these models. This can be achieved by prioritizing simplicity in experimental design or by employing tools that enhance model interpretability. Finally, but importantly, the privacy and ethical implications of big biological data must be recognized and safeguarded. YMS: Data curation, Visualization, Writing – original draft.
MZ: Data curation, Formal analysis, Writing – review & editing. CC: Data curation, Formal analysis, Writing – review & editing. PJ: Data curation, Formal analysis, Writing – review & editing. KW: Data curation, Formal analysis, Writing – review & editing. JZ: Data curation, Formal analysis, Writing – review & editing. YS: Data curation, Formal analysis, Writing – review & editing. YZ: Data curation, Formal analysis, Writing – review & editing. FZ: Data curation, Formal analysis, Writing – review & editing. XL: Data curation, Formal analysis, Writing – review & editing. SG: Conceptualization, Writing – review & editing. FW: Supervision, Writing – review & editing. DH: Funding acquisition, Supervision, Writing – review & editing.
GPs’ perceptions of pharmacists working in general practices: A mixed methods survey study
82dc71b7-c5af-4353-8c09-4593fb5e6d43
10629419
Family Medicine[mh]
General practice is under increasing pressure due to ageing populations, increasing multimorbidity, initiatives to move care from hospitals to primary care, and rising public expectations . These challenges have been further compounded by difficulties with recruitment and retention of general practice staff . To combat the rising tide of pressure in general practice, pharmacist integration into practice teams has occurred in several countries like Australia, but most notably in the United Kingdom . Outcomes stemming from pharmacists’ interventions in general practices are shown in a systematic review from Tan et al., who conclude that they lead to improvements in patients’ blood pressure, glycosylated haemoglobin, cholesterol, and cardiovascular risk . Furthermore, a systematic review from Hayhoe et al. concluded that pharmacists’ presence in practices had a positive impact on several healthcare utilisation outcomes, such as reduced emergency department visits, reduced visits to general practitioners (GPs), yet overall increased utilisation of primary care due to visits to pharmacists in practices . Lastly, a recent review from Croke et al. showed that integrating pharmacists into practices, to optimise prescribing and health outcomes in patients with polypharmacy, probably reduces potentially inappropriate prescribing and the number of medications, appears to be cost-effective, but there is no apparent effect on patient outcomes . In primary care in Ireland, most pharmacists currently work in community pharmacies, where they dispense medication and provide healthcare advice; they are mainly funded by the sale of products rather than provision of services . GPs work in practices either by themselves or with other GPs or other allied healthcare professionals, and interact with community pharmacists by telephone or email to optimise patients’ pharmacotherapy . GPs in Ireland also have access to drug interaction alerts as part of their practice computers’ software systems to increase prescribing safety . Despite what is known of the outcomes of practice-based pharmacists, the role in Ireland – like many countries worldwide – is currently not established. There is a scarcity of studies that aim to explore GPs’ perceptions before attempts to integrate pharmacists into practices to pre-empt the concerns and considerations of GPs . It is also unknown if there are any characteristics of GPs or their practices that affect GPs’ perceptions towards pharmacists’ roles that could be considered to tailor a practice-based pharmacist’s role to individual GPs and their practices. Therefore, in this study, we aimed to (1) explore GPs’ perceptions around integrating pharmacists into practices utilising a theory-informed mixed-methods survey and (2) determine if any significant associations were present between GPs’ perceptions and their demographic characteristics. Study design and participants A cross-sectional study utilising an electronic and paper-based survey sent to general practitioners in Ireland. GPs were eligible to partake in the study if they were currently practising as a GP in Ireland and had not previously worked with a pharmacist in general practice before or trained as a pharmacist before becoming a GP. Ethics Ethics approval to conduct this study was granted by the Social Research Ethics Committee, University College Cork. Participants gave their consent to partake before completing the anonymous survey. 
Construction of the survey The survey (Supplementary File ) was initially constructed based on the findings from a qualitative evidence synthesis and a semi-structured interview study that utilised the Theoretical Domains Framework (TDF) as part of its methods . The survey was then reviewed and further developed by consensus discussion amongst the research team, which consisted of three pharmacists (two practising) and two practising GPs. The survey was then piloted for face validity and time to complete by five independent practising GPs, after which slight modifications to the survey were made. The final survey consists of demographic, Likert-scale, multiple-choice, ranking, binary, and open-comment questions across four sections. Survey dissemination The survey was disseminated in June 2022 through multiple channels. A postal survey was sent to a random sample of 500/842 GPs in Munster, Ireland, on the 5th of June 2022. The survey was converted to an electronic version on Microsoft ® Forms. A link to the survey was then distributed to members of GP Buddy (a national online GP support and educational network – https://www.gpbuddy.ie ) via a post on the forum, GP fora on WhatsApp, and Twitter over the following days (Supplementary File ). A QR code linking to an electronic version of the survey was also placed on the front of the postal survey to give GPs the option to return their survey electronically by the 29th of July 2022. Statistical analysis Descriptive statistics were used to characterise the sample. Chi-squared ( χ 2 ) and Fisher’s Exact tests for independence were used to compare groups on categorical data. The survey items and demographics/other factors that were analysed in the independence tests are specified in Supplementary File . Post-hoc analysis was performed using the z -test to compare column proportions; adjustment for multiple testing using the Bonferroni method was also conducted. Differences were considered statistically significant when p < 0.05. IBM ® SPSS version 28 was used to perform descriptive and inferential statistics. Where data were missing, statistics were calculated based on the valid responses for each question. Analysis of qualitative data from open-ended questions Responses to the open-ended questions were analysed using reflexive thematic analysis . This involved six steps: (1) familiarising ourselves with the data, (2) generating initial codes, (3) searching for themes, (4) reviewing themes, (5) defining and naming themes, and (6) producing the report . Open-ended question data were exported from the survey platform to a Microsoft ® Excel spreadsheet before being transferred to Microsoft ® Word, where the comment function was used to code participants’ responses. These six steps were carried out by one researcher (EH). Another team member (KD) also coded participants’ responses to offer an additional perspective on patterns of meaning within the data. EH then reviewed KD’s coding to reflect on any potential other interpretations of the data that may not have occurred to him while coding. Reporting guidelines This study was reported in compliance with the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement.
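To illustrate the type of analysis described above, the following is a minimal sketch only, not the authors’ SPSS workflow: it uses Python with scipy and statsmodels, and the counts are entirely hypothetical, purely to show a chi-squared test of independence on a response-by-practice-size contingency table followed by post-hoc pairwise z-tests of column proportions with a Bonferroni-adjusted significance threshold.

# Illustrative sketch only - hypothetical counts, not study data.
# Mirrors the analysis described above: chi-squared test of independence,
# then post-hoc pairwise z-tests of column proportions with Bonferroni correction.
import numpy as np
from scipy.stats import chi2_contingency
from statsmodels.stats.proportion import proportions_ztest

# Rows: agree / do not agree with a survey item; columns: small, medium, large practices.
table = np.array([
    [20, 90, 28],   # agree (hypothetical)
    [ 6, 10,  2],   # do not agree (hypothetical)
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# Pairwise comparison of the proportion agreeing, with a Bonferroni-adjusted alpha.
cols = ["small", "medium", "large"]
pairs = [(0, 1), (0, 2), (1, 2)]
alpha = 0.05 / len(pairs)
for i, j in pairs:
    agree = np.array([table[0, i], table[0, j]])
    totals = np.array([table[:, i].sum(), table[:, j].sum()])
    z, p_pair = proportions_ztest(agree, totals)
    verdict = "significant" if p_pair < alpha else "not significant"
    print(f"{cols[i]} vs {cols[j]}: z = {z:.2f}, p = {p_pair:.4f} ({verdict})")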
A total of 152 GPs responded to the survey, with participant characteristics presented in . Of the 500 postal surveys distributed, 123 GPs completed and returned the survey (24.6%), with four completing via the QR code. A further 25 participants completed the survey via the links on Twitter ( n = 19), GP Buddy ( n = 5), and WhatsApp ( n = 1). The main characteristics of the sample corresponded to a recent report from the Irish College of General Practitioners and suggest a good representation of the group across gender, age, practice size, and location . Quantitative results of the survey Preconceptions and planning for the role shows GPs’ responses to Likert statements concerning preconceptions and planning for the role. GPs working in primary care centres were significantly more likely to agree that they would have space to accommodate a pharmacist in their practices ( p < 0.05). A significant association was found between age and the response to ‘ I am familiar with the training/education that pharmacists undergo ’, with disagreements from 14% aged ≥60 years vs. 52% aged 30–39 years ( p < 0.05). When asked about funding, 84.6% thought the role being fully funded by the government was most feasible, 12.1% felt the role would only have to be partly funded by the government and 0.7% said it could be patient contribution. Most respondents (74.5%) agreed that if the government funded the role, the government-employed pharmacist should be shared between multiple practices, rather than practices hiring pharmacists and the government providing a grant for their salary (25.5%). Regarding the frequency of pharmacist presence on site, 48.3% thought pharmacists should work in practices 1–2 days per week, 39.3% chose 1–2 days per month, 11.2% chose daily, and 1.1% were unsure of the optimal frequency. There was a significant association between practice size and optimal frequency of pharmacist presence in practices; 70% of GPs in large practices agreed pharmacists should be present 1–2 days per week compared to 45% and 40% in medium-sized and small practices, respectively ( p < 0.05). Roles and activities for pharmacists in general practice Responses to Likert statements concerning the roles and activities of pharmacists in general practice are shown in . There was a significant association between practice size and agreement with the role of pharmacists conducting medication reviews; 11.5% of those in smaller practices disagreed, vs. 0 and 1% in large and medium-sized practices, respectively ( p < 0.05). There was a similar significant association between practice size and agreement with the role of ‘ liaising with community pharmacies ’; 12% in small practices disagreed with this role compared to 0% and 1.1% in large and medium-sized practices, respectively ( p < 0.05). There was a significant association between GPs’ experience and agreement with pharmacists developing practice guidelines/practice formularies; 63% of GPs with ≥40 years of experience agreed with pharmacists conducting this role compared to 93% of GPs with <40 years’ experience ( p < 0.05). Not having a practice nurse was significantly associated with GPs’ views on pharmacists vaccinating in practices; 0% of those without a practice nurse agreed to this role compared to 50% with a practice nurse ( p < 0.05). GPs’ responses to the perceived impact on the roles and relationships of others in general practice are shown in . 
When GPs were asked what experience they would most like to see a pharmacist come to their practice with, 59.4% ranked experience in general practice first, 35.3% in community pharmacy first, and 5.3% in hospital pharmacy first. The mean number of years of experience GPs said they would like to see pharmacists coming to general practice with was three years minimum (range 0–10 years). Potential outcomes of pharmacists in general practice GPs’ responses to the perceived outcomes of pharmacists working in practices are shown in . GPs’ perceived impact on the workloads of others is shown in . Practice (i) location in a primary care centre and (ii) size were significantly associated with perceived impact on administrative staff workloads ( p < 0.05); (i) 40% not in a primary care centre felt administrative staff workloads would be increased compared to 14% based in primary care centres; (ii) 50% in small practices felt there would be an increase compared to 16% in large practices ( p < 0.05). Single-handed GPs were significantly more likely to predict an increase in administrative staff workload compared to GPs in group practices ( p < 0.05). GPs who work more sessions anticipated an increase in their workloads more often than GPs who work fewer sessions per week ( p < 0.05), as shown in . When GPs were asked if the potential outcomes of pharmacists in practices justified the theoretical cost of employing them, 48.6% of GPs agreed, 24.7% disagreed, and 26.7% neither agreed nor disagreed. GPs were significantly more likely to agree that having pharmacists in practices would provide value for money if they agreed with pharmacists prescribing independently ( p < 0.05), were not concerned about pharmacist indemnification ( p < 0.05), and if they expected patients to be receptive to pharmacists ( p < 0.05). Most GPs (78.6%) agreed they would participate in a future pilot study with a general practice-based pharmacist. Nearly all GPs with <10 years of experience (96%) agreed to partake in a trial compared to just 54.5% of those with >40 years’ experience ( p < 0.05). GPs who approved of pharmacists performing independent prescribing were significantly more likely to agree to partake in a pilot ( p < 0.05). Furthermore, GPs who agreed with the pharmacist interpretation of clinical biochemistry in practices were significantly more likely to participate in a pilot ( p < 0.05). Qualitative results of the survey Comments in free text boxes were provided by 49.3% of survey participants (75/152). Reflexive thematic analysis of responses to open-comment questions led to the development of five overarching themes, described below and supported by representative quotations. Additional quotations are available in Supplementary File . Theme 1 – Irish primary healthcare is suboptimal, and change is needed GPs spoke about the extraordinary pressure faced in general practice in Ireland currently. They attributed this to several factors, including practice staff shortages, time constraints, increasing variety and complexity of medications, medication shortages, and the bureaucracy of the public health system. GPs believe that combining all those factors creates a challenging work environment that makes achieving optimal prescribing in practices difficult. GPs believe the change to the current suboptimal primary care system in Ireland is necessary and inevitable, but is unlikely to occur shortly. 
‘GPs do not have adequate time to go through medications’ effects or potential effects when prescribing.’ [GP 12] ‘Our problem is that there are not enough GPs.’ [GP 76] ‘We have a lot to go to get there…’ [GP 23] Theme 2 – Pharmacists are a useful resource and their role in primary care should be expanded GPs cited the usefulness of close doctor-pharmacist collaborations in other practice settings and jurisdictions, e.g. in hospitals and the UK’s National Health Service. They also referred to their positive, mutually beneficial relationships with community pharmacists who regularly provide helpful pharmaceutical care input to enhance patient care. GPs were, overall, very willing to engage with a future general practice-based pharmacist role in which pharmacists would fully utilise their skills, and expressed interest in participating in a pilot study of placing pharmacists in general practice. ‘Would think it is a great initiative, have seen it working very well in a hospital setting.’ [GP 53] ‘Huge wealth of knowledge which may be untapped.’ [GP 89] ‘I think it’s worth a pilot study and seems like a worthwhile idea to pursue.’ [GP 6] Theme 3 – Funding and governance of pharmacist roles in general practice GPs were adamant that they alone cannot fund pharmacist roles; in their view, the government should fund pharmacist roles in practices because it would be the main financial beneficiary. Most GPs wanted to be autonomous and work with the pharmacist independently of the bureaucracy of the public health system; at the same time, GPs did not want to shoulder the additional responsibilities that come with employing another staff member (e.g. paying their salary and professional indemnity, organising maternity leave cover, and increased paperwork). ‘I would not under any circumstance entertain paying a pharmacist to work in my practice. It would certainly have to be funded by the Health Service Executive if it was to roll out.’ [GP 84] ‘I am already overburdened (and underpaid) with the work of a small business owner and would like to be a doctor.’ [GP 145] ‘I wish to maintain autonomy over who works in my surgery.’ [GP 99] Theme 4 – What the role of a pharmacist in general practice would look like GPs gave several examples of the roles they would expect pharmacists in practices to perform, including medication reviews, providing medicines information, medication monitoring, chronic disease management, and addiction services. GPs said they particularly saw a role for pharmacists in high-risk areas, giving hospital discharge prescriptions and repeat prescribing as examples. Role logistics were debated in the comments: full-time vs. part-time roles, a dedicated pharmacist working in a single practice vs. one shared between practices, and a pharmacist working remotely for practices were all raised as possibilities. Some felt general practice-specific training may be required for pharmacists to work in practices. GPs also said they would value evidence or examples of where the role was already working well to give them an idea of what the role looks like and its potential outcomes.
‘Pharmacist very helpful part of team for CDM reviews/medication review.’ [GP 38] ‘Repeat prescribing/hospital prescriptions are higher risk; help would be great.’ [GP 19] ‘Access to advice re medications needs to be full time.’ [GP 66] ‘Would need specific training,’ [GP 72] Theme 5 – Anticipated outcomes from the role: Generally positive, with some unknowns GPs surmised about potential outcomes associated with pharmacists working in practices. For patients, GPs felt pharmacist presence would encourage patient adherence and improve patient outcomes. GPs thought that pharmacists would rationalise and standardise current prescribing practices and, therefore, save money spent on medications by the government. With respect to practices, GPs felt pharmacists may improve practice efficiency and improve the reputations of GPs. Although outcomes were broadly anticipated as positive, GPs still wondered about potential negative impacts on practice staff workloads, increased litigation, and potentially weakened GP-patient relationships. ‘I would love to see primary care pharmacy formalised as a role to support both GPs + community pharmacists to better patient care and safety.’ [GP 76] ‘The monetary benefit would be to the Health Service Executive [Ireland’s publicly funded healthcare system] medications bill.’ [GP 17] ‘Working with a pharmacist would initially increase our workload because adjustments needed to be made.’ [GP 151] ‘This develops over time over many ‘low stake’ consultations. Delegate these away to give us more time for the ‘high stake’ consultations and we don’t have ‘the relationship’ which is often the ‘secret sauce’ in why GP works.’ [GP 107]
Main findings This novel mixed-methods survey study is the first to explore GPs’ perceptions of working with pharmacists in practices without having previously worked alongside them. This study reveals that GPs in Ireland were broadly welcoming and optimistic about the future role of a general practice-based pharmacist. GPs felt the role should be government-funded. While most GPs found advisory roles for pharmacists related to medication optimisation acceptable, they were less keen on independent pharmacist prescribing, vaccinating, or managing and triaging minor ailments in practices.
While generally optimistic and welcoming towards the role, GPs had concerns regarding the potential impact of pharmacists in practices on the workloads of others, indemnification, and potential disruption to their relationships with patients. Strengths and limitations The survey content was based on the results of a qualitative evidence synthesis and a TDF-informed interview study ; therefore, this study was able to explore the causal determinants of GPs’ perceptions of pharmacist integration into general practice to inform future interventions better. While the survey utilised a multi-modal dissemination approach to enhance its reach and response rate, responses received via electronic routes could have been more extensive in number. The postal survey also included a novel element of a QR code on its front page to allow GPs to respond electronically to the survey, which is a strategy we are not aware has been previously employed. While the postal survey’s response rate of 24.6% is somewhat low, this is still within the range of reported response rates (18–78%) from other GP surveys in Ireland . Lastly, while Ireland’s primary care system does appear similar across several domains to multiple other European countries , the results may not be as generalisable or transferable to some regions. Comparison with existing literature The positive perception of the pharmacist’s role in practices reported by GPs in this study was mirrored in a recent survey of GPs in Northern Ireland, who currently work with practice-based pharmacists (PBPs) . However, the reticence of GPs towards practice pharmacists independently prescribing in our study is surprising given that 62.4% of the surveyed GPs in Northern Ireland reported that their PBPs were qualified as independent prescribers, and 76.2% were prescribing for patients in general practices. GP reticence towards pharmacist prescribing has also been described elsewhere in the literature; for example, GPs working in private practices in a study by Saw et al. tended to view pharmacists as ‘ medicines suppliers ’ and were less comfortable with expanded pharmacist roles like prescribing . Despite pharmacists with additional accreditation prescribing medications under GP supervision since 2003 (supplementary prescribing) and independently since 2006 in the UK , no legislative changes or additional accreditations have been enacted in Ireland. Furthermore, it has been shown that pharmacist prescribers prescribe safely, appropriately, and improve service accessibility . GPs’ reticence towards pharmacist prescribing may negatively impact the potential clinical benefit of pharmacist roles in Irish general practices. Concerns around funding pharmacists’ roles in practices are common in the literature, namely in Malaysia and Australia . These concerns were apparent amongst our sample, who preferred the role to be fully government-funded. GPs were adamant in open comment responses about not funding the role. GPs in Ireland already report fears about the financial viability of general practice, so it is doubtful they can support the salary of an additional healthcare professional . Moreover, as the GP respondents outlined, the government would likely be the primary financial benefactor of the role, as there would be reduced expenditure on medications resulting from deprescribing or medication optimisation. To date, cost-effectiveness analyses of the role show that the cost of deprescribed medications is the main financial justification for the role so far . 
In the UK, where pharmacists have been successfully integrated into practices, government funding has been utilised to initiate and maintain pharmacist presence in practices . Perhaps other countries seeking to establish the role should therefore model this UK funding strategy given the evidence that GPs are not the main financial benefactors of the role . GPs’ willingness to partake in a pilot study was a welcome finding, given the current GP workforce and workload issues in Ireland . Two initiatives, the iSIMPATHY project, and a Royal College of Surgeons in Ireland (RCSI) pilot study have demonstrated the potential for a high return on investment, resolution of medication-related problems, effective interprofessional relationships, and significant acceptability to patients, practice staff, and GPs . The qualitative evaluation of RCSI’s pilot study also showed GPs’ concern regarding funding, infrastructure, and potential impact on workload, which is akin to this study’s findings . However, RCSI’s pilot study was small and included just four practices, while the iSIMPATHY project has been confined to select parts of the country. Given that most practices in Ireland remain unexposed to general practice pharmacists, it would be prudent to consider how a similar national pilot would be implemented at a national level, akin to the UK’s successful pilot in 2015 . Implications The potential workload impact of integrating pharmacists into practices must be more clearly deciphered in future research studies. Akhtar et al. have recently suggested that both qualitative and quantitative key performance indicators should be utilised to evaluate the overall impact of practice-based pharmacists, including reduced GP workload measured by their free hours, number and quality of medication reviews, reduced medication wastage, clinical audits, and patient satisfaction surveys . In addition, GPs in this study needed to be more convinced of the value for money of the role. To date, studies in Ireland that have examined the cost-effectiveness of the role have done so to a limited extent, focusing on attributing cost savings of the role solely to deprescribed medications . Future cost-effectiveness analyses should also consider cost savings associated with potentially improved prescribing, such as avoiding preventable adverse drug events and hospitalisations . More definitive evidence of such cost savings may make it a more attractive endeavour for policymakers, GPs, and pharmacists. Given that pharmacists in general practices may be potentially constrained by an inability to alter or initiate medications, policymakers, legislators, and higher education institutions in countries like Ireland should consider the development of pharmacist prescribing and the training thereof to better facilitate such roles in the future – perhaps similar to the 6-month training for a certificate in prescribing that nurses in Ireland have been able to undertake since 2007 . In a recent Irish study exploring pharmacists’ perceptions of such integration into general practices, pharmacists have also identified the need for a pharmacist prescribing course to facilitate their prescribing in general practices . The GPs’ reticence to pharmacist prescribing identified in this study must also be explored further. A phased introduction of pharmacist prescribing would be more acceptable to GPs in Ireland (e.g. 
similar to the UK’s approach: before independent prescribing, having supplementary prescribing dependent on a prior diagnosis and an agreed pharmacist-GP clinical management plan ). Policymakers should also be cognisant of the need for adequate funding to support pharmacists’ roles in practices and ensure an adequate pharmacist workforce is available to support these general practice roles through liaising with pharmacist regulatory bodies and higher education institutions.
GPs surveyed in this study were mostly optimistic and welcoming towards pharmacists working in practices. However, this study also reveals GPs’ concerns about how pharmacist roles will be funded and indemnified, and how they will impact the workloads of others. This research has created a greater understanding of how GP and general practice characteristics impact GPs’ perceptions of integrating pharmacists into practices. This may help better inform future initiatives to integrate pharmacists into practices – ultimately to enhance patient care, support GPs, and utilise pharmacists’ skillsets to deliver the highest quality primary care models possible.
Identification of Salivary Exosome-Derived miRNAs as Potential Biomarkers of Bone Remodeling During Orthodontic Tooth Movement
ae472dd6-7138-4d76-92f6-68723317fe77
11818790
Dentistry[mh]
MicroRNAs (miRNAs) are an integral part of gene regulatory networks, influencing a considerable number of processes in both physiological and diseased states in different tissues and organs. Alterations to their levels have been associated with many diseases, and many miRNAs have been identified as biomarkers in pathological conditions . miRNAs are a set of non-coding RNAs that modulate gene expression post-transcriptionally. They are, on average, 22 nucleotides long and play an important role in biological processes . They target mRNAs by imperfect complementary binding, usually in the 3’ untranslated region (UTR), and they suppress their expression through a combination of translational inhibition and the promotion of mRNA decay. Their biogenesis involves transcription by RNA Pol II as pri-miRNA and cleavage to form pre-miRNA in the nucleus. Another cleavage then occurs through the action of Dicer, leading to the formation of mature miRNA in duplex form. The duplex miRNA is bound by Argonaute, thus forming the RISC complex, which is the effector of transcript downregulation . miRNAs have been shown to be involved in processes relevant to dentistry such as tooth development, bone remodeling, and the differentiation of dental stem cells . Many miRNAs are known to regulate osteoclast and osteoblast differentiation, maintenance, and function, all of which are important to bone remodeling. More specifically, in orthodontic tooth movement, miRNAs participate in the cascade of biological changes triggered by the applied force . In orthodontics, tooth displacement and skeletal growth modification occur due to the bone’s capacity to remodel. Orthodontic devices exert forces on the teeth and the surrounding tissues, thereby inducing reactions in their cells and extracellular components. Bone remodeling is regulated by a balanced system of two cell types, the osteoblasts and osteoclasts, and includes a complex network of interactions between cells, and between the extracellular matrix and the cells, in the presence of hormones, cytokines, growth factors, and mechanical loading . These processes heavily depend on the sterile inflammation that occurs upon the application of mechanical stress. Following the reception of mechanical cues, the signal conveying the mechanical conditions of the extracellular environment is carried towards the nucleus through MAPK kinases, most importantly the extracellular-signal-regulated kinases (ERKs) and c-Jun N-terminal kinases (JNKs) . Osteo-specific transcription factors such as Runx2 and the AP-1 components c-Jun and c-Fos are then activated, stimulating DNA binding at specific genes associated with osteoblast differentiation (ALP, osteocalcin, collagen type I). All of the above ultimately lead to a change in gene expression and reprogramming towards the osteoblastic phenotype. Additionally, mature osteoblasts produce cytokines such as RANKL and OPG, the balance of which is essential for osteoclast differentiation and bone resorption . The proteins essential for these processes are targeted by miRNAs. A set of miRNA molecules has already been validated as differentially expressed during orthodontic movement, including miRNAs 21, 27, 29, 34, 146, 214, and 101 . Lately, the research focus in dentistry and orthodontics has been directed towards the development of biomarkers derived from oral fluids.
This would be of interest because these biomarkers could provide information about the monitoring of physiological processes and serve as tools for the early diagnosis of diseases. Most studies regarding orthodontic tooth movement (OTM) have been performed on gingival crevicular fluid (GCF) . GCF is the fluid in the space between a tooth and the surrounding gingival tissues, known as the gingival sulcus, and it can contain cells, biomolecules, and microbiota . Apart from GCF, saliva has been identified as a potential fluid for biomarker discovery. Saliva can provide a variety of information on both oral and systemic health, as its contents can alter in diseased states. Various molecules have been identified as biomarkers in saliva, in patients diagnosed with cancer or autoimmune and infectious diseases . Despite the use of GCF as a biomarker source in orthodontic tooth movement, the limited sample material that can be reliably collected in the clinic, as well as the non-invasive and cost-effective collection of saliva, could make saliva a more widespread material for biomarker discovery . In the context of using saliva as a source for biomarkers, there is increasing evidence regarding the utilization of salivary exosomes. Exosomes are small extracellular vesicles (EVs) (40–150 nm) secreted by a variety of cells, and they play a crucial role in intercellular communication, as they carry various macromolecules such as RNAs and proteins . An important feature of exosomes is that they can pass through epithelial barriers, meaning that information from the underlying tissues in the mouth can eventually be found in the saliva. This phenomenon could be leveraged in order to identify biomarkers related to systemic skeletal disorders such as osteoporosis . The investigation of these salivary exosomes could clarify the molecular pathways associated with the bone-remodeling process, which occurs after the application of mechanical stress as in orthodontic treatment . As the current literature suggests, exosomal miRNAs could be a better source for biomarker studies . This study aimed to isolate salivary exosome-derived miRNAs in patients undergoing OTM and identify new biomarkers for bone remodeling. Following the isolation and analysis of salivary samples, transmission electron microscopy (TEM) confirmed the presence of EVs in the samples from patients undergoing orthodontic treatment. In the TEM analysis , the EVs appeared as spherical structures, with some exhibiting a distinct membrane-like structure. A complementary size distribution analysis via nanoparticle tracking confirmed that the majority of the EVs were between 100 and 150 nm in diameter, with a peak concentration around 100 nm . These findings confirm the successful isolation and identification of extracellular vesicles, which provide a viable source for a further miRNA analysis relevant to bone-remodeling processes during orthodontic treatment. The most variably expressed miRNAs can be viewed in the heatmap below , with no distinct clustering observed in either the samples or the miRNA expression patterns. Likewise, the MDS plot did not display distinct groupings based on time points or individual subjects, suggesting an absence of strong time-dependent or subject-specific patterns in the data. Of the comparisons made, the Day 40 vs.
Day 0 contrast yielded a statistically significant result, identifying hsa-miR-4634 as differentially expressed, with its expression reduced on Day 40 compared to Day 0 (log fold change = −1.9, mean expression = 6.2 log2-transformed CPMs, adjusted p -value = 0.043) . While other miRNAs did show notable changes in expression, they did not reach statistical significance after adjusting for multiple testing to reduce the likelihood of false positives (adjusted p -value > 0.1). Because these miRNAs were nevertheless nominally significant (unadjusted p -value < 0.05), their alterations are still worth reporting and analyzing . Using the database TarBase-v9.0, we generated a list of the genes that these miRNAs target . Afterwards, we performed a gene ontology (GO) enrichment analysis using the bioinformatics tool DAVID (Database for Annotation, Visualization, and Integrated Discovery) . Of all the processes affected by the input gene set, we searched only the processes that are the most relevant to bone remodeling, such as osteoblast differentiation (GO:0001649); osteoclast differentiation (GO:0030218); the positive regulation of osteoblast differentiation (GO:0045669); and the positive regulation of bone mineralization (GO:0030501) ( , and ). The present study is the first, to our knowledge, to investigate the expression of miRNAs in salivary exosomes during OTM in patients. The foremost finding of this study concerns miRNA hsa-miR-4634, which was found to have a statistically significant altered expression. Its expression was downregulated after 40 days of treatment. The literature regarding this particular miRNA is limited. Still, the only known target for this miRNA is VAV3 . VAV3 is a Rho family guanine nucleotide exchange factor, which has been shown to be essential for stimulated osteoclast activation and bone density. Guanine nucleotide exchange factors (GEFs) mediate the activation of Rho family GTPases by exchanging GDP for GTP, thereby switching the GTPase to its active state and influencing downstream signaling pathways. VAV3 has been identified as an essential factor in the regulation of osteoclast function. More specifically, VAV3-deficient mice were shown to have an increased bone mass because of dysfunctional osteoclasts and exhibited protection from stimulated bone loss. Also, the authors reported that the GTPase Rac1 was affected by defective VAV3 and concluded that Rac1 activation is VAV3-dependent, so that the cytokine-stimulated signaling required for cytoskeletal reorganization in osteoclasts is impaired . Based on the findings of the aforementioned study, it is evident that the regulation of VAV3 could be significant in orthodontic tooth movement, in the phase where osteoclast activity is essential for bone remodeling. It is possible that, in this setting, hsa-miR-4634 is downregulated in order for an upregulation of VAV3 to occur, a hypothesis that could be addressed in a future study with the use of experimental models. Apart from miR-4634, which remained statistically significant after p -value adjustment, some noteworthy results can be extracted from the other miRNAs with a p -value < 0.05, such as hsa-miR-195-5p and hsa-miR-1246, which are known to regulate osteoblast differentiation and inhibit the osteogenic potential of progenitor cells, respectively .
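The enrichment itself was run through the DAVID web interface. For readers who prefer a scripted equivalent, a minimal sketch using clusterProfiler's enrichGO (a stand-in for DAVID, not the tool used here) is shown below; target_entrez is a hypothetical placeholder for the TarBase-v9.0 target genes supplied as Entrez IDs, and the GO identifiers are simply those named in the text.

```r
library(clusterProfiler)   # enrichGO(): scripted alternative to the DAVID web tool
library(org.Hs.eg.db)      # human gene annotation used by enrichGO()

# target_entrez: placeholder vector of Entrez gene IDs (TarBase-v9.0 targets of the candidate miRNAs)
go_bp <- enrichGO(gene          = target_entrez,
                  OrgDb         = org.Hs.eg.db,
                  keyType       = "ENTREZID",
                  ont           = "BP",          # biological process ontology
                  pAdjustMethod = "BH")

# Restrict the output to the bone-remodeling terms listed above
bone_terms <- c("GO:0001649", "GO:0030218", "GO:0045669", "GO:0030501")
subset(as.data.frame(go_bp), ID %in% bone_terms)
```

The final subset step mirrors the manual filtering described above, in which only the processes most relevant to bone remodeling were retained from the full enrichment output.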
The GO enrichment analysis revealed that processes associated with bone remodeling were impacted, including osteoblast differentiation, the positive regulation of osteoblast differentiation, osteoclast differentiation, and the positive regulation of bone mineralization. A comparison of the three tables ( , and ) yielded significant data. The first statistical comparison was of the differential expression of miRNAs on Day 7 vs. Day 0, the second was of the expression on Day 40 vs. Day 7, and the third was of the expression on Day 40 vs. Day 0. What might be observed is that the first comparison showed genes related to processes of osteoblast differentiation, the positive regulation of osteoblast differentiation, and bone mineralization, while the other comparisons included osteoclast differentiation as well. OTM can be divided into three phases: the initial phase, characterized by rapid tooth movement immediately after force application; the lag phase, during which tooth movement temporarily halts due to the hyalinization of the periodontal ligament; and the post-lag phase, when movement resumes as necrotic tissue is cleared, allowing the tooth to continue its displacement . In our study, the selected timepoints corresponded to the initial (0 to 7 days) and the lag and post-lag phases (7 to 40 days). Current reports in the literature suggest that, in the initial phase after the application of mechanical stress, the early cellular response involves inflammation and tissue remodeling. Osteoclastogenesis, a critical process for bone resorption, tends to increase during the later stages of tooth movement, which aligns with our findings . The absence of detectable miRNAs targeting osteoclast differentiation genes early on can be attributed to the fact that osteoclast activity is still relatively low during this phase . In the lag and post-lag phases of orthodontic tooth movement, after the application of stress, alveolar bone remodeling becomes more pronounced as osteoclasts are activated on the pressure side of the tooth . At this phase, miRNAs related to osteoclast differentiation, such as those targeting the RANK, RANKL, or NFATc1 pathways, are more likely to be expressed . This molecular phase corresponds clinically with the acceleration phase of tooth movement, when bone resorption becomes more critical, facilitating tooth displacement through the alveolar bone. These reports are in accordance with our results, as in the initial phase of our research, only osteoblast differentiation and the positive regulation of osteoblast differentiation were affected, while osteoclast differentiation was affected at later time points . The detection of differentially expressed miRNAs in this final phase could be a result of the crosstalk of the periodontal ligament cells and the cells of the alveolar bones. The cellular source of EVs in the saliva is heterogenous, as various cell types can contribute . Apart from the epithelial cells and immune cells of the oral cavity, a portion of the EVs found in the saliva can originate from dental-tissue-derived cells, such as gingival mesenchymal stem cells and periodontal ligament stem cells . The periodontal ligament is mechanically stimulated during orthodontic tooth movement; it is the primary tissue that responds to mechanical signal transduction, and it conveys changes in the surrounding bone . 
As it is known that periodontal ligament stem cells (PDLSCs) change their EV miRNA content in response to mechanical stress in order to affect the activity of osteoblasts and osteoclasts , and preventing the release of EVs from PDLSCs results in a disrupted osteoclast function in OTM , one could hypothesize that the changes in the expression of miR-4634 and the other differentially expressed miRNAs in our study could be reflective of the response of the periodontal ligament during OTM. Summarizing the results of this study, the most significant finding was the downregulation of the miRNA hsa-miR-4634. This downregulation was found to occur on Day 0 to Day 40 of tooth movement. Even though there might be a plausible biological explanation for their altered expression that fits the setting of orthodontic stress application, none of the aforementioned miRNAs ( , and ) were found to be significant after the p -value adjustment. One of the potential limitations of this study is that the sample comprised patients in the first phase of orthodontic treatment, during which only a small number of teeth were moved. The cellular and molecular changes expected to occur might have been more pronounced if mechanical stress had been exerted on a larger number of teeth. It is important to note that the term saliva is used interchangeably with oral fluid. As the samples were collected after the patients chewed on parafilm, saliva production was stimulated . Because of this, the collected fluid was oral fluid that contained mainly saliva, but also other components of the oral cavity. Moreover, the flow rate was not recorded, but at least 5 min were required to produce 5 mL of saliva. This corresponds to a stimulated saliva flow rate of approximately 1 mL/min and aligns with the expectations in healthy adolescents . This study might serve as a starting point in investigating altered oral fluid miRNA expression during orthodontic treatment. Further studies are needed to elucidate the molecular interplay between miRNAs and their targets. The identification of potential biomarkers will be of great value to clinical orthodontics and future oral therapeutics. In molecular orthodontics, the clinician may be ultimately capable of controlling several clinical aspects, like the rate of tooth movement. The decoding of the human genome, along with new developments in molecular biology, has provided a much-anticipated boost to the biological sciences. Orthodontics, which increasingly relies on biotechnology, is expected to be significantly impacted by these advancements. 4.1. Patients This study received ethical approval from the ethical committee of the RWTH Medical University of Aachen in Germany, and informed consent was obtained from all the participants (ethical approval number: CTC-A-Nr.: 22-262). This study involved fifteen Caucasian patients aged 11–15 years (six females with a mean age of 14 ± 2.3 years and nine males with a mean age of 13 ± 1.3 years) presenting with a dental Class I or Class II malocclusion accompanied by moderate to severe crowding or spacing. When comparing the mean age of male and female patients, there were no statistically significant differences between the two groups ( p -value = 0.583), suggesting that gender did not have a significant effect on patient age. These patients were treated in a private orthodontic office in Bedburg, Germany. 
The age group used for this study was selected in order to minimize the likelihood of the presence of underlying conditions, such as periodontitis or systemic disorders, that could confound the results. The exclusion criteria included any history of cleft lip/palate, dentofacial deformities or syndromes, autoimmune diseases, or type 1 or type 2 diabetes; a history of drug use; and previous orthodontic treatment or intraoral/external oral surgery. All the patients underwent a thorough intraoral examination, a clinical assessment of the teeth, and a periodontal evaluation, which included the approximal plaque index (API), sulcus bleeding index (SBI), and periodontal screening index (PSI) . According to the orthodontic treatment plan, the 4 maxillary incisors and the 2 first maxillary molars were bonded with fixed pre-adjusted orthodontic appliances featuring a 0.22 slot size. From each patient, saliva samples were collected at three different time points: one week before bracket placement and archwire activation, 7 days after the treatment’s onset, and 40 days after the treatment’s onset. Thus, 45 saliva samples were finally collected. 4.2. Sample Size Calculation This study was designed as an exploratory analysis, with the primary objective of examining temporal changes in biomarkers, without testing predefined hypotheses. The sample size was determined through a power analysis based on a repeated measures design, where three measurements were collected from each participant at distinct time points: prior to bracket placement (Day 0), one week after the treatment (Day 7), and 40 days after the treatment (Day 40). Based on previous studies, a medium effect size (f = 0.30) was observed for changes in biomarkers over time . A power analysis was performed using the pwr package in R , which indicated that a minimum of 15 participants was required to achieve 40% power at an alpha level of 0.05 for detecting differences across the three time points. In exploratory analyses, the use of a lower power can be justified when the primary objective is to identify preliminary trends that may guide the design of future, more rigorous studies . Accordingly, 15 patients were recruited for this study, and saliva samples were collected from each participant at the three time points, resulting in a total of 45 samples. 4.3. Saliva Collection The saliva samples were collected in the dental office where the patients received orthodontic treatment. The collection was placed on ice in Falcon tubes previously refrigerated. The participants were instructed not to brush their teeth, chew gum, eat, or drink any liquids for at least 1.5 h before the visit. Saliva was collected upon arrival by having the patients chew a piece of parafilm for one minute while swallowing normally. The time of collection was set as late afternoon (15:00–18:00) for all the patients in order to prevent potential time-dependent changes in the saliva content. Before collection, 100 μL of PhosSTOP™ EASYpack (Roche Applied Science, Cat. No. 04 906 845 001, Mannheim, Germany) and 100 μL of cOmplete™ Mini Protease Inhibitor Cocktail Tablets (Roche Applied Science, Cat. No. 04 693 124 001, Mannheim, Germany) were added to a cold 50 mL Falcon tube. This was equivalent to ½ tablet of PhosSTOP and ½ tablet of cOmplete. The participants were instructed to begin chewing and then hold the saliva in their mouth (i.e., not swallow), and at 30-s intervals, eject the saliva into the cold 50 mL Falcon tube. 
The participants continued this process until a minimum of 5 mL had been obtained, or for up to 15 min of chewing and ejecting the saliva. After collection, the saliva samples were diluted with pre-cooled phosphate-buffered saline (PBS) at a ratio of 1:1. The samples were then centrifuged (Sigma 3K15, Sigma, Osterode am Harz, Germany) at 300× g for 20 min at 4 °C and the pellet was removed. Afterward, the supernatant was centrifuged further at 2000× g for 10 min at 4 °C, and the resulting pellet was discarded. This step was repeated at 5000× g for 30 min at 4 °C, and the final pellet was discarded. The remainder of the supernatant was stored at 4 °C until same-day transport and then stored at −80 °C in the lab. At the lab, the supernatant was thawed and centrifuged using an Optima LE-80K ultracentrifuge with an SW 32 Ti rotor (Beckman Coulter, Chaska, MN, USA) at 12,000× g for 20 min at 4 °C; then, the pellet was discarded. The sample was centrifuged at 120,000× g for 70 min at 4 °C, after which the supernatant was discarded. The pellet was resuspended, filtered using a 0.2 μm filter, and washed again by centrifugation at 120,000× g for 70 min at 4 °C. The final pellet was resuspended in filtered PBS and stored at −80 °C for long-term storage . 4.4. RNA Isolation RNA was isolated from 60 µL of exosome samples using the Maxwell RSC miRNA Plasma and Serum Kit (Promega, Madison, WI, USA) according to the manufacturer’s instructions. This process ensured the efficient extraction of small RNAs, including miRNAs, from the exosome samples. 4.5. RNA Quantification The concentration of the isolated RNA was determined using a Promega Quantus Fluorometer (Promega, Madison, WI, USA). This quantification step was crucial to ensure sufficient RNA input for downstream library preparation. 4.6. Library Preparation Sequencing libraries were prepared using the QIAseq miRNA UDI Library Kit (QIAGEN, Hilden, Germany), following the manufacturer’s protocol. QIAseq miRNA Library QC spike-ins were added to each sample to monitor the library preparation process and ensure the accuracy of miRNA detection. This step facilitated the normalization and validation of the library preparation efficiency. 4.7. Library Quality Control The size distribution of the prepared libraries was assessed using the Agilent TapeStation (Agilent Technologies, Santa Clara, CA, USA). The average library size was confirmed to fall within the expected range for miRNA libraries. After size confirmation, the library concentrations were determined with the Promega Quantus Fluorometer to ensure accurate loading for sequencing. 4.8. Sequencing Sequencing was performed on an Illumina NextSeq 500/550 system using a High Output Kit v2.5 (75 cycles) (Illumina Inc., San Diego, CA, USA). The libraries were sequenced in a single-end mode for 72 cycles, with a 1% PhiX spike-in used as an internal control to monitor sequencing quality and performance. 4.9. Statistical and Bioinformatical Analysis FASTQ files were generated using bcl2fastq (Illumina). To facilitate a reproducible analysis, the samples were processed using the publicly available nf-core/smRNAseq pipeline, version 1.1.0 , implemented in Nextflow 21.10.6 using Docker 20.10.12 (Merkel 2014) with the minimal command. Out of 45 samples, 1 was eliminated from further analysis because no miRNA could be detected in the sample. Therefore, miRNA counts from the 44 samples were analyzed to identify changes in the expression levels. 
Two samples with library sizes (sum of miRNA counts) of less than 10,000 were excluded, resulting in 42 samples for analysis. Such a phenomenon could be a result of the various centrifugation and washing steps involved in the process of purification. Out of the 1405 miRNAs, only those with at least one read in a minimum of five samples were retained, yielding 185 miRNAs. The miRNA expression table in TMM-normalized CPMs, after count-based miRNA filtering, is included in the . To assess the overall structure and quality of the data before conducting a differential expression analysis, a multidimensional scaling (MDS) plot was generated using the plotMDS function from the “limma” R package. In this plot, each point represents a sample, with the distance between points indicating the similarity of their miRNA expression profiles. The differential expression analysis was performed using the “limma” R package once again. The “voom” function from the “limma” R package was used to transform the count data into log2-counts per million (CPM) with associated weights, accounting for mean–variance relationships. Next, a linear model was fitted to the transformed data while accounting for within-subject correlations. The “duplicateCorrelation” function was employed to estimate the correlation between measurements from the same subject, with the inter-subject correlation being incorporated into the linear model fitting process. This correlation was subsequently used in the “lmFit” function, which fitted the linear model to the data, adjusting for the block effect (i.e., subject-specific variation). To compare different time points, contrasts were defined using the “makeContrasts” function for the following comparisons: Day 7 vs. Day 0, Day 40 vs. Day 7, and Day 40 vs. Day 0. These contrasts were applied to the fitted model using the “contrasts.fit” function. Finally, moderated t -tests were computed using the “eBayes” function to determine the significance of differential expression across the specified contrasts. The R script for this analysis is provided in the .
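The full script accompanies the article as supplementary material; the core of the workflow described above can be sketched as follows. The object and column names (counts, day, subject) are illustrative assumptions rather than the authors' actual code.

```r
library(edgeR)   # DGEList, calcNormFactors
library(limma)   # plotMDS, voom, duplicateCorrelation, lmFit, makeContrasts, contrasts.fit, eBayes

# 'counts' is an assumed miRNA-by-sample count matrix; 'day' and 'subject' are assumed per-sample
# factors (day: Day0/Day7/Day40 time point; subject: patient ID for the repeated measures)
dge <- calcNormFactors(DGEList(counts = counts), method = "TMM")   # TMM normalisation

design <- model.matrix(~ 0 + day)
colnames(design) <- levels(day)                 # "Day0", "Day7", "Day40"

plotMDS(dge)                                    # MDS overview of sample similarity

v      <- voom(dge, design)                     # log2-CPM with mean-variance weights
corfit <- duplicateCorrelation(v, design, block = subject)
fit    <- lmFit(v, design, block = subject, correlation = corfit$consensus)

contr <- makeContrasts(Day7 - Day0, Day40 - Day7, Day40 - Day0, levels = design)
fit2  <- eBayes(contrasts.fit(fit, contr))      # moderated t-tests for each contrast

topTable(fit2, coef = "Day40 - Day0", adjust.method = "BH")   # the contrast that flagged hsa-miR-4634
```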
The downregulation of hsa-miR-4634 may play a role in modulating osteoclast function, possibly through the regulation of the gene VAV3. The findings suggest that salivary miRNAs, particularly those derived from exosomes, hold promise as non-invasive biomarkers for monitoring bone-remodeling processes during orthodontic tooth movement. Although other miRNAs showed changes in expression, they did not reach statistical significance after adjustment, highlighting the complexity of miRNA regulation in response to mechanical stress. Future research should focus on elucidating the molecular interactions between miRNAs and their targets, which may provide valuable insights not only for advancing orthodontic treatments, but also for enhancing our understanding of broader skeletal health issues, such as osteoporosis. Understanding these molecular mechanisms could eventually lead to more personalized approaches in orthodontics, including the modulation of tooth movement rates based on individual miRNA signatures.
Salvage Whole-Pelvic Radiation and Long-Term Androgen-Deprivation Therapy in the Management of High-Risk Prostate Cancer: Long-Term Update of the McGill 0913 Study
42d56305-133e-4ce3-9ea0-85c69d2e8cf2
10453184
Internal Medicine[mh]
Prostate cancer remains the most common non-cutaneous malignancy diagnosed amongst the male population . The treatment of prostate cancer encompasses several reliable treatment options for patients afflicted with different disease stages, accounting for and optimized to different patient characteristics, such as performance status, risk group, and clinical and radiological stage, as well as individual patient preferences. Radical prostatectomy (RP) is a common and effective definitive treatment, offered to suitable patients after appropriate selection. While it is an excellent option for early-stage disease, early signs of treatment failure can be detected through a rise in PSA meeting the threshold of biochemical failure. However, newer data have shown that these failures can be effectively managed by a combination of salvage external beam radiotherapy (RT) and/or androgen-deprivation therapy (ADT). Over the past decade, there has been a remarkable evolution in imaging techniques aiming to more accurately stage prostate cancer patients. Innovative imaging technologies, such as positron emission tomography (PET) and PET/computed tomography (CT), have shown promising improvements in both sensitivity and specificity when compared to traditional imaging. Preliminary research suggests that PET tracers designed specifically for prostate cancer, such as F-18 fluciclovine, C-11 choline, and Ga-68 PSMA-11, may prove to be superior to FDG in cases of recurrent prostate cancer . The F-18 fluciclovine (Axumin), an amino acid analog, has been shown to exhibit enhanced uptake by cancer cells. In a study involving 143 prostate cancer patients with PSA-only recurrence, F-18 fluciclovine demonstrated a sensitivity of 91% and a specificity of 40% . Notably, the EMPIRE-1 trial demonstrated that incorporating F-18 fluciclovine PET imaging into radiotherapy-decision making and planning for men with increasing PSA levels after prostatectomy significantly improved outcomes from salvage RT in patients without evidence of extrapelvic disease via conventional imaging . Moreover, the PET tracers, F-18 and C-11 choline, which target cell-membrane-lipid biosynthesis, which is known to be increased in cancer cells, have shown promise. A systematic review of 47 studies involving 3167 patients revealed that C-11 choline or F-18 choline PET/CT led to a modification of the treatment plan in 41% of patients . Advancements in PET scanning using novel radiotracers targeting PSMA, such as Ga-68 PSMA-11 (gozetotide) and F-18 DCFPyL (piflufolastat F-18), have shown potential in detecting both locoregional and distant metastatic sites, even in prostate cancer patients with very low levels of PSA (<2 ng/mL). A systematic review of 15 trials involving 1163 prostate cancer patients revealed that Ga-68 PSMA-11 PET played a crucial role in managing patients with biochemical failure following initial local therapy, with changes in management observed in 54% of the cases . Indeed, after proper restaging, salvage RT has the potential to offer durable disease control in cases of recurrent prostate cancer, provided that the recurrence is confined within the treatment field and that an adequate radiation dose is administered to eliminate the residual cancer . The combination of prostate-bed radiotherapy (PBRT) with androgen-deprivation therapy has been investigated in several large, randomized phase III trials. 
The GETUG-AFU 16 trial showed a benefit in DMFS when short-term ADT (ST-ADT) was added to locoregional treatment (PBRT with pelvic-lymph-node radiotherapy (PLNRT) or previous LN dissection). Indeed, the study demonstrated a progression-free survival (PFS) rate of 64% at 10 years for patients treated with RT and ST-ADT, whereas the PFS was only 49% in those treated with RT alone. The difference was statistically significant with a p -value of <0.0001 . The RTOG 0534 (SPPORT trial), which enrolled almost 1800 patients, randomly assigned its participants to three treatment groups: PBRT alone, PBRT + ST-ADT, and PBRT + PLNRT + ST-ADT. The authors of the SPPORT trial noted a biochemical-progression-free survival (bPFS) advantage when adding PLNRT to PBRT and ST-ADT, compared to PBRT and ST-ADT alone . The long-term results of the RTOG 9601 showed a 5% overall survival benefit when patients received long-term ADT (LT-ADT), in the form of daily bicalutamide for 24 months, combined with salvage PBRT. The RTOG 9601 was the first major trial to highlight the potential long-term benefits of combining these treatment modalities . Collectively, these studies provide significant evidence for the effectiveness of the combined use of PBRT and ADT, emphasizing the need for further exploration of this therapeutic approach in patients with prostate cancer. In the McGill 0913 study, treatment was intensified with both LT-ADT and PLNRT for prostate cancer patients at high risk of relapse. We hypothesized that a longer duration of ADT and PLNRT may improve outcomes and, potentially, overall survival . In this analysis, we report the long-term results and updated survival outcomes of the McGill 0913 trial. 2.1. Patient Population Between 2010 and 2016, 46 high-risk prostate cancer patients who had BCR after RP agreed to participate in the McGill 0913 study. Patients were eligible if they had an initial PSA ≥ 20 µg/L, Gleason score ≥ 8, and/or a pathological T-stage ≥ T3. Patients needed to have adequate functional status (Karnofsky performance score (KPS) > 70) and marrow function (platelets ≥ 100,000 cells/mm³, hemoglobin ≥ 10.0 g/dL, and AST/ALT < 2 times the upper limit of normal). Patients with pelvic lymphadenopathy ≥ 1.5 cm, distant metastasis on imaging, detectable residual tumor, post-operative PSA ≥ 5.0 µg/L, prior pelvic radiotherapy, other malignancies within the past 5 years, or active severe comorbidities were excluded. 2.2. Treatment Patients started Bicalutamide 50 mg within 2 weeks of enrollment. After a total of 2–4 weeks of Bicalutamide, patients received a luteinizing hormone-releasing hormone (LHRH)-agonist injection, which was then administered every 3 months for 24 months. External beam radiotherapy (EBRT) was started 8–12 weeks after the first LHRH agonist injection. The EBRT was delivered in two phases. Fifteen patients were treated using 3D conformal radiotherapy, while the rest of the patients were treated via newer techniques, either IMRT or VMAT. The pelvis was treated with 44 Gy in 22 fractions in the first phase. The pelvic clinical target volume (CTV) encompassed the prostate bed, the remnants of the seminal vesicle, and the pelvic lymph nodes (LNs), which were contoured as per the RTOG pelvic LN guideline . In the second phase, the prostate bed only was boosted to 22 Gy in 11 fractions. Radiation was delivered using ≥6 MV photons.
Prior to 2012, a 3D conformal radiotherapy (3DCRT) technique was used, which was later replaced with an intensity-modulated radiotherapy (IMRT) technique. The dose was prescribed to the isocenter for 3DCRT plans and to the planning target volume (PTV) for the IMRT plans (i.e., 100% of the prescribed dose to cover 95% of PTV). 2.3. Assessments Pre-treatment assessments included a detailed history, physical exam, and completion of QOL questionnaires, such as the EORTC QLQ-C30 (11), the EQ-5D (12), the International Index of Erectile Function (IIEF-5) , and the International Prostate Symptom Score (IPSS) . Investigations included serological assessments (CBC, electrolytes, urea, liver-function tests, testosterone levels, and PSA), chest x-ray, CT or MRI of the abdomen and pelvis, and a whole-body bone scan. During radiotherapy, patients were seen weekly by their radiation oncologist. Observed toxicities were reported using the Common Toxicity Criteria version 3.0 scale (CTCAE v3.0) . Patients were assessed at post-treatment follow-up visits every 3 months for a period of 2 years, every 6 months for an additional 3 years, and annually thereafter. During each follow-up visit, a range of assessments were performed, including PSA, blood testosterone levels, EQ-5D, IPSS, IIEF-5, and toxicity. Furthermore, the EORTC QLQ-C30 questionnaire was administered and completed on an annual basis. 2.4. Endpoints and Statistical Analysis The primary objective of the McGill 0913 study was bPFS. Secondary outcomes included DMFS, OS, QOL, and toxicity. The bPFS was calculated from the date of the first injection of LHRH agonist to the date of biochemical recurrence. The DMFS and OS were calculated from the date of the first LHRH-agonist injection to the date of radiological progression or death from any cause, respectively. The Kaplan–Meier method using IBM SPSS v24 was used for the outcome analysis.
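The outcome analysis was run in IBM SPSS; for readers working in a scripted environment, an equivalent Kaplan–Meier sketch in R with the survival package might look like the following. The data frame and column names (mcgill0913, time_years, bcr_event) are hypothetical placeholders, not the study data.

```r
library(survival)

# Hypothetical data frame: one row per patient, 'time_years' measured from the first
# LHRH-agonist injection, 'bcr_event' = 1 if biochemical recurrence was observed
fit_bpfs <- survfit(Surv(time_years, bcr_event) ~ 1, data = mcgill0913)

summary(fit_bpfs, times = 10)          # Kaplan-Meier bPFS estimate at 10 years
plot(fit_bpfs, conf.int = TRUE,
     xlab = "Years since first LHRH-agonist injection",
     ylab = "Biochemical-progression-free survival")
```

The same construction, with the event indicator swapped for radiological progression or death, would give the DMFS and OS curves defined above.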
Between 2010 and 2016, forty-six patients participated in this study. Forty-three patients are included in the current analysis. Three patients were ineligible in the initial analysis, since they were not treated according to the protocol. Of the three, one opted for salvage surgery, while the other two refused radiotherapy. Baseline characteristics were evaluated for all the patients in the analysis . The high-risk pathological features present in the patient population included positive lymph nodes in 19% ( n = 8), Gleason scores ≥ 8 in 35% ( n = 15), extracapsular extension in 63% ( n = 27), and seminal vesicle involvement in 42% ( n = 18) . The postoperative median PSA was 0.30 μg/L (interquartile range (IQR) 0.20–0.47). At a median follow-up of 8.8 years (3.25–12 years), the 10-year bPFS, DMFS, and OS were 68.3%, 72.9%, and 97%, respectively ( , and ). No local or pelvic relapses were detected in this study. In fact, the distant failures involved either non-regional LNs or the bones. All the patients who developed bone metastases had Gleason scores ≥ 8 . While two patients died of other causes, no patients died from prostate cancer. The first patient died of congestive heart failure at the age of 83, with 10.17 years of follow-up. The second died of an unknown cause at the age of 80, with 6.75 years of follow-up. Since the prior report, no new toxicity events were observed. The rates of Grade 2 ADT-related or radiation-induced toxicities are reported in . Overall, the long-term toxicity profiles of the enrollees were acceptable. The QOL, assessed by the EQ-5D’s visual analog score , remained unchanged.
While the ADT seemed to be associated with a decrease in QOL, this was not statistically significantly different from the baseline ( p = 0.39) . The mean minimum QOL experienced by a patient at any given time while on ADT was 7.8 (standard deviation = 2.0), compared to a mean baseline QoL of 8.2 (standard deviation = 1.2). The benefit of combining PLNRT with local prostate RT and ADT in both salvage and definitive settings has long been debated. The use of LT-ADT has a proven survival benefit compared with ST-ADT in patients with high-risk prostate cancer receiving definitive RT to the prostate and pelvic lymph nodes . In a recently published phase III randomized controlled trial (POP-RT), PLNRT with LT-ADT in high-risk and very high-risk prostate cancer demonstrated a 14% benefit in biochemical failure-free survival (BFFS) and a 7% benefit in disease-free survival (DFS) at 5 years . Furthermore, the benefit of PLNRT in pathological or clinical node-positive disease was also suggested in a subgroup analysis of STAMPEDE . This partly contrasted with prior negative trials (GETUG-01 and RTOG 9413), which used ST-ADT combined with PLNRT. In fact, GETUG-01 used 4–8 months of ADT , while RTOG 9413 used 4 months of ADT . Regarding the effectiveness of PLNRT, the ongoing NRG0924 study is an attempt to provide a definitive answer. The NRG0924 trial aims to accrue 2580 patients to demonstrate whether prophylactic neoadjuvant ADT and PLNRT can improve OS in unfavorable intermediate- or high-risk disease. While numerous trials have investigated different combinations of RT and ADT in the salvage setting, the McGill 0913 was the first to prospectively evaluate PBRT with both PLNRT and LT-ADT. Compared to the aforementioned data, this treatment combination demonstrated encouraging disease control without significantly affecting patients’ QoL. Our results add further credence to treatment intensification for post-prostatectomy patients at high risk of relapse . In the RTOG 9601 trial, the cumulative incidence of biochemical recurrence after salvage RT at 12 years was 44.0% in the bicalutamide group. When comparing arms, the 5-year PSA failure improved from ~50% to 23% (hazard ratio (HR) = 0.48; 95% CI 0.40–0.58). Our 10-year bPFS rate was 68.3%, which compares favorably with that of the RTOG 9601 trial at 10 years. While direct comparisons between trials are not valid, differences may be attributed to the use of PBRT alone, without PLNRT . The DMFS has recently emerged as a robust surrogate for OS in prostate cancer . In the GETUG-AFU 16 trial, the 10-year metastasis-free survival was 75% (95% CI 70–80). In the arm comprising radiotherapy plus 6 months of Goserelin, an additional benefit in DMFS of 6% was found . This is comparable to our DMFS rate of 72.9% at 10 years. In the GETUG-AFU 16 study, the patients who did not undergo a pelvic nodal dissection during their radical prostatectomy and whose risk of nodal involvement was over 15% on the Partin table then received pelvic nodal radiotherapy . In the RTOG 9601, the OS actuarial rate at 12 years was 76.3% in the bicalutamide group . In parallel, our 10-year OS rate remains excellent at 97%. Similarly, our 10-year OS rate compares favorably with that of the GETUG-AFU 16 trial. The patients assigned radiotherapy plus Goserelin had a 10-year OS rate of 86% .
Regarding the treatment of node-positive patients after radical prostatectomy, 36 institutes in the USA took part in a study to assess whether immediate ADT offers benefits in terms of OS compared with deferred ADT. After a median follow-up of almost 12 years, early ADT led to significant improvements in OS compared to deferred ADT in patients with nodal metastases after radical prostatectomy. According to the latest published NCCN prostate cancer guidelines, CT, MRI, PET/CT, or PET/MRI with F-18 NaF, C-11 choline, F-18 fluciclovine, Ga-68 PSMA-11, or F-18 piflufolastat PSMA can be considered for evaluating equivocal results on initial bone imaging . For the soft-tissue imaging of the chest, abdomen, and pelvis, CT chest and abdominal/pelvic MRI are preferred. Alternatively, Ga-68 PSMA-11, F-18 piflufolastat PSMA PET/CT, or PET/MRI can be considered for bone and soft-tissue imaging. Due to the higher sensitivity and specificity of PSMA PET tracers in detecting micrometastatic disease compared to conventional imaging (CT, MRI) during initial staging, conventional imaging is not an essential prerequisite for PSMA PET. Therefore, PSMA PET can now be considered an equally effective, if not more effective, imaging tool. The RADICALS-HD study was part of the RADICALS protocol. The RADICALS design contained two separate randomizations seeking to answer different questions on radiation timing and ADT duration in the post-operative setting in a similar patient population. The RADICALS study overall examined nearly 4000 patients with a follow-up of fifteen years. The first randomization assessed the benefit of adjuvant versus salvage radiotherapy, thereby comparing the differences in the timing of the initiation of the post-operative RT. This question was evaluated through the RADICALS-RT study. The second randomization sought to address the optimal duration of hormone therapy through three arms, using either 0, 6, or 24 months of ADT. An overview of this question was presented in the report on the RADICALS-HD trial. The RADICALS-HD study assessed 2839 patients from the UK, Canada, and Denmark. Of these, 23% had high-risk features, including pT3b or T4 disease, whereas 20% had Gleason 8–10 disease. The median age of the enrolled patients was 66 years, whereas the median PSA prior to salvage was 0.22 ng/mL . The patients who were eligible to receive postoperative RT were randomized to either no ADT, ST-ADT, or LT-ADT. When comparing the ST-ADT arm to the LT-ADT arm, 2 years of ADT improved MFS. Indeed, 24 months of ADT improved MFS, with an HR of 0.77 (95% confidence interval (CI) 0.61–0.97, p = 0.03). When comparing the no ADT and the ST-ADT groups, no benefit was seen in terms of MFS, with an HR of 0.89 (95% CI 0.69–1.14). This corresponded to 10-year event-free survival rates of 79% vs. 80%, respectively. Furthermore, the MFS at 10 years was 72% in the 6-month group vs. 78% in the 24-month group. The time-to-salvage ADT was delayed in both comparisons, with an HR of 0.73 (95% CI 0.59–0.91). No OS benefits were noted in the groups when adding either ST-ADT or LT-ADT, with an HR of 0.88 (95% CI 0.66–1.17) . Interestingly, the RADICALS-HD trial permitted randomization through a three-arm comparison between 0, 6, and 24 months of ADT and two subgroup comparisons of 0 vs. 6 months and 6 vs. 24 months. The three-way randomization included 492 patients, whereas the randomizations into the two subgroups included 2347 patients, representing 83% of the total RADICALS-HD patient population.
A further subgroup analysis showed that the patients with high-risk features were more likely to receive 6 to 24 months of ADT rather than 0 to 6 months of ADT, probably due to inherent physician bias. Indeed, the patients with Gleason 8–10 disease represented 11% vs. 28% of the cohorts in the ADT-duration arms of 0–6 vs. 6–24 months. Disease stages of T3b or T4 characterized 16% vs. 30% of the patients in the ADT-duration arms of 0–6 vs. 6–24 months. Similarly, 3% vs. 8% of the patients in the ADT-duration cohorts of 0–6 vs. 6–24 months had positive lymph nodes as part of their pathological characteristics. Hence, while the results of the RADICALS-HD trial are promising, 24 months of ADT should not necessarily be applied to all patients requiring salvage treatments, but it should be considered preferable in higher-risk patients . Our results are further supported by the recent final publication of the SPPORT trial . In this study, node-negative T2–T3 prostate cancer patients with a PSA of 0.1–2.0 µg/L (median 0.34 µg/L) were randomized to either PBRT alone, PBRT with ST-ADT, or PBRT with both ST-ADT and PLNRT . At a median follow-up of 8.2 years, the 5-year freedom-from-progression rates were 70.9% with PBRT alone, 81.3% with PBRT and ST-ADT, and 87.4% with PBRT, PLNRT, and ST-ADT. Consistent with our results, the addition of PLNRT to PBRT and ST-ADT was found to further improve freedom from progression . Our trial distinguished itself from other publications by including positive-lymph-node prostate cancer patients. The results of our trial indicate that treatment intensification showed equally good outcomes for node-positive postoperative patients with intrinsically higher-risk disease. SPPORT and RTOG 9601 included only node-negative patients, and GETUG-AFU 16 enrolled only pN0 or pNx patients, whereas nearly 20% of the patients in McGill 0913 were lymph-node-positive. Nevertheless, our results are comparable to those of these large, randomized trials. In our final analysis, we did not find any evidence of newly emerging toxicity. There were also no statistically significant changes in the patients' overall QoL assessments compared to their baseline values . Our study has a few limitations, some of which are inherent to its phase II design. Indeed, the study had a small sample size, with only 46 participants. The study was open-label, with a single experimental arm and no comparator. Therefore, an external standard is required for a comparison with our results, and our trial remains mainly hypothesis-generating. This study was limited to a single academic institution, which may limit its external validity. However, our results are both promising and intriguing, particularly given our excellent results for DMFS and OS, despite the inclusion of a substantial portion of node-positive patients. The results of McGill 0913 are comparable to those of the recently published SPPORT randomized clinical trial, even though McGill 0913 enrolled only high-risk patients, including those with positive lymph nodes. In themselves, our results, given the above-mentioned limitations, are not sufficient to suggest a change in the standard salvage-treatment approach. However, by combining our results with those of RTOG 9601 and the recently published SPPORT study, we can conclude that high-risk prostate cancer patients with BCR post-RP should be treated with pelvic and prostate-bed radiation therapy in conjunction with long-term ADT until the data on systemic-therapy intensification emerge.
However, given the small sample size inherent to this phase II study, these data remain hypothesis-generating and should be further validated in a phase III randomized controlled trial.
Transient hypoxia drives soil microbial community dynamics and biogeochemistry during human decomposition
23de98ce-7f5b-4f78-8b89-a3c7e4608b60
11879408
Microbiology[mh]
Human decomposition in terrestrial ecosystems is a dynamic process that creates localized hotspots of soil nutrient cycling (Keenan et al. , , DeBruyn et al. ) and microbial activity (Cobaugh et al. , Metcalf et al. , Adserias-Garriga et al. , Mason et al. ). The study of its progression through time and under varied environmental conditions is crucial to our understanding of the mechanisms involved in recycling cadaver-derived organic matter. From a soil ecological perspective, vertebrate decomposition (both human and animal) results in a spectrum of changes to prevailing soil chemistry and subsequent biogeochemical cycling, as well as affecting the community structure, diversity, and function of microbial decomposer communities inhabiting this soil environment. When considering the decomposition of human remains specifically, these combined biogeochemical and ecological shifts have the potential to be harnessed as a forensic tool for refining time-since-death estimates, or the postmortem interval (PMI) (Metcalf et al. , Adserias-Garriga et al. , Singh et al. , DeBruyn et al. , Mason et al. ). Assessments of biogeochemical changes within decomposition-impacted soils (both human and animal) have included a wide variety of parameters: pH (Keenan et al. , Quaggiotto et al. , DeBruyn et al. , Mason et al. , Taylor et al. ), soil electrical conductivity (EC; Fancher et al. , Keenan et al. , Quaggiotto et al. , Mason et al. , Taylor et al. ), organic and inorganic C and N compounds (Towne , Aitkenhead-Peterson et al. , Szelecz et al. , Keenan et al. , DeBruyn et al. ), respiration (Cobaugh et al. , Keenan et al. , Mason et al. ), soil oxygen (Keenan et al. ), elemental analysis (Parmenter and MacMahon , Aitkenhead-Peterson et al. , Perrault and Forbes , Fancher et al. , Taylor et al. ), extracellular enzyme activity (Keenan et al. , , DeBruyn et al. , Mason et al. ), protein, fatty acid, steroid, and peptide concentrations (Macdonald et al. , Lühe et al. , , Keenan et al. ), naturally occurring stable isotope fractionation (Keenan et al. ), and metabolomics and lipidomics (DeBruyn et al. ). These diverse parameters have all demonstrated soil responses to decomposition on varied time scales from days to months, and to a limited extent across seasons (Meyer et al. , Perrault and Forbes , DeBruyn et al. ). Soil pH is typically altered, and the direction of the pH response varies in part between species; soil acidification is often observed in association with humans and alkalinization with animals (but see: Szelecz et al. , Keenan et al. , Quaggiotto et al. , DeBruyn et al. , Mason et al. , Taylor et al. ). EC increases immediately upon decomposition fluids entering the soil. Fluctuations observed in C and N compounds, respiration rates, and enzyme activity show promise as indicators of early periods of decomposition (active through advanced decay) where most soft tissue mass is lost; C and N transformations are related to increased heterotrophic microbial activity and accompanying oxygen use within the soil (Keenan et al. ). Terrestrial microbial ecology studies have shown that pH and EC (the latter often expressed in terms of salinity or ionic strength) are strong drivers of bacterial community structure across ecosystems (Rousk et al. , Rath et al. ). Likewise, organisms responsible for soil nitrogen cycling have a demonstrated sensitivity to both pH and soil oxygenation (Dent et al. ).
Therefore, it has been assumed that selective pressures exerted on soil microbial communities originating from decomposition-induced changes in soil pH, redox status, and EC result in profound alterations to microbial community structure and function over time. In recent years, advances in sequencing technology have made it possible to create longitudinal surveys of decomposition-related microbial community successional patterns (Mason et al. ). There has been interest in whether these patterns could be used to estimate PMI, inspired by forensic entomology and predicated on the hypothesis that, following the release of decomposition products into the soil, microbial decomposer communities in the soil exhibit consistently repeatable successional patterns over time as they respond to the nutrient influx. The majority of studies characterizing the soil postmortem microbiome (or "necrobiome") have focused on exploring bacterial community composition and successional patterns (Lauber et al. , Carter et al. , Cobaugh et al. , Metcalf et al. , Adserias-Garriga et al. , Singh et al. , Procopio et al. , Burcham et al. , Mason et al. ). By comparison, the exploration of changes within fungal communities has received less attention, but they also appear to exhibit some repeatable successional patterns (Metcalf et al. , , Carter et al. , Weiss et al. , Fu et al. , Procopio et al. , Mason et al. ). Soil fungi are key decomposers of organic material and tolerate much wider pH ranges than do bacteria (de Boer et al. , Rousk et al. ), leading to the hypothesis that fungi may be important decomposers in the acidic soils frequently associated with human decomposition. While the focus of this collective body of work has centered on identifying changes to the postmortem microbiome structure and diversity, there are several notable omissions in these prior studies: (1) studies have been primarily focused on the early period of decomposition in which soft tissue mass loss is greatest and not later time periods; (2) studies have included only limited environmental, soil, or seasonal explanatory data to aid in interpretation of microbial community compositional changes; (3) there are few direct comparisons of warm versus cool season decomposition dynamics; and (4) microbiome studies generally do not quantify microbial abundances or biomass to link community compositional changes to determine potential functional impact. Furthermore, many forensics-focused studies use animals as a human proxy (e.g. pig), despite increasing evidence that decomposition patterns vary between species due to a variety of considerations that include size, tissue composition, and scavenger preference (Notter et al. , Dautartas et al. , Steadman et al. , Barton et al. , DeBruyn et al. ). To address these knowledge gaps, we designed a pair of surface human decomposition studies with the following objectives: (1) characterize seasonal differences in biogeochemistry, microbial (bacterial and fungal) community composition, and microbial abundances with high temporal resolution over 1 year; and (2) integrate patterns observed in soil biogeochemistry with those of bacterial and fungal community structure to identify potential drivers of microbial succession.
To control as many variables as possible, we (1) used replicate human donors of similar mass to control for interindividual variation in size; (2) placed donors simultaneously in the field so that environmental conditions within each seasonal trial would be the same; and (3) monitored local temperatures to allow for direct comparison between seasonal trials based on accumulated thermal units. We hypothesized that we would observe different seasonal patterns in both biogeochemistry and microbial community succession due to differences in external temperatures, variations in deposition rates of decomposition products (i.e. “pulse” events in warm weather as opposed to the slow release of materials in cooler weather) and the presence or absence of insect activity. We further hypothesized that these collective differences would vary by soil depth as decomposition products translocated to subsoils. Experimental site This study took place at the University of Tennessee Anthropology Research Facility (ARF), Knoxville, TN, USA. The ARF is part of the University of Tennessee Forensic Anthropology Center (FAC) and is the longest running outdoor human decomposition facility in the USA, in service since 1981 (Damann et al. ). The site is composed of temperate mixed deciduous forest, with a Köppen climate classification of Cfa (humid subtropical). Soils belong to the Loyston–Talbott–Rock outcrop and Coghill–Corryton complexes (Hartgrove , Soil Survey Staff ), and are composed of the following clay minerals: vermiculite, mica, interstratified mica–vermiculite, and kaolinite (Taylor et al. ). Site layout and donor placement A total of six deceased human subjects (hereafter, “donors”) were used for this study, who were whole body donors to the FAC ( https://fac.utk.edu/body-donation/ ) to be used specifically for the purpose of decomposition research. No living human subjects were involved, therefore this study was exempt from review by the University of Tennessee Institutional Review Board. No preferences were made for ancestry, sex, height, or age. Donors had not been autopsied nor had they sustained physical trauma that might potentially create artificial points of ingress for scavengers, insects, or microbes. Donors within a weight range of 68.5–91.6 kg were selected to reduce variability due to mass. Field placement sites were chosen within new sectors of the ARF that had not previously been used for surface or burial decomposition experiments. Spring and winter trials were located ~20 m apart . Spring 2018: three male donors within a weight range of 90.7–91.6 kg (200–202 lbs) were accepted for this study. Heights ranged from 1.76 to 1.81 m, and body mass index (BMI) values ranged from 27.7 to 29.6 . Data were collected from 2 May 2018 through 13 May 2019 (376 days), and consisted of daily photographs, visual assessment of decomposition stages, and total body scores (TBS) according to Payne and Megyesi et al. , local environmental data, and soil samples for soil chemistry and DNA extraction. Winter 2019: three male donors within a weight range of 68.5–87.1 kg (151–192 lbs) were accepted for this study . These donors were placed 9 months after the start of the spring 2018 study in order to capture seasonal differences in decomposition patterns. Heights differed within 20 cm, and BMI values ranged from 24.7 to 29.4. Data collection began on 8 February 2019 and continued through 27 February 2020 (384 days). 
For both trials, temperature probes (Decagon Devices, RT-1) were inserted internally (rectally) into each donor prior to placement, and donors were immediately frozen. The still-frozen donors were simultaneously placed at the study site unclothed, supine, and in direct contact with the soil surface to allow insect and scavenger access. Donors were spaced 2 m apart , which previous research has shown is sufficient space to prevent cross-contamination from decomposition products: in our previous study, lateral transport of decomposition products in soil has not been found appreciable beyond 50 cm, and vertical translocation below ~20 cm (Keenan et al. ). Additional sensors (Decagon Devices, GS3) were placed in the soil underneath each donor to measure soil temperature, moisture, and EC. A third set of sensors were placed ~50 cm above the soil surface to record local ambient air temperature. Placement of these sensors was designed to avoid the influence of elevated cadaver temperatures during decomposition. Sample collection Soil samples were collected at 20 time points throughout the spring study, and at 19 points during the winter study. The first four samples in each seasonal study were selected based on visual assessments of decomposition stages: initial field placement, day 0; bloat, day 8 (spring) or day 21 (winter); active decay, day 12 (spring) or day 38 (winter); onset of advanced decay, day 16 (spring) or days 55–75 (winter). Once advanced decay had begun, sampling continued at approximately evenly spaced intervals based upon equivalent thermal units: 8400 accumulated degree hours (ADH) or 350 accumulated degree days (ADD), as calculated by local ambient air temperature measurements with a baseline threshold of 0°C. For the spring study, the remainder of the sampling dates corresponded to days 27, 43, 58, 72, 86, 103, 117, 132, 150, 168, 201, 254, 303, 340, 357, and 376, with skeletonization occurring by day 303. For the winter study, the remainder of the sampling dates corresponded to days 94, 110, 126, 140, 158, 172, 186, 201, 216, 234, 252, 290, 335, and 384, with skeletonization occurring by day 158 (Tables and ). Soil sampling was performed using a 1.9-cm (three-fourth inch) diameter soil auger to remove five 16-cm cores (or sufficient soil to equal 200 g) from underneath each donor as well as from paired control plots located between 2 and 3 m upslope from each donor. Cores were divided into two depth segments: 0–1 and 1–16 cm, and soils from each depth were composited to create single samples from underneath each donor and from control plots. This 1-cm depth sample was referred to as an interface sample and was designed to capture hot-spot effects in the immediate proximity to decomposing remains. Dissolved oxygen (DO) was measured underneath donors and in controls using an Orion Star A329 multiparameter meter (ThermoScientific). A total of 12 samples were collected per sampling time point (six cores and six interfaces). Samples were transported back to the laboratory, where soil aliquots for DNA extraction were flash-frozen and stored at −80°C. All other soils were stored at 4°C and were processed within 48 h. Soil physical and chemical analyses Particle size analysis (PSA) was performed on homogenized control soils to verify similarity of field placement sites. Air-dried soils were sieved with 2-mm sieves, and organic matter was removed. PSA was performed using a Malvern Mastersizer 3000 laser particle size diffractor. 
Soil chemical analyses followed methods described in Keenan et al. . Briefly, soils were homogenized, and debris >2 mm (rocks and insects) were removed. Gravimetric moisture was calculated from mass loss following oven drying for 72 h at 105°C. Oven-dried soils were ground and analysed for total carbon (TC) and total nitrogen (TN) using a Vario Max CN Elemental Analyser (Elementar, Hanau, Germany). Soil pH and EC were measured on soil slurries of 1:2 soil:deionized water using an Orion Star A329 multiparameter meter (ThermoScientific). Respiration rates were assessed according to protocols outlined in Keenan et al. . Briefly, soils were sealed in 60 ml serum vials, and 0.5 ml headspace aliquots were removed immediately after sealing (T0), and again following a 24-h incubation (T24) at 20°C. Measurements were conducted in duplicate using a LI-820 CO 2 gas analyser and consisted of injection of 0.5 ml of the serum vial headspace. Soils were extracted in 0.5 M K 2 SO 4 , 1:4 parts soil and salt solution. Slurries were shaken at room temperature for 4 h at 160 r min −1 , after which they were allowed to settle and then vacuum-filtered using 1 µm glass microfiber filters (Ahlstrom). Filtrates were stored at −20°C until downstream processing. Ammonium and nitrate in filtrates were quantified colorimetrically in triplicate according to protocols described in Keenan et al. . DNA extraction, library preparation, sequence processing, and quantitative PCR Prior to DNA extraction, replicate control samples for interfaces (0–1 cm) and cores (1–16 cm) were each pooled into single samples by equal weight per sample. Impacted soils were not pooled. This yielded eight extractions per sampling time point: three impacted interfaces, three impacted cores, one pooled control interface, and one pooled control core. The DNA was extracted from 250 mg soil samples using the QIAGEN DNeasy ® PowerLyzer ® PowerSoil ® Kit (Hilden, Germany) according to manufacturer's protocols, and included a prolonged bead beating step (4000 r min −1 for 45 s) recommended for soils with high clay content. Concentrations were quantified using a NanoDrop One spectrophotometer (ThermoScientific). Library preparation and sequencing was performed on the Illumina MiSeq platform at the Genomics Core Facility at the University of Tennessee, Knoxville. Sample preparation for amplicon sequencing consisted of polymerase chain reaction (PCR) amplification of the bacterial 16S rRNA region using universal primers 515F and 806R (Apprill et al. , Parada et al. ). Preparation for amplicon sequencing of the fungal ITS2 rRNA region used a mixture of primers (six forward and two reverse: ITS3NGS1, ITS3NGS2, ITS3NGS3, ITS3NGS4, ITS3NGS5, ITS3NGS10, ITS4NGR, and ARCH-ITS4) as previously described (Cregger et al. ), to amplify regions of ~300–400 nucleotides in length. Samples were diluted 1:10 for amplification and PCR amplification was performed using 12.5 µl 2x KAPA HiFi HotStart ReadyMix Taq (Roche, Indianapolis, IN), 5 µL each of 1.5 µM forward and reverse primers, and 2.5 µl sample, for a total volume of 25 µl per sample.
A second indexing PCR was performed using Nextera XT indexes with PCR cycling performed as described above except with eight cycles of elongation. Index PCR cleanup was again performed using AMPure beads, and final quality and concentrations were determined on a Bioanalyzer (Agilent, Santa Clara, CA). Samples were pooled to a final loading concentration of 4 pM, combined with 15% PhiX, and paired-end sequencing performed on a v3 flow cell on the Illumina MiSeq sequencing platform. Raw sequence reads are deposited in the NCBI Short Read Archive under BioProjects PRJNA1066312 (Spring data set) and PRJNA1070662 (Winter data set). Quality control of bacterial 16S sequence reads was performed using MOTHUR (v. 1.44.0) (Schloss et al. ). Briefly, paired end reads were merged and primers were trimmed, allowing a maximum of two nucleotide differences on both forward and reverse primers. Sequences were screened to remove ambiguous bases, sequences shorter than 50 bp, and longer than 275 bp. Sequences were aligned to the SILVA v132 reference database (Quast et al. ), overhangs were removed, and sequences with two or fewer differences were merged. Chimeras were removed using VSEARCH, sequences were classified, and unwanted lineages (chloroplast, mitochondria, unknown, archaea, and Eukaryota) were removed. Sequences were clustered into operational taxonomic units (OTUs) at 97% similarity. Quality control of fungal ITS reads was performed using a customized MOTHUR (v. 1.44.0) pipeline (Schloss et al. ). Paired end reads were merged, and primers were trimmed, as described for 16S sequences. Reads were screened to remove sequences with ambiguous bases and read lengths shorter than 200 bp. Initial clustering was performed, allowing up to three nucleotide differences between sequences. Chimeras were removed using VSEARCH, and remaining sequences were mapped to the UNITE 8.2 reference database using the standard cutoff of 80% (Abarenkov et al. ). Unwanted lineages were removed (unknown, Protista, protozoa, Plantae, Chromista, bacteria, and Animalia). Since ITS sequences are of differing length, sequences were clustered into OTUs using greedy clustering with a cutoff of 0.05 in VSEARCH. The PCR blanks and test data were removed from all datasets, and alpha diversity indices (Shannon, Inverse Simpson, and Chao1) were calculated in R using Phyloseq (Bioconductor BiocManager 1.30.10) (McMurdie and Holmes ) following both the removal of singletons and the removal of doubletons. Removal of singletons resulted in 8098 ITS OTUs and 19 591 16S OTUs for the spring dataset, and 7846 ITS OTUs and 16 388 16S OTUs for the winter dataset. Removal of doubletons resulted in 6021 ITS OTUs and 15 861 16S OTUs from the spring dataset and 5808 ITS OTUs and 13 089 16S OTUs from the winter dataset. For both seasons, changes to diversity were negligible between removal of singletons and doubletons; since our focus was to determine robust patterns of change rather than identifying changes to rare taxa, we have selected removal of doubletons for all downstream analyses. Bacterial and fungal quantities were estimated using quantitative PCR (qPCR) of rRNA genes as a proxy. The Femto ™ Fungal DNA Quantification Kit and Femto ™ Bacterial DNA Quantification Kit (Zymo Research Corporation, Irvine, CA) were used according to manufacturer protocols. The qPCR was performed on a Bio-Rad CFX Connect Real-Time PCR Detection System (Bio-Rad Laboratories, Inc., Hercules, CA). Three outliers showing poor or no amplification were removed from the dataset.
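As a rough illustration of the OTU filtering and alpha diversity workflow described above, the following sketch shows how doubleton removal and diversity estimation could be carried out with the phyloseq package; the object name ps is hypothetical, and the authors' actual scripts are those deposited in the GitHub repository cited below.

```r
# Illustrative sketch; 'ps' is a hypothetical phyloseq object (OTU table, taxonomy, metadata).
library(phyloseq)

# Remove OTUs observed two times or fewer across the data set (singletons and doubletons).
ps_filt <- filter_taxa(ps, function(x) sum(x) > 2, prune = TRUE)

# Shannon, Inverse Simpson, and Chao1 indices for each sample.
alpha_div <- estimate_richness(ps_filt, measures = c("Shannon", "InvSimpson", "Chao1"))
head(alpha_div)
```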
Statistical analyses Donors ( n = 3 in the spring, and n = 3 in the winter) were treated as experimental replicates, each with a paired control site. Shapiro–Wilk tests were performed, which showed that the data were not normally distributed over the course of the study, so nonparametric Kruskal–Wallis tests ( P < .05) were used to test for differences between treatments (impacted soils underneath donors and controls). Student's t -tests were conducted at each sampling time point to determine if impacted soils significantly differed from controls. Canonical analysis of principal coordinates (CAP) on Bray–Curtis distances of community compositional data was performed followed by PERMANOVA ( P < .05) in order to test for relationships between biogeochemical variables and changes in microbial beta diversity by study day. DO values for interfaces were extrapolated from core data, and data following day 168 (spring trial) and 201 (winter trial) were estimated to remain at 100%. Analyses and visualizations were conducted in R (version 3.6.1) using the tidyverse (1.2.1), vegan (version 2.5–6), ggplot2 (version 3.2.1), RColorBrewer (version 1.1–2), and Phyloseq (Bioconductor BiocManager 1.30.10) packages (R Development Core Team , McMurdie and Holmes , Neuwirth , Berry , Wickham , , Oksanen et al. ). Analysis files and R code are available at: https://github.com/jdebruyn/ARF-seasonal .
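A condensed sketch of how such tests could be set up with the packages listed above is given below. It uses hypothetical object and column names and is intended only to illustrate the general approach; the published analysis scripts are available at the GitHub link above.

```r
# Illustrative sketch with hypothetical objects.
library(vegan)

# soil_df: hypothetical data frame of soil chemistry (columns: treatment, day, pH, EC, ...).
kruskal.test(pH ~ treatment, data = soil_df)          # Kruskal-Wallis test across treatments

# otu: hypothetical community matrix (samples x OTUs); env: matching sample metadata.
# Canonical analysis of principal coordinates (CAP) on Bray-Curtis distances.
cap <- capscale(otu ~ day, data = env, distance = "bray")

# Permutational test (PERMANOVA) of the study-day effect on community composition.
adonis2(otu ~ day, data = env, method = "bray", permutations = 999)
```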
Decomposition stages Spring 2018: three frozen donors were placed at the study site on 2 May 2018 (day 0) . Fly egg masses were visible in the nasal cavities of all donors within 24 h, and by day 2 extensive fly egg oviposition was evident throughout facial areas . Internal temperatures of the donors equilibrated with ambient air temperatures by day 4 when the first evidence of bloating was visible on the lateral abdomens . Donors exhibited extensive bloating by day 8. Bloating decreased by day 12 and there was visible fluid release into the soil, marking the beginning of active decay. Larval masses visibly peaked between days 13 and 15. By day 16 visible cadaver decomposition islands (CDIs) were well-developed, substantial tissue loss was evident, and visible changes in decomposition progression had slowed, marking the beginning of advanced decay.
During the latter period of advanced decay (day 86 onwards) interindividual variation of tissue loss was apparent. Generally, soils beneath donors were initially greasy, followed by the development of hard soil crusts. Intermittent white fungal growth and the development of pockets of adipocere were also observed. Skeletonization or partial mummification began following day 303. Winter 2019: a second set of three frozen donors was placed at the ARF nearby the first trio of donors on 8 February 2019, such that local environmental and edaphic conditions as well as forest canopy and understory conditions were similar . By day 7, internal donor temperatures had equilibrated with ambient temperatures . Donors were sampled on day 21, around the onset of bloat . The bloat stage was less-well defined than in the spring study and consisted of only slight bloating with a continuous seepage of decomposition fluid. No signs of insect activity were observed during any of the early stages of decay. Minor scavenging was observed during early decay and limited to extremities. Active decay was estimated to begin on day 38; CDIs became well-developed, and adipocere was observed in soils under the torsos. Since soft tissue mass loss was gradual, samples were collected on both days 55 and 75, representing "active-advanced" and "advanced decay 1," respectively. During the latter period of advanced decay adipocere quantities reduced and fungal growth was observed on soils underneath donors. Gradual mass loss continued with very little interindividual variation in decomposition rates observed between the three donors. Donors were mostly skeletonized or mummified by day 158. Comprehensive field notes, including TBS (based on visual scoring), can be found in the supplementary materials ( and , ). Environmental sensor data Spring 2018: following donor internal and ambient temperature equilibration, both internal and impacted soil temperatures increased above ambient air temperatures by day 8, the point at which donors were fully bloated . Elevated temperatures continued through day 80, and ranged from 41.3°C to 45.6°C, commensurate with the greatest visible larval masses. Maximum soil temperatures underneath donors reached 40.2°C–42.8°C corresponding to days 23–24. Ambient air and soil control temperatures did not differ from one another. Soil moisture increased immediately upon donor placement and remained elevated for the majority of the study. The largest increase above control values occurred between days 91 and 122 in the latter half of advanced decay, when soils were still covered by donor remains as well as a hard crust of decomposition materials . Soil moisture maxima in decomposition soils ranged from 52.1% to 54.7% volumetric water content (%VWC). EC data originating from soil sensors were similar to those measured in core and interface samples in the lab; sensor prongs extended ~6 cm into the soil, thus homogenizing the patterns reported for the respective sampling depths. Graphical data is presented here for reference . Winter 2019: in contrast to patterns observed in the spring, there was no difference between internal, ambient, and soil temperatures in the winter study, nor were differences observed between impacted soils and controls for moisture. Conductivity increased immediately after donor placement, remaining elevated through day 210 ( and ). For both seasonal datasets, cumulative increases in temperatures reflect similarities between ambient air and soil control temperatures, as well as similarities between donor and impacted soil temperatures, as noted above; patterns in cumulative temperature differ, as do final cumulative temperatures at the end of the study (Tables and ). Soil biogeochemistry Soils for the spring donor site were composed of 6.34% sand, 85.02% silt, and 8.64% clay. Soils from the winter site were composed of 10.96% sand, 81.44% silt, and 7.6% clay. Spring 2018 core (1–16 cm) samples. Over the course of the entire study, all measured soil chemical parameters in decomposition-impacted cores differed from controls with the exceptions of TN and the C:N ratio (Kruskal–Wallis, P < .05) . The first changes to soil chemistry were detected during the bloat phase (day 8; 2703 ADH based on donor soil), when the first evidence of fluid seepage into the soil was visible (Figs and , Table ; ). Following this bloat period, soil acidified and remained acidic for the majority of the study.
Decomposition products stimulated heterotrophic respiration, resulting in reductions in soil oxygen and nitrate during active decay (day 12; 5362 ADH); mean DO decreased to 38.9% and nitrate concentrations decreased below background levels, falling to below 50% of control concentrations. Respiration rates continued to increase, reaching a maximum of 55.4 ± 9.20 µmol CO 2 per gram dry weight per day (gdw −1 day −1 ) on day 58. Both TC and the C:N ratio in decomposition soils were significantly elevated on day 27 (18 289 ADH) during early advanced decay, at 11.7 ± 2.3%C and 16.8 ± 0.5, respectively. TN in impacted soils ranged from 0.39 to 0.77% N but did not vary significantly from control soils (0.37 to 0.6% N). Ammonium concentrations in decomposition soils increased in conjunction with increases in EC, becoming significantly different from control soils by day 8 (2703 ADH) and exhibiting maximum concentrations during early advanced decay (day 16; 8360 ADH) at 665.2 ± 375.6 µg gdw −1 and 568.7 ± 232.3 µS cm −1 , respectively. Following soil oxygen recovery on day 168 (107 850 ADH), nitrification began to occur and nitrate concentrations steadily increased to a maximum during skeletonization (day 340; 144 538 ADH) at 101.2 ± 85.2 µg gdw −1 . By the end of the study no parameters exhibited significant differences between decomposition soils and controls, however mean values and wide standard deviations indicate that some degree of impact was still evident. Spring 2018 interface (0–1 cm) samples. All measured parameters in impacted interface soils were altered during decomposition compared to controls (Kruskal–Wallis, P < .05) . Overall response patterns were consistent with those observed in core soils, however there was greater variability in the interfaces; i.e. changes were greater in magnitude in decomposition soils both in terms of mean values and standard deviations, frequently leading to statistical nonsignificance (Figs and , Table ; ). An especially noteworthy example of this occurred in ammonium; initial significant increases coincided with the maximum soil concentration on day 12 (6905.5 ± 1325.3 µg gdw −1 , a 500x increase over controls), however the following sampling time period had a similar mean, but doubled variance. Mean EC reached a maximum of 969.1 ± 1336.7 µS cm −1 during bloat, immediately followed by a brief decrease to 4.0 ± 0.9 µS cm −1 during active decay; this decrease occurred during the initial presence of "greasy" decomposition products in soil samples and extracts and was observed in soils associated with all three donors. Shifts in inorganic N pools (ammonium and nitrate) reflected core patterns, except at higher soil concentrations, and early nitrate decreases below control values were apparent throughout the period in which soil oxygen was <75%. TC and TN were both significantly elevated by day 16, and reached maxima on day 27, both at approximately twice core values. Over the course of the study TC returned to control values in mid-advanced decay, however TN remained elevated throughout, in part due to the respective timing and soil concentrations of ammonium and nitrate. The C:N ratio reflected both patterns in TC and TN: brief elevation on day 12 corresponding to increased TC abundances, followed by an extended period (days 132–357) in which the C:N ratio fell below that of control soils, corresponding to the period in which TC recovered to control values but TN remained elevated. Winter 2019 core (1–16 cm) samples.
Soil chemical parameters in decomposition-impacted core soils significantly differed from control soils with the exceptions of TN and TC (Kruskal–Wallis, P < .05) . In comparison with immediate soil responses apparent during bloat and active decay in the spring study, parameters in the winter study underwent more gradual change with maxima or minima occurring in advanced decay and skeletonization (Figs and , Table ; ). Mean soil pH gradually became acidic, reaching only 6.3 ± 0.3 in early skeletonization (day 158; 71 952 ADH), in contrast with spring cores, which reached 5.7 ± 0.2 by early advanced decay. Respiration rates reached maximum values in early advanced decay (day 55, 12 127 ADH) at concentrations only 62% of spring core samples. Likewise, soil oxygen levels declined gradually over 75 days, dropping to 71.3% of initial values, approximately half of the decrease observed in the spring trial. Ammonium increases were gradual and did not display the sharp increases observed in the spring trial; both ammonium and nitrate maxima occurred on day 140 (60 714 ADH) at 329.4 ± 210.5 and 75.7 ± 48.9 µg gdw −1 , respectively. Unlike patterns observed in spring core data, nitrate did not fall below control concentrations during (lesser) soil oxygen reductions. Instead, there was greater overlap of ammonium and nitrate concentrations in winter core data in comparison with the oxygen-dependent separation that was visible in the spring core data. Conductivity reached maximum values on day 110 (43 005 ADH) at 356.4 ± 184.9 µS cm −1 . TC and TN reached maxima on day 172 at 12.0 ± 9.1%C and 1.18 ± 1.05% N, respectively, both approximately double that of control soil values, but, in a similar manner to spring core soils, patterns of change were not noteworthy. Winter 2019 interface (0–1 cm) samples. As observed in the spring trial, all interface soil chemical parameters differed significantly from control soils (Kruskal–Wallis, P < .05) . Overall patterns followed those of winter cores with respect to timing, and with the exception of pH, parameter maxima (or minima) occurred during active and early advanced decay (days 55–94) (Figs and , Table ; ). pH became more acidic, reaching 6.2 ± 0.2 mid-way through skeletonization (day 216; 104 845 ADH). Respiration rates increased during active decay (day 38; 7594 ADH), reaching a maximum in early advanced decay (day 94; 33 575 ADH) of 97.5 ± 7.7 µmol CO 2 gdw −1 day −1 , and remained elevated throughout much of the study. Mean EC and ammonium concentrations both reached their maxima during active decay (day 55; 12 127 ADH) at 926.8 ± 576.7 µS cm −1 and 2811.2 ± 2241.2 µg gdw −1 . TC, TN, and the C:N ratio patterns followed those observed for spring interface soils: simultaneous maxima of TC and TN (day 94), early recovery of TC (day 110, 43 005 ADH), late elevations of TN (day 335, 142 473 ADH), and C:N patterns that were initially elevated above controls and fell below control values during the latter half of the study. As observed in the spring trial, nitrate concentrations fell below control concentrations early in the study (days 38–94), although in this case reductions in concentration occurred simultaneously with ammonium concentration increases and just prior to oxygen decreases. Fungal communities Diversity and abundances Fungal community alpha diversity (Shannon, Inverse Simpson, and Chao1) decreased during the onset of soil chemical changes and remained low for the remainder of both study trials (Kruskal–Wallis, P < .05) .
The opposite effect was observed for fungal gene copy number and the fungal:bacterial ratio (Kruskal–Wallis, P < .05) . The decrease in alpha diversity occurred earlier in the spring trial in comparison with the winter trial, and depth effects were not noticeably present (Fig. ). In contrast, increases in fungal gene copy number and the fungal:bacterial ratio occurred early in advanced decay during both spring and winter, with greater effects observed in interface soils (Fig. , Tables and ; ). CAP ordination of Bray–Curtis distances and PERMANOVA showed significant differences in community structure between controls and impacted soils by study day ( P < .001) (Fig. ). Community changes during early decomposition were positively correlated with increases in ammonium (NH 4 ), TC, TN, CN ratio, EC, fungal gene abundances, and the fungal:bacterial ratio, and negatively correlated to pH and soil oxygen (DO). Later communities correlated with increased concentrations of nitrate (NO 3 ) (Fig. ). This same pattern was observed for the winter trial, however communities exhibited greater dispersion. In both trials community structure remained altered compared with initial communities by the end of 1 year. Community composition Spring 2018: of phyla representing greater than 2% relative abundance in spring trial soils, Ascomycota constituted the largest proportion of community composition in all control samples of both soil depths, followed by Basidiomycota, Mortierellomycota, Mucoromycota , and Rozellomycota . From bloat through early advanced decay (days 8–58) in interfaces, community composition became almost exclusively composed of Ascomycota , specifically Yarrowia ; this shift occurred simultaneously with decreases in pH, soil oxygen, and nitrate concentrations, and peak elevations of EC, respiration rates, and ammonium concentrations ( ; Fig. ). During this same period, soil temperatures remained consistently elevated above 30°C and fly larvae were present. Similar community shifts were observed in cores, although over a shorter period of time (days 12–43). At the class level, dominant members were Saccharomycetes and Sordariomycetes (both Ascomycota ) (Fig. ; ). Relative abundances of Saccharomycetes , primarily composed of Yarrowia , increased during the period of elevated respiration and reduced soil oxygen in both cores and interfaces. During the Saccharomycetes bloom, Sordariomycetes decreased to <15% relative abundance in interfaces, then began to increase after day 27 and by the end of the study constituted 50% relative abundance. In contrast, in core samples Sordariomycetes increased in conjunction with Saccharomycetes , and following day 16 composed 30%–40% of sample abundance for the remainder of the study. At both depths Sordariomycetes increases were driven in large part by the genus Scedosporium , which frequently constituted >20% of the sample relative abundance . Tremellomycetes ( Basidiomycota ), below the detection cutoff in initial samples, also increased beginning in early advanced decay (day 27), peaking between days 86 and 117 in both soil depths. Decreased relative abundances were also observed for Eurotiomycetes ( Ascomycota ), Agaricomycetes , and Archaeorhizomycetes ( Ascomycota ); Agaricomycetes and Eurotiomycetes recovered to initial proportions while the others remained low. Community structures were still impacted at the end of the study, notably with increased relative abundances of Sordariomycetes in both soil depths in comparison with control samples.
Winter 2019: changes in phyla, classes, and genera during the winter trial were broadly similar to those found in the spring trial, i.e. an early Saccharomycetes bloom, followed by later increases in Sordariomycetes . However, both timings and magnitudes of taxon changes followed seasonally dependent patterns of decomposition progression and their reflections in soil chemistry, thus winter trial taxon changes were less pronounced and their abundances were less clearly defined due to slow seepage of decomposition products into the soil ( and ; Figs and ). Saccharomycetes abundances increased in both soil depths in early advanced decay (day 75, ADD 822), much later than the increase observed during the bloat stage of the spring trial (day 8, ADD 164). Tremellomycetes increased earlier during the winter trial, as early as active decay (day 38) in cores, and during the Saccharomycetes bloom, rather than following it as observed in the spring . Notably, Scedosporium relative abundance increases following the brief Yarrowia bloom were commensurate with those observed in the spring study for both soil depths . Bacterial communities Diversity and abundances As with fungal communities, all bacterial alpha diversity metrics decreased with soil chemical changes in the spring relative to controls, however only Shannon diversity and Chao1 significantly changed in the winter trial (Kruskal–Wallis, P < .05) . Bacterial gene copy abundances did not differ between decomposition-impacted soils and controls in spring cores (Kruskal–Wallis, P < .05) . Alpha diversity differences between controls and decomposition soils were pronounced in interfaces, however little difference was observed in core soils throughout both seasonal studies (Fig. ). Changes in bacterial gene copy abundances were negligible (Fig. , Tables and ; ). In both trials, CAP results demonstrated significant bacterial community structure changes throughout the entire decomposition process and indicated that recovery to initial conditions appeared to partially occur in core soils by the final sampling time point of each trial (Fig. ). Bacterial community structure changes during early decomposition were positively correlated with increases in ammonium (NH 4 ), TC, TN, CN ratio, EC, and negatively correlated to pH and soil oxygen (DO), whereas later community changes were correlated with increased concentrations of nitrate (NO 3 ) (Fig. ). Community composition Spring 2018: phyla that were the greatest contributors to relative abundance in control soils were Proteobacteria, Actinobacteria , and Acidobacteria . In decomposition soils, Firmicutes relative abundances increased from bloat through early advanced decay (days 8–103) in both soil depths, concomitant with reduced soil oxygen and decreases in Proteobacteria . Following day 43, relative abundances of Bacteroidetes and Proteobacteria increased and Actinobacteria decreased at both sample depths. Acidobacteria relative abundances decreased in interfaces during bloat and remained low for the remainder of the trial. At the class level, Clostridia ( Firmicutes ) increased in impacted interface soils from days 8 to 43, commensurate with reduced pH and soil oxygen and increased abundances of Saccharomycetes (Yarrowia) fungi (Fig. ; ). Also during this time (days 8–16), a brief increase in Gammaproteobacteria was observed in interfaces, partially composed of the genus Ignatzschineria (16% relative abundance), which is frequently associated with the presence of blow-flies .
Following day 43, Gammaproteobacteria and Bacteroidia (Bacteroidetes) increased at both soil depths; Gammaproteobacteria remained enriched for the remainder of the trial, whereas Bacteroidia abundances declined over the final two sampling time points to approximately baseline values. Relative abundances of Actinobacteria increased at both soil depths following day 27, remaining elevated for the remainder of the trial and composing 20%–30% of relative abundance in interfaces. Alphaproteobacteria (Proteobacteria) relative abundances decreased in interfaces during Clostridia enrichment but recovered by day 58, although no discernible change occurred in core soils. Verrucomicrobiae (Verrucomicrobia) and Thermoleophilia (Actinobacteria) abundances also decreased during the Clostridia bloom and remained low throughout the remainder of the study. Following the Clostridia bloom (day 58, mid-advanced decay), community composition at the class level changed only gradually, and by the end of the study differences from initial communities were largely attributable to increased Gammaproteobacteria and Actinobacteria and decreased Verrucomicrobiae and Thermoleophilia.

Winter 2019: Bacterial community composition shifts during the winter trial were similar to those found in the spring trial at both soil depths, albeit at reduced magnitude and altered timing, similar to what we observed for fungal communities. Firmicutes were enriched in interfaces, but only up to 25% relative abundance, approximately half that observed in the spring trial, and the enrichment was not accompanied by decreases in Proteobacteria. Relative abundances of Bacteroidetes increased slightly earlier in the winter trial, during active decay rather than early advanced decay as observed in the spring trial. At the class level, Clostridia briefly increased in impacted interface soils from days 75 to 94, commensurate with reduced pH, soil oxygen, and nitrate (Fig. ; ). In winter interfaces, Gammaproteobacteria and Bacteroidia did not decrease with increases in Clostridia, but instead increased prior to the Clostridia increase, by day 38, the onset of active decay. Ignatzschineria was observed on days 55 and 75, but only at <5% relative abundance. Relative abundances of Actinobacteria increased at both soil depths during the Clostridia bloom and remained enriched for the remainder of the trial. Verrucomicrobiae and Thermoleophilia were not discernibly impacted in core soils; however, their abundances declined in interfaces during the Clostridia bloom and fell below detectable levels on days 126–172, as advanced decay transitioned to skeletonization.

Spring 2018: Three frozen donors were placed at the study site on 2 May 2018 (day 0). Fly egg masses were visible in the nasal cavities of all donors within 24 h, and by day 2 extensive oviposition was evident throughout facial areas. Internal temperatures of the donors equilibrated with ambient air temperatures by day 4, when the first evidence of bloating was visible on the lateral abdomens. Donors exhibited extensive bloating by day 8. Bloating decreased by day 12 and there was visible fluid release into the soil, marking the beginning of active decay. Larval masses visibly peaked between days 13 and 15. By day 16, visible cadaver decomposition islands (CDIs) were well developed, substantial tissue loss was evident, and visible changes in decomposition progression had slowed, marking the beginning of advanced decay.
During the latter period of advanced decay (day 86 onwards), interindividual variation in tissue loss was apparent. Generally, soils beneath donors were initially greasy, followed by the development of hard soil crusts. Intermittent white fungal growth and the development of pockets of adipocere were also observed. Skeletonization or partial mummification began following day 303.

Winter 2019: A second set of three frozen donors was placed at the ARF near the first trio of donors on 8 February 2019, such that local environmental and edaphic conditions, as well as forest canopy and understory conditions, were similar. By day 7, internal donor temperatures had equilibrated with ambient temperatures. Donors were sampled on day 21, around the onset of bloat. The bloat stage was less well defined than in the spring study and consisted of only slight bloating with continuous seepage of decomposition fluid. No signs of insect activity were observed during any of the early stages of decay. Minor scavenging was observed during early decay and was limited to the extremities. Active decay was estimated to begin on day 38; CDIs became well developed, and adipocere was observed in soils under the torsos. Since soft tissue mass loss was gradual, samples were collected on both days 55 and 75, representing “active-advanced” and “advanced decay 1,” respectively. During the latter period of advanced decay, adipocere quantities decreased and fungal growth was observed on soils underneath donors. Gradual mass loss continued, with very little interindividual variation in decomposition rates observed between the three donors. Donors were mostly skeletonized or mummified by day 158. Comprehensive field notes, including TBS (based on visual scoring), can be found in the supplementary materials ( and , ).

Spring 2018: Following donor internal and ambient temperature equilibration, both internal and impacted soil temperatures increased above ambient air temperatures by day 8, the point at which donors were fully bloated. Elevated temperatures continued through day 80 and ranged from 41.3°C to 45.6°C, commensurate with the greatest visible larval masses. Maximum soil temperatures underneath donors reached 40.2°C–42.8°C, corresponding to days 23–24. Ambient air and soil control temperatures did not differ from one another. Soil moisture increased immediately upon donor placement and remained elevated for the majority of the study. The largest increase above control values occurred between days 91 and 122, in the latter half of advanced decay, when soils were still covered by donor remains as well as a hard crust of decomposition materials. Soil moisture maxima in decomposition soils ranged from 52.1% to 54.7% volumetric water content (%VWC). EC data originating from the soil sensors were similar to core and interface measurements made in the lab; because the sensor prongs extended ~6 cm into the soil, they effectively integrate across the two sampling depths. Graphical data are presented here for reference.

Winter 2019: In contrast to patterns observed in the spring, there was no difference between internal, ambient, and soil temperatures in the winter study, nor were differences observed between impacted soils and controls for moisture. Conductivity increased immediately after donor placement, remaining elevated through day 210 ( and ).
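The accumulated degree hour (ADH) and accumulated degree day (ADD) values used to index these observations are cumulative temperature sums. As a rough illustration only, the sketch below shows one way such sums could be computed from hourly logger readings; the base temperature of 0°C, the function names, and the example data are assumptions for illustration and may not match the exact procedure used in this study.

def accumulated_degree_hours(hourly_temps_c, base_c=0.0):
    # Sum of hourly temperatures above the base temperature (assumed 0 deg C here)
    return sum(max(t - base_c, 0.0) for t in hourly_temps_c)

def accumulated_degree_days(hourly_temps_c, base_c=0.0):
    # ADD from mean daily temperature above the base; assumes 24 readings per day
    days = [hourly_temps_c[i:i + 24] for i in range(0, len(hourly_temps_c), 24)]
    return sum(max(sum(day) / len(day) - base_c, 0.0) for day in days)

# Hypothetical 48 h of readings: an ambient logger and a warmer soil probe
ambient = [18 + 8 * (h % 24) / 23 for h in range(48)]   # ~18-26 deg C diurnal ramp
soil = [t + 5 for t in ambient]                          # e.g. larval-mass heating

print("ambient ADH:", round(accumulated_degree_hours(ambient), 1))
print("soil ADH:   ", round(accumulated_degree_hours(soil), 1))
print("ambient ADD:", round(accumulated_degree_days(ambient), 1))

Under this kind of bookkeeping, the ~13 000–16 500 ADH discrepancy between ambient and soil/internal temperature sums reported in the Discussion, divided by the ~600 ADH of a typical summer day, yields the 21.7–27.5-day difference in PMI estimates noted there.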
For both seasonal datasets, cumulative temperature increases reflect the similarities between ambient air and soil control temperatures, and between donor and impacted soil temperatures, noted above; the patterns of cumulative temperature differ between seasons, as do the final cumulative temperatures at the end of each study (Tables and ). Soils at the spring donor site were composed of 6.34% sand, 85.02% silt, and 8.64% clay. Soils at the winter site were composed of 10.96% sand, 81.44% silt, and 7.6% clay.

Spring 2018 core (1–16 cm) samples. Over the course of the entire study, all measured soil chemical parameters in decomposition-impacted cores differed from controls with the exceptions of TN and the C:N ratio (Kruskal–Wallis, P < .05). The first changes to soil chemistry were detected during the bloat phase (day 8; 2703 ADH based on donor soil), when the first evidence of fluid seepage into the soil was visible (Figs and , Table ; ). Following this bloat period, soil acidified and remained acidic for the majority of the study. Decomposition products stimulated heterotrophic respiration, resulting in reductions in soil oxygen and nitrate during active decay (day 12; 5362 ADH); mean DO decreased to 38.9% and nitrate concentrations dropped below background levels, falling to below 50% of control concentrations. Respiration rates continued to increase, reaching a maximum of 55.4 ± 9.20 µmol CO2 per gram dry weight per day (gdw−1 day−1) on day 58. Both TC and the C:N ratio in decomposition soils were significantly elevated on day 27 (18 289 ADH) during early advanced decay, at 11.7 ± 2.3% C and 16.8 ± 0.5, respectively. TN in impacted soils ranged from 0.39% to 0.77% N but did not vary significantly from control soils (0.37% to 0.6% N). Ammonium concentrations in decomposition soils increased in conjunction with increases in EC, becoming significantly different from control soils by day 8 (2703 ADH) and exhibiting maximum values during early advanced decay (day 16; 8360 ADH) at 665.2 ± 375.6 µg gdw−1 and 568.7 ± 232.3 µS cm−1, respectively. Following soil oxygen recovery on day 168 (107 850 ADH), nitrification began to occur and nitrate concentrations steadily increased to a maximum during skeletonization (day 340; 144 538 ADH) at 101.2 ± 85.2 µg gdw−1. By the end of the study no parameters exhibited significant differences between decomposition soils and controls; however, mean values and wide standard deviations indicate that some degree of impact was still evident.

Spring 2018 interface (0–1 cm) samples. All measured parameters in impacted interface soils were altered during decomposition compared to controls (Kruskal–Wallis, P < .05). Overall response patterns were consistent with those observed in core soils, but there was greater variability in the interfaces; i.e. changes in decomposition soils were greater in magnitude in terms of both mean values and standard deviations, frequently leading to statistical nonsignificance (Figs and , Table ; ). An especially noteworthy example of this occurred for ammonium: the initial significant increase coincided with the maximum soil concentration on day 12 (6905.5 ± 1325.3 µg gdw−1, a 500-fold increase over controls), whereas the following sampling time point had a similar mean but double the variance.
Mean EC reached a maximum of 969.1 ± 1336.7 µS cm−1 during bloat, immediately followed by a brief decrease to 4.0 ± 0.9 µS cm−1 during active decay; this decrease occurred during the initial presence of “greasy” decomposition products in soil samples and extracts and was observed in soils associated with all three donors. Shifts in inorganic N pools (ammonium and nitrate) reflected core patterns, except at higher soil concentrations, and early nitrate decreases below control values were apparent throughout the period in which soil oxygen was <75%. TC and TN were both significantly elevated by day 16 and reached maxima on day 27, both at approximately twice core values. Over the course of the study, TC returned to control values in mid-advanced decay; however, TN remained elevated throughout, in part due to the respective timing and soil concentrations of ammonium and nitrate. The C:N ratio reflected both the TC and TN patterns: a brief elevation on day 12, corresponding to increased TC, followed by an extended period (days 132–357) in which the C:N ratio fell below that of control soils, corresponding to the period in which TC recovered to control values but TN remained elevated.

Winter 2019 core (1–16 cm) samples. Soil chemical parameters in decomposition-impacted core soils differed significantly from control soils with the exceptions of TN and TC (Kruskal–Wallis, P < .05). In comparison with the immediate soil responses apparent during bloat and active decay in the spring study, parameters in the winter study underwent more gradual change, with maxima or minima occurring in advanced decay and skeletonization (Figs and , Table ; ). Mean soil pH gradually became acidic, reaching only 6.3 ± 0.3 in early skeletonization (day 158; 71 952 ADH), in contrast with spring cores, which reached 5.7 ± 0.2 by early advanced decay. Respiration rates reached maximum values in early advanced decay (day 55, 12 127 ADH), at rates only 62% of those in spring core samples. Likewise, soil oxygen levels declined gradually over 75 days, dropping to 71.3% of initial values, approximately half of the decrease observed in the spring trial. Ammonium increases were gradual and did not display the sharp rises observed in the spring trial; both ammonium and nitrate maxima occurred on day 140 (60 714 ADH), at 329.4 ± 210.5 and 75.7 ± 48.9 µg gdw−1, respectively. Unlike patterns observed in spring core data, nitrate did not fall below control concentrations during the (smaller) soil oxygen reductions. Instead, there was greater overlap of ammonium and nitrate concentrations in winter core data, in comparison with the oxygen-dependent separation that was visible in the spring core data. Conductivity reached maximum values on day 110 (43 005 ADH) at 356.4 ± 184.9 µS cm−1. TC and TN reached maxima on day 172 at 12.0 ± 9.1% C and 1.18 ± 1.05% N, respectively, both approximately double control soil values, but, as in spring core soils, patterns of change were not noteworthy.

Winter 2019 interface (0–1 cm) samples. As observed in the spring trial, all interface soil chemical parameters differed significantly from control soils (Kruskal–Wallis, P < .05). Overall patterns followed those of winter cores with respect to timing and, with the exception of pH, parameter maxima (or minima) occurred during active and early advanced decay (days 55–94) (Figs and , Table ; ). pH became more acidic, reaching 6.2 ± 0.2 midway through skeletonization (day 216; 104 845 ADH).
Respiration rates increased during active decay (day 38; 7594 ADH), reaching a maximum in early advanced decay (day 94; 33 575 ADH) of 97.5 ± 7.7 µmol CO2 gdw−1 day−1, and remained elevated throughout much of the study. Mean EC and ammonium concentrations both reached their maxima during active decay (day 55; 12 127 ADH) at 926.8 ± 576.7 µS cm−1 and 2811.2 ± 2241.2 µg gdw−1. TC, TN, and the C:N ratio patterns followed those observed for spring interface soils: simultaneous maxima of TC and TN (day 94), early recovery of TC (day 110, 43 005 ADH), late elevations of TN (day 335, 142 473 ADH), and C:N patterns that were initially elevated above controls and fell below control values during the latter half of the study. As observed in the spring trial, nitrate concentrations fell below control concentrations early in the study (days 38–94), although in this case reductions in concentration occurred simultaneously with ammonium concentration increases and just prior to oxygen decreases.
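As an illustration of the kind of control-versus-impacted comparison applied to each soil chemistry parameter above, the sketch below screens a small, entirely hypothetical table of measurements with per-parameter Kruskal–Wallis tests; the column names, values, and pooling of samples are assumptions, and the published analysis may have grouped samples differently (e.g. by sampling day or donor).

import pandas as pd
from scipy.stats import kruskal

# Hypothetical long-format table: one row per soil sample
df = pd.DataFrame({
    "treatment": ["control"] * 4 + ["impacted"] * 4,
    "pH":  [6.9, 7.0, 6.8, 6.9, 6.1, 5.9, 6.3, 6.0],
    "EC":  [40, 38, 45, 42, 520, 610, 480, 700],    # uS per cm
    "NH4": [3, 4, 2, 3, 640, 710, 580, 690],        # ug per g dry weight
    "NO3": [22, 25, 20, 24, 8, 6, 9, 7],
})

for param in ["pH", "EC", "NH4", "NO3"]:
    ctrl = df.loc[df["treatment"] == "control", param]
    imp = df.loc[df["treatment"] == "impacted", param]
    stat, p = kruskal(ctrl, imp)
    flag = "differs" if p < 0.05 else "ns"
    print(f"{param}: H = {stat:.2f}, P = {p:.3f} ({flag})")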
Our study was designed to evaluate the effects of surface decomposition of whole human remains, incorporating simultaneously placed donors of similar mass that were sampled at high resolution throughout a year. Our objectives were to characterize seasonal patterns in soil chemistry and in microbial community abundance, diversity, and structure at two soil depths, integrating the results in order to identify drivers of microbial change.

Seasonal differences in gross decomposition patterns

Morphological patterns of decomposition varied considerably between trials. The spring trial followed patterns frequently reported in the literature for warm-weather decomposition scenarios (Payne , Carter et al. , Meyer et al. , Sutherland et al. , Matuszewski et al. , Suckling et al. , Roberts et al. , Connor et al. , Dibner et al. , DeBruyn et al. ): extensive bloating immediately followed by a brief period in which large amounts of decomposition fluids were quickly released into the soil, and an active decay period characterized by fly larval masses and rapid tissue loss. In this study, the period of advanced decay was prolonged (occurring in the winter) and skeletonization took place late in the trial as temperatures warmed. In contrast, winter trial decomposition patterns differed substantially: no insect masses were observed, minor scavenging was present on the extremities, and donors exhibited negligible bloating with a more prolonged period of fluid seepage and tissue loss. In this instance, skeletonization occurred during the warm-weather season, and earlier overall than observed in the spring trial. Vass et al. predicted skeletonization to occur at ~1285 ADD. We found that skeletonization occurred later in both trials: 4935 and 2682 ADD for the spring and winter trials, respectively.

Seasonal temperature patterns

Temperatures of decomposing vertebrate remains are frequently reported to increase up to 15°C above ambient temperatures in warm-weather decomposition scenarios (Payne , Keenan et al. , Quaggiotto et al. , Taylor et al. , DeBruyn et al. ), and heat associated with larval masses (thermogenesis) is a well-known phenomenon (Heaton et al. , Gruner et al. , Weatherbee et al. ). The soil and internal donor heating documented in our spring trial is consistent with previous reports for warm-weather decomposition in which larval masses were present (Payne , Keenan et al. , Quaggiotto et al. , Taylor et al. , DeBruyn et al. ); internal and soil temperatures increased over those of ambient air and soil controls beginning during the bloat phase and continued to do so for 72 days into advanced decay, during which time temperatures exceeded 40°C, with at least 50 days >30°C. Conversely, soil and internal donor heating were absent from the winter trial, as were larval masses. These patterns support the seasonal temperature patterns previously reported (DeBruyn et al. ). Seasonal differences in temperature patterns have ramifications for data interpretation and PMI estimation.
By the end of our spring study, cumulative ambient air temperatures had reached 143 017 ADH, while cumulative internal and soil temperatures (in the presence of fly larvae, and thus larval thermogenesis) reached 156 039 ADH and 159 517 ADH, respectively, leading to a discrepancy of ~13 000–16 500 ADH after a year. Assuming that a typical East Tennessee summer day is roughly equivalent to 600 ADH, this would correspond to a 21.7–27.5-day (5.8%–7.3%) difference in PMI estimates depending on whether ambient or soil/internal temperatures were used. It is general practice in forensic casework to use ambient air temperatures from the nearest weather station to calculate the ADD or ADH associated with developmental stages of fly larvae, and from this information to formulate estimates of the PMI. Temperature-dependent calculations (larval development, TBS:ADD relationships, and so on) and the resulting PMI estimates may therefore be skewed when localized heating effects are not accounted for (Dabbs , , Dourel et al. , Hofer et al. ). Taken together, the extent of soil heating, particularly in conjunction with seasonality, may affect the accuracy of insect-based PMI estimations as well as other temperature-dependent relationships (e.g. microbial metabolism and gene expression, soil enzyme activity rates, multicellular soil fauna reproduction strategies, and so on); these are knowledge gaps that warrant further exploration.

Seasonal soil chemistry patterns

In general, soil chemistry data from the spring trial followed patterns documented in the literature for aboveground decomposition; however, the inclusion of a winter trial, the high sampling resolution, and the long duration of our study provided data that allow a more nuanced interpretation of previous material. Of all soil chemical parameters measured during decomposition, soil pH has shown the widest variability in response patterns. Vertebrate (nonhuman) longitudinal surface-decomposition studies have generally shown pH to increase rather than decrease (Benninger et al. , Metcalf et al. , , Meyer et al. , Lauber et al. , Macdonald et al. , Szelecz et al. , Keenan et al. , Quaggiotto et al. ) (but see Towne , Anderson et al. , Perrault and Forbes ). However, human surface longitudinal decomposition studies report that soil pH frequently (but not universally) decreased during decomposition (Vass et al. , Cobaugh et al. , DeBruyn et al. , Mason et al. , Taylor et al. ). In our study, soils acidified at a rate commensurate with the influx of decomposition products and remained impacted throughout the majority of both seasonal trials. Changes were more pronounced in both timing and magnitude in the spring study (e.g. soil acidification occurred more quickly and to a greater degree), supporting the seasonal patterns reported by DeBruyn et al. . In both seasonal trials, patterns in soil TC reflected respiration rates, suggesting that considerable organic C is added to the soil during the early period of decomposition. This is consistent with reports of persistent soil C originating from sterols, long-chain aliphatic hydrocarbons, and their transformation products, which are derived from the decomposition of fats and tissue (Lühe et al. , ). Additionally, TC in our study did not appear to translocate into deeper layers of the soil, further supporting similar reports by the same authors of decreased translocation of complex carbon structures.
During decomposition, once the CDI had begun to “dry”, we noted the formation of soil crusts that visibly appeared to contain adipocere, which would further retain carbon in the upper soil layers. Other vertebrate decomposition studies have shown inconsistent results in TC enrichment or depletion both during and following decomposition that cannot be explained by scavenging, seasonality, organism mass, sampling density, or time (Benninger et al. , Parmenter and MacMahon , Spicka et al. , Anderson et al. , Macdonald et al. , Barton et al. , Metcalf et al. , Szelecz et al. , Keenan et al. , Quaggiotto et al. , Barton et al. , Risch et al. ). Increased respiration rates during active and advanced decay were coupled with significant reductions in soil oxygen concentrations in both seasonal trials; both parameters were less impacted in the winter trial due to the slower release of decomposition products into the soil than occurred in the spring. In both trials, this period of hypoxia corresponded to increased relative abundance of facultative anaerobes, including yeasts (e.g. Saccharomycetes) and bacterial taxa (e.g. Firmicutes). The relative abundances of these organisms reflected the seasonal magnitudes of respiration and soil oxygen (i.e. higher overall relative abundances of Saccharomycetes and Firmicutes in the spring trial). Other studies have also reported increases in anaerobic taxa during decomposition (Cobaugh et al. , Metcalf et al. , Adserias-Garriga et al. , Singh et al. , Mason et al. ). This period of reduced oxygen is important because it constrains key biogeochemical transformations, particularly with respect to N cycling (Keenan et al. ), and limits growth and activity of obligate aerobic decomposers. Despite its importance in constraining microbial metabolisms, soil oxygenation has historically been discussed only within the context of buried remains (Dent et al. , Carter et al. , , , Haslam and Tibbett ). In surface decomposition experiments, changes in soil oxygenation have only recently been directly measured (Keenan et al. ). Our study showed broadly sustained increases in TN due to decomposition, similar to prior results (Benninger et al. , Parmenter and MacMahon , Macdonald et al. , Barton et al. , , Metcalf et al. , Szelecz et al. , Keenan et al. , Quaggiotto et al. ). As a result, at both soil depths the C:N ratio fell below that of soil controls in the later phases of decomposition, indicative of continued N enrichment during and after C utilization. Early in decomposition, a pulse of ammonium is typical, resulting from breakdown of tissues by proteases from fly larvae and microbes (Meyer et al. , Macdonald et al. , Cobaugh et al. , Metcalf et al. , Szelecz et al. , Keenan et al. , Quaggiotto et al. , DeBruyn et al. ). In our study, the ammonium pulse was greater and occurred earlier in the spring trial compared to winter, similar to previous seasonal observations (DeBruyn et al. ). In all but winter core soils, we observed nitrate concentrations in decomposition soils decrease below controls during ammonium enrichment in the early phases of decay, when soil oxygen dropped below ~75%. Once oxygen returned to 70%–75% later in decomposition, nitrate concentrations increased, indicating ammonium could be transformed via nitrification to nitrite and nitrate or nitrous oxide (Meyer et al. , Macdonald et al. , Metcalf et al. , Szelecz et al. , Keenan et al. ). It is likely that other anaerobic nitrogen transformations were also occurring during the period of hypoxia [i.e.
denitrification, dissimilatory nitrate reduction (DNRA), and anammox], as have been documented by other studies (Keenan et al. ). EC is directly related to the ionic strength of the soil and reflects collective changes in concentrations of K+, Na+, Ca2+, Mg2+, NH4+, and NO3− (Aitkenhead-Peterson et al. , Perrault and Forbes , Fancher et al. , Lühe et al. , Szelecz et al. , Taylor et al. ). Our results showed that EC increased significantly during active decay in the spring trial and during advanced decay in the winter trial, consistent with what other studies have demonstrated (Aitkenhead-Peterson et al. , Fancher et al. , Keenan et al. , Quaggiotto et al. , Mason et al. , Taylor et al. ). It is interesting to note that the EC peaks roughly align with the peaks of both ammonium and nitrate. EC is the collective reflection of all cations and anions in the soil solution; increased quantities of elements solubilized from the soil matrix under acidic conditions (notably Al3+, Fe2+, and Mn2+) may also account for the late-study increases and overall persistence of EC in human decomposition (Taylor et al. ). Conversely, elemental insolubility under soil alkalinization, in combination with the presumed impacts on EC, may have interesting implications for identifying limitations on the use of animals to model human decomposition patterns.

Seasonal microbial community patterns

In both seasonal trials, successional changes in fungal and bacterial community structures displayed three distinct phases that directly reflected changes in soil biogeochemistry and correlated with soil oxygen and pH dynamics. A three-phase decomposition model was proposed by Keenan et al. based upon biogeochemical changes documented under decomposing animals in a warm-weather study. Here, we validate that model and expand upon it by coupling these biogeochemical patterns with changes to both microbial community composition (relative abundance) and microbial gene copy number (absolute abundance).

Phase 1. From the beginning of the study through the onset of advanced decay, microbial alpha diversity decreased and community structure shifted. Peak relative abundances of Saccharomycetes (Yarrowia) and Clostridia were observed during this phase, with decreased Verrucomicrobiae and Thermoleophilia. Fungal abundances, estimated from gene copies determined by qPCR, increased, while bacterial abundances remained unchanged, leading to an overall increase in the fungal:bacterial ratio. These microbial changes corresponded with decreases in pH, soil oxygen, and NO3, and increases in EC, respiration, and NH4. In the spring trial, this first phase also included increased soil temperatures due to the increased metabolic activity of microbes and insect larvae.

Phase 2. An inflection point in the trajectory of microbial community structure changes was observed around days 12–43 of the spring trial (ADD 257–967) and days 75–94 of the winter trial (ADD 822–1196), corresponding to soil oxygen minima (39% and 71% DO for the spring and winter trials, respectively), and marked the beginning of a second phase of changes. This second phase occurred during the period of advanced decay and early skeletonization, in which soil oxygen gradually increased, respiration rates and NH4 concentrations declined, and NO3 increased with more readily available soil oxygen.
Fungal alpha diversity and gene copy number did not change appreciably during this second phase; however, bacterial alpha diversity increased slightly in interface communities in conjunction with soil oxygen increases. Both Saccharomycetes and Clostridia decreased in relative abundance, and Sordariomycetes (Scedosporium), Bacteroidia, Gammaproteobacteria, and Actinobacteria began to increase.

Phase 3. Another inflection point was observed around days 132–168 of the spring trial (ADD 3131–3917) and days 172–186 of the winter trial (ADD 3021–3364), corresponding to soil oxygen recovery to >75% of initial levels, and marked the beginning of the third phase. This third phase spanned the period from late advanced decay through late skeletonization, when soil oxygen concentrations recovered to 100% and NO3 remained elevated. Alpha diversity during this third phase continued to increase slowly in bacterial interface communities, slightly lagging behind oxygen recovery, accompanied by gradual reductions in Bacteroidia and increases in Gammaproteobacteria (spring only), Verrucomicrobiae, and Thermoleophilia. In conjunction with the still-depressed pH, fungal diversity remained low, but fungal gene abundances remained elevated. By the end of both trials (1 year later), fungal and bacterial community composition had become more similar to initial structures but did not recover completely; the highest degree of recovery was evidenced in bacterial communities in winter core soils.

The changes in soil oxygen and pH during decomposition appear to be the primary drivers of microbial community change. Community structure changed and diversity decreased in response to the period of low soil oxygenation during active and advanced decomposition in both the spring and winter trials. This is likely due to a combination of the activation of alternative anaerobic metabolic pathways and responses to anoxic stress. Further, the acidification of soil during decomposition is likely also playing a role in structuring microbial communities; pH is one of the primary determinants of soil microbial community structure (Fierer and Jackson ). The increase in fungal, but not bacterial, abundances was notable and suggests that fungi proliferate in decomposition soils. The lack of an increase in bacterial abundances accompanying increased respiration suggests a decrease in carbon use efficiency, which has been previously noted (Cobaugh et al. ) and is consistent with a switch to anaerobic metabolic pathways. Decreases in soil pH are often accompanied by an increase in fungal:bacterial ratios, generally attributed to the fact that fungi have greater tolerance for acidic conditions than many bacterial taxa (Rousk et al. ). In both seasonal studies, the dominant members of the fungal community were two classes of Ascomycota: Saccharomycetes and Sordariomycetes. Saccharomycetes are monophyletic yeasts, most of which live as saprobes associated with a variety of ecological niches, including soil, or in association with plants and animals (Suh et al. ). Saccharomycetes exhibited a brief bloom from bloat through advanced decay in the spring, and during early advanced decay in the winter study. Their growth may be explained by their ability to ferment under anoxic conditions and a tolerance or preference for acidic conditions: Saccharomyces cerevisiae has been shown to achieve optimal growth in slightly acidic conditions (Ariño , Peña et al. ). The presence of Saccharomycetes enrichment is consistent with other reports (Metcalf et al. , , Carter et al.
, Forger et al. , Fu et al. , Mason et al. ). The timing of the increase in our study (3801 ambient ADH), as well as the presence of Yarrowia, closely aligns with the increases associated with spring donors in Mason et al. (3000–3500 ADH). Winter data from the same study showed some variability in the onset of Saccharomycetes enrichment (3000–11 500 ADH), and our results (22 782 ambient ADH) suggest that decomposition beginning at colder temperatures and/or in colder climates, in conjunction with the lack of insect activity, may influence the timing of changes to abundant taxa. This further suggests that more study is required for cold-weather decomposition scenarios. The common soil saprobes Sordariomycetes (phylum Ascomycota) were the most prevalent taxa in fungal communities in later decomposition and remained highly enriched for the remainder of the study. This enrichment, particularly in the winter trial when tissue breakdown was slow, suggests that they thrive in nitrogen-rich, low-pH environments. It is possible that Sordariomycetes constitute members of the “ammonia fungi” or “postputrefaction fungi,” which have historically been observed during decomposition (but not identified via DNA sequencing) (Sagara , Tibbett and Carter , Sagara et al. ). Alternatively, the presence of Scedosporium aurantiacum (Sordariomycetes) may have partially originated from decomposition products. Scedosporium spp. are commonly found in the environment and also serve as human pathogens, particularly in the immunocompromised (Kaur et al. ). Relative abundances of the phylum Basidiomycota were reduced during active decomposition but increased at later times. In the spring trial, Tremellomycetes (particularly the order Trichosporonales) and Agaricomycetes were enriched during mid-advanced decay, concurrent with increases in soil nitrate; likewise, increases of both Tremellomycetes and nitrate occurred earlier in the winter trial. Overall, this suggests that there might be a shift within Basidiomycota towards classes with an affinity for nitrate. These types of fungi have previously been identified as “late ammonia fungi” and are generally recognized by the appearance of fruiting structures 1–4 years following decomposition. Our DNA-based approach may have detected these taxa concurrent with elevated nitrate, prior to the development of fruiting bodies and thus earlier than previously reported in the literature (Sagara , Tibbett and Carter , Sagara et al. ). Relative abundances of Agaricomycetes fluctuated in the winter trial, in comparison with consistent relative abundances in controls, suggesting the possibility that decomposition products may be generally detrimental to these taxa. Other eukaryotic (18S rRNA) amplicon studies have noted the presence and potential value of Agaricomycetes for modeling the PMI but have not discussed changes to abundances (Metcalf et al. , , Carter et al. ). Regarding bacterial community changes, both animal and human decomposition studies have shown consistent increases in Firmicutes relative abundances, with decreases in Acidobacteria and Verrucomicrobia, during active and early advanced decomposition; these changes are frequently accompanied by or followed by increases in Bacteroidetes, Proteobacteria, and Actinobacteria. These population shifts are generally attributed to a combination of sudden increases in general nutrient availability, an influx of host microbes, and brief soil hypoxia (Metcalf et al. , , Lauber et al. , Carter et al. , Cobaugh et al. , Adserias-Garriga et al.
, Singh et al. , Mason et al. ). In both seasonal trials we observed increased Firmicutes relative abundances beginning in bloat and active decay, which peaked at the onset of advanced decay in conjunction with high respiration rates and low soil oxygenation, coinciding with the bloom of Saccharomycetes yeasts. The magnitude of this Firmicutes bloom was most apparent in the spring trial and in interface soils, and it diminished with soil reoxygenation. Firmicutes are opportunistic facultative anaerobes and likely have a competitive advantage during the period of hypoxia. At the class level, Clostridia appears to be the main taxon driving the Firmicutes blooms in both seasons; Clostridia is found in both vertebrate guts and soils and is commonly reported in early decomposition (Cobaugh et al. , Metcalf et al. , Adserias-Garriga et al. , Mason et al. ). Proteobacteria relative abundances were briefly reduced in the spring during increased relative abundances of Firmicutes and the accompanying period of low soil oxygen, suggesting that they were most likely responding to the greater magnitude of environmental change present during warm-weather decomposition scenarios and/or were at a competitive disadvantage relative to Firmicutes. At the class level, Gammaproteobacteria and, to a lesser extent, Alphaproteobacteria exhibited increases immediately following the pronounced Clostridia bloom, and both taxa persisted throughout the spring trial. In contrast, both groups were enriched during the comparatively slight Clostridia bloom in the winter trial. Collectively, these patterns support reports by Cobaugh et al. in which both Gamma- and Alphaproteobacteria increased as advanced decay progressed. Proteobacteria are broadly considered copiotrophic organisms and typically respond strongly to nutrient influx. Our observed decrease-then-increase patterns for these two groups of Proteobacteria further suggest that, despite significant nutrient influx, they are negatively affected by reduced oxygen, and that their later increases in relative abundance likely derive from continued impacts on the soil from long-term tissue degradation. Bacteroidia demonstrated patterns of increase similar to those observed for Gammaproteobacteria; however, toward the end of both trials their relative abundances began to decrease to initial levels. This suggests that moderate changes in soil oxygenation did not substantially impact these taxa, and that a favorable competitive window exists for these organisms following the decline of Clostridia. Our results are in agreement with reports of Bacteroides found in later advanced decay and in anoxic grave soils containing remaining tissue (Cobaugh et al. , Keenan et al. ).

Study limitations

Human decomposition studies are subject to numerous constraints. Worldwide, there are only 12 facilities in which these types of studies can be performed, nine of which are located in the USA (Pesci et al. ); this reduces the diversity of climates and soil types available for comparison and eliminates others altogether (but see Carter et al. ). Further, these facilities are reliant upon body donations, which in many cases restricts population metrics (age, sex, ancestry, and so on). Spatial constraints and land-use history at these facilities may dictate donor placement proximity and affect the availability of fresh soil, the latter potentially introducing priming effects or enrichment in previously used areas (Damann et al. ).
Human subjects are a rare resource, and donor availability in combination with facility physical constraints often serves to dictate sample size, which can be low, affecting statistical (and by extension, explanatory) power. Our study attempted to simultaneously address three issues endemic to human decomposition studies: the use of fresh soil, subjects of similar size, and simultaneous donor placement in the field. This necessitated a study size of n = 3 male donors per season. Frozen human subjects were used for this study, and the effect of freezing on decomposition products, decomposer communities, and their functions is currently unknown. Nonetheless, our biogeochemical and microbial results are commensurate with multiple published studies obtained by using both frozen and unfrozen cadavers/carcasses (Aitkenhead-Peterson et al. , Metcalf et al. , , Lauber et al. , Cobaugh et al. , Fancher et al. , Keenan et al. , Mason et al. , and others found throughout the discussion.) Future studies should expand on cohort size and metrics, and include a wider variety of soil types and climates in order to ascertain whether the biogeochemical and microbial patterns reported here are robust. Morphological patterns of decomposition varied considerably between trials. The spring trial followed patterns frequently reported in the literature for warm-weather decomposition scenarios (Payne , Carter et al. , Meyer et al. , Sutherland et al. , Matuszewski et al. , Suckling et al. , Roberts et al. , Connor et al. , Dibner et al. , DeBruyn et al. ): extensive bloating immediately followed by a brief period in which large amounts of decomposition fluids were quickly released into the soil, and an active decay period characterized by fly larval masses and rapid tissue loss. In this study, the period of advanced decay was prolonged (occurring in the winter) and skeletonization took place late in the trial as temperatures warmed. In contrast, winter trial decomposition patterns differed substantially: no insect masses were observed, minor scavenging was present in extremities, and donors exhibited negligible bloating with a more prolonged period of fluid seepage and tissue loss. In this instance, skeletonization occurred during the warm weather season, and earlier in general than observed in the spring trial. Vass et al. predicted skeletonization to occur at ~1285 ADD. We found that skeletonization occurred later in both trials: 4935 and 2682 ADD for the spring and winter trials, respectively. Temperatures of decomposing vertebrate remains are frequently reported to increase up to 15°C above ambient temperatures in warm weather decomposition scenarios (Payne , Keenan et al. , Quaggiotto et al. , Taylor et al. , DeBruyn et al. ) and heat associated with larval masses (thermogenesis) is a well-known phenomenon (Heaton et al. , Gruner et al. , Weatherbee et al. ). The soil and internal donor heating results documented in our spring trial are consistent with previous reports for warm-weather decomposition in which larval masses were present (Payne , Keenan et al. , Quaggiotto et al. , Taylor et al. , DeBruyn et al. ); internal and soil temperatures increased over those of ambient air and soil controls beginning during the bloat phase, and continued for 72 days into advanced decay, during which time temperatures exceeded 40°C, with at least 50 days >30°C. Conversely, soil and internal donor heating were absent from the winter trial, as were larval masses. 
These patterns support seasonal temperature patterns previously reported (DeBruyn et al. ). Seasonal differences in temperature patterns have ramifications for data interpretation and PMI estimation. By the end of our spring study, cumulative ambient air temperatures had reached 143 017 ADH, while cumulative internal and soil temperatures (in the presence of fly larvae, and thus larval thermogenesis) reached 156 039 ADH and 159 517 ADH, respectively, leading to a discrepancy of ~13 000–16 500 ADH after a year. Assuming that a typical East Tennessee summer day is roughly equivalent to 600 ADH, this would correspond to a 21.7–27.5-day (5.8%–7.3%) difference in PMI estimates depending on whether ambient or soil/internal temperatures were used. It is general practice in forensic study to use ambient air temperatures accessed from nearest-weather station data to calculate the ADD or ADH associated with developmental stages of fly larvae, and from this information formulate estimates of the PMI. Temperature-dependent calculations (larval development, TBS:ADD relationships, and so on) and PMI estimates could be underrepresented by not accounting for localized heating effects (Dabbs , , Dourel et al. , Hofer et al. ). Taken together, the extent of soil heating, particularly in conjunction with seasonality, may affect the accuracy of both insect-based PMI estimations and other relationships based upon temperature (e.g. microbial metabolism and gene expression, soil enzyme activity rates, multicellular soil fauna reproduction strategies, and so on), and these effects represent knowledge gaps that warrant further exploration. In general, soil chemistry data from the spring trial followed patterns documented in the literature for aboveground decomposition; however, the inclusion of a winter trial, high sampling resolution, and the long-term duration of our study provided data leading to a more nuanced interpretation of previous material. Of all soil chemical parameters measured during decomposition, wide variability has been observed in response patterns of soil pH. Vertebrate (nonhuman) longitudinal surface-decomposition studies have generally shown pH to increase rather than decrease (Benninger et al. , Metcalf et al. , , Meyer et al. , Lauber et al. , Macdonald et al. , Szelecz et al. , Keenan et al. , Quaggiotto et al. ) (but see Towne , Anderson et al. , Perrault and Forbes ). However, human surface longitudinal decomposition studies report that soil pH frequently (but not universally) decreased during decomposition (Vass et al. , Cobaugh et al. , DeBruyn et al. , Mason et al. , Taylor et al. ). In our study, soils acidified at a rate commensurate with the influx of decomposition products and remained impacted throughout the majority of both seasonal trials. Changes were more pronounced both in terms of time and magnitude in the spring study (e.g. soil acidification occurred quickly and to a greater degree) and supported seasonal patterns reported by DeBruyn et al. . In both seasonal trials, patterns in soil TC reflected respiration rates, suggesting that considerable organic C is added to the soil during the early period of decomposition. This is consistent with reports of persistent soil C originating from sterols, long-chain aliphatic hydrocarbons, and their transformation products, which are derived from the decomposition of fats and tissue (Lühe et al. , ).
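Returning briefly to the temperature accounting earlier in this section, a minimal Python sketch of the spring-trial ADH arithmetic is given below. The cumulative ADH totals and the 600 ADH "summer day" equivalence are taken from the text (600 ADH corresponds to a 25 °C mean held over 24 h); the variable names and the script itself are purely illustrative and are not part of the study's analysis.

```python
# Spring-trial ADH figures quoted above; the 600 ADH "summer day" equivalence
# corresponds to a 25 degC mean temperature sustained over 24 h.
ambient_adh = 143_017    # cumulative ambient air ADH at the end of the spring trial
internal_adh = 156_039   # cumulative internal (donor) ADH over the same period
soil_adh = 159_517       # cumulative soil ADH over the same period
adh_per_summer_day = 600

for label, adh in (("internal", internal_adh), ("soil", soil_adh)):
    extra_adh = adh - ambient_adh                     # heating above ambient, in degree hours
    day_equivalent = extra_adh / adh_per_summer_day   # expressed as "summer days"
    print(f"{label}: +{extra_adh} ADH ≈ {day_equivalent:.1f} summer-day equivalents")
```

Running this reproduces the ~21.7- and ~27.5-day equivalents quoted above for the internal and soil records, respectively.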
Additionally, TC in our study did not appear to translocate into deeper layers of the soil, further supporting similar reports by the same authors of decreased translocation of complex carbon structures. During decomposition, once the CDI had begun to “dry” we noted the formation of soil crusts that visibly appeared to contain adipocere, which would further retain carbon in the upper soil layers. Other vertebrate decomposition studies have shown inconsistent results in TC enrichment or depletion both during and following decomposition that cannot be explained by scavenging, seasonality, organism mass, sampling density, or time (Benninger et al. , Parmenter and MacMahon , Spicka et al. , Anderson et al. , Macdonald et al. , Barton et al. , Metcalf et al. , Szelecz et al. , Keenan et al. , Quaggiotto et al. , Barton et al. , Risch et al. ). Increased respiration rates during active and advanced decay were coupled with significant reductions in soil oxygen concentrations in both seasonal trials; both parameters were less impacted in the winter trial due to the slower release of decomposition products into the soil than occurred in the spring. In both trials this period of hypoxia corresponded to increased relative abundance of facultative anaerobes including yeast (e.g. Saccharomycetes ) and bacterial taxa (e.g. Firmicutes ). The relative abundances of these organisms reflected seasonal magnitudes of respiration and soil oxygen (i.e. higher overall relative abundances of Saccharomycetes and Firmicutes in the spring trial). Other studies have also reported increases in anaerobic taxa during decomposition (Cobaugh et al. , Metcalf et al. , Adserias-Garriga et al. , Singh et al. , Mason et al. ). This period of reduced oxygen is important, because it constrains key biogeochemical transformations, particularly with respect to N cycling (Keenan et al. ), and limits growth and activity of obligate aerobic decomposers. Despite its importance in constraining microbial metabolisms, soil oxygenation has only historically been discussed within the context of buried remains (Dent et al. , Carter et al. , , , Haslam and Tibbett ). In surface decomposition experiments, changes in soil oxygenation have only recently been directly measured (Keenan et al. ). Our study showed broadly sustained increases in TN due to decomposition, similar to prior results (Benninger et al. , Parmenter and MacMahon , Macdonald et al. , Barton et al. , , Metcalf et al. , Szelecz et al. , Keenan et al. , Quaggiotto et al. ). As a result, at both soil depths, the C:N ratio fell below that of soil controls in the later phases of decomposition, indicative of continued N enrichment during and after C utilization. Early in decomposition, a pulse of ammonium is typical, resulting from breakdown of tissues by proteases from fly larvae and microbes (Meyer et al. , Macdonald et al. , Cobaugh et al. , Metcalf et al. , Szelecz et al. , Keenan et al. , Quaggiotto et al. , DeBruyn et al. ). In our study, the ammonium pulse was greater and occurred earlier in the spring trial compared to winter, similar to previous seasonal observations (DeBruyn et al. ). In all but winter core soils we observed nitrate concentrations in decomposition soils to decrease below controls during ammonium enrichment in the early phases of decay when soil oxygen dropped below ~75%. 
Once oxygen returned to 70%–75% later in decomposition, nitrate concentrations increased, indicating ammonium could be transformed via nitrification to nitrite and nitrate or nitrous oxide (Meyer et al. , Macdonald et al. , Metcalf et al. , Szelecz et al. , Keenan et al. ). It is likely that other anaerobic nitrogen transformations were also occurring during the period of hypoxia [i.e. denitrification, dissimilatory nitrate reduction (DNRA), and anammox] as have been documented by other studies (Keenan et al. ). EC is directly related to the ionic strength of the soil and reflects collective changes in concentrations of K + , Na + , Ca 2+ , Mg 2+ , NH 4 + , and NO 3 − (Aitkenhead-Peterson et al. , Perrault and Forbes , Fancher et al. , Lühe et al. , Szelecz et al. , Taylor et al. ). Our results showed that EC significantly increased during active decay in the spring trial and during advanced decay in the winter trial, consistent with what other studies have demonstrated (Aitkenhead-Peterson et al. , Fancher et al. , Keenan et al. , Quaggiotto et al. , Mason et al. , Taylor et al. ). It is interesting to note that EC peaks roughly align with peaks of both ammonium and nitrate. EC is the collective reflection of all cations and anions in the soil solution; increased quantities of elements soluble from the soil matrix under acidic conditions (notably Al 3+ , Fe 2+ , and Mn 2+ ) may also account for late study increases and overall persistence of EC in human decomposition (Taylor et al. ). Conversely, elemental insolubility under soil alkalinization, in combination with presumed impacts to EC, may have interesting implications for identifying limitations on the use of animals for modeling human decomposition patterns. In both seasonal trials, successional changes in fungal and bacterial community structures displayed three distinct phases that directly reflected changes in soil biogeochemistry, and correlated to soil oxygen and pH dynamics. A three-phase decomposition model was proposed by Keenan et al. based upon biogeochemical changes documented under decomposing animals in a warm-weather study. Here, we validate that model and expand upon it by coupling these biogeochemical patterns with changes to both microbial community composition (relative abundance) and microbial gene copy number (absolute abundance). Phase 1. From the beginning of the study through the onset of advanced decay, microbial alpha diversity decreased, and community structure shifted. Peak relative abundances in Saccharomycetes ( Yarrowia ) and Clostridia were observed during this phase, with decreased Verrucomicrobiae and Thermoleophilia . Fungal abundances, estimated from gene copies determined by qPCR, increased, while bacterial abundances remained unchanged, leading to an overall increase in the fungal:bacterial ratio. These microbial changes corresponded with decreases in pH, soil oxygen, and NO 3 , and increases in EC, respiration, and NH 4 . In the spring trial this first phase also included increased soil temperatures due to increased metabolic activity of microbes and insect larvae. Phase 2. An inflection point in the trajectory of microbial community structure changes was observed around days 12–43 of the spring trial (ADD 257–967) and days 75–94 of the winter trial (ADD 822–1196), corresponding to soil oxygen minimums (39% and 71% DO for the spring and winter trials, respectively), and marked the beginning of a second phase of changes.
This second phase occurred during the period of advanced decay and early skeletonization, in which soil oxygen gradually increased, respiration rates and NH 4 concentrations declined, and NO 3 increased with more readily available soil oxygen. Fungal alpha diversity and gene copy number during this second phase did not change appreciably, however bacterial alpha diversity increased slightly in interface communities in conjunction with soil oxygen increases. Both Saccharomycetes and Clostridia decreased in relative abundance, and Sordariomycetes (Scedosporium), Bacteroidia, Gammaproteobacteria , and Actinobacteria began to increase. Phase 3. Another inflection point was observed around days 132–168 of the spring trial (ADD 3131–3917) and days 172–186 of the winter trial (ADD 3021–3364), corresponding to soil oxygen recovery to >75% of initial levels, and the beginning of the third phase. This third phase included the period between late advanced decay through late skeletonization, when soil oxygen concentrations recovered to 100% and NO 3 remained elevated. Alpha diversity during this third phase continued to increase slowly in bacterial interface communities, slightly lagging behind oxygen recovery, accompanied by gradual reductions in Bacteroidia and increases in Gammaproteobacteria (spring only), Verrucomicrobiae , and Thermoleophilia . In conjunction with pH, fungal diversity remained low, but fungal gene abundances remained elevated. By the end of both trials (1 year later), fungal and bacterial community composition became more similar to initial structures but did not recover completely; the highest degree of recovery was evidenced in bacterial communities in winter core soils. The changes in soil oxygen and pH during decomposition appear to be the primary drivers of microbial communities. Community structure changed and diversity decreased in response to the period of low soil oxygenation during active and advanced decomposition for both the spring and winter trials. This is likely due to a combination of activation of alternative anaerobic metabolic pathways and responses to anoxic stress. Further, the acidification of soil during decomposition is likely also playing a role in structuring microbial communities; pH is one of the primary determinants of soil microbial community structures (Fierer and Jackson ). The increase in fungal, but not bacterial, abundances was notable, and suggests that fungi are proliferating in decomposition soils. The lack of increase in bacterial abundances accompanying increased respiration suggests a decrease in carbon use efficiency which has been previously noted (Cobaugh et al. ) and is consistent with a switch to anaerobic metabolic pathways. Decreases in soil pH are often accompanied by an increase in fungi-to-bacteria ratios, generally attributed to the fact that fungi have greater tolerance for acidic conditions than many bacterial taxa (Rousk et al. ). In both seasonal studies, the dominant members of the fungal community were two Ascomycetes: Saccharomycetes and Sordariomycetes. Saccharomycetes are monophyletic yeasts, most of which live as saprobes associated with a variety of ecological niches including soil, or in association with plants and animals (Suh et al. ). Saccharomycetes exhibited a brief bloom during bloat through advanced decay in the spring, and during early advanced decay in the winter study.
Their growth may be explained by their ability to perform fermentation under anoxic conditions and a tolerance or preference for acidic conditions: Saccharomyces cerevisiae has been shown to achieve optimal growth in slightly acidic conditions (Ariño , Peña et al. ). The presence of Saccharomycetes enrichment is consistent with other reports (Metcalf et al. , , Carter et al. , Forger et al. , Fu et al. , Mason et al. ).
In this pair of long-term seasonal trials, we have demonstrated that the changes in soil bacterial and fungal community structures are inexorably linked to changes in the physicochemical environment, and that these collective changes vary in magnitude by season. Our work reinforces the need to understand how abiotic filtering drives microbial successional patterns in these systems. The relationships we observed between soil chemistry and microbial succession further supports the biogeochemical model proposed by Keenan et al. and suggests that the magnitudes and timings of fluctuations within soil chemical parameters may function as predictors for successional patterns in microbial communities. Increased fungal abundance, in combination with relatively static bacterial abundances, has potential for determining relative importance of taxon shifts for downstream microbial modeling efforts. Further, increased soil temperatures bolstered by larval thermogenesis in warm-weather decomposition systems suggest that thermal effects, specifically short-term heating, likely influence chemical and biological transformations in the soil due to organismal thermal tolerances, enhanced extracellular enzymatic activity, and altered metabolic rates. Further studies in other systems (i.e. different climates, soils, plant communities, and so on) would be needed to assess the repeatability and robustness of our observed patterns. Given the strong abiotic filtering we observed, the move toward including microbial community assessments that focus more on function or activity (e.g.
metatranscriptomics and metabolomics) may be more instructive when it comes to determining biological markers of decomposition.
Refining the definition of
33f6c4cf-925c-4980-b063-218276af92e5
9826019
Anatomy[mh]
The human epidermal growth factor receptor 2 (HER2) gene is amplified in approximately 15% of invasive breast cancers (BC), leading to HER2 protein overexpression. , , , HER2 testing in routine practice is performed using immunohistochemistry (IHC) to assess the level of protein expression, which is reported on a scale of 0 to 3+. , HER2‐positive BC is defined as IHC score 3+ or score 2+ with evidence of HER2 gene amplification using the in‐situ hybridisation (ISH) technique. HER2‐positive BC patients are eligible for therapies that target the HER2 pathways. , , BC with HER2 IHC score 2+ that lacks evidence of HER2 gene amplification is currently classified as HER2‐negative, similar to cases showing an IHC score of 0 or 1+ , and does not benefit from anti‐HER2 therapy. However, recent data have demonstrated that some of the HER2‐directed antibody–drug conjugates (ADC) such as trastuzumab–emtansine (T‐DM1) and trastuzumab–deruxtecan (T‐DXD) can improve the outcome of patients with BC that express HER2 protein without evidence of HER2 gene amplification. These cases include BC with HER2 IHC score 1+ or score 2+ without HER2 gene amplification, which are defined as the HER2‐low class. , , , ADCs are molecules consisting of a recombinant monoclonal antibody covalently bound to a cytotoxic drug via a linker. After antibody binding to the specific antigen on the targeted cell surface, the cytotoxic drug becomes internalised and is released intracellularly, where it can exert its effect. The ADC effect relies upon the presence of an extracellular protein receptor, which acts as a carrier for the cytotoxic agent to achieve a targeted effect with no or minimal cytotoxicity to normal cells, rather than on the oncogenic effect of the protein. Patient recruitment to the ongoing HER2‐low‐positive clinical trials that are testing the effect of ADCs in BC is based on the existing definition of HER2 categories, as described in the American Society of Clinical Oncology and College of American Pathologists (ASCO/CAP) guidelines. Although the ASCO/CAP guideline recommendations provided a comprehensive definition of the HER2 staining pattern and the categorisation of cases into four IHC scores (0–3+), the distinction between IHC score 0 and 1+ is not sufficiently detailed and lacks relevant evidence, and some scenarios of HER2 expression patterns are missing. , This could contribute to the high discordance rates in HER2 status assessments reported in some studies. , , Although clinical response can provide the best tool to define the lower limit of the HER2‐low class, the number of recruited patients in such randomised clinical trials, particularly those close to the threshold of positivity, is typically too limited to develop a robust definition. In this study, we have used a large cohort of BC that express low levels of HER2 protein without evidence of HER2 gene amplification and applied an artificial neural network (ANN) model to refine the definition of the HER2‐low class of BC, with an emphasis on distinguishing the HER2 score 1+ and 0 categories. We have used the HER2 mRNA levels as a ground truth to reflect the level of HER2 gene expression. ANNs can learn and model non‐linear and complex relationships. , , , , We have also tried to refine the existing definitions of HER2 IHC categories by completing the missing scenarios utilising the existing data and our experience.
This study was conducted on a primary invasive BC cohort ( n = 363) from patients presenting at Nottingham University Hospitals NHS Trust with HER2 IHC scores 0, 1+ or 2+ without gene amplification. Transcriptomic data on HER2 mRNA expression were available for this cohort within the recorded Oncotype DX report, which was carried out as part of the patients’ clinical care for management. Briefly, mRNA levels were obtained from tumour samples extracted from formalin‐fixed paraffin‐embedded tissue using high‐throughput, real‐time, reverse transcription–polymerase chain reaction. Normalised expression measurements were calculated as the mean cycle threshold (CT) for the five reference genes minus the mean CT of triplicate measurements for each individual gene. HER2 mRNA level ranged between 5.0 and 10.8 units, with a mean of 9 units. The clinicopathological data, including age at diagnosis, tumour size, histological grade, histological tumour type, axillary lymph node status, lymphovascular invasion (LVI) and Nottingham prognostic index (NPI), were available (Supporting information, Table ). The patients’ mean age at diagnosis was 59 years, while the mean invasive tumour size was 2.2 cm (range = 0.1–11.5 cm). All cases were oestrogen receptor (ER)‐positive and HER2‐negative. ER and progesterone receptor (PR)‐positivity were assessed according to ASCO/CAP guidelines if ≥ 1% of the invasive tumour cell nuclei were immunoreactive. HER2 staining was completed on the Ventana Benchmark ULTRA immunohistochemistry automated staining system using the Ventana PATHWAY anti‐HER‐2/ neu , rabbit monoclonal ready‐to‐use primary antibody in combination with Ventana detection kits. No antigen retrieval was required, according to the protocol. Appropriate positive and negative controls were included for each staining run as per the published guidelines. , Protein expression assessment was carried out in routine clinical practice using light microscopy on the diagnostic core needle biopsies. The reported HER2 scoring categories in the clinical setting were retrieved from the patient records. Detailed reassessment of HER2 IHC protein expression HER2 expression within the invasive tumour cells only of each case was reassessed and presented in detail. This included: (1) cellular localisation of protein expression (membranous, cytoplasmic or both) and (2) intensity of staining divided into five grades (negative, faint, weak, moderate and strong). In addition to the comparison with the positive and negative controls, the magnification rule was used to guarantee high interobserver agreement. Strong HER2 staining was assigned to cases displaying unequivocal membranous staining easily seen at low‐power magnification (2× or 4×), while unequivocal membranous staining (moderate to weak) was only assigned at medium magnification (10× to 20×, respectively). Faint staining can only be appreciated at 40× magnification, whereas weak staining can be appreciated at 20× magnification. Cases were assessed using the NIKON NI‐U Microscope, Nikon UK, Surbiton, UK. Different intensities within the same tumour were assessed to reflect the heterogeneity, (3) the percentage of each intensity, (4) distribution/completeness of membranous staining as either complete circumferential membranous or incomplete lateral or basolateral staining, and (5) the histo score (H‐score), calculated as follows: (% of weak intensity × 1) + (% of moderate intensity × 2) + (% of strong intensity × 3).
In addition, the % of faint intensity was assessed and multiplied by 0.5 to produce a total score of 350. Each incomplete membranous staining is multiplied by 0.5, while complete membranous staining is multiplied by 1. HER2 staining on full‐face sections of resection specimen HER2 IHC staining and scoring were performed on core biopsies, while HER2 mRNA level was assessed on resection specimens. Thus, for the cases that showed a discrepancy between HER2 IHC score and mRNA level (n = 30), i.e. a high mRNA level with HER2 score 0 or HER2 score 2+ with a low mRNA level, HER2 IHC staining was repeated on the full‐face sections using the same tissue block tested for Oncotype DX. Whenever possible, the same tissue block that was used to run the Oncotype DX test was stained with HER2. The staining protocol was similar to the core biopsy staining as described above. Defining cut‐off for HER2 score 1+ versus score 0 Two steps were followed to define HER2 score 1+ (Figure ). Step 1: K‐means clustering The K‐means technique aims to partition the data into K‐groups such that the sum of squares from points to the assigned cluster centres is minimised. HER2 mRNA values were classified, using K‐means, into two clusters based on their similarity of expression across multiple HER2 scoring parameters. Those cases which had a score of 1+ or 0 were clustered into two groups based on HER2 mRNA level and the detailed IHC scoring performed. Cluster 1 was defined as HER2‐negative (0), while cluster 2 represented HER2‐positive (1+). HER2 2+ cases were excluded from the clustering to avoid data bias. Step 2: Artificial neural network model ( ANN ) The ANN model (NeuroSolution version 7.0; NeuroDimension, Gainesville, FL, USA), with a range of hidden nodes in three layers, a Levenberg–Marquardt algorithm and a TanH activation function, was used to set the cut‐off point for defining HER2 1+ based on the K‐means clusters defined in step 1. A Monte Carlo cross‐validation approach was used to train a population of models, and early stopping was undertaken using a randomly extracted unseen cross‐validation set, with subsequent validation on a test set ( n = 38) which was kept completely blind to the training process. Weight regularisation was conducted during training. The model was trained with the detailed HER2 scoring parameters, including the various intensities (faint, weak, moderate) and the distribution of each intensity, if present, as either complete or incomplete, in addition to the total percentage of positive cells and cytoplasmic staining, as inputs, and the HER2 mRNA‐based clusters as the output variable. The ANN model determined which of the input parameters predicted HER2 score 1+ with a high level of accuracy. Sensitivity and specificity, with the produced response curves, were used to set the cut‐off for the most contributory parameter. Predictions of trained models were examined to determine the predicted probability of K‐means cluster membership. These were examined to determine a probabilistic cut‐point for HER2 score 1+. Model performance was further assessed by finding the area under the curve (AUC) of a constructed receiver operator characteristic (ROC) curve. AUCs of 1 were seen across the three cross‐validation cohorts, as well as 100% classification rates. After setting the cut‐points, a new refined score for HER2 was developed and applied.
To detect the accuracy of our refined score against the clinical score, we used the same neural network to build a discriminating model of both HER2 scores using the clinicopathological parameters as input units and the HER2 score as an output. The differentiating performance of the ANN models was evaluated with AUC as well as the true‐ and false‐positivity rates. To test the reliability of using HER2 mRNA as a dichotomising variable, we assessed the correlation between HER2 mRNA, protein level and gene amplification levels in a large independent cohort of primary BCs obtained from two publicly available data sets: the Cancer Genome Atlas (TCGA) ( n = 614) and the Molecular Taxonomy of Breast Cancer International Consortium (METABRIC) ( n = 288). Reliability and reproducibility of the refined HER2 IHC score The efficiency and reproducibility of the refined HER2 scoring method against the current guidelines were tested. HER2 was scored twice according to the existing ASCO/CAP; once by the clinical team at time of diagnosis, and the second score was carried out by experienced pathologists (N.A. and M.T.) who have more than 5 years’ experience in histopathology, supervised by an experienced breast pathology consultant (E.R.) with more than 20 years’ experience in the field of breast pathology. The agreement between two scores was assessed. Moreover, the interobserver agreement of the refined score was assessed between both observers and the intra‐observer agreement was examined through rescoring the cases after a 3 months’ washout period. Correlation between refined HER2 score with the clinicopathological variables The correlation between the clinicopathological parameters, including HER2 mRNA level, HER2 scores including the refined and the original clinical scores, was carried out. In addition, HER2 mRNA K‐means clusters were correlated with the other clinicopathological parameters. Statistical analysis SPSS version 24 was used to carry out the statistical analysis. Correlations were analysed using χ 2 , Fisher's exact, Kruskal–Wallis and Wilcoxon rank sum tests with continuity correction, where appropriate. The concordance analysis was performed using Cohen's Kappa test. All differences were considered significant at P < 0.05.
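To make the modified H‐score arithmetic described in the scoring section easier to follow, a minimal sketch is given below. The intensity weights (faint 0.5, weak 1, moderate 2, strong 3) and the 0.5 multiplier for incomplete membranous staining follow the description above; the data structure, function name and example values are illustrative and are not the software used in the study.

```python
# Modified H-score sketch: intensity weights 0.5/1/2/3 (faint/weak/moderate/strong),
# with incomplete membranous staining down-weighted by 0.5, as described above.
INTENSITY_WEIGHT = {"faint": 0.5, "weak": 1.0, "moderate": 2.0, "strong": 3.0}
COMPLETENESS_FACTOR = {"complete": 1.0, "incomplete": 0.5}

def modified_h_score(components):
    """components: iterable of (intensity, completeness, % of invasive cells)."""
    return sum(pct * INTENSITY_WEIGHT[intensity] * COMPLETENESS_FACTOR[completeness]
               for intensity, completeness, pct in components)

# Hypothetical tumour: 20% faint complete, 30% weak incomplete, 10% moderate complete
example = [("faint", "complete", 20), ("weak", "incomplete", 30), ("moderate", "complete", 10)]
print(modified_h_score(example))  # 20*0.5 + 30*1*0.5 + 10*2 = 45.0
```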
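The K‐means step above reduces to finding a boundary between two clusters of one‐dimensional mRNA values. A rough scikit‐learn sketch of that idea is shown below; the mRNA values here are synthetic, and reading the boundary off as the midpoint of the two cluster centres is only one way to express it, so this illustrates the clustering step rather than reproducing the study's analysis (which yielded a cut‐off of 8.7 units from its own data).

```python
# Two-cluster K-means on one-dimensional, synthetic "mRNA-like" values.
# The boundary implied by the two cluster centres plays the role of the
# dichotomising cut-off used in the study.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
mrna = np.concatenate([rng.normal(8.0, 0.4, 110), rng.normal(9.4, 0.5, 200)])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(mrna.reshape(-1, 1))
low_centre, high_centre = np.sort(km.cluster_centers_.ravel())
cutoff = (low_centre + high_centre) / 2   # assignment boundary for 1-D data with k = 2
print(f"centres: {low_centre:.2f} / {high_centre:.2f}; implied cut-off ≈ {cutoff:.2f} units")
```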
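NeuroSolutions is a commercial package and its Levenberg–Marquardt training is not available in scikit‐learn, so the sketch below is only a rough open‐source analogue of the ANN step, not the authors' pipeline: a small three‐hidden‐layer multilayer perceptron with a tanh activation and early stopping, trained on synthetic placeholder data shaped like the detailed IHC inputs.

```python
# Rough open-source analogue of the ANN step (synthetic placeholder data).
# scikit-learn has no Levenberg-Marquardt solver, so its default optimiser is used;
# the three tanh hidden layers and early stopping loosely mirror the description above.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# Columns stand in for detailed IHC inputs (e.g. % faint complete, % faint incomplete, ...)
X = rng.uniform(0, 100, size=(308, 5))
y = (X[:, 0] + X[:, 3] > 100).astype(int)          # placeholder mRNA-cluster label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=38, random_state=1)
model = MLPClassifier(hidden_layer_sizes=(8, 8, 8), activation="tanh",
                      early_stopping=True, max_iter=2000, random_state=1)
model.fit(X_train, y_train)
print("held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```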
Ethical approval was obtained for this study from the REC (ref. no. 19/SC/0363) under the title ‘PathLAKE’. All cases included in the study were fully anonymised. Patterns of HER2 protein expression Of the cases in the study cohort, 81% showed a degree of HER2 expression regardless of the pattern and/or the percentage of positive cells. It was observed that each case had a mixture of expression patterns, intensities and cellular localisation. The most frequent pattern observed was incomplete faint staining, which presented in 78% of the cohort, followed by complete faint expression, which presented in 58% of cases with or without other patterns of expression. Moderate incomplete staining had the lowest proportion among all patterns of expression (4%). A detailed description of HER2 expression in terms of staining intensities, patterns and percentages is summarised in the Supporting information (Table and Figure ). Correlation between HER2 IHC score and mRNA level A discrepancy between the HER2 IHC protein expression score on core biopsy and the mRNA level was seen in 30 of 363 (8%) cases. IHC score 2+ with a low mRNA level, as defined based on K‐means clustering analysis, was present in two of the 30 cases, while the remaining 28 cases had IHC score 0 with a high mRNA level.
Upon staining those 30 cases on full‐face tissue sections, all cases that were scored 0 on core biopsy were completely negative (score 0) in the invasive tumour cells, with weak to moderate staining within the in‐situ component, while the two cases that were score 2+ on core biopsy emerged as completely negative (score 0) on full‐face sections. Clustering of HER2 mRNA A total of 308 cases with complete data on different HER2 expression patterns and HER2 mRNA level were available for K‐means clustering analysis. The data set was divided into two clusters: cluster 1 ( n = 109) and cluster 2 ( n = 199), based on mRNA level at a cut‐off of 8.7 units. Based on the new cut‐off, the mean values ± standard deviation (SD) of HER2 mRNA for HER2 IHC scores 0, 1+ and 2+ were 8.66 ± 0.69, 9.1 ± 0.66 and 9.4 ± 0.78 units, respectively (Supporting information, Figure ). ANN model sensitivity The parameters that predicted HER2 cluster 2 (equivalent to score 1+) were faint complete staining in ≥ 20% of invasive tumour cells and/or faint incomplete staining in ≥ 20%, weak complete staining in ≤ 10%, weak incomplete staining in > 10% and moderate incomplete membranous staining in ≤ 10% of invasive tumour cells. Cytoplasmic staining and total percentage of positive cells did not define the cluster (Table , Supporting information, Figure ). Figure shows a schematic illustration of different scenarios of HER2 expression in BC and the corresponding score based on the refined criteria compared to the existing guidelines. , , , Based on the newly defined cut‐points for low HER2 IHC scoring, 136 of 363 (37%) cases were scored 0, 140 of 363 cases (39%) were scored 1+ and 87 of 363 cases (24%) were scored 2+, compared to 126 of 363 (35%), 156 of 363 (43%) and 81 of 363 (22%) for the original scores 0, 1+ and 2+, respectively (Supporting information, Figure ). Faint staining intensity was the most predominant pattern in HER2 score 1+ (41 of 140), followed by weak incomplete staining in > 10% of invasive tumour cells and then weak complete staining in ≤ 10% of cells. Exclusive moderate expression, either complete or incomplete, in < 10% of cells was not found in HER2 1+ as a unique pattern, but was expressed in combination with other patterns (Figures 1 and 2). The AUC for the refined score was 0.92, with a true‐positive rate of 92% and a false‐positive rate of 13%. The AUC for the original clinical score was 0.71, with true‐positive and false‐positive rates of 69% and 34%, respectively (Supporting information, Figure ). Reproducibility of the refined HER2 IHC score The degree of concordance between the score given in the original clinical setting and the redefined score upon applying the existing HER2 scoring criteria was substantial (kappa = 0.6). Exact score agreement was 79%, while the number of discordant cases was 75 of 363 (20%); 47 were between IHC score 1+ versus 0 and the remaining 28 discordant cases were between 1+ versus 2+. None of the cases showed a score 2+ versus 0 discrepancy. Regarding the refined score, the intra‐observer concordance showed almost perfect agreement (kappa = 0.8) with 87% exact score agreement. Furthermore, the interobserver agreement was almost perfect (kappa = 0.9), with 89% exact score agreement. Overall, there were 36 of 363 cases (10%) with discordance. Table details the agreement levels. There was a strong association between H‐score and HER2 IHC scores ( P < 0.001).
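The agreement figures above (exact score agreement alongside Cohen's kappa) follow the standard kappa calculation; the snippet below illustrates it on an invented three‐category contingency table. The counts are placeholders chosen purely to demonstrate the formula and are not the study's data, which are reported in its tables.

```python
# Cohen's kappa for a two-rater, three-category (0 / 1+ / 2+) comparison.
# Counts are invented purely to illustrate the calculation.
import numpy as np

confusion = np.array([[120,  10,  2],   # rows: rater A score 0/1+/2+; columns: rater B
                      [ 12, 125,  8],
                      [  1,   9, 76]])

n = confusion.sum()
p_observed = np.trace(confusion) / n                                   # exact agreement
p_expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n**2  # chance agreement
kappa = (p_observed - p_expected) / (1 - p_expected)
print(f"exact agreement = {p_observed:.2f}, Cohen's kappa = {kappa:.2f}")
```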
The association between various clinicopathological parameters and the refined HER2 score in comparison with the original score Both HER2 scores showed a statistically significant correlation with lymph node status and mRNA level (Table ). The refined score, but not the original score, showed a statistically significant correlation with the Oncotype DX recurrence score ( P = 0.02), where score 0 was associated with higher‐risk Oncotype DX groups. Moreover, there was a statistically significant difference between HER2 IHC 1+ and 2+ using the refined score regarding lymph node metastasis, where IHC 1+ was associated with lymph node metastasis. When compared to IHC score 0 cases, there was a statistically significant correlation between the HER2‐low cluster and low tumour grade ( P < 0.001), lower pleomorphism score ( P = 0.001), low mitotic count ( P < 0.001), less DCIS within the tumour ( P < 0.001) and more lymph node metastasis ( P = 0.03) (Supporting information, Table ). Within the external cohorts used, there was a significant correlation between HER2 mRNA level and different HER2 IHC scores (from 0 to 3 and HER2‐low only) and HER2 gene copy number in the TCGA and METABRIC cohorts with P < 0.001 (Supporting information, Figures S5 and S6).
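For readers who want the ANN‐derived score 1+ conditions in executable form, one literal encoding is sketched below. The thresholds come from the results above, but the "and/or" in the text is read here as "any single condition suffices", and the requirement that a pattern be present before its ≤ 10% condition applies is an interpretation; the function is a reading aid, not a validated scoring algorithm.

```python
# One literal reading of the ANN-derived score 1+ conditions listed above.
# "and/or" is treated as "any single condition suffices", and the <= 10% conditions
# are assumed to require that the pattern is actually present (> 0%). Illustrative only.
def refined_score_1_plus(pct):
    """pct: dict mapping staining pattern -> % of invasive cells (missing keys = 0)."""
    g = lambda key: pct.get(key, 0)
    return (g("faint_complete") >= 20
            or g("faint_incomplete") >= 20
            or 0 < g("weak_complete") <= 10
            or g("weak_incomplete") > 10
            or 0 < g("moderate_incomplete") <= 10)

print(refined_score_1_plus({"faint_incomplete": 30}))  # True under this reading
print(refined_score_1_plus({"weak_complete": 40}))     # False under this reading (not a listed 1+ pattern)
```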
Accurate assessment of HER2 status is integral to the care of patients with BC.
Recognising this, the ASCO/CAP HER2 working group released their guideline recommendations on HER2 testing in 2007, which were updated thereafter to provide clearer guidance for HER2 testing and assessment. At least 16 scenarios of HER2 expression patterns exist when considering the combination of staining intensity (faint, weak, moderate and strong), membrane completeness (complete versus incomplete) and the cut-off (e.g. 10%) used to classify the percentage of HER2 in the invasive tumour cells into two main categories. However, not all the scenarios have been defined (see below), which, in turn, has led to a degree of subjectivity and discordance in HER2 scoring. Some studies indicated that the concordance rates among pathologists remain low, , , , raising concern regarding the need to refine the scoring criteria. Moreover, the distinction between HER2 IHC score 0 and score 1+ was not clinically relevant, and for practical purposes these two groups have often been combined and/or used interchangeably in routine practice. Fernandez et al . demonstrated that the current standard assays utilised in the clinical setting do not efficiently differentiate IHC scores 0 and 1+, and only 26% of these cases had 90% concordance agreement. Also, Schettini and colleagues showed that the multi-rater overall kappa score was 0.7, equivalent to substantial agreement, and almost half the discordant cases were between IHC score 0 versus 1+. All previous attempts at a definition aimed at separating HER2-positive from HER2-negative BC for therapeutic and prognostic purposes, , , , , as patients with tumours that show low or moderate levels of HER2 protein expression without confirmed gene amplification are currently not candidates for anti-HER2 agents. This category, which accounts for 45–55% of BC, is known as HER2-low class BC, which includes IHC score 1+ or 2+ with non-amplified HER2 gene by ISH. , With the promising response rate of ADC in HER2-low BC patients, , , , , we hypothesised that refining the definition of the HER2-low-positive class with precise scoring criteria for this group would lead to better scoring concordance levels and better personalisation of ADC therapy. Borderline HER2-low BC can be demarcated from HER2-positive cases through gene amplification assays, but the lower limit of protein expression below which the tumour is considered HER2-negative is not fully identified. In this study, we aimed to refine the definition of the different HER2 scoring categories by providing a clearer, easier and more applicable interpretation approach for the different HER2 expression scenarios. We also sought to provide a definition for HER2-low-positive BC by distinguishing HER2 IHC score 1+ from score 0 using the mRNA expression as ground truth. The rationale behind using mRNA level to dichotomise our cases instead of the patient outcome is that, at this low level of protein expression, HER2 is not the driver oncogene, and the clinical behaviour of the tumour and the outcome are typically not dependent upon activation of the HER2 pathways. , This was supported by Denkert et al ., who demonstrated that there was no difference between HER2-low and HER2-negative tumours in the triple-negative BC cohort. Multiple studies show that rates of concordance for HER2 between core biopsy and excision specimens of 98 to 99% are achievable. , , We have demonstrated that HER2 mRNA was reliable in reflecting HER2 protein level both on core biopsy and full-face sections.
Our results revealed that HER2 mRNA significantly differentiates not only HER2-positive from HER2-negative BC, but also separates the HER2-low class into two distinct groups that correlate with IHC protein level and gene amplification. Our study also showed that HER2 mRNA significantly correlates with HER2 protein and gene amplification levels, supported by data from the TCGA and METABRIC cohorts. This is supported by other studies that showed high concordance between HER2 mRNA, IHC and gene amplification. , , , , The discrepancy between mRNA level and IHC score that was observed in a few cases could be explained by intratumoral heterogeneity and the ratio of malignant to non-malignant cells within tumours, which can dilute the influence of the tumour cells on the result, leading to a false-low mRNA level, , while a false-high mRNA level in HER2 score 0 cases was due mainly to the presence of HER2 expression within the in-situ component. We have described 10 possibilities for the HER2 expression patterns in BC tumour cells related to the staining intensity, localisation and circumferential staining completeness. Using a trained ANN model, we identified which pattern had the highest weight in differentiating HER2 score 1+ from score 0 based on the ground truth represented by the mRNA level. We found that, at faint intensity, the percentage of expression was more effective than the membranous pattern of expression, whether complete or incomplete. Based on our data, any faint HER2 protein expression in 20% or more of cells can be considered as IHC score 1+. For weak staining, our results were consistent with the 2007 ASCO/CAP HER2 guidelines and the updated UK guidelines in the definition of HER2 1+ (weak complete staining in less than 10% and weak incomplete staining in more than 10%, respectively). The established algorithm for HER2 scoring according to ASCO/CAP guidelines encompasses 10 of the 16 possible scenarios for HER2 expression patterns. In this study, we tried to complete the missing HER2 expression possibilities based on the current study results, data from the various published HER2 scoring guideline recommendations and our personal experience. Although most of these undefined scenarios are infrequent, such as strong incomplete expression and moderate complete expression in less than 10%, providing more objective criteria and adding more guidance to their scoring would improve the concordance rate among pathologists and, consequently, HER2 categorisation and management decision-making. To guarantee high interobserver agreement, the magnification rule was also used to define faint staining, which comprises barely visible membranous expression that can be confirmed only at ×40 magnification. This rule is already applied, and is effective, in the assessment of HER2 in gastric carcinoma. Although the H-score showed a significant association with HER2 scores, we did not include it as a parameter to refine the HER2-low definition. The H-score has been used for assessment of HER2 expression in previous studies, although it is not approved for routine clinical work. , , The limitation of using the H-score is the non-linearity of the score, which is due to the heavier weighting of higher-intensity staining over lower-intensity staining in calculating the score. A further limitation of the H-score in the assessment of HER2 expression is that it cannot capture faint intensity or the completeness of membranous staining.
Thus, the H-score, which was designed as a standard scheme for providing continuous scores, is not well suited to scoring HER2 in BC and would only add ambiguity for pathologists and clinicians. Based on the refined score, the proportion of HER2 score 1+ cases decreased by 5% in comparison with the original ASCO/CAP definition used for scoring in the clinical setting. This could be explained by increasing the cut-off from the 10% used in clinical practice to 10–20% in the faint category. From this, we can assume that there could be a false increase in the score 1+ category under the recent guidelines, which may have affected the response rate for ADC in HER2 score 1+ BC patients. The refined score was more efficient in predicting HER2 score 1+ than the currently applied score. The interobserver agreement between HER2 scores based on existing guidelines showed substantial concordance. This magnitude of concordance is in line with other reproducibility studies. , Schettini and colleagues showed that multi-rater agreement was substantial, and almost half the discordant cases were between IHC score 0 versus IHC score 1+. Moreover, in the Phase Ib trastuzumab–deruxtecan study, the concordance between local and central pathology was 70% for HER2 IHC score 1+. The inter- and intra-observer agreement for the two scoring sessions, according to our refined criteria, was near-perfect, with a reduction of discordant cases between HER2 scores 1+ and 0 of more than 70%. These results support the view that the current scoring criteria for HER2 scores 1+ and 0 are subjective and less reproducible among pathologists. Guidelines should be updated or refined to distinguish between HER2 scores 0 and 1+, especially in the upcoming era of ADC therapy. Recent studies revealed that 40% of patients with HER2-low BC achieved partial response to T-DXD. , The refined score showed a stronger association with the clinicopathological parameters than the currently applied score. It also showed a statistically significant association with Oncotype DX scores. Our results agreed with both Schettini et al . and Tan et al ., who reported that HER2-low BC is apparently more associated with axillary lymph node involvement compared to HER2 score 0 tumours. , Overall, HER2 protein expression and mRNA level in the IHC 1+ category were associated with low tumour grade, low mitotic count, special histological types of BC and low risk of recurrence based on Oncotype DX, as described in other studies. , , This study has some limitations, including that the mRNA levels were measured on full-face sections, whereas the IHC score was assessed on core biopsy. To overcome this issue, we selected cases with conflicting HER2 mRNA expression and IHC scores and restained them on resection specimen blocks. The cohort had a low number of outcome events in terms of BC-related deaths or disease recurrence, so analyses of outcome and therapy effects were not feasible in this cohort. Therefore, we have used the mRNA level as our ground truth in classifying patients. Due to the study design, the cohort did not include ER-negative BC. However, this study aimed at refining the scoring of HER2 protein expression, rather than assessing its oncogenic effect or its interaction with other proteins; thus, we believe that the refined scoring criteria can be generalised and applied to ER-negative tumours.
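For reference, the H-score discussed above is conventionally computed as a weighted sum of staining intensities. The snippet below is a generic illustration of that convention (the 1/2/3 weighting of weak/moderate/strong cells is the standard formulation, not a definition taken from this study); it makes explicit why the score is non-linear and why faint staining and membrane completeness never enter the calculation.

```python
def h_score(pct_weak: float, pct_moderate: float, pct_strong: float) -> float:
    """Conventional H-score: weighted sum of the percentage of cells at each
    intensity (range 0-300). Faint staining and membrane completeness,
    discussed above, do not enter the formula at all."""
    return 1 * pct_weak + 2 * pct_moderate + 3 * pct_strong

# Two hypothetical tumours with very different staining patterns can share
# one H-score, illustrating the loss of pattern information:
print(h_score(pct_weak=60, pct_moderate=0, pct_strong=0))   # 60
print(h_score(pct_weak=0, pct_moderate=30, pct_strong=0))   # 60
```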
This is the first study to discuss refining HER2-low-positive BC, focusing upon the distinction between IHC score 1+ and IHC score 0, in order to provide more reproducible and less arbitrary scoring criteria than the current, more subjective definition. HER2 mRNA level is strongly correlated with HER2 protein expression. Further investigations and clinical trials using ADC in HER2-low-class BC using the refined criteria are warranted. The authors declare no conflicts of interest. Figure S1. Box plot showing different patterns of HER2 expression in the HER2-low category with their median, minimum and maximum values. Figure S2. A : Box plot demonstrating the distribution of HER2 mRNA among HER2 IHC scores. B : Distribution of HER2 IHC scores in the original clinical score and our score according to the refined criteria. Figure S3. Graph showing ANN model prediction of HER2 1+ cut-off points. Figure S4. Chart illustrating which HER2 score is more accurate in predicting score 1+ based on ROC curve AUC, true-positive rate and false-positive rate. Figure S5. Graphs showing the relation between ERBB2 RNA level, HER2 protein level in the form of IHC expression and HER2 gene copy number. A : ERBB2 RNA level significantly correlates with HER2 IHC scores (0-3+) and B : HER2-low cases. Positive linear correlation between ERBB2 RNA level and HER2 gene copy number in all HER2 scores and in HER2-low cases as shown in C and D , respectively. Figure S6. Box plot chart showing the significant correlation between HER2 RNA level and HER2 IHC score in the TCGA cohort. A : all HER2 IHC scores (0-3+), while B shows HER2-low cases only. Table S1. Clinicopathologic characteristics of the study cohort. Table S2. Description of HER2 expression patterns. Table S3. Associations between clinicopathologic parameters and HER2 mRNA clusters.
Mechanical loading‐induced alveolar bone remodeling is suppressed in the diabetic state via the impairment of the specificity protein 1/vascular endothelial growth factor (
2f5d9e89-870b-4654-ac3a-3f40672b6373
11693532
Dentistry[mh]
In recent years, the demand for adult orthodontic treatment in middle-aged and older patients has increased. Notably, some patients seeking adult orthodontic treatment may have systemic diseases such as diabetes mellitus. The prevalence of diabetes is growing rapidly worldwide, with 537 million people diagnosed with diabetes by 2021, and the number of affected individuals is projected to rise to 783 million by 2045. Approximately half of all patients with diabetes are unaware of their disease, and many are living with this condition , , . Diabetes is associated with various musculoskeletal complications that can lead to disability and reduced quality of life, including the risk of fractures leading to functional loss and osteoarthritis , , , . Bone fragility in diabetes leads to increased fracture risk; however, the precise mechanism of fragility remains unclear. The impact of diabetes on bone remains a matter of debate . In addition, the prevalence and severity of periodontal disease in patients with diabetes are high, and periodontal disease is considered a complication of diabetes , . These issues indicate the potential impact of orthodontic treatment on alveolar bone remodeling in adult patients. Previous reports on alveolar bone resorption during orthodontic treatment in patients with diabetes have been inconsistent, with some reports showing accelerated bone resorption and others showing suppressed bone resorption , , , . During orthodontic treatment, orthodontic forces are applied to the teeth, and various reactions occur in the periodontal ligament and alveolar bone surrounding the teeth, which transmit a mechanical load to move the teeth. Studies have reported that certain factors involved in osteoclastogenesis, such as receptor activator of NF-κB ligand (RANKL), osteoprotegerin (OPG), transforming growth factor beta (TGF-β), and vascular endothelial growth factor (VEGF), are frequently generated during this process. These are important regulators of tooth movement and are associated with alveolar bone remodeling. During tooth movement, the periodontal ligament and alveolar bone surrounding the tooth form two regions with different mechanical environments: the compression side and the traction side. The periodontal ligament corresponding to the direction of action of the force is compressed, and the alveolar bone in that region is resorbed. In contrast, the periodontal ligament on the side opposite to the direction of application of the force is in traction, and the alveolar bone in that area undergoes osteogenesis. Consequently, the tooth moves in the direction of the force , , , , . Controversial evidence regarding mechanical loading-induced alveolar bone remodeling in patients with diabetes has been reported , , . Several reports have demonstrated accelerated bone resorption during orthodontic treatment, whereas others have reported suppressed bone resorption in patients with diabetes , , , . In this study, we evaluated bone remodeling in experimental diabetic mice by applying mechanical loading using the Waldo method, which is a method to evaluate bone metabolism. Animals Male C57BL/6J mice were obtained from Nippon Clare (Tokyo, Japan). All mice were housed in individual cages at a constant temperature (22.0 ± 2.0°C) and controlled humidity level (50 ± 10%). Animals were maintained in an environment with ad libitum access to standard feed and drinking water. Half of the mice had diabetes induced by intraperitoneal injection of streptozotocin (STZ) (120 mg/kg; Sigma, St.
Louis, MO, USA); one week after the administration of STZ, mice with a plasma glucose concentration >16 mmol/L were selected as the STZ-induced diabetic group , , . The research methods and management of experimental animals were performed in accordance with the guidelines on animal experiments approved by the Animal Experiment Committee of the School of Dentistry, Aichi-Gakuin University (Approval No.: AGUD381). Experimental tooth movement Eight-week-old male diabetic model (DM) and normal (N) mice were used for this study. Under general anesthesia by intraperitoneal administration of three mixed anesthetics, medetomidine hydrochloride (0.3 mg/kg; Meiji Seika Pharma, Tokyo, Japan), midazolam (4.0 mg/kg; Astellas Pharma, Tokyo, Japan), and butorphanol tartrate (5.0 mg/kg; Meiji Seika Pharma, Tokyo, Japan), orthodontic elastics (Gray, 1/4 inch; 3M Unitek, Saint Paul, MN, USA) were inserted between the maxillary right first and second molars (M1 and M2) and the maxillary right M1 was moved proximally using the Waldo technique , , , . Micro-computed tomography Maxillary bones were collected at 0, 1, 3, and 7 days after mechanical loading and analyzed using a micro-computed tomography (μCT) device (Rigaku Co., Tokyo, Japan; n = 5). The imaging conditions were as follows: tube voltage 90 kV, current 150 μA, imaging time 2 min, and pixel size 20 × 20 × 20 μm. Tooth movement distances were measured using the software TRI/3D-BON (Ratoc System Engineering, Osaka, Japan). The root and surrounding alveolar bone of the M1 were observed in the horizontal plane to determine the condition of the alveolar apex of the alveolar septum. The bone volume (BV) was normalized to the total volume of the sample (TV) in the inter-root septum to give the relative bone volume (BV/TV), according to the method described in previous reports , . The mean trabecular thickness (Tb.Th) was determined based on the local thickness of each bone. The trabecular number (Tb.N) was determined as the number of trabeculae. The trabecular separation (Tb.Sp) was calculated using a direct thickness calculation based on the non-bone portion of the three-dimensional (3D) image. Histopathological observations Maxillary bones were harvested 7 days after mechanical loading and fixed in 10% neutral buffered formalin solution. Subsequently, they were decalcified in 10% ethylenediaminetetraacetic acid (EDTA; pH 7.2; Sigma, St. Louis, MO, USA) at 4°C for about 4 weeks and paraffin-embedded according to the established method. Serial tissue sections of 5 μm in the transverse direction were then prepared. To observe osteoclasts, the sections were stained for tartrate-resistant acid phosphatase (TRAP). The tissue observation site was at one-third of the distance from the root bifurcation to the root apex, and observation was conducted under an optical microscope. Osteoclasts were defined as multinucleated TRAP-positive cells that were in contact with the alveolar bone. To measure the number of osteoclasts positive for TRAP staining, the circumference of the alveolar bone surface on the compression side near the centrilobular root of M1 (M1DP) was measured to determine the number of osteoclasts per alveolar bone surface. Immunohistological evaluation of periodontal tissue For immunohistochemical staining, maxillary bones were harvested 7 days after mechanical loading and fixed in 10% neutral-buffered formalin solution. Subsequently, they were decalcified in 10% ethylenediaminetetraacetic acid (EDTA; pH 7.2; Sigma, St.
Louis, MO, USA) at 4°C for about 4 weeks and paraffin-embedded according to the established method. Serial tissue sections of 5 μm in the transverse direction were then prepared. Sections were stained with anti-VEGF (Proteintech, Rosemont, IL, USA). Subsequently, the sections were incubated with 4′,6-diamidino-2-phenylindole (DAPI) and Alexa Fluor 594-conjugated goat anti-rabbit IgG (Thermo Fisher Scientific, Waltham, MA, USA) as the secondary antibody for 1 h at 4°C. Tissue collection Maxillary right first molars were extracted 18 h after mechanical loading. The tooth was dislocated using a 30-gauge injection needle and extracted with tweezers. The periodontal ligament was scraped off from the extracted tooth using an intra-auricular surgical instrument and the extracted teeth were immediately immersed in RNAlater RNA Stabilization Reagent (Qiagen, Valencia, CA, USA) and stored. The periodontal ligament tissue of the extracted tooth was then fractionated and collected in the proximal and centrifugal planes ( n = 7). Gene expression in periodontal ligament tissue The levels of messenger ribonucleic acid (mRNA) in the periodontal ligament tissue were measured using real-time polymerase chain reaction (PCR). RNA was extracted from periodontal ligament tissue using an RNeasy Mini Kit (Qiagen). Complementary deoxyribonucleic acid (cDNA) was synthesized from the RNA using ReverTra Ace (Toyobo, Osaka, Japan). Primers were purchased as TaqMan Gene Expression Assays (Applied Biosystems, Foster City, CA, USA). Samples were prepared, and real-time quantitative PCR was performed using a LightCycler 480 System (Roche Diagnostics, Basel, Switzerland). Relative gene expression was calculated using the ΔΔCT method with β2-microglobulin as an endogenous control. Statistical analysis Data were processed using GraphPad Prism9 (GraphPad Software, San Diego, CA) and presented as mean ± SEM. Statistical significance was determined using one-way analysis of variance (ANOVA) and Tukey's correction for multiple comparisons. Differences were considered statistically significant at P < 0.05.
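As an illustration of the statistical comparison just described (one-way ANOVA followed by Tukey's correction for multiple comparisons), a minimal sketch is given below. The group labels and values are invented for demonstration only and are not the study data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical measurements (e.g., a bone parameter) for three groups;
# values and the "sham" group are invented for demonstration only.
normal = np.array([0.42, 0.45, 0.40, 0.44, 0.43])
diabetic = np.array([0.55, 0.58, 0.53, 0.57, 0.56])
sham = np.array([0.41, 0.43, 0.42, 0.40, 0.44])

# One-way ANOVA across the groups.
f_stat, p_value = stats.f_oneway(normal, diabetic, sham)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_value:.4f}")

# Tukey's HSD post-hoc test for pairwise comparisons (alpha = 0.05).
values = np.concatenate([normal, diabetic, sham])
groups = ["N"] * len(normal) + ["DM"] * len(diabetic) + ["sham"] * len(sham)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```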
Body weight and blood glucose levels The body weight and blood glucose levels of normal mice (N; n = 5) and diabetic mice (DM; n = 5) were evaluated at the time of insertion of the elastics and at 7, 14, and 21 days after insertion. The mean blood glucose level of N and DM mice at the time of insertion was 3.4 ± 0.2 and 3.5 ± 0.2 mmol/L, respectively, with no significant difference between the two groups. Seven days later, the mean blood glucose was significantly higher in the DM group (22.7 ± 0.9 mmol/L) than in the N group (8.1 ± 0.4 mmol/L). On Day 14, the mean blood glucose level was 8.3 ± 0.4 mmol/L in the N group and 21.2 ± 1.3 mmol/L in the DM group. Similarly, on Day 21, the mean blood glucose level was 8.3 ± 0.3 mmol/L in the N group and 19.7 ± 0.8 mmol/L in the DM group. The mean blood glucose level in the DM group was significantly higher than that in the N group (Figure ; ** P < 0.01). No significant difference in mean body weight was observed between the DM and N groups at 7, 14, and 21 days after the insertion of elastics (Figure ). Evaluation of elastic dropout time Dental orthodontic elastics were inserted between the M1 and M2 of 8-week-old male N and DM mice (Figure ). The experimental tooth movement was induced using a 200 gf force from the orthodontic elastics. The days at which dropout of the elastics occurred after tooth movement were recorded. The average dropout time was 9.0 ± 0 days in the N group and 12.8 ± 1.2 days in the DM group. In all normal mice, the elastics were shed 9 days after insertion, which resulted in an SE of ±0. The DM group exhibited a significantly longer duration than the N group until spontaneous dropout (Figure ; * P < 0.05). Alveolar bone residual volume and bone ridge structure On Day 7, a significant increase in the alveolar bone remaining from the root bifurcation to the apex of M1 (BV/TV) and Tb.Th was observed in the DM group compared with the N group (Figure ; ** P < 0.01), and a significant decrease in Tb.Sp was observed in the DM group compared with the N group (Figure : * P < 0.05). No significant difference in trabecular number (Tb.N) was observed between the N and DM groups at 0, 1, 3, and 7 days, and no significant differences in BV/TV, Tb.Th, or Tb.Sp were observed between the two groups at 0, 1, and 3 days (Figure ). Osteoclast formation during experimental tooth movement Histological observations revealed alveolar bone resorption around the M1 on the experimental side. Osteoclasts visualized by TRAP staining on the M1 compression side of the N and DM mice at 7 days after experimental tooth movement are shown in Figure .
To normalize the cell counts, we divided the number of osteoclasts on the compression side by the length of the yellow lines (alveolar bone surface). The number of osteoclasts significantly decreased by approximately 0.6-fold on the compression side of the DM group compared with the N group (Figure ; * P < 0.05). Extraction of periodontal ligament Schematic views of the maxillary first molars and the molars extracted 18 h after mechanical loading are shown in Figure , respectively. To confirm that the harvested tissue was the periodontal ligament rather than the tooth root, we examined the gene expression of Scx, a transcription factor that responds to mechanical stress, in the periodontal ligament. The expression of Scx was observed in the periodontal ligament. In contrast, no Scx gene expression was observed in the roots (Figure ; ** P < 0.01). Gene expression of PDL The expression levels of alveolar bone remodeling-related genes, namely VEGF, OPG, RANKL, and TGF-β, were observed . In the N group, VEGF increased 4.0-fold, OPG increased 3.0-fold, RANKL increased 5.9-fold, and TGF-β increased 2.2-fold when mechanical loading was applied compared with no mechanical loading. The expression of VEGF decreased by 0.5-fold after mechanical loading in the DM group compared with the N group. In contrast, no significant difference was observed between the N and DM groups in the gene expression levels of OPG, RANKL, and TGF-β (Figure ; * P < 0.05, ** P < 0.01). Subsequently, we observed the gene expression levels of specificity protein 1 (SP1) and hypoxia-inducible factor-1 (HIF-1), transcription factors for VEGF, and fms-like tyrosine kinase-1 (Flt-1) and kinase insert domain receptor/fetal liver kinase 1 (KDR/Flk-1), receptors for VEGF. The expression levels of SP1, HIF-1, and KDR/Flk-1 in the N group were significantly increased by 1.8-fold, 2.5-fold, and 2.7-fold, respectively, with mechanical loading compared with no mechanical loading. In the DM group, the expression levels of HIF-1 and KDR/Flk-1 showed significant increases of 2.3-fold and 3.3-fold, respectively, when mechanical loading was applied. In contrast, the gene expression level of SP1 in the DM group showed a significant 0.6-fold decrease compared with that in the N group when mechanical loading was applied. No significant differences were observed between the N and DM groups in the expression levels of HIF-1, Flt-1, and KDR/Flk-1, and the expression of Flt-1 was not affected by mechanical loading or diabetes (Figure : * P < 0.05, ** P < 0.01). The protein expression of VEGF in PDL To investigate the expression of VEGF in the PDL of normal mice and diabetic mice, PDL were collected 7 days after experimental tooth movement. VEGF protein expression in the PDL was assessed by immunofluorescence staining. As shown in Figure , the expression of VEGF in normal mice was significantly higher than that in diabetic mice.
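The fold-change values quoted above come from the ΔΔCT method described in the Methods (β2-microglobulin as the endogenous control). A minimal worked example of that calculation is sketched below; the Ct values are invented for illustration and are not the study data.

```python
# Relative quantification by the 2^-ΔΔCT method, as described in the Methods.
# Ct values below are invented for illustration only.
ct_target_loaded, ct_b2m_loaded = 24.0, 18.0      # e.g., VEGF with mechanical loading
ct_target_unloaded, ct_b2m_unloaded = 26.5, 18.5  # e.g., VEGF without loading

delta_ct_loaded = ct_target_loaded - ct_b2m_loaded        # normalise to B2M
delta_ct_unloaded = ct_target_unloaded - ct_b2m_unloaded
delta_delta_ct = delta_ct_loaded - delta_ct_unloaded      # compare to the control condition

fold_change = 2 ** (-delta_delta_ct)
print(f"relative expression (loaded vs unloaded) = {fold_change:.1f}-fold")  # 4.0-fold with these numbers
```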
In this study, we have demonstrated the gene expression profiles observed in mechanical load-induced tooth movement, and we have revealed the impairment of mechanical load-induced tooth movement in diabetic mice. Our results suggest that the suppression of the SP1/VEGF axis plays a crucial role in the impairment of mechanical load-induced bone remodeling in diabetic mice. Orthodontically induced tooth movement is a process coordinated by osteoclastogenesis and osteocytes, and combines both pathological and physiological responses to externally applied forces . Braga et al . reported that orthodontic movement is accelerated in diabetic mice, whereas Arita et al . demonstrated that diabetes suppresses bone resorption and orthodontic tooth movement, indicating that there is no consensus on whether experimental tooth movement in diabetic animal models promotes or inhibits bone remodeling , , , , . In this study, the duration to dropout of elastics after experimental tooth movement was longer in diabetic mice than in normal mice. This suggests that mechanical load-induced bone resorption is suppressed in the diabetic state. The analysis of bone trabecular structure revealed that the amount of remaining alveolar bone and bone trabecular width were increased and that the bone trabecular gap was decreased in diabetic mice. The number of osteoclasts on the compression side secondary to the mechanical load was lower in diabetic mice than in normal mice. It is known that bone quality is reduced in diabetes, even though there is no reduction in bone mineral density, and that the decrease in bone quality is related to abnormal bone remodeling. We assume that the abnormal remodeling in response to mechanical stimulation in this model may also be related to the decrease in bone quality in diabetes. Therefore, bone remodeling experiments that also involve osteoblasts in diabetes are a subject for future investigation . The gene expression levels of VEGF, OPG, RANKL, and TGF-β, which are important regulators of orthodontic tooth movement, were observed, and these levels were increased in normal mice after mechanical loading, suggesting that these genes may be involved in experimental tooth movement. When the gene expression levels of the VEGF receptors Flt-1 and KDR/Flk-1 were measured, only KDR/Flk-1 showed an increase in expression upon mechanical loading.
These genes may influence bone remodeling during experimental tooth movement. In diabetic mice, among the alveolar bone remodeling-related genes, the expression of VEGF on the compression side was lower than that in normal mice. The VEGF gene is upregulated downstream of the abnormal glucose metabolism that causes retinopathy, a complication of diabetes, and is involved in the development and progression of retinopathy owing to the interplay of other complex factors . In contrast, the expression of VEGF is decreased in the myocardium under diabetic conditions, indicating site specificity. VEGF promotes bone resorption by osteoclasts, suggesting that osteoclast survival is involved in this process. Mature osteoclasts express VEGF receptors, suggesting that the effect of VEGF in promoting bone resorption by osteoclasts is a direct action via VEGF receptors. VEGF has been reported to directly promote osteoclast differentiation, induction, and viability . In diabetic mice, we speculate that the decreased osteoclast numbers were mainly due to decreased expression of VEGF, which is secreted from periodontal ligament tissue near osteoclasts independently of the RANK/RANKL pathway. We assume that the paracrine effect of VEGF on osteoclasts may predominate over that of other molecules. Flt-1 (VEGF receptor 1) and KDR/Flk-1 (VEGF receptor 2) are major VEGF receptors. Flt-1 maintains vascular function as a regulator of angiogenesis, and KDR/Flk-1 functions as a promoter of angiogenesis , . Furthermore, Flt-1-expressing osteoclast progenitors play an important role in the early stages of osteoclast differentiation, which is the target of Flt-1 in its effects on osteoclast activity, and KDR/Flk-1 is likely to be involved in VEGF-mediated signaling for the survival and activity of mature osteoclasts , . SP1 and HIF-1 are also upstream regulators of VEGF; they are transcription factors that bind to the promoters of various target genes and are involved in the regulation of biologically essential programs such as cell survival, proliferation, apoptosis, and angiogenesis , , . We revealed that the expression of SP1 on the compression side was decreased in diabetic mice compared with that in normal mice. In contrast, no such trend was observed for HIF-1. Although the expression of HIF-1 is known to increase in the diabetic state owing to the hypoxic environment caused by hyperglycemia, this tendency was not observed in this study. This phenomenon may be unique to periodontal ligament tissue . Several experimental methods for tooth movement have been reported, including the use of superelastic coil springs and the insertion of elastics between the first and second molars (Waldo method) . Each method has its advantages and disadvantages. The method using superelastic coil springs allows for longer-term observations than the Waldo method. In contrast, the Waldo method can apply mechanical loading in a relatively short time and is less invasive. However, the disadvantage of the Waldo method is that the elastics fall out as the teeth move, making it impossible to compare the tooth movement distance, and the observation period is short. Therefore, in this study, we selected the Waldo method for the application of the mechanical load, evaluated the number of days until dropout of the elastics rather than the distance of tooth movement, and examined the effect on alveolar bone resorption after experimental tooth movement in diabetic and normal mice.
Only VEGF showed a decrease in gene expression on the compression side after mechanical loading in the DM group, suggesting that KDR/Flk-1 activates osteoclasts. Interestingly, the gene expression level of SP1, a transcription factor of VEGF, decreased on the compression side after mechanical loading in diabetic mice. To the best of our knowledge, this is the first study to demonstrate that the diabetic state may suppress alveolar bone resorption during mechanical loading-induced remodeling. We further revealed that the mechanical load-induced gene expression of VEGF was suppressed through decreased gene expression of SP1 in the diabetic state, which may be one of the mechanisms underlying the suppression of bone remodeling. Molecules whose gene expression is upregulated by mechanical stimulation include TGF-beta, OPG, RANKL, and VEGF. In particular, VEGF is involved in hemodynamic changes caused by ischemic phenomena in the periodontal ligament. We focused on VEGF because its expression levels were significantly lower in the DM group on the compression side under mechanical loading. We also confirmed that diabetic mice failed to increase the expression of SP1, a transcription factor of the VEGF gene, in response to mechanical stimulation. The impairment of the PI3K/Akt pathway or other pathways in the diabetic condition may lead to the suppression of SP1 gene expression, resulting in decreased VEGF gene expression . Further study is required on this issue. Whether these results can be improved by administration of VEGF needs to be investigated; however, technical barriers to direct administration to the periodontal ligament made it difficult to accomplish in this study. This is a subject for future study. In conclusion, diabetes mellitus suppresses mechanical loading-induced alveolar bone remodeling, suggesting that VEGF is a key molecule for impaired bone resorption under mechanical loading in the diabetic state. We believe that close attention must be paid to treatment planning and mechanics during orthodontic treatment of diabetic patients, and that clinicians must confirm whether the patient has diabetes mellitus during the interview before starting orthodontic treatment. Keiko Naruse is an editorial board member of Journal of Diabetes Investigation and a coauthor of this article. To minimize bias, they were excluded from all editorial decision-making related to the acceptance of this article for publication. Approval of the research protocol: The research methods and management of experimental animals were performed in accordance with the guidelines on animal experiments approved by the Animal Experiment Committee of the School of Dentistry, Aichi-Gakuin University (Approval No.: AGUD381, Registration date: May 18th, 2020). Informed consent: N/A. Registry and the registration no. of the study/trial: N/A. Animal studies: N/A.
A DNA-based voltmeter for organelles
c989f677-05e5-475d-8b0a-43fe702d2eef
8513801
Physiology[mh]
Membrane potential is a key property of all biological membranes and is a fundamental signaling cue in all cells , . Although the membrane potential of the plasma membrane is relatively straightforward to measure, that of organelles is not. Since organelle lumens have very different ionic compositions from the cytosol, they are expected to harbor distinctive membrane potentials . Electrophysiology on isolated organelles reveals that membrane potential is a prime regulator of their function . For example, the high negative membrane potential of mitochondria changes along with cellular metabolism and regulates mitochondrial fission and fusion , . Lysosome membrane potential regulates lysosomal functions such as the refilling of lumenal calcium and its fusion with other organelles – . Importantly, the role of membrane potential in the function of many other organelles remains unaddressed as they are refractory to electrophysiology and organelle-specific probes are lacking . The pH sensitivity of fluorescent proteins limits their applicability in organelles where lumenal pH and membrane potential are co-dependent . Electrochromic hemicyanine dyes or photoinduced electron transfer (PeT) based voltage sensitive dyes are particularly attractive for organelles due to their low capacitive loads, high temporal resolution, photostability and response range . However, unlike proteins, voltage sensitive dyes cannot be targeted to specific organelles other than mitochondria . We have created a DNA-based nanodevice, called Voltair , that functions as a non-invasive, organelle-targetable, ratiometric reporter of absolute membrane potential of organelles in situ . DNA nanodevices can quantitatively map diverse analytes such as ions, reactive species and enzymes in organelles – . Using the 1:1 stoichiometry of DNA hybridization one can incorporate a reference fluorophore and any desired detection chemistry in a precise ratio to yield ratiometric probes . By displaying molecular trafficking motifs on such DNA nanodevices one can localize them within organelles , . Using Voltair , we measured the membrane potential of different organelles such as the early endosome, the late endosome and the lysosome, and mapped membrane potential changes as a function of endosomal maturation. We targeted Voltair to the recycling endosomes as well as the trans-Golgi network and quantified the membrane potentials of these organelles as well , . By measuring the contribution of the electrogenic Vacuolar H + -ATPase (V-ATPase) proton pump to the membrane potentials of various organelles, we found that each organelle membrane showed different electrochemical characteristics. Voltair Resting membrane potential is measured by the difference in electrical potential of two electrodes, a sample electrode inserted in the biological membrane and a reference electrode located outside the biological membrane of interest . The DNA nanodevice Voltair , uses a similar concept, where the fluorescence of a reporter probe inserted into the biological membrane is compared to a reference probe at a different wavelength located outside the biological membrane . Voltair is a 38-base pair DNA duplex with three modules ( and ). The first module, denoted D v , is a 38-mer single-stranded DNA conjugated at its 3′ end to a previously characterized voltage sensing dye (RVF). The synthesis is described in . The fluorophore in RVF is quenched by PeT due to hyperconjugation with the lone pair of electrons on its dimethyl aminobenzyl moiety , . 
Plasma-membrane depolarization decreases electron transfer and increases RVF fluorescence ( and ). RVF has high photostability, no capacitive load, is pH insensitive from pH 4.5 – 7.5, and therefore deployable at different physiological pH . The second module in Voltair is the reference probe, Atto647N, that corrects for intensity changes due to different sensor abundance arising from inhomogeneous probe distribution. Atto647N is attached to the 5′ end of D A and has high photostability, minimal spectral overlap with RVF, insensitivity to pH, voltage and other ions (red circle, ). The ratio of RVF and Atto647N intensities are thus proportional only to membrane potential. The third module is a targeting moiety that differs in every Voltair variant targeted to a specific organelle. In Voltair PM , Voltair localizes to the plasma-membrane as the 5′ end of D T bears a 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphoethanolamine (POPE) moiety via a tetra-ethylene glycol linker . When Voltair PM is added to the culture medium, the POPE moiety anchors Voltair PM to the cell surface . Thus, RVF is inserted in a defined orientation with respect to the membrane potential vector, and the anionic DNA duplex prevents RVF flipping within the membrane. For ~2 h post-labeling, neither RVF nor DNA conjugated to POPE are endocytosed by HEK 293T cells . Voltair IM labels intracellular organelles by leveraging the ability of duplex DNA to double as a ligand for scavenger receptors . The engagement of endocytic receptors by DNA-based targeting motifs overrides the plasma-membrane residence of Voltair and targets it to specific intracellular organelles. Thus, the DNA duplex binds scavenger receptors and undergoes scavenger receptor-mediated endocytosis, trafficking to early endosomes, which mature to late endosomes, and finally to lysosomes, in a time-dependent manner . Voltair variants are assembled by annealing equimolar amounts of the component strands to yield probes with a precise 1:1 stoichiometry of the sensing dye (RVF) and reference dye (Atto647N). The formation and integrity of Voltair variants were confirmed by gel electrophoresis . We characterized the voltage-sensitive response of Voltair PM using whole-cell voltage clamping ( and ). Both Voltair PM and Voltair IM report on membrane potential by exciting RVF (G) and Atto647N (R) and monitoring their emission intensities at 540 nm and 665 nm respectively . The plasma membrane of HEK293T cells was efficiently labeled by Voltair PM , as seen by colocalization with Cellmask™ . Single Voltair PM labeled cells were voltage clamped from −100 mV to +100 mV and fluorescence images of RVF (G channel) and Atto647N (R channel) were acquired at each clamped voltage, from which the pseudo-colored G/R images were generated . The intensity in the G channel of voltage-clamped cells changed as a function of the applied voltage, while the R channel stayed constant . The uniform 1:1 ratio of RVF: Atto647N in Voltair PM yield highly reproducible G/R values as a function of applied voltage . The fold change in G/R signal from 0 mV to +100 mV (G/R) was ~1.2 for Voltair PM which matched well with plain RVF dye ( , inset) . Electrochromic voltage sensitive dyes like di-4-ANEPPDHQ, are also sensitive to lipid composition . Voltair PM however, was insensitive to lipid composition, consistent with its PeT-based mechanism of voltage sensing . The G/R value of Voltair PM in resting cells showed that resting potential was −50 mV, consistent with literature . 
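The conversion from a measured G/R ratio to an absolute membrane potential relies on the voltage-clamp calibration described above. A minimal sketch of that conversion is given below, assuming an approximately linear calibration over the clamped range; the calibration points used here are invented placeholders, not the published calibration data.

```python
import numpy as np

# Invented calibration points: normalized G/R measured at clamped potentials (mV).
clamped_mv = np.array([-100, -50, 0, 50, 100])
g_over_r = np.array([0.90, 0.95, 1.00, 1.06, 1.12])

# Fit a line G/R = a*V + b over the clamped range, then invert it.
a, b = np.polyfit(clamped_mv, g_over_r, deg=1)

def ratio_to_mv(ratio: float) -> float:
    """Convert a background-corrected, normalized G/R ratio to membrane potential (mV)."""
    return (ratio - b) / a

# Example: an organelle population with a mean normalized G/R of 1.02.
print(f"estimated membrane potential ~ {ratio_to_mv(1.02):.0f} mV")
```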
Targeting Voltair IM to membranes of specific endocytic organelles

We then targeted the variant Voltair IM to organelles on the endo-lysosomal pathway using scavenger receptor-mediated endocytosis. DNA-based probes can label specific endocytic organelles in diverse cell types that express scavenger receptors. Since HEK 293T cells do not express scavenger receptors, we over-expressed human macrophage scavenger receptor (hMSR1) fused to CFP in HEK 293T cells by transient transfection. This led to effective Voltair IM internalization. Colocalization between hMSR1-CFP and Voltair IM, as well as competition experiments, revealed that Voltair IM uptake was mediated by scavenger receptors. We determined the timepoints of localization of internalized Voltair IM devices at each stage of the endo-lysosomal pathway by time-dependent colocalization with various endocytic markers. We found that Voltair IM showed 70% colocalization in early endosomes at ~10 min, 85% colocalization in late endosomes at ~60 min and 60% colocalization in lysosomes at ~100 min. In Voltair IM, the RVF moiety acts both as a voltage-sensitive dye and as a lipid anchor that tethers Voltair IM to the lumenal face of the organelle membrane. The duplex DNA moiety in Voltair IM enables scavenger receptor binding while permitting RVF insertion into the membrane surrounding the receptor. Thus, integration onto a duplex DNA scaffold imposes the scavenger receptor-mediated endocytic program onto RVF. To map the response of Voltair in intracellular membranes, we voltage-clamped Voltair IM-labeled lysosomes while simultaneously imaging Voltair IM ratiometrically. Treating COS-7 cells with vacuolin-1 swells lysosomes to 1–3 μm. Such Voltair IM-labeled, enlarged lysosomes were isolated and voltage-clamped from −100 mV to +100 mV. Fluorescence images of RVF (G) and Atto647N (R) were acquired at different membrane potentials and an intra-organellar calibration plot of normalized G/R versus membrane potential was constructed. Voltair IM in lysosomes quantitatively recapitulated its voltage-sensing characteristics at the plasma membrane. All further experiments used the intra-organellar calibration profile of Voltair IM to compute membrane potential.

In situ measurement of absolute lysosomal membrane potential

We first validated Voltair IM in lysosomes, given the extensive prior art with electrophysiology. Fluorescence images in the G and R channels of Voltair IM-labeled lysosomes in HEK 293T cells were converted to G/R images, from which the G/R distribution could be obtained. Considering the lumen to be positive and the cytoplasmic face negative, the membrane potential of lysosomes (V Ly) was found to be +114 mV. This is consistent with electrophysiology in different cell types. V Ly was slightly lower overall, at +80 mV (lumen positive), in COS-7 and BHK-21 cells compared to HEK 293T cells. Lysosomes in RAW macrophages were even more depolarized, with a V Ly of +40 mV (lumen positive), also consistent with the literature. The variation in V Ly in different cell types could arise from differential lysosomal activity across these cell types. Voltair IM reports absolute membrane potential up to 1 h after it reaches lysosomes in HEK 293T cells. We therefore checked whether Voltair IM could report on changes in V Ly upon treatment with pharmacological modulators of lysosomal pumps, channels and transporters.
The proton pump V-ATPase acidifies organelles by hydrolyzing ATP and generating membrane potential across the organelle membrane. Inhibiting V-ATPase with bafilomycin A1 stops proton transport, which neutralizes membrane potential in purified, highly acidic synaptic vesicles. We found that Voltair IM-labeled lysosomes showed a large increase in G/R values upon bafilomycin A1 treatment, revealing that V Ly was dissipated upon V-ATPase inhibition. V Ly was reduced by ~100 mV from the resting value, as also seen in other V-ATPase-regulated organelles such as synaptic vesicles. Next, we modulated specific lysosomal ion channels. While on the lysosome, mTORC1 inhibits both TPC2 and TRPML1 channels. When nutrient levels plummet, mTORC1 dissociates from the lysosome, relieving the inhibition on TPC2 and TRPML1 channels. This triggers lysosomal Ca²⁺ release and is expected to reduce V Ly. Low V Ly is expected to activate the voltage-dependent channel Slo1, which would promote K⁺ influx into the lysosome and activate TRPML1. Cytosolic Ca²⁺ elevation caused by TRPML1 channel opening is expected to feed back positively to release lysosomal Ca²⁺ and drive further K⁺ influx. Treating HEK 293T cells with an activator of TRPML1 (ML-SA1) gave a ΔV Ly of 90 mV. This decrease in membrane potential was specific to the lysosome, because time-lapse imaging of membrane potential in early endosomes (V EE) showed no change upon ML-SA1 treatment. Treatment with ML-SA1 and the BK channel agonist NS1619 gave a ΔV Ly of 50 mV. These values agree well with electrophysiology. Inhibiting mTORC1 with Torin-1 reduced V Ly by nearly 60 mV, presumably due to TPC2 and TRPML1 channel activation. Treatment with Torin-1 and a TPC2 channel inhibitor (trans-ned-19) gave a ΔV Ly of 35 mV due to TPC2 channel inhibition. This agrees closely with electrophysiology on isolated lysosomes, where TPC2 channel inhibition gave a ΔV Ly of 20 mV. We then sought to measure membrane potential as a function of endosomal maturation. Although the membrane potential of isolated lysosomes is known, no prior values are known for early or late endosomes. We therefore labeled early endosomes, late endosomes and lysosomes specifically with Voltair IM as described. The G/R values of ~200 organelles were computed for each endosomal stage, from which the membrane potential was determined. The membrane potentials of early endosomes (V EE) and late endosomes (V LE) were found to be +153 mV and +46 mV, respectively. We observed a spread in the values of membrane potential in endocytic organelles. At least for endo-lysosomes, we know that they are heterogeneous and comprise subpopulations that enclose different ionic concentrations. This could in part explain the observed spread in organellar membrane potential. Surprisingly, the gradient of membrane potential accompanying endosomal maturation did not reflect that of the ion gradients. Proton, chloride and calcium levels increase progressively during endosomal maturation. In contrast, membrane potential is highest in the early endosome, drops ~3-fold in the late endosome and increases again in lysosomes. With respect to cytoplasmic concentrations, the lumenal [Ca²⁺] and [Cl⁻] of the early endosome and the lysosome are similar, i.e., ~10²–10³-fold higher [Ca²⁺] and 1–2-fold higher [Cl⁻]. However, the lysosome lumen has ~10³-fold higher [H⁺], while the early endosome lumen has only ~10-fold higher [H⁺] than the cytosol.
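As a rough, back-of-the-envelope comparison, each fold-difference can be expressed in electrical-equivalent terms via the Nernst relation; the numbers below use only the approximate gradients quoted above and are purely illustrative, since the resting potential is set by the combined ion permeabilities rather than by any single species:

```latex
\frac{\Delta\mu_{\mathrm{ion}}}{zF} \;=\; \frac{RT}{zF}\,
\ln\!\left(\frac{[\mathrm{ion}]_{\mathrm{lumen}}}{[\mathrm{ion}]_{\mathrm{cyto}}}\right)
\;\approx\; \frac{61.5\ \mathrm{mV}}{z}\,
\log_{10}\!\left(\frac{[\mathrm{ion}]_{\mathrm{lumen}}}{[\mathrm{ion}]_{\mathrm{cyto}}}\right)
\quad \text{at } 37^{\circ}\mathrm{C}.
```

On this scale, the ~10-fold H⁺ gradient of the early endosome corresponds to only ~62 mV, the ~10³-fold lysosomal H⁺ gradient to ~185 mV, and the ~10²–10³-fold Ca²⁺ gradients (z = 2) to ~62–92 mV in both compartments.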
Thus, the membrane potential of the early endosome should be much lower than what we have observed (V EE = +153 mV; V Ly = +114 mV). Considering the low abundances of other free ions, this provides a compelling case for Na⁺ and K⁺ transporters or exchangers in maintaining the high positive membrane potential of the early endosome. In fact, late endosomes are posited to have high K⁺, which is consistent with our observations. Our hypothesis is supported by observations at the plasma membrane: the differences in [Ca²⁺] and [Cl⁻] across the plasma membrane are comparable, yet its membrane potential is set by the differences in [Na⁺] and [K⁺] across the membrane. We re-designed Voltair to give Voltair RE and Voltair TGN to measure the membrane potentials of recycling endosomes and the trans-Golgi network. Both organelles have been hypothesized to have negligible membrane potential. Voltair RE, which displays an RNA aptamer against the human transferrin receptor, was taken up by the transferrin receptor pathway and trafficked to recycling endosomes, as evidenced by 80% colocalization with Alexa 546-labelled transferrin. Voltair TGN was targeted to the trans-Golgi network in HEK 293 cells expressing furin fused to a single-chain variable fragment recombinant antibody, scFv. The scFv domain binds the d(AT)4 sequence that is included in Voltair TGN. Thus, Voltair TGN is trafficked retrogradely to the trans-Golgi network, as evidenced by 70% colocalization with TGN46-mCherry and no colocalization with endocytic organelles. We then measured the resting membrane potentials of the recycling endosome (V RE) and the trans-Golgi network (V TGN), neither of which had previously been possible, and evaluated the contribution of V-ATPase to the membrane potential of each organelle. We found that V RE was +65 mV (lumen positive), similar to the plasma membrane (cytosol negative). When V-ATPase activity was inhibited with bafilomycin A1, V RE showed no change, revealing a negligible contribution of V-ATPase to the membrane potential. The magnitude and the V-ATPase dependence of V RE mirror those of the plasma membrane, indicating that both membranes share similar electrical characteristics. Voltair TGN-labelled cells were similarly imaged, and G/R values of the TGN in ~30 cells revealed that V TGN was +121 mV (lumen positive). Such a high membrane potential is surprising, as the Golgi has been posited by others to have negligible membrane potential. In addition, V-ATPase inhibition substantially reduced V TGN. However, unlike in lysosomes, the membrane potential of the TGN could not be completely neutralized and still showed +75 mV across the membrane. This suggests that other electrogenic transporters at the TGN, possibly Na⁺/K⁺-ATPases, could significantly contribute to V TGN. It also reveals that different organelle membranes have distinct electrochemical behaviors. Finally, to test whether Voltair could provide temporal information, we mapped V Ly of large numbers of intact lysosomes in situ in response to acute pharmacological triggers. Cytosolic Ca²⁺ levels are stringently regulated by various organelles such as the endoplasmic reticulum, mitochondria and plasma membrane. Lysosomes are also hypothesized to buffer cytosolic Ca²⁺, which, when elevated, is expected to perturb the electrochemical homeostasis of lysosomes. We therefore elevated cytosolic Ca²⁺ using ATP and measured V Ly along with cytosolic Ca²⁺ levels in COS-7 cells.
Extracellular ATP elevates cytosolic Ca²⁺ by interacting with P2 purinergic receptors on the plasma membrane, generating Ins(1,4,5)P3, which activates IP3 receptors on the endoplasmic reticulum to release Ca²⁺. We monitored cytosolic Ca²⁺ and V Ly as a function of time by continuously imaging Fluo-4- and Voltair IM-labeled cells and extracting whole-cell intensities. Cytosolic Ca²⁺ peaked ~5 seconds after ATP addition. Interestingly, ~80 seconds after cytosolic Ca²⁺ peaked, we observed an acute hyperpolarization of the lysosomal membrane by +75 mV. This hyperpolarization lasted for ~100 seconds, and then resting V Ly was restored. Notably, V Ly restoration coincided with cytosolic Ca²⁺ restoration. The correlation between the cytosolic Ca²⁺ surge and the abrupt lysosomal hyperpolarization suggests the activation of lysosome-resident transporters that contribute to restoring cytosolic Ca²⁺, along with other Ca²⁺-buffering organelles of the cell. For example, the activity of a lysosomal Ca²⁺/H⁺ exchanger (CAX) could elevate V Ly because it would accumulate positive charge within the lysosome due to dicationic Ca²⁺ influx versus monocationic H⁺ efflux. Membrane potential variations are ~10³-fold faster than Ca²⁺ transients, and this, together with the small size of lysosomes compared to the plasma membrane, could explain the sharp changes in V Ly. Our studies provide evidence for a role of Na⁺ or K⁺ in establishing the high membrane potential of the early endosome. The recycling endosomal membrane and the plasma membrane show very similar electrochemical characteristics. The membrane potential of the trans-Golgi network, previously hypothesized to be negligible, is actually as high as that of the lysosome. Yet, unlike the lysosome, it is not completely driven by V-ATPase. Non-invasively interrogating membrane potential in organelles that were previously impossible to address offers the capacity to uncover how membrane potential regulates their function. Protein-based voltage indicators, while powerful, have limited applicability to acidic organelles, as they are pH-sensitive and cannot yet provide absolute membrane potential. Voltage-sensitive dyes are attractive due to their low capacitive loads and pH insensitivity but, on their own, cannot be targeted to organelles. Voltair unites the advantages of voltage-sensitive dyes with the organelle-targetability of proteins to non-invasively measure the membrane potential of organelles. A key requirement for voltage reporters is robust membrane association. Genetically encoded voltage reporters access the membrane of interest from the cytosolic face, while voltage-sensitive dyes such as RVF bind the outer leaflet of the plasma membrane. Voltair probes bind the lumenal leaflet of the organelle membrane. Note that the vector direction of membrane potential as measured by Voltair in organelles is inverted with respect to lysosomal electrophysiology. We purposely keep our sign convention consistent with that of electrophysiology, to also convey the inverted topology of organelle membranes relative to the plasma membrane. Currently, Voltair can probe only endocytic organelles and the trans-Golgi network (TGN). Although one can potentially target synaptic vesicles and secretory vesicles, their small size may limit probe abundance and therefore require more sophisticated imaging.
To apply Voltair to organelles on the secretory pathway, mitochondria, nucleus and peroxisomes, new knowledge of proteins that retrogradely traffic from the plasma membrane to the organelle is required . While Voltair is not two-photon compatible, it can be redesigned to display two-photon compatible dyes for deep tissue imaging. Unlike genetically encoded voltage reporters, Voltair cannot yet be targeted tissue specifically. Thus, mechanisms to target DNA nanodevices to excitable tissues would significantly expand its applicability. However, DNA nanodevices are preferentially internalized by immune cells in vivo ; hence one can readily image organelles in transparent systems such as coelomocytes in nematodes , or microglia in zebrafish brains . When an organelle membrane breaks, the membrane potential is expected to fall to zero. Thus, Voltair could also act as a real-time reporter of organelle damage and membrane repair. As Voltair is a quantitative reporter at the biotic-abiotic interface, it can be used to rationally guide the design of biocompatible electronics . Knowledge of absolute membrane potential will enable us to model cellular responses to electrical stimuli at the level of organelles. Reagents Modified oligonucleotides were purchased from IDT (USA), subjected to ethanol precipitation and quantified using UV absorbance. CellMask™ reagents and TMR-Dextran were purchased from molecular probes/Life Technologies (USA). Maleylated BSA (mBSA) and fluorescent transferrin (Tf-Alexa546) were conjugated according to previously published protocols , , . Pharmacological Lysosomal modulators were purchased from Cayman Chemical (USA). Phenyl triflimide, N-Boc-piperazine, Azido-PEG4-NHS and 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphoethanolamine lipid were purchased from TCI America (USA), Oakwood chemicals (USA), Click Chemistry tools (USA) and Avanti lipids (USA) respectively. All other reagents were purchased from Sigma-Aldrich (USA) unless otherwise specified. Synthesis of RVF Synthesis scheme is shown in . Compound 1a (0.2 g, 0.36 mmol) was suspended in N,N-dimethylformamide (DMF) (1.4 mL) and cooled to 0°C . Triethylamine (2.7 mL) and Phenyl triflimide (0.25 g, 0.72 mmol, 2 eq.) were added dropwise and the reaction was stirred at room temperature for 2 hours. The reaction mixture was then diluted in water and extracted with dichloromethane (DCM) twice. Organic extracts were washed three times with brine and 1M HCl. The product was concentrated via rotor evaporation and diluted in dry DMSO (2mL). N-boc piperazine (3.72g, 20 mmol, 100eq.) was added and the reaction mixture was kept at 100°C overnight. The reaction mixture was then diluted in water, extracted in DCM twice then washed with brine twice. The mixture was dried over anhydrous Na 2 SO 4 and concentrated under reduced pressure. Silica gel column chromatography was performed in 5% methanol in chloroform to get 1b (0.08g, 28%). ESI-MS Expected mass = 728.98, found = 729.0. A reaction tube was charged with 1b (70 mg, 0.0956 mmol), Pd(OAc) 2 (6.87 mg, 0.0306 mmol, 0.32 eq.), tri-o-tolylphosphine (20.4 mg, 0.067 mmol, 0.7 eq.) and ( E )-N,N-dimethyl-4-(4-vinylstyryl)aniline (26.23mg, 1.052 mmol, 1.1 eq.) which was previously synthesized according to literature . The tube was evacuated and backfilled with N 2 three times. 1 mL of dry DMF and 500 μL dry triethylamine were added via syringe, and the reaction was stirred at 110°C overnight. The reaction mixture was diluted in water and extracted with DCM twice. 
The organic extract was washed with brine twice and concentrated under reduce pressure. Silica gel column chromatography was performed in 5% methanol in DCM to get orangish brown solid 1c (20 mg, 25%). ESI-MS , Expected mass = 851.29, found = 851.2. A reaction vial with 1c (5 mg, 6.7 nmol) was placed in 5% TFA in DCM overnight for deprotection. TFA was removed under reduced pressure and Azido-PEG4-NHS ester (26mg, 6.7 nmol, 10.0 eq.), 300 μL dry DMF, and 200 μL triethylamine were added. The mixture was stirred for 4 hours at room temperature. The reaction mixture was then diluted in water, extracted to DCM twice, washed with brine three times, dried over anhydrous Na 2 SO 4 and concentrated under reduced pressure. Silica gel column chromatography was performed with 2% methanol in DCM, slowly increasing the gradient to 10%, to get reddish brown solid 1d (2mg, 30%). ESI- MS, Expected mass = 1025.23, found = 1025.3. Sample preparation Sensing domain (D V, D V RE ): 3’- DBCO modified 38 base strand (10 μM) was coupled to the azide containing RVF (50 μM, 5 eq.) in 20 mM sodium phosphate buffer, pH 7.4, and incubated at RT for 4 hrs , , . Upon completion, unconjugated fluorophores were removed by ethanol precipitation . Targeting domain (D T ): 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphoethanolamine (POPE) was conjugated to NHS-PEG4-Azide using an established protocol . 20 μM of the 5’-DBCO modified 22 base strand (IDT, USA) was coupled to azido-POPE (40 μM, 2eq.) in 20 mM sodium phosphate buffer, pH 7.4 and stirred for 4 hours at RT . Construction of Voltair PM , Voltair IM and Voltair RE : Stock solution of Voltair PM was prepared at a final concentration of 10 μM by mixing D V , D T and D A (Atto647N – 5’ modified strand) at an equimolar ratio in 10 mM sodium phosphate buffer, pH 7.4 . For Voltair IM samples, D V (Voltage sensing strand of Voltair PM ) and D A’ (Atto647N – 3’ modified 38 mer strand) were mixed at an equimolar ratio with a final concentration of 10 μM . For Voltair RE samples, 10 μM of D V RE , D Tf and D A RE were mixed at an equimolar ratio. For all samples, annealing and gel characterization were done according to previous established protocol , . In vitro spectroscopic measurements: Fluorescence spectra were recorded on a FluoroMax-4 scanning Spectro-fluorometer (Horiba Scientific, Edison, NJ, USA). For recording spectra, 100 nM Voltair IM in UB4 buffer (20 mM HEPES, MES and sodium acetate, 150 mM KCl, 5 mM NaCl, 1 mM CaCl 2 and MgCl 2 ) of desired pH, were excited at 520 nm and 650 nm and emission spectra were collected between 525 – 600 nm and 655-750 nm respectively . Cell culture, plasmids and transfection: Human embryonic kidney cells (HEK 293T), BHK-21 cells, human dermal fibroblasts (HDF), RAW 264.7, THP-1 and T-47D cells were a kind gift from Dr. Bryan Dickinson ( University of Chicago), Dr. M. Gack (University of Chicago), J. Rowley’s lab (University of Chicago), Dr. Christine A. Petersen (University of Iowa), Dr. D. Nelson (University of Chicago) and G. Greene (University of Chicago), respectively. COS-7 cells were purchased from ATCC. Cell lines were purchased from ATCC prior to receiving them as gifts and were authenticated by short tandem repeats (STR). All cell lines were checked for mycoplasma contamination using Hoechst-33342 staining. 
Cells were cultured in Dulbecco’s Modified Eagle’s Medium (Invitrogen Corporation, USA) containing 10% heat inactivated Fetal Bovine Serum (FBS) (Invitrogen Corporation, USA), 100 U/mL penicillin and 100 μg/mL streptomycin and maintained at 37°C under 5% CO 2 . HEK 293T cells were passaged and plated at a confluency of 20 – 30% for electrophysiology experiments, and 50 – 70% for transfection and intracellular measurements. The hMSR1 sequence was cloned into the PCS2NXE vector (4,103 bp) containing the CMV promoter for overexpression in mammalian cell lines. The hMSR1-CFP plasmid (5973 bp) was constructed by cloning hMSR1 sequence into pECFP-C1 plasmid (4731 bp). The identity of each construct was confirmed by sequencing, using forward primer (5’ to 3’) GGGACATGGGAATGCAATAG and reverse primer (5’ to 3’) CTCAAGGTCTGAGAATGTTCCC. The mCherry-TGNP-N-10 was a gift from Michael Davidson (Addgene plasmid #55145) and Rab7-RFP was a gift from Ari Helenius (Addgene plasmid #14436, ). Construction of scFv-furin construct is reported previously . HEK 293T cells were transiently transfected with respective plasmids using Trans IT®-293 transfection reagent (MIRUS). After a 4-hour incubation the transfected medium was replaced with fresh medium. Labeling experiments were performed on cells 48 hours post transfection. Electrophysiology: A schematic of the electrophysiology equipment used for whole cell patch clamp recording is shown in . Recordings were performed with an Axopatch 200A amplifier (Molecular Devices), digitized using an NI-6251 DAQ (National Instruments). The amplifier and digitizer were controlled using WinWCP software (Strathclyde Electrophysiology Software). Borosilicate glass capillaries (Sutter) of dimension 1.5 mm x 0.86 mm (OD/ID) were pulled using a Sutter P-97 Micropipette puller (program: Heat – Ramp, Pull – 0, Vel – 21, Time – 1(Delay), Loops – 5). Patch pipettes with resistances between 5-10 MOhm were used in voltage clamping experiments. The patch pipette was positioned using an MP325 motorized manipulator (Sutter). Image Acquisition software Metamorph premier Ver. 7.8.12.0 was linked to an NI-6501 DAQ to enable voltage triggered image acquisition. For all measurements the extracellular solution composition was (in mM) 145 NaCl, 20 glucose, 10 HEPES, pH 7.4, 3 KCl, 2 CaCl 2 , 1 MgCl 2 (310 mOsm) and the intracellular solution composition was (in mM) 115 potassium gluconate, 10 EGTA, 10 HEPES, pH 7.2, 5 NaCl, 10 KCl, 2 ATP disodium salt, 0.3 GTP trisodium salt (290 mOsm). For plasma membrane voltage clamping experiments, 1 μM RVF or 500 nM Voltair PM was incubated with HEK 293T cells for 30 mins in Hank’s Balanced Salt Solution (Thermofisher) at RT. Labelled cells were washed three times with PBS and incubated in extracellular solution for whole cell voltage clamping. Whole cell voltage clamping was performed according to an established protocol . For background subtraction, bleaching correction and lamp fluctuation compensation, imaging field was chosen with at least one more HEK 293T cell that is not clamped. Once clamped, membrane potential is changed from −100 to +100 mV in 10 mV increments at 1000 ms intervals. Around 200 ms after the voltage is changed three images are taken in quick succession. Voltage clamp experiments were also performed with extracellular solutions of different pH, to study the effect of pH on voltage sensitivity of RVF . 
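The clamping-and-imaging schedule described above (−100 to +100 mV in 10 mV steps, 1000 ms per step, three images taken ~200 ms after each step) can be summarized in code. The sketch below is illustrative only: it does not drive the actual amplifier, DAQ or camera, and the set_voltage/acquire_image callables are hypothetical placeholders for whatever hardware interface is in use.

```python
import time

STEP_MV = 10            # command-voltage increment
HOLD_MS = 1000          # time spent at each potential
IMAGE_DELAY_MS = 200    # settling delay after each step before imaging
N_IMAGES = 3            # images acquired in quick succession per step

def run_sweep(set_voltage, acquire_image, start_mv=-100, stop_mv=100):
    """Step the command potential and trigger image acquisition at each level.

    `set_voltage` and `acquire_image` are placeholders for the real
    amplifier/camera calls; nothing here talks to actual hardware.
    """
    frames = {}
    for v in range(start_mv, stop_mv + STEP_MV, STEP_MV):
        set_voltage(v)                       # command the clamped potential (mV)
        time.sleep(IMAGE_DELAY_MS / 1000)    # wait ~200 ms for the step to settle
        frames[v] = [acquire_image() for _ in range(N_IMAGES)]
        time.sleep((HOLD_MS - IMAGE_DELAY_MS) / 1000)
    return frames
```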
Endo-lysosomal electrophysiology: COS-7 cells were treated with 1 μM vacuolin-1 overnight to increase the size of lysosomes to 1-3 μm . Enlarged lysosomes of COS-7 cells were labeled with 500 nM Voltair IM , by 30 mins pulse in HBSS followed by 2 hr. chase in complete media containing 1 μM vacuolin-1 . Enlarged endolysosome containing Voltair IM was pushed out of the ruptured cell (ref) . Borosilicate glass capillaries (Sutter) of dimension 1.5 mm x 0.86 mm (OD/ID) were pulled using the program: Heat – 520, Pull – 0, Vel – 20, Time – 200, Loops – 4. Fire polished patch pipettes with resistances between 15-20 MOhm were used in voltage clamping experiments. After giga-ohm seal formation, break in was performed by a zap protocol (5V: 0.5-5s) till appearance of capacitance transients. In order to minimize fluorescence interference from other lysosomes, patched lysosome is moved away from the cell prior to imaging. For all measurements the cytoplasmic solution composition was (in mM) 140 K-gluconate, 4 NaCl, 1 EGTA, 20 HEPES, pH 7.2, 0.39 CaCl 2 , 2 MgCl 2, 2 ATP disodium salt, 0.3 mM GTP and the composition of pipette solution was 145 NaCl, 2 CaCl 2 , 1 MgCl 2 , 10 HEPES, 10 MES pH 4.6, 10 glucose, 5 KCl. The voltage clamping and imaging protocol was followed as discussed in electrophysiology section above. Microscopy: Wide field microscopy was carried out on an IX83 inverted microscope (Olympus Corporation of the Americas, Center Valley, PA, USA) using either a 100X or 60X, 1.4 NA, DIC oil immersion objective (PLAPON) and Evolve Delta 512 EMCCD camera (Photometrics, USA) and controlled using Metamorph premier Ver 7.8.12.0 (Molecular Devices, LLC, USA). Images were acquired with exposure 100 ms and EM gain at 100 for Atto647N, exposure 200 ms and EM gain 300 for RVF. RVF channel images were obtained using 500/20 band pass excitation filter, 535/30 band pas emission filter and 89016 dichroic. For Atto647N, images were obtained using the 640/30 band pass excitation filter, 705/72 band pass emission filter and 89016 dichroic. For cytosolic calcium recording, fluorescence of Fluo-4 was recorded by exciting at 480 nm and collecting emission at 520 nm. Images were acquired using a 480/20 band pass excitation filter, 520/40 band pass emission filter and 89016 dichroic. Intracellular membrane potential measurements were performed by recording images in the sensing channel (G) and the normalizing channel (R) in cells where each specific compartment was labeled. After acquisition of G and R images, intracellular membrane potential was neutralized (~ 0 mV) by adding 50 μM valinomycin and monensin in high K + buffer, for 20 mins at room temperature. A set of G and R images of same cells were acquired after valinomycin and monensin or pharmacological treatments as shown in . These images of neutralized endosomes were used as a baseline measurement to correct for variations in autofluorescence. To record all endo-lysosomal compartments in the cell, Z-stacks (30 planes, Z distance = 0.8 μm) were captured and a maximum intensity projection was used to produce a single image for analysis. Confocal images were captured with Leica TCS SP5 II STED laser scanning confocal microscope (Leica Microsystems, Inc. Buffalo Grove, IL, USA) equipped with a 63X, 1.4 NA, Oil immersion objective. RVF was excited using an argon laser with 514 nm wavelength, CFP by 458 nm and Atto647N using a He-Ne laser with 633 nm wavelength. 
CellMask orange stain was excited by 543 nm and all emissions were filtered using Acousto Optical Beam Splitter (AOBS) with settings suitable for each fluorophore and recorded using hybrid detectors (HyD). Time lapse Imaging: Lysosomes of COS-7 cells labeled with Voltair IM were sequentially imaged on a widefield microscope with 60X magnification in G and R channel at 10-20s interval for a duration of 20 mins. Image acquisition was briefly paused at indicated time for addition of indicated chemical to cells. To minimize bleaching, time-lapse imaging was done at single focal plane. Z-drift compensator module from IX83 was applied to minimize out of focus drift during time lapse imaging. The extracellular ATP induced cytosolic calcium increase was imaged by labeling COS-7 cells with cell permeable calcium sensitive dye Fluo4-AM ester . Time-lapse imaging of Fluo-4 labeled COS-7 cells were acquired at 1s interval for 20 mins. Competition assay: Maleylated BSA based competition assay was performed according to previously established protocols . Co-localization and labeling experiments: To find out the trafficking time of DNA device in specifically labeling EE, LE or Ly of HEK 293T cells transfected with hMSR1, time dependent colocalization experiments were performed as reported previously , . Fluorescent Transferrin was used to specifically label EE by pulsing it for 10 minute prior to imaging and to label RE with an additional chase time of 30 mins . Rab7-RFP is a well-established late endosome (LE) marker, and was transiently expressed in HEK 293T cells to label LE . Transient expression of TGN46-mCherry specifically labels trans Golgi network (TGN) . Finally, TMR-Dextran a specific marker for lysosome (Ly) after previously reported chase times, was pulsed for 1 hour, followed by 16 hours chase in complete media to label lysosomes . The trafficking time of DNA device with transferrin aptamer and d(AT) 4 tag in labeling RE and TGN, respectively, have been established previously , , . Briefly, recycling endosomes are targeted by pulsing Voltair RE in 1X HBSS, for 10 mins at 37°C, followed by 30 mins of chase in complete media at 37°C. Trans Golgi network is targeted by pulsing Voltair IM to ScFv-Furin transfect HEK 293T cells, in complete media containing cycloheximide (CHX) for 90 mins at 37°C, followed by 90 mins chase. Pharmacological drug treatments: Voltair IM labelled cells were treated with ML-SA1 (20 μM), NS1619 (15 μM) or trans-ned-19 (1 μM) for 15 mins in HBSS solution at room temperature. Torin-1 (1 μM) was treated to labelled cells during the chase period (50 mins), to inhibit mTOR. Bafilomycin-A1 (500 nM) was added to cells for 30 mins in 1X HBSS and incubated at 37°C prior organellar voltage measurements. After acquisition of G and R images of drug treated cells, intracellular membrane potential was neutralized (~ 0 mV) by adding 50 μM valinomycin and monensin in high K + buffer, for 20 mins at room temperature . Image analysis: Images were analyzed with Fiji (NIH, USA). For organellar voltage measurements, regions of cells containing single isolated endosomes/lysosomes in each Atto647N (R) image were manually selected and the coordinates saved in the ROI plugin. Similarly, for background computation, a nearby region outside endosomes/lysosomes were manually selected and saved as an ROI. The same regions were selected in the RVF (G) image by recalling the ROIs. 
After background subtraction, the mean intensity of each endosome (G and R) was measured and exported to OriginPro (OriginLab, USA). A ratio of G to R intensities (G/R) was obtained from these values by dividing the mean intensity of a given endosome in the G image by the corresponding intensity in the R image. To minimize the measurement error due to the low fold change of Voltair probes, the same endosomes or lysosomes were measured after addition of valinomycin and monensin, which neutralizes the membrane potential in the presence of 150 mM KCl. For TGN voltage measurements, total cell intensity was recorded and background subtraction was performed by manually selecting a region outside the cell. For a given experiment, the membrane potential of an organelle population was determined by converting the mean [G/R]V − [G/R]O value of the distribution to voltage values according to the intracellular voltage calibration profile. The mean value of each organelle population across three trials on different days was determined and the final data are presented as mean ± S.E.M. Representative images are shown as pseudo-colored images, where G and R images were thresholded in ImageJ to obtain G′ and R′ images. Using ImageJ's Image Calculator module, G′ images were divided by R′ images to generate an image in which each pixel represents [G/R]V. For whole-cell patch clamp, image analysis was performed using custom MATLAB code. A series of images corresponding to a voltage sweep from −100 to +100 mV was collected and input into the program. By identifying changes in intensity from the first to the last image, a region of interest corresponding to the clamped cell was selected. Other cells present in the image were also selected based on an intensity threshold. The region containing no cells was used to subtract detector background noise from all regions. Intensity from unclamped cells was measured in each image and used to correct for photobleaching or fluctuations in lamp intensity. After background corrections, the average intensity of the patch-clamped cell was measured for each individual image in the series and normalized against the value at −60 mV. Plasma membrane cholesterol modulation: HeLa cells were incubated with either 5 mM methyl-β-cyclodextrin (MβCD) in HBSS alone or 4.5 mM MβCD complexed with 0.5 mM cholesterol for 1 h. The former treatment depletes cholesterol levels in the plasma membrane while the latter treatment increases cholesterol levels in the plasma membrane. HeLa cells treated thus were then labeled with Voltair PM in HBSS for 20 mins and voltage-clamped from −100 to +100 mV while being simultaneously imaged in the red and green channels. Statistical analysis: For comparisons between two samples, a two-tailed two-sample test assuming unequal variance was used. For comparison of multiple samples, one-way ANOVA with a post hoc Tukey or Fisher test was used. All statistical analysis was performed in Origin (Student version). Violin plots show the kernel-smoothed distribution of data points, and embedded box plots indicate the 25th–75th percentiles, with the median shown as a white circle and error bars representing the standard deviation.
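A compact sketch of the per-organelle ratiometric conversion described in the image-analysis section above is given below. It is not the authors' Fiji/OriginPro workflow: the intensity values, the linear calibration slope and the neutralized-baseline correction are all illustrative assumptions.

```python
import numpy as np

# Illustrative background-subtracted mean intensities for three organelles,
# before (V) and after (O) valinomycin/monensin neutralization of the potential.
g_v = np.array([820.0, 905.0, 760.0]); r_v = np.array([1000.0, 1100.0, 950.0])
g_o = np.array([700.0, 770.0, 655.0]); r_o = np.array([1000.0, 1100.0, 950.0])

gr_shift = g_v / r_v - g_o / r_o   # [G/R]_V - [G/R]_O for each organelle

# Convert the ratio shift to membrane potential using an assumed linear
# intra-organellar calibration (slope in ratio units per mV; illustrative).
CAL_SLOPE = 0.001
potentials_mv = gr_shift / CAL_SLOPE

print("per-organelle V (mV):", np.round(potentials_mv))
print("mean +/- SEM: %.0f +/- %.0f mV"
      % (potentials_mv.mean(),
         potentials_mv.std(ddof=1) / np.sqrt(potentials_mv.size)))
```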
EXPOSURE OF PEDIATRIC EMERGENCY PATIENTS TO IMAGING EXAMS, NOWADAYS AND IN TIMES OF COVID-19: AN INTEGRATIVE REVIEW
Computed tomography (CT), magnetic resonance imaging (MRI), X-ray and ultrasound (US) are widely used imaging examinations in Urgency and Emergency Services (UEs) for pediatric diagnosis and follow-up. However, these resources should be used carefully, together with the clinical judgment from health professionals, so that there is no overuse. Nowadays, due to technological advances, there has been an increase in the use of examinations, especially CTs. This is the physicians’ preferred choice in the UEs, because of the fast digitalization of images, which reduces the time of sedation in children. Therefore, there was an increase in the number of studies about the potential risks of exposure to ionizing radiation caused by imaging examinations, due to the tendency to develop genetic changes and future malignancies. On March 11, 2020, the World Health Organization (WHO) declared a pandemic caused by COVID-19. By July 9 of the same year, more than 11.8 million cases and 544 thousand deaths had been reported around the world. Even though infected children manifest less severe symptoms in comparison to adults, they can be hospitalized and exposed to ionizing radiation exams. Therefore, there is a growing concern about the potential overuse of imaging examinations in this population. This review aimed to analyze literature data about the unnecessary exposure of pediatric emergency patients to ionizing agents from imaging examinations, nowadays and in times of COVID-19. This is an integrative literature review (ILR) whose purpose is to synthetize and analyze studies that are available, from several methodological approaches, about the theme in question. Therefore, the identification of a large sample allows the evaluation, the critical discussion of the results and the development of a conclusion based on scientific evidence. To elaborate the research question, we used the PICO strategy - population, intervention, comparison and outcomes . This review aims at answering: “What does the literature show about the unnecessary exposure of pediatric emergency patients to ionizing agents from imaging examinations, nowadays and in times of COVID-19?”. Then, we continued with the following stages of ILR: determination of databases, application of descriptors, and inclusion and exclusion criteria (Identification); analysis of titles and content of the abstracts of the identified articles (Screening); evaluation and critical inspection of studies in full (Eligibility); and definition of the analyzed articles for the confection of the IRL (Inclusion). We selected the articles between April and July, 2020, using the following databases: Virtual Health Library, PubMed and Scientific Electronic Library Online (SciELO). The search descriptors were used in two stages, and were selected from the Health Science Descriptors (DeCS), combined in pairs based on the Boolean logic: AND or OR. In the first search, we used: (pediatrics) AND (emergencies) AND (diagnostic imaging) AND (medical overuse). In the second search, the following were used: (Coronavirus infections) OR (COVID-19) AND (pediatrics) AND (emergencies) AND (diagnostic imaging). The search in the databases respected the following inclusion criteria: articles available in full, in Portuguese or in English, published from 2016 to 2020 (1 st search) or from 2019 to 2020 (2 nd search), and articles that contemplated the exacerbated use of imaging exams in pediatric emergency rooms. 
The exclusion criteria were: articles without adherence to the theme and texts duplicated across the databases. Sixty-one publications were identified: three in the Virtual Health Library (5%), 58 in PubMed (95%), and none in SciELO (0%). In the identification stage, we excluded four texts because they were duplicates, and 16 for not being available in full. Therefore, in the screening stage we analyzed 41 articles. Of these, after reading the title and abstract, 13 were excluded for not being related to the theme, and eight for not answering the research question. Thus, 20 articles were included in the eligibility stage; three were excluded after the texts were read in full, for not answering the research question. So, the final sample of this ILR comprised 17 articles. For data extraction and analysis, we used a Microsoft Excel® spreadsheet that included: authors, country of origin/year of publication, journal, study method and type of imaging examination analyzed. Copyrights were respected by preserving the content presented by the authors and by referencing the information extracted from articles available in the public domain. Of the 17 analyzed articles, four were performed in the United States of America (USA); four in China; two in Italy; two in Israel; two in the Republic of Korea; one in Canada; one in the Netherlands; and one in Turkey. All articles were published in English. Regarding the year of publication, the highest number was in 2020 (seven articles), followed by 2018 (four articles), 2019 (two articles), 2017 (two articles) and 2016 (two articles). All of the analyzed articles addressed the overuse of imaging examinations in pediatric emergency patients nowadays and in times of COVID-19. The analysis of the selected studies allowed the definition of seven categories. 1st category - the relation between computed tomography and magnetic resonance imaging and the use of X-ray in pediatric emergency. A retrospective study carried out in the USA observed that the main imaging examinations used in the clinical investigation of pediatric patients are CT and MRI, each with pros and cons that define its utility. In comparison to MRI, CT tends to be cheaper, faster and more sensitive to bone fractures, besides presenting high diagnostic accuracy. However, a major disadvantage is the exposure of patients to ionizing radiation. On the other hand, MRI is less accessible, more expensive and less well tolerated by younger children when compared to CT. However, it is a more favorable alternative with regard to reducing radiation, which justifies the high number of studies on MRI performed in the past few years in order to reduce the exposure of pediatric patients to ionizing agents. A cohort study performed in Israel points out that thoracic X-ray is used for different respiratory emergencies, especially bronchiolitis, even though the American Academy of Pediatrics (AAP) does not recommend its use. A similar orientation was identified in the North-American analysis, which advises against the use of thoracic X-ray in acute exacerbations of asthma. The analysis conducted in the Republic of Korea stated that many physicians use X-ray in the screening of non-specific abdominal symptoms because of diagnostic difficulties. 
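As a quick consistency check on the selection flow reported above, the short sketch below (not part of the original review) reproduces the arithmetic of the identification, screening, eligibility and inclusion counts using only the figures stated in the text.

```python
# Minimal sketch: checking that the article selection flow reported above is
# internally consistent (all numbers come from the text).

identified = 3 + 58 + 0          # VHL + PubMed + SciELO = 61
screened   = identified - 4 - 16 # minus duplicates and texts not available in full
eligible   = screened - 13 - 8   # minus off-theme and not answering the question
included   = eligible - 3        # minus full-text exclusions

assert (identified, screened, eligible, included) == (61, 41, 20, 17)
print(f"identified={identified}, screened={screened}, "
      f"eligible={eligible}, included={included}")
```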
Therefore, there is a gap between the recommendations defined in protocols and the reality of medical practice, which corroborates the exaggerated use of imaging tests in the pediatric population. 2nd category - the relation between ionization and possible malignancies associated with imaging examinations. An analysis conducted in the USA showed that CT is related to a future risk of developing malignancies, especially in younger patients, because it is an examination that uses ionizing radiation. Another study showed that, in childhood, cells grow fast and are more prone to developing cancer when exposed to ionizing radiation. Besides, the small body area contributes to a higher dose of accumulated radiation. It was estimated that 1.5-2% of malignant neoplasms in the USA may have been caused by radiation from CTs. In addition, it is assumed that CTs in the pediatric population may be related to 5,000 future annual cases of cancer. A study carried out in the Netherlands showed that the performance of a head CT increases the risk of developing a future brain tumor. This risk increases with additional CTs and becomes even higher when the exposure involves children aged less than 5 years. For patients in this age group, the assumption is that one case of leukemia will appear for every 5,250 head CTs. 3rd category - the financial impact of the excessive use of imaging examinations in the hospital environment. A retrospective analysis of administrative databases included visits to pediatric UEs from 2006 to 2016 in four hospitals in Canada (1,783,753 visits) and 26 hospitals in the USA (21,807,332 visits). The observation was that the North-American and Canadian populations have different financial and care structural organizations. In both countries, professionals are paid for the number of medical services provided; however, Canadian doctors have a global view of hospital management and governmental budget restrictions, which leads to a reduction in the total number of services requested and provided per year. Besides, in the USA there is a higher tendency to perform imaging examinations, even when there is no indication, in order to avoid a lack of documentation in a possible lawsuit; in Canada, by contrast, lawsuits for medical negligence amount to only 25% of the cases seen in the USA. Thus, the conclusion is that North-American physicians, in comparison to Canadian physicians, perform excessive procedures, which results in a major increase in hospital expenses. A similar fact was identified in a retrospective study carried out in Turkey, which stated that the improper use of imaging examinations causes major economic losses for hospital UEs. Imaging examinations, when performed without necessity, generate an approximate cost of US$ 20 billion; not performing improper imaging examinations, regardless of type, could generate an annual saving of US$ 81 billion. 4th category - the difference between hospitals in the use of imaging examinations in the pediatric Urgency and Emergency services. There is a tendency for UEs in general hospitals to request more imaging examinations than exclusively pediatric hospitals, especially when it comes to CTs. Another variation occurs between teaching and non-teaching hospitals, as observed in a study carried out in the Netherlands. 
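To make the risk figures cited above concrete, the following sketch turns them into expected case counts. Only the 1-in-5,250 leukemia figure and the 1.5-2% attributable fraction come from the text; the exam volume and the total number of neoplasms are hypothetical values chosen purely for illustration.

```python
# Minimal sketch (illustrative only): expected case counts implied by the risk
# figures cited above. The volumes below are hypothetical, not values from the
# review; only the 1-in-5,250 and 1.5-2% figures come from the text.

head_cts_under_5 = 100_000                      # hypothetical annual exam volume
expected_leukemia_cases = head_cts_under_5 / 5_250
print(f"Expected leukemia cases: {expected_leukemia_cases:.1f}")

us_malignant_neoplasms = 1_700_000              # hypothetical annual total, for scale
low, high = us_malignant_neoplasms * 0.015, us_malignant_neoplasms * 0.02
print(f"CT-attributable neoplasms (1.5-2%): {low:.0f}-{high:.0f}")
```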
This difference was analyzed with the chi-square test, which identified fewer head CTs in two regional, non-teaching general hospitals (23.3 and 25.9%) in comparison with the teaching hospital (44.1%). A similar fact was observed in a Canadian study that compared the indications for imaging examinations between pediatric emergency units in Canada and in the USA. The use of thoracic X-ray to manage bronchiolitis (absolute difference of 6.8%) and asthma (absolute difference of 0.7%) was lower in Canada, as was the use of abdominal X-ray for constipation (absolute difference of 23.7%) and abdominal pain (absolute difference of 20.6%). 5th category - instruments used to reduce the exposure to ionizing agents in pediatric emergency patients. Several instruments are used to guide health professionals in the use of imaging examinations, in order to reduce the unnecessary exposure of pediatric UE patients to ionizing agents. The Alvarado Score is used for abdominal CT. For head CT, the following instruments were reported: Pediatric Emergency Care Applied Research Network (PECARN), National Emergency X-Radiography Utilization Study (NEXUS), Children’s Head Injury Algorithm for the Prediction of Important Clinical Events (CHALICE) and Canadian Assessment of Tomography for Childhood Head Injury (CATCH). The last two, however, were considered insufficiently reliable for clinical use due to their low sensitivity. The Alvarado Score is an instrument used for acute appendicitis and abdominal pain, in order to reduce and limit the use of CT, thus favoring the use of US and MRI. However, adherence to this instrument was not effective in reference hospitals, unlike pediatric hospitals. The PECARN is a clinical decision instrument widely used in hospitals to guide head CT in cases of traumatic brain injury (TBI). The focus of this tool is to identify patients at low risk of developing major clinical complications, without the need to be submitted to a CT. The PECARN not only has high sensitivity and low specificity, but also a negative predictive value of approximately 100% for TBIs of high clinical relevance. Besides these characteristics, the PECARN can also be used to detect and classify the severity of pathologies, with 74.8% sensitivity and 91.7% specificity. The instrument allows the duration of hospitalization to be estimated and, therefore, leads to a reduction in hospital resources and in the patient's contact with radiation. This instrument offers excellent screening, comparable to medical judgment, and is therefore recommended by the AAP. NEXUS is a clinical decision instrument used for pediatric patients with TBI, which aims at assisting physicians in identifying low-risk patients, who do not need a CT request, and high-risk patients, who will require intervention. The great differential of this tool is the use of clinical judgment to conduct this risk stratification. The instrument properly classified all high-risk patients who should be submitted to neurosurgery, presenting a 100% sensitivity level; however, a study pointed out that its real sensitivity is 87.2%. Through the use of NEXUS, together with clinical judgment, head CT requests decreased by up to 34% among low-risk pediatric patients. Comparing the PECARN and the NEXUS instruments, the conclusion was that sensitivity is similar in both cases, even though the analyzed samples are different. 
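The sensitivity, specificity and negative predictive value figures quoted above for decision instruments such as PECARN and NEXUS all derive from a 2x2 table of rule result versus outcome. The sketch below shows that derivation using invented counts; it is not data from any of the reviewed studies.

```python
# Minimal sketch (hypothetical counts, not data from the reviewed studies):
# how the performance metrics quoted above for decision instruments such as
# PECARN/NEXUS are derived from a 2x2 table of rule result vs. outcome.

tp, fn = 95, 5       # clinically important TBI flagged / missed by the rule
fp, tn = 400, 500    # no important TBI, rule positive / rule negative

sensitivity = tp / (tp + fn)   # proportion of important TBIs detected
specificity = tn / (tn + fp)   # proportion of children without important TBI cleared
npv = tn / (tn + fn)           # probability a rule-negative child truly has no important TBI

print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, NPV={npv:.1%}")
```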
There is a difference in how children are classified regarding CT requests, considering that PECARN was developed for all patients with TBI, whereas NEXUS assesses only children previously selected by clinical judgment. This additional criterion of NEXUS directly reduces unnecessary imaging examinations by approximately 10%. 6th category - proposals of intervention. The need to use clinical judgment together with instruments guiding imaging examination requests was emphasized in order to optimize diagnosis in pediatric UEs. These tools not only add to the knowledge of the medical professional, but also allow low-risk patients to be safely excluded from imaging examination requests, preventing overuse. Reimbursement policies based on the quality of the service provided, rather than on the number of procedures, can be an alternative to reduce imaging examinations and the prescription of medication. Auditing and collaborative benchmarking with the hospital staff were suggested in order to reduce examination overuse. Besides, UEs of general and pediatric hospitals can cooperate by providing an integrated service for patients and sharing pediatric guidelines. Emergency training programs in general hospitals should highlight the specificities of children in comparison to adults. Among the analyzed articles, some implemented effective proposals to reduce the use of imaging examinations. In the cohort study, an intervention was implemented that aimed at limiting the use of thoracic X-ray to diagnose bronchiolitis and at assessing whether pediatric UEs follow the current guidelines. Before the intervention, the radiography rate was 44%; it then decreased to 36.6%. A similar reduction occurred in hospitalization rates, which decreased from 76.8 to 69.8%. The conclusion was that the proposed approach succeeded in reducing financial costs, pharmacological treatments and exposure to ionizing agents, as indicated in AAP guidelines. The authors suggested that similar interventions should be implemented in other pediatric UEs. Another study analyzed the implementation of a policy intended to reduce the use of CT in patients with suspected peritonsillar abscess, recommending that UE professionals request evaluation by an otolaryngologist before ordering imaging examinations in pediatric patients with a nonspecific physical examination. The efficacy of this policy was demonstrated by a 13% reduction in the use of CT in the analyzed population. This showed that evaluation by an expert with clinical experience reduces the number of unnecessary complementary examinations and leads to more accurate requests. 7th category - use of imaging examinations in pediatric patients in times of COVID-19. Infection by SARS-CoV-2 in pediatric patients has shown milder, non-typical symptoms and lower mortality rates in comparison to adult patients. One of the proposed explanations is that the immune system of children is still immature, which leads to a reduced inflammatory effect - and lower cytokine release - and, consequently, lower clinical expression. Imaging examinations are essential for the diagnosis, early detection and monitoring of COVID-19. CT is a widely used instrument in the investigation of the infection, even though it does not distinguish it from other viral pneumonias. 
However, CT exposes patients to unnecessary radiation, and health professionals to a higher risk of cross contamination in the hospital. Thus, in March 2020 the American College of Radiology advised against the use of this examination as a primary diagnostic method. CT should nonetheless be chosen in some clinical situations, together with reverse transcription-polymerase chain reaction (RT-PCR). Thoracic X-ray presents low sensitivity and specificity in the detection of pneumonia caused by COVID-19, whereas CT is better at detecting changes in the early stage of the disease; for these reasons, CT is the alternative of choice over thoracic X-ray suggested by some radiology societies. However, a retrospective study from a Chinese center, which analyzed nine pediatric patients with COVID-19, demonstrated that most CTs did not show changes; only two children had minor unilateral ground-glass opacities. In another Chinese retrospective study that analyzed 25 infected children, 24 underwent CT; of these, eight (33.3%) did not present radiological changes. These disparate data suggest that further studies are necessary to verify the reliability of CT in the infected pediatric population. Studies have shown that lung ultrasound (LUS) is a reliable alternative for the diagnosis of the new coronavirus. One advantage of LUS is that it is more sensitive than thoracic X-ray and does not expose children to the ionizing radiation present in other imaging examinations. LUS can be performed at the bedside by the medical team, reducing the risk of cross contamination. It also provides reliable data for the evaluation, diagnosis and clinical follow-up of acute respiratory failure. When a pediatric patient is admitted to the UE with symptoms suggestive of the new coronavirus and visible lung impairment on LUS, there is a high chance that the child has viral pneumonia. Thus, the examination can be used as a standardized tool for differential diagnosis and the early evaluation of patients with suspected COVID-19. Therefore, it is necessary to establish guidelines for pediatric cases of infection by the new coronavirus, in order to prevent the overuse of examinations in this population. Besides, it is important to train doctors from different specialties to recognize the pathological findings of LUS and to store the results in a database, in order to create, in the future, an automatic algorithm to identify these echographic patterns. However, the use of other imaging examinations, such as CT and thoracic X-ray, should not be ruled out, considering the fast evolution of SARS-CoV-2 infection and the different clinical stages of the disease. This review identified that, nowadays, there is a tendency toward the exacerbated use of imaging examinations in pediatric patients in UEs. Therefore, it is necessary to train hospital clinical staff, adopt clinical decision instruments and develop efficient protocols that take the singularity of the child into account. This will bring short- and long-term benefits: a reduction in the number of examination requests, enabling savings in hospital costs and reducing the exposure of pediatric patients to ionizing agents, which can cause future malignancies. Because of the infection by the new coronavirus, strategies are necessary so that there is no overmedicalization of the pediatric population. 
One such strategy is the creation of guidelines that limit the use of examinations with ionizing radiation and favor the use of LUS. This would also make it possible to gather, afterwards, a database of characteristic ultrasound and radiological findings to facilitate the diagnosis of SARS-CoV-2 infection. The analyzed studies demonstrate the importance of this theme and its global reach, especially across the North American, European and Asian continents. However, no Brazilian studies on the theme were found; conducting them is recommended in order to follow the trends of international research and to validate the aforementioned instruments. 
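The conclusion above suggests storing LUS findings in a database so that an automatic pattern-recognition algorithm can eventually be trained. The sketch below shows one possible minimal record layout for such a database; every field name is invented for illustration and is not specified in the review.

```python
# Minimal, hypothetical sketch of a record for the LUS findings database
# suggested above; all field names are invented for illustration and are not
# part of the review.

from dataclasses import dataclass, field
from typing import List

@dataclass
class LusFinding:
    patient_age_months: int
    rt_pcr_positive: bool
    b_lines_present: bool          # example echographic pattern of interest
    pleural_irregularity: bool
    consolidations: bool
    notes: str = ""

@dataclass
class LusDatabase:
    records: List[LusFinding] = field(default_factory=list)

    def add(self, record: LusFinding) -> None:
        self.records.append(record)

db = LusDatabase()
db.add(LusFinding(24, True, True, False, False, "mild unilateral involvement"))
print(len(db.records), "record(s) stored")
```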
Envisioning Academic Global Oncologists: Proposed Competencies for Global Oncology Training From ASCO
52c27db8-36d2-4348-8dd7-4e6f86a57c17
11018164
Internal Medicine[mh]
Cancer is a significant health care problem and its burden is increasing in low- and middle-income countries (LMICs), which together are estimated to account for >70% of the nearly 20 million annual incident cancers. By 2030, predictions suggest that 13 million people will die of cancer each year, and three quarters of these deaths will be in LMICs. It is possible that this burden is underestimated, given that LMICs often lack representative cancer registries and adequate data to better understand and respond to trends in cancer incidence. Limited access to cancer prevention, diagnosis, and treatments has resulted in cancer survival rates in LMICs that are less than one third of survival rates in high-income countries. As science and innovation continue to lead to improvements in cancer outcomes in high-resource settings, most of these gains have not been achieved in LMICs. To address growing global cancer inequities, cancer control actions and priorities need to be adapted to account for localized patterns of risk factors and cancer types, available health care infrastructure and resources, and societal and cultural norms. Efforts to address global cancer inequities are increasingly referred to as global oncology, which ASCO has defined as collaboratively addressing disparities and differences in cancer prevention, care, research, education, and the disease's social and human impact around the world. It includes a full spectrum of activities ranging from epidemiology to implementation science to public health policy. There is a vital need to train oncologists of all disciplines who are committed to global oncology to work collaboratively with health care providers to develop sustainable capacity and infrastructure for clinical oncology care, research, and education that is resource-appropriate and responsive to locally driven needs and priorities. Formalizing the core competencies for global oncology training is key to advancing this field. Such an outline of core competencies for global oncology can build on existing global curricula in oncology. The European Society for Medical Oncology (ESMO) and ASCO published joint recommendations outlining a standardized medical oncology training curriculum with the competencies required to qualify as a medical oncologist, and they update the curriculum periodically. The curriculum published in 2016 includes a chapter on Cancer Care Delivery in Low-Resource Environments, with the objective that all medical oncology trainees should fundamentally understand the global cancer burden and the challenges of treating cancer with limited resources and in settings with weaker health care infrastructure. The chapter's basic outline of awareness, knowledge, and skills is intended for inclusion within the standard curriculum and competencies for all medical oncology training programs. More recently, the 2023 recommendations have emphasized this content in the cancer control, cancer prevention, and cancer epidemiology sections. For those intending a career focused on specialization in global cancer care and control, further training beyond this basic knowledge and awareness is necessary. Well-trained clinicians across all levels of resource settings, equipped with advanced global oncology skills and competencies, are needed to solve complex global cancer challenges. 
This requires training programs that include hands-on field experience and high-quality mentoring, opportunities for a defined career path in global oncology, as well as research funding and institutional support. In the United States, there is evidence of growing interest and engagement in global oncology on the part of academic cancer centers. The National Cancer Institute (NCI)–Designated Cancer Centers (NDCCs) are active in global oncology through Cancer Center–led initiatives funded through NCI and through external funding sources. The NCI Center for Global Health (CGH) periodically conducts a global oncology survey of NDCCs to understand the scope of global cancer research and training led by NDCCs and funded outside of the National Institutes of Health (NIH)-funded portfolio. In 2021, the NCI CGH collaborated with ASCO and partners to conduct a Global Oncology Survey, which showed that a substantial amount of global oncology research and training is conducted by NDCCs, with 86% reporting involvement in global oncology and 39% having formal global oncology programs. Most of the collaborative partnerships have focused on LMICs. Fifty-four percent reported at least some level of didactic global oncology training at their cancer center, and 86% reported a collective total of 447 non–NIH-funded global oncology projects, 30% of which focus on capacity building or training. Comparing the 2021 survey results with a similar survey conducted in 2018 shows that interest in the global oncology field is expanding among trainees from a variety of disciplines at NDCCs. In response to the increasing interest among oncology trainees and faculty in seeking experiences and specialization in global oncology as a career focus, ASCO established an Academic Global Oncology Task Force (AGOTF) in 2017 to better define and formalize the field of academic global oncology. The task force was asked to collect and analyze key issues and barriers toward the recognition of global oncology as an academic discipline, with an emphasis on training, research, and career pathways. The AGOTF created a set of 13 recommendations in three primary areas: Global Oncology Training, Global Oncology Research and Practice, and Career Paths and Professional Development in Global Oncology. These recommendations, shown in Table , were approved by the ASCO Board of Directors in 2019 and published in 2020. Within its Global Oncology Training recommendations, the Task Force recommended that ASCO outline competencies for those seeking to specialize in the field of global oncology. These global oncology competencies were intended to be distinct from, and build upon, standard training curricula for oncology specialties (medical, surgical, radiation, pathology, radiology, nursing, or other oncology specialty or subspecialty). The resulting set of competencies is outlined and reviewed in detail in this manuscript. They are divided into two categories: core global health competencies that can be obtained through participation in a standard global health academic program, and cancer-specific global oncology skills and knowledge that could be provided by a select number of training programs that specialize in this field. The AGOTF was composed of nine members from different resource settings and backgrounds, and a liaison from the ASCO Professional Development Committee. 
The committee members were identified and appointed by the ASCO Board of Directors with consideration for each individual's experience and expertise in global oncology, leadership within ASCO, and diversity of specialty, geography, and sex. The AGOTF was supported by staff from the ASCO International Affairs and Professional Development Committees. The Task Force worked to delineate what subjects global oncology training programs should include and the competencies trainees need to acquire during the training period. To accomplish this work, the AGOTF conducted a series of in-person and virtual meetings between 2017 and 2018 among Task Force members and additional key stakeholders. Although focused primarily on the structure of US-based oncology training programs, these global oncology competencies were intended to be applicable to every oncologist with an interest in academic global oncology, no matter where they trained. The AGOTF members developed the proposed academic global oncology competencies through a process that included the review of: (1) available data supporting the growing need for academic global oncology investigators and practitioners; (2) current formal education related to global oncology/global health that US-based trainees receive in standard residency/fellowship training; (3) supplemental global health curricula/competencies from oncology programs with some existing training in global oncology; and (4) existing global health curricula/competencies from training programs in other medical disciplines. From these reviews, the AGOTF defined competencies, including knowledge and skills, that those who engage in global oncology efforts as a significant part of their career require to work within differing regions and resource environments. The resulting proposed global oncology core curriculum and recommended core competencies are outlined in this manuscript. Competencies Required for Specialization in Global Oncology. The following are competencies ASCO recommends that a trained and certified oncology professional (medical, surgical, radiation, pathology, radiology, genetics, nursing, or other oncology specialist or subspecialist) should complete to perform as a specialist in academic global oncology. The recommended Global Oncology Competencies consist of knowledge-based competencies and skill-based competencies, each of which has topics that relate to global health generally and topics that are cancer-specific. It is envisioned that the global health competencies could be obtained through participation in standard global health programs, such as a Master of Public Health degree program, or integrated into a formal Global Oncology Program. Knowledge-Based Competencies. The knowledge a global oncologist should develop includes general global health knowledge that is pertinent to global oncology and knowledge that is cancer-specific (Fig ). Global Health Knowledge and Cancer-Specific Knowledge. Knowledge in these specific areas is critical to facilitate understanding of general global health issues, challenges and priorities, global and national health policies and regulatory bodies, differences in health systems infrastructure and capacity, the impact of cultural and religious differences on health care systems, and ethics in the context of global health and human rights. In addition to the knowledge required in general global health competencies, proficiency in cancer-specific competencies places these topics in the cancer context. Epidemiology. 
Understanding the basis for estimates and the limitations of epidemiologic data is critical among global health competencies. As global oncologists, professionals need to be aware of incidence, prevalence, and mortality rates for all health issues impacting populations. Furthermore, they need to understand the varying types and accuracy of the epidemiologic data that are available, and the benefits and risks of extrapolating from these data. This includes the incidence/prevalence of communicable and noncommunicable diseases (NCDs) within a country or region, including common comorbid conditions (cardiac disease, pulmonary disease, and diabetes) and viral-related illnesses (such as HIV, hepatitis B virus, human papillomavirus, etc) potentially associated with cancers. As a complement, a global oncologist should understand the epidemiology of cancer around the world, including incidence, prevalence, and mortality rates by country and region. This includes an awareness of differences in the etiology and pathophysiology of cancer across countries and regions, particularly as related to infectious disease–associated cancers more commonly seen in LMICs, including Kaposi sarcoma, Burkitt lymphoma, nasopharyngeal carcinoma, gastric cancer, hepatocellular carcinoma, and cervical cancer. Awareness of projections of current trends in cancer risk factors and exposures and their impact on the epidemiology of cancer is essential, as well as the potential impact of prevention and risk reduction strategies, including vaccination and screening. It is important to understand the contributing factors that result in differences in important cancer outcomes, such as overall survival, cancer-specific survival, and quality of life, during oncologic treatment and survivorship. Culture. Cultural, societal, and religious influences on health care systems and behaviors, such as the use of traditional healers, traditional medicines, alternative medicine, and social media influences, need to be understood by those specializing in global oncology to better appreciate the needs of, and serve, the community. Furthermore, cultural practices and norms around disclosure of health status to ill patients (eg, in some cultures, it is not typical to share poor prognoses directly with the patient) should be understood. Global oncology specialization requires an understanding of cultural differences in public perceptions specifically of cancer, cancer awareness and myths, cancer prevention, cancer treatment, and cancer policy. An awareness of alternative and/or complementary treatment approaches to cancer that might be common and/or accepted in a region is important. Understanding the role that traditional healers, tribal and religious leaders, and family members may play in cancer decision making and acceptance of treatment options is critical. Government and national policy. It is important to recognize the roles, capabilities, and limitations of national and local governments in directing public health activities. Knowledge of health care coverage, government priorities in health, and national policies related to health is a key requirement. The structure of the Ministry of Health in a given country/state and the roles of its different departments should be understood. 
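The epidemiology competency described above centers on incidence, prevalence, and mortality rates. The sketch below shows the basic calculation of crude rates per 100,000 population; the numbers are hypothetical, and real estimates would additionally use age-standardization and registry-quality data.

```python
# Minimal sketch (hypothetical numbers): crude incidence and mortality rates
# per 100,000 population, the basic quantities referenced in the epidemiology
# competency above. Real estimates would use age-standardization and registry data.

def crude_rate_per_100k(events: int, population: int) -> float:
    return events / population * 100_000

new_cases, deaths, population = 12_500, 7_800, 9_400_000   # hypothetical country-year
print(f"incidence: {crude_rate_per_100k(new_cases, population):.1f} per 100,000")
print(f"mortality: {crude_rate_per_100k(deaths, population):.1f} per 100,000")
```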
In the cancer context, understanding the structure and roles of different government institutions (including Ministries of Health and Finance) as related to national cancer-related health care policies and regulations is important. This includes comprehension of the regulatory structure for authorizing and covering cancer diagnostics and treatments within a country where the global oncologist may be focusing. Knowledge of the core elements of a national cancer control plan (NCCP) and the role of NCCPs in guiding and prioritizing a country's efforts to reduce cancer incidence and mortality and to improve the quality of life of patients with cancer is essential. Working within any country, a global oncologist should understand the status of the NCCP and its operationalization and implementation in the areas of cancer prevention, early detection, diagnosis, treatment, and palliation. Understanding the available data for cancer-related decision making and monitoring, including the status and quality of cancer registries in a country or region, is essential. United Nations system and international policy. Any global oncology practitioner should understand the United Nations (UN) system and its affiliated bodies, its policies on health, and its relationship with UN member states. In particular, an understanding of the structure and mission of the WHO is critical, since this body recommends the basic health care that nations should provide and the vaccination regimens that should be adopted. Additionally, the WHO maintains the WHO Essential Medicines and Essential Diagnostics Lists, which are used to inform individual country policy around drug and diagnostics access. A broad awareness of UN action and commitment on NCDs and of the health-related sustainable development goals and their targets should be included. An understanding of the UN system and its affiliated bodies with specific roles in global cancer control should include the International Agency for Research on Cancer and its Global Cancer Observatory, the International Atomic Energy Agency's Programme of Action for Cancer Therapy, and the WHO's cancer programs, priorities, and guidelines. A knowledge of UN High-Level Meetings and resolutions related to cancer, including how cancer fits into the NCD framework and the sustainable development goals, is essential. Awareness of the cancer-specific sections of the WHO's Essential Medicines and Diagnostics Lists, and of how the value of the drugs and diagnostics included on these lists is determined, should be included. Health systems. Basic knowledge of the structure and economics of health care and its impact on health outcomes, and an understanding of common organizing models for national health systems, should be included in basic global health training. This should include models for health system priority setting and allocation of health care resources by governmental authorities. Knowledge of the infrastructure of a health system, including cancer-related facilities, oncology equipment and other critical resources for cancer care, and the oncology workforce, is critical when working in global oncology. Understanding the existing services and clinical expertise in pathology, surgery, radiation and medical oncology, as well as other resources and expertise required for cancer care, is important. 
There should be an understanding of national and institutional approaches to the delivery of cancer prevention, diagnosis, treatment and supportive/palliative care, and of the role of national cancer control plans, cancer treatment guidelines, and care pathways. In resource-poor settings, there is a paucity of trained professionals in the various disciplines involved in cancer care. Global oncology specialists practicing in an LMIC need to have an understanding of the task shifting that may occur because of a limited workforce, where a task normally performed by a physician is transferred to a health professional with a different/lower level of education and training. Knowledge of the cancer burden by health organizations is essential to provide optimal care and to accurately estimate the number of health care workers needed to cover the needs of the cancer population. A global oncology specialist should have an understanding of the financing, major drivers of cost, and sources of payment/insurance coverage that affect the availability of cancer medications and procedures. Frameworks for assessing the relative value of clinical interventions, such as those developed by ESMO and ASCO, as well as methods for measuring the quality of care that is delivered, should be included. Ethics. In terms of global health knowledge, familiarity with ethical considerations in the context of global health and human rights is essential, including knowledge of disparities in the availability of and access to health care resources and treatment, and the role of ethics in the conduct of genetic, genomic, and pharmacogenomic testing. Understanding ethics in the context of research, including the conduct of clinical research, protection of human subjects, and informed consent, should be included. Cancer disparities. Finally, from a cancer-specific viewpoint, it is paramount that the global oncology specialist recognizes the tremendous disparities that exist globally in awareness and understanding of, and access to, cancer prevention, diagnosis, treatment, and supportive care, and the associated disparities in cancer outcomes. Global oncology training should include knowledge of strategies to maximize the provision of evidence-based, quality cancer care in settings of limited resources. It is important to understand models and approaches for delivering quality cancer control that is culturally sensitive and appropriate to low-resource settings, including the use of tiered clinical guidelines that can be applied across contexts with differing resources. Skill-Based Competencies. A global oncology training program needs to equip trainees with skills to work collaboratively with in-country experts, in a manner respectful of local resources and culture, and to establish local priorities for health research, education, and clinical care. Skill-based competencies include those in global health generally as well as cancer-specific global oncology skills (Fig ). Global Health Skills. Program design and implementation. Global health skills essential to global oncology include an ability to design global health program interventions using logical frameworks for determining targets and indicators, as well as resource requirements and milestones. An ability to apply the skills of implementation science to effective program implementation, monitoring, and evaluation in different resource settings is critical. Data collection and analysis. 
A global health specialist should be able to find, report, and critically discuss epidemiologic evidence, disease biology, and behavior in different areas of the world. Key skills include an ability to design and use effective data collection instruments, analyze quantitative and qualitative data, and apply those skills and data to design, assess, and influence feasible and sustainable programs and interventions across a range of settings. Communication and cultural sensitivity. Skills in communication and cultural sensitivity are critical in global health, including exercising cultural literacy, open-mindedness, humility, and understanding and navigating within varying cultural norms and expectations. Professionals in the field should be able to exercise diplomacy to understand differing needs and points of view, apply communication and cultural sensitivity skills in daily interactions, communicate in a way that enhances effective relationships, and appreciate cultural influence on practice. This includes the need to recognize the importance of greeting formalities, including, in some cultures, differences in addressing people and patients of different genders. Key skills include the ability to demonstrate principles of effective patient and family communication in a culturally appropriate way, including the use of culturally appropriate terms to explain disease prognosis and outcomes. This is especially important in fields of oncology where not all treatment alternatives or procedures are widely accepted. Leadership and collaboration. Skills in this area include the ability to lead in defining a problem in partnership with in-country colleagues, developing collaborative solutions, forming teams of diverse stakeholders to implement projects and programs, and harnessing the resources (including philanthropy) needed to implement the solution. An ability to respect, appreciate, and acknowledge the contributions of colleagues in another country or setting is critical, including inclusion in grant support and authorship (including recognition as first author and/or last author). Priority should be given to creating strategies that allow fair collaboration within the team and provide opportunities for in-country capacity building, training, networking, and mentorship. Research. Global health research should be defined and driven in partnership with local collaborators, on the basis of priorities set by local research and practice communities. A global health specialist should be able to collaborate with local colleagues to collectively define areas of research need, develop research that addresses the issue, and conduct that research. This includes the ability to design and implement outcomes research studies. An ability to identify, acquire, and allocate the resources needed for global health research, including funding and protected time for research, is necessary. Awareness of the importance of fair recognition of colleagues' contributions (eg, appropriate and fair distribution of authorship in published and presented work, including recognition of in-country collaborators in first/last author positions) is important. Grant resources must be distributed in a way that allows infrastructure to be built in both resource-constrained and non-constrained settings. Capacity building efforts supported by global research should consider how efforts will be maintained/sustained after research funding expires. 
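The research skills above include designing and implementing outcomes research studies. The following sketch shows the kind of basic two-group comparison such a study might run; the counts, groups, and outcome are invented for illustration and are not from the manuscript, and the analysis uses scipy's chi-square test of independence.

```python
# Minimal sketch (hypothetical data, not from the manuscript): a basic two-group
# comparison of a binary outcome, the kind of analysis an outcomes research
# study described above might perform.

from scipy.stats import chi2_contingency

# rows: setting A, setting B; columns: alive at 5 years, deceased (invented counts)
table = [[120, 80],
         [ 90, 110]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.4f}")
```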
Cancer-Specific Skills. Additionally, there are cancer-specific skills that should be included in an academic global oncology training program. Specialists in global oncology need skills relevant to lower-resource settings for conducting research and developing projects related to cancer control and oncology clinical practice. Research. A global oncologist should be equipped to collaboratively identify and pursue locally driven and relevant cancer research opportunities. This includes the investigation and application of potential leapfrog approaches, moving forward rapidly through the adoption of modern systems without going through intermediary steps, toward improved cancer control in low-resource settings. It also includes the discovery of solutions or processes in a low-resource setting that can provide cost-effective options in high-resource settings. Cancer control. The goal of cancer control is to reduce the cancer burden through reductions in cancer incidence, morbidity, and mortality. A specialist in global oncology should have the ability to advocate for and synthesize health-related and nonhealth factors in developing and implementing effective cancer control programs in varying settings. Clinical practice. Global oncologists should have broad and general clinical skills essential to cancer prevention, screening, diagnosis, treatment, and supportive/palliative care that are relevant to countries of all levels of resources. Additional clinical skills and knowledge relevant to specific regions and resource settings, such as interactions with antiretroviral medications in settings with high rates of HIV infection, are also important. An ability to develop and implement contextually appropriate, innovative approaches to enhance the delivery of quality cancer care and control is crucial. A fundamental part of the global oncology discipline is the concept of bidirectional sharing of innovative practices, a process of actively pursuing learnings from successful practices across different resource settings.
Global oncology is a growing academic field that collaboratively addresses disparities and differences in cancer prevention, care, research, education, and the disease's social and human impact around the world. A critical component of success in this field is the training and support of the careers of oncology experts who can think critically alongside in-country collaborators about innovative solutions to global cancer care and control in limited-resource settings. In response to the increasingly global profile of ASCO's membership, an ASCO Global Oncology Leadership Task Force was created in 2014 to provide recommendations on ASCO's engagement in global oncology. Approximately one third of ASCO members practice outside the United States, and of those international members, one quarter practice in LMICs; the ASCO Board of Directors made it a priority to consider their needs and interests. A subsequent ASCO AGOTF, created as a result of a recommendation from the previous task force, sought to help global oncology transition from an informal field to a scientifically rigorous area of research and training. The AGOTF created a set of recommendations designed to advance the status of global oncology as an academic discipline.
One of these recommendations was that ASCO should develop a set of global oncology competencies to equip trainees and faculty interested in undertaking a career in academic global oncology, and provide a road map for cancer centers seeking to create a career path for trainees interested in global oncology. The proposed global oncology competencies included in this manuscript are intended to enable training institutions to prepare oncology specialists through specialized academic global oncology training. These competencies are distinct from, and build upon, a standard oncology training curriculum. They consist of knowledge and skills needed in general global health as well as cancer-specific competencies, including understanding global cancer health disparities, defining unique resources and needs in low- and middle-resource settings, and promoting international collaboration. It is envisioned that the global health competencies could be provided by a standard global health training program, and the cancer-specific global oncology skills and knowledge could be provided by a select number of training programs that specialize in this field. An advanced fellowship, like that offered for physicians training in blood and marrow transplantation, could possibly offer the most practical and tailor-made way to provide in-depth global oncology training. Although these competencies were originally developed for training programs in the United States, they are intended to be more widely applicable. Global oncology programs must aim to provide professionals with the tools and support to understand the equitable process for designing tailored research strategies that will lead to improved cancer prevention and care worldwide. As reported in the 2018 Global Oncology Survey of NDCCs, US cancer centers do commonly include global oncology in their programs and research partnerships. At the same time, the reported amount of global oncology training included in a typical medical oncology fellowship program ranges from a few hours to a few days. Since the AGOTF concluded its work, ASCO has remained engaged in this evolving field, convening, together with the NCI CGH, a series of meetings with global oncology stakeholders that have explored themes related to global oncology training, research, partnerships, and professional development. These discussions served to validate key points in these proposed competencies, while also introducing additional considerations, including the importance of investing in mentorship, access to training opportunities, and awards and recognition for global oncology specialists. ASCO's awareness of the importance of global oncology has increased. In addition, the NCI has been working to expand its research and training portfolio to increase opportunities to strengthen global cancer research and control in LMICs. Key priority areas include increasing research training capacity, accelerating global cancer implementation science, focusing on clinical trials in LMICs, addressing global cancer health disparities, and supporting development of affordable technologies for global cancer control. The NCI CGH integrates the findings from each fielding of the Global Oncology Survey together with the bidirectional exchanges in the Annual Symposium on Global Cancer Research, which is cohosted with ASCO, and other global cancer meetings to continue to refine and enhance research, training, and knowledge translation initiatives.
In addition, the NCI has funded grants in global health and global oncology research that have paved the way toward building research capacity. ASCO's vision is an equitable world where every cancer is prevented or cured and every survivor is healthy. By formalizing the training of oncologists and supporting career pathways in the field of global oncology, and working in collaboration and partnership, we can make progress in achieving global equity in cancer care and control.
Current figures on rheumatological care: annual report from the Kerndokumentation (national database) of the regional cooperative rheumatology centers
5217cb81-d937-421a-bb1c-d4445a3fc719
11485178
Internal Medicine[mh]
In the Kerndokumentation (national database) of the regional cooperative rheumatology centers, key data on the rheumatological care of patients with inflammatory rheumatic diseases and on their disease burden are collected every year. Results and care trends are presented each year in the form of results charts. In response to frequent requests, this year we are additionally publishing an annual report for the first time, in order to make the most recent data visible in the Zeitschrift für Rheumatologie as well. Since we report cross-sectional data only, all values should be read as snapshots within a continuous treatment process. The Kerndokumentation is a nationwide prospective long-term documentation that has been running since 1993. Consecutive patients from routine rheumatological care are documented once a year with regard to their disease course. Rheumatologists document, among other things, disease activity and current medication. Disease activity is recorded for all disease entities on a numerical rating scale (NRS) from 0 to 10, where 0 corresponds to no activity. For rheumatoid arthritis (RA), the Disease Activity Score (DAS28) or the Clinical or Simple Disease Activity Index (CDAI or SDAI) is additionally recorded; for axial spondyloarthritis (axSpA), the Bath Ankylosing Spondylitis Disease Activity Index (BASDAI) and the Axial Spondyloarthritis Disease Activity Score (ASDAS) are recorded. For systemic lupus erythematosus (SLE), the European Consensus Lupus Activity Measurement (ECLAM) is recorded in some institutions. With regard to drug therapies, glucocorticoids (GC), all conventional synthetic (cs), biologic (b) and targeted synthetic (ts) immunomodulating drugs (disease-modifying antirheumatic drugs, DMARDs), nonsteroidal anti-inflammatory drugs (NSAIDs), non-opioid analgesics and opioids are recorded. The survey of non-drug therapies covers physiotherapy, occupational therapy, rheumatic functional training and patient education. Patient-reported outcomes (PRO) include, among others, general health status, disease activity, pain, exhaustion/fatigue, sleep disturbances, difficulties with physical activities, and mental and physical well-being, each referring to the past week. These are likewise recorded on an NRS adapted from the Rheumatoid Arthritis Impact of Disease (RAID) instrument. For functional capacity, the Funktionsfragebogen Hannover (FFbH) is recorded for RA and the Bath Ankylosing Spondylitis Functional Index (BASFI) for axSpA. Inclusion criteria: all patients with inflammatory rheumatic diseases can be included in the Kerndokumentation. The data in this report refer to the evaluation year 2022. For the presentation of patient characteristics, therapies and PROs, only patients with a confirmed diagnosis are considered. In 2022, 13 rheumatological institutions (6 practices, 1 hospital and 6 university hospitals) collected data for the Kerndokumentation. A total of 13,887 outpatients were documented. Of these, 47% had arthritides, 26% spondyloarthritides, 15% connective tissue diseases, 7% vasculitides, 4% polymyalgia rheumatica (PMR) and 1% another diagnosis. The most frequent disease entities were RA (n = 5998), psoriatic arthritis (PsA: n = 1795), axSpA (n = 1449) and SLE (n = 946), see Table.
Just under half of the patients (48%) were cared for via specialised outpatient care (ambulante spezialfachärztliche Versorgung, ASV), 39% via regular care and 5% via a university or authorised outpatient clinic. Patient characteristics: The mean age of the patients ranged from 45 years in Behçet's disease (BD) to 73 years in giant cell arteritis (RZA). Median disease duration ranged from 3 years in PMR to 16 years in axSpA. The mean age at disease onset ranged from 28 years in BD to 67 years in PMR and RZA. One percent (BD) to 38% (PMR) had a disease duration of less than two years. The proportion of women was highest in primary Sjögren's syndrome (SjS) at 89% and lowest in axSpA at 37%. Disease activity and remission: Physician-documented disease activity on the NRS was in the low range across diseases: the mean NRS value was lowest in BD (0.9 ± 1.2) and highest in SSc (2.4 ± 1.5). In 3% (BD) to 19% (SSc) of cases, the rheumatologists rated activity as moderate to high (>4), see Table . For RA, the rating was 0 in 35% of those affected, 1 in 27% and 2 in 15% (in total 77%: 0–2). Of the RA patients, 46% were in DAS28 remission (<2.6), 19% had low (2.6–3.2), 31% moderate (>3.2–5.1) and 4% high (>5.1) disease activity. According to the CDAI, 24% were in remission (≤2.8), 51% had low (2.8–10), 20% moderate (11–22) and 5% high (>22) disease activity. Of the axSpA patients, 20% had inactive disease (<1.3) according to the ASDAS-CRP and 30% had low disease activity. According to the BASDAI, 27% had low (0–2), 32% moderate (>2–4) and 41% high (>4) disease activity, see Fig. . The ECLAM was recorded in 141 SLE patients. On the ECLAM score (0–10, with 0 corresponding to inactive disease), the score was zero in 4%, one in 43%, two in 45%, three in 6% and four in 2%. Haematological involvement (48%) and low complement (29%) were the most frequent items. Glucocorticoids, NSAIDs and analgesics: In 2022, 28% of RA patients, 46% of SLE patients and 62% of patients with ANCA-associated vasculitides (AAV) received glucocorticoids (GC); the percentages for the other diagnoses are given in Table . Of these, about 80% were on a low dose of up to 5 mg prednisolone equivalent per day. NSAIDs were used mainly in axSpA (60%), PsA (41%) and RA (40%). Twelve to 20% of patients received non-opioid analgesics, and 2% (BD) to 9% (MCTD) were currently taking an opioid (see Table ). csDMARDs: In RA and SLE, 89% had at least one disease-modifying therapy (cs, b or tsDMARD), in PsA 85% and in AAV 81%. Methotrexate (MTX) was the most frequently used csDMARD. It was used mainly in RA (55%), PsA (43%) and idiopathic inflammatory myopathies (IIM, 43%). Azathioprine was frequently used in AAV and BD (25% each). In SLE (74%), MCTD (51%) and SjS (49%), hydroxychloroquine (HCQ) was the most frequent csDMARD. Mycophenolate mofetil played a role in SLE and systemic sclerosis (SSc, 15% each). Leflunomide (6.8% in RA), sulfasalazine, ciclosporin A and cyclophosphamide were part of therapy in only a few patients. bDMARDs: Biological (b)DMARDs were prescribed most frequently in axSpA (63%) and PsA (50%) (see Table ). 
In RZA (36%), RA (31%), AAV (30%) and BD (27%) as well, every third to fourth patient received a bDMARD. The most frequent bDMARD therapy was tumour necrosis factor inhibitors (TNFi), above all in axSpA (54%), PsA (27%), BD (26%) and RA (19%). In PsA, IL-17 inhibitors (16%), more often secukinumab (10.7%) than ixekizumab (5.2%), and IL-12/23 inhibitors (6%) were also relevant. Tocilizumab was frequently used in RZA (34%), albeit with high variance between institutions. Rituximab was the most frequent bDMARD therapy in AAV (27%), but was also used in RA and off-label in connective tissue diseases. Belimumab was given to 15% of SLE patients. Mepolizumab was prescribed to 5 of 51 (9.8%) patients with eosinophilic granulomatosis with polyangiitis (EGPA). tsDMARDs: JAK inhibitors were used mainly in RA (11%). Baricitinib (4.4%) and upadacitinib (4.3%) were prescribed most often, filgotinib (1.5%) and tofacitinib (1.0%) less frequently. Apremilast was used in BD (5%) and PsA (3%). Biosimilars: The biosimilar share among the TNF blockers was 62% (adalimumab), 66% (etanercept) and 55% (infliximab) for RA; 67% (adalimumab), 58% (etanercept) and 73% (infliximab) for PsA; and 67% (adalimumab), 53% (etanercept) and 72% (infliximab) for axSpA. For rituximab, the biosimilar share was 34% in RA and 38% in AAV. Complementary non-pharmacological therapy measures: In the past 12 months, 31% (RA), 33% (PsA), 22% (SLE) and 51% (axSpA) of patients received physiotherapy. Among patients with severe functional impairment according to the FFbH (<50) the proportion was 46% (RA), and according to the BASFI (>4) it was 60% (axSpA). Occupational therapy was received by 5% (SLE) to 8% (RA). Functional training for rheumatic diseases was taken up by 3–5% of those affected. Patient-reported outcomes: Patient-reported disease activity was in the moderate range across diseases (see Table ). The mean NRS value ranged from 2.8 ± 2.5 (AAV) to 4.2 ± 2.3/2.6 (SSc/IIM). Compared with the physicians' assessment, patients reported on average a disease activity about 2 units higher (on the 0–10 scale). Patients with connective tissue diseases most frequently rated their disease activity as moderate to high (NRS 4–10) (see Fig. ). More than half of all patients with connective tissue diseases also rated their general health status and physical well-being as moderately to severely impaired and their fatigue/tiredness as high. With the exception of those with AAV and RZA, about 50% of patients across diagnoses documented moderate to severe pain. Further care-relevant data: In the previous year, 8 to 10% of those affected had an inpatient stay due to their rheumatic disease (RA, axSpA, SLE, PsA), with a median duration of 8 days (RA, PsA) to 14 days (SLE). An outpatient or inpatient medical rehabilitation measure was carried out in 7.2% (RA), 6.4% (PsA), 9.2% (axSpA) and 6.7% (SLE) of patients. 9.5% (RA), 6.6% (PsA), 10% (axSpA) and 14% (SLE) were members of a patient organisation. The distance travelled to the rheumatological institution was 0–10 km for 28% of patients, 11–20 km for 26%, 21–50 km for 33% and >50 km for 14%. The mean distance from the place of residence was 23 km in large cities/metropolitan areas, 31 km in medium-sized towns, 51 km in small towns and 50 km in rural areas. 
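The remission and activity categories reported above follow fixed cut-offs applied to the composite scores. As a minimal illustrative sketch only, not part of the Kerndokumentation's own tooling, the following Python snippet shows how such threshold-based categorisation might be applied to recorded scores; the cut-offs are those quoted in the text.

def das28_category(score):
    # DAS28 cut-offs as quoted above: <2.6 remission, 2.6-3.2 low, >3.2-5.1 moderate, >5.1 high
    if score < 2.6:
        return "remission"
    if score <= 3.2:
        return "low"
    if score <= 5.1:
        return "moderate"
    return "high"

def cdai_category(score):
    # CDAI cut-offs as quoted above: <=2.8 remission, >2.8-10 low, >10-22 moderate, >22 high
    if score <= 2.8:
        return "remission"
    if score <= 10:
        return "low"
    if score <= 22:
        return "moderate"
    return "high"

def basdai_category(score):
    # BASDAI cut-offs as quoted above: 0-2 low, >2-4 moderate, >4 high
    if score <= 2:
        return "low"
    if score <= 4:
        return "moderate"
    return "high"

# Example: categorise a few hypothetical DAS28 values
print({s: das28_category(s) for s in (1.8, 3.0, 4.7, 5.6)})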
With the help of the Kerndokumentation, current data on the care of patients with inflammatory rheumatic diseases can be reported from currently 13 rheumatological institutions. In 2022, a further decline in the use of GC was seen across diseases compared with previous years . 
In RA, the frequency of bDMARD therapy remained unchanged from the previous year at 31%; tsDMARDs, at 11%, were also prescribed only slightly more often than in the previous year (10%) . In PsA, the use of IL-17 and IL-12/23 inhibitors increased, but TNFi remained the most frequent bDMARD for the treatment of PsA at 27%. axSpA therapy hardly changed compared with previous years. Every second axSpA patient received a TNF inhibitor. Of the axSpA patients treated with a bDMARD, 53% additionally took an NSAID, 30% of them continuously and 70% as needed. NSAID monotherapy was given to 23%. Three out of four SLE patients received HCQ; this proportion has risen. Belimumab was also used more frequently than in the previous year (11% vs. 15%) . The treatment of SjS, SSc and IIM remained heterogeneous, with rituximab and tocilizumab as bDMARD options. Among the vasculitides, tocilizumab was increasingly used in RZA. In EGPA, the first patients were treated with mepolizumab in 2022. An overview of medication over the past 15 years is presented each year in results graphics . The vast majority of patients in the Kerndokumentation have a long disease duration. While disease activity in these long-term patients was mostly rated as low by physicians, many of those affected continued to document moderate to severe limitations with regard to pain, mobility, fatigue/exhaustion, sleep and well-being. The discrepancy between physician- and patient-reported disease activity has been described in numerous studies and is attributed to physicians and patients taking different factors into account when assessing active disease manifestations (including laboratory inflammatory activity and organ involvement versus fatigue, pain, coping and restricted participation) . The patient-reported limitations do not yet seem to respond sufficiently to the wide range of drug treatment options, or have already become chronic. Limitations and strengths: Participation in the Kerndokumentation is voluntary and requires a high level of commitment from rheumatologists and patients to document a large amount of data every year. The documentation is remunerated to a limited extent. The participating institutions are represented nationwide and, comprising single practices, group practices, hospitals and university outpatient clinics, cover all outpatient care sectors of rheumatology. With just under 14,000 patients, the Kerndokumentation covers slightly less than 1% of the estimated number of people affected in Germany (about 1.8 million ). People who are not under specialist care are not captured. The data are therefore not representative of the care of all affected persons in Germany, but they provide a multicentre, cross-sector picture of specialist rheumatological care. The continuous data collection over many years makes it possible to depict care trends in therapy and patient-reported outcomes. 
The annual report from the Kerndokumentation presents current drug-related and patient-reported care data from Germany for the most frequent inflammatory rheumatic diseases. The observed changes are in line with the corresponding recommendations. The comparative overview of the individual diagnoses complements the annually published results graphics and is to be reported once a year in future.
Socio-behavioural associates of Early Childhood Caries among preschool children aged three to four years in Gampaha district, Sri Lanka: a cross sectional study
fffa65ca-96a7-4fd1-ab7d-a37621af26f8
11512507
Dentistry[mh]
Early Childhood Caries (ECC) is dental decay found in children under six years of age. It is defined as “the presence of one or more decayed (non-cavitated or cavitated lesions), missing (due to caries), or filled tooth surfaces in any primary tooth in a child under the age of six” . The prevalence of ECC varies in different parts of the world, ranging from 17–63% among one- to five-year-old children . ECC is a highly prevalent disease in Sri Lanka. The prevalence of ECC has remained essentially unchanged over the decades in Sri Lanka, according to National Oral Health Surveys (NOHS) conducted over the years. According to the latest NOHS in 2015/2016, the prevalence of dental caries in the deciduous dentition of five-year-olds was 63.1%, and the mean dmft was 3.0 (± 3.5). Approximately 96.2% of five-year-old children have active dental caries . This high prevalence reflects the burden of untreated caries in the deciduous dentition of Sri Lankan children. Several other studies conducted in Sri Lanka have also revealed a high prevalence of ECC in young children . ECC has a multifactorial aetiology . As most of the etiological factors of ECC are related to unhealthy oral health behaviours, early intervention is crucial, since the disease causes pain and compromises all oral health-related activities. The main oral health-related behaviours responsible for ECC are excessive dietary sugar consumption, poor oral hygiene, and unsatisfactory dental attendance patterns. Several studies conducted in Sri Lanka among under-five children have found that mother’s level of education, use of fluoridated toothpaste, frequency of sweets consumption and frequency of tooth brushing were associated with ECC in some parts of the country . Generally, most children start preschool life at the age of three years . Preschool age enables the child to become independent and interact with the social environment . Furthermore, a significant increase in the consumption of snacks and drinks containing dietary sugar during preschool age is evident in many countries, contributing to a high prevalence of ECC. Therefore, preschool age is the most critical time to inculcate healthy dietary and oral hygiene practices. In addition, preschool settings provide opportunities for school-based interventions for oral health prevention programmes such as programmes for junk food-free schools, schools with healthy meals, schools with daily toothbrushing programmes, and school-based dental screening programmes. Therefore, it is crucial to assess the social and behavioural factors related to ECC among three-year-old preschoolers because early interventions for improving oral health behaviours are more successful than interventions at later ages . Sri Lanka is an island in South Asia with a population of approximately 20 million and a lower middle-income economy. Although the country does not rank high economically, the general literacy rate of Sri Lankans stands at 95.7%, and only 2.3% have never had any education according to statistics . Although a few studies have been carried out among three- to five-year-old preschoolers in the country several years ago to study ECC, no study has assessed associates of ECC among three- to four-year-old preschoolers who are new to the preschool environment. The present study was conducted in the Gampaha district of Sri Lanka, where no study has assessed the associates of ECC among three- to four-year-old preschool children. 
Therefore, considering the above facts, the present study was conducted in the Gampaha district, and it aimed to assess the socio-behavioural associates of ECC among three- to four-year-old preschool children in order to provide insights for prevention programmes to reduce the ECC burden among preschool children. Study design, setting and participants A cross-sectional study was conducted among three- to four-year-old children of registered preschools in the Gampaha district. Data collection was carried out over four months, from September to December 2019. Gampaha district is in the western province of Sri Lanka, and around 32% of the population in Gampaha belongs to the highest Wealth Index. Approximately 84.5% of children under the age of 18 were living with both parents. Approximately 40.8% had passed secondary education and only 1.8% had no education. The literacy rate was 98.5% . Approximately 84.3% of the population is in the rural sector, 15.6% in the urban sector and 0.1% in the estate sector . A multistage cluster sampling technique, with probability proportional to size sampling, was used to select preschools. The first stage involved selecting clusters/preschools from the district. According to the national census of early childhood development centres in Sri Lanka in 2016, approximately 73% of preschools were registered under the provincial council’s early childcare authority . The list of registered preschools in the Gampaha district was taken from the Department of Education, Western Province. Preschools are listed according to the Divisional Secretariat (DS) areas in Gampaha district. The probability proportionate-to-size sampling technique was used to determine the exact number of preschools to be selected from each DS division. A total of 72 preschools were selected from all registered preschools in the Gampaha district. Fifteen three- to four-year-old preschool children were selected from each selected preschool using a simple random sampling method. The preschool register was used as the sampling frame. Computer-generated random numbers were used. Information sheets and consent forms for the mothers/caregivers of preschoolers were distributed by the Primary Investigator (PI) and collected back via the preschool teacher. The data collection team comprised the PI, two interviewers, a data recorder, and an assistant. Preschool children who had completed their third birthday but had not completed their fifth birthday on the day of data collection were eligible. Once each mother completed the self-administered questionnaire, the interviewer interviewed her in a place where privacy could be maintained within the preschool premises. Oral examination was performed in a place with good natural daylight. Preschoolers and their mothers/caregivers were provided with health education after data collection. Children who had oral health treatment needs, including ECC, were referred for treatment after data collection. Variables The dependent variable considered was the presence (dmft ≥ 1) or absence (dmft = 0) of ECC. 
The independent variables or the associates of ECC considered were gender, monthly family income (≤ 50,000 Rs / > 50,000 Rs), educational status of the mother (up to Ordinary level / above Ordinary level), occupational status of the mother (employed/unemployed), birth order of the child (first / second or more), family type (nuclear/extended), frequency of sweet consumption, i.e. biscuits, buns, cakes, toffees, chocolates, ice-creams or any sweet (consumes every day vs. several times a week or less), sweetened drink consumption, i.e. flavoured milk packets, fruit juice, carbonated drinks (drinks every day vs. several times a week or less), time of sweet consumption during the last 24 h (between meals / after main meals only), ingredient used for toothbrushing (uses adult fluoridated toothpaste / does not use adult fluoridated toothpaste), frequency of toothbrushing (brushes twice per day / does not brush twice per day), supervision of toothbrushing (supervised by an adult / not supervised), dental visit (ever visited / never visited), and maternal dental caries (present: DMFT ≥ 1 / absent: DMFT = 0). Data sources/measurement The study instruments used were a questionnaire with interviewer-administered and self-administered components to assess associates, and oral health assessment forms to record ECC and maternal caries, i.e. the dental caries of the mothers of the preschool children in the sample. ECC and maternal dental caries were recorded according to the standard criteria of the fifth edition of the WHO survey method and the dmft/DMFT index. The questions on the self-administered component consisted of the child’s gender, birth order, family type, mother’s educational status, mother’s occupational status, and monthly family income. If the mother was separated, divorced, deceased, or had migrated and did not live with the child, the main person who took care of the child was considered the caregiver. 
The PI was the only person who performed the oral examinations and was trained and calibrated under a Consultant in Community Dentistry. Standardized sterilized dental instruments, which were Community Periodontal Index (CPI) probes and plane mouth mirrors, were used for the oral examination procedure. Two female sociology undergraduates were trained as interviewers in a one-day program and helped the illiterate mothers to fill in the self-administered component. They were provided with interviewer guides to minimize bias. Study size The expected prevalence of ECC and the expected proportion of ECC factors were taken as 50% to obtain the maximum sample size for the study . The calculated sample size was adjusted for the design effect, as cluster sampling was performed. The rate of homogeneity, which is the measure of variability between clusters compared to the variability within clusters, was taken as 0.1, based on prior research conducted in the community among children . One preschool was considered one cluster, and the cluster size was 15. Studies conducted in preschools in Sri Lanka used different cluster sizes. One study used 10 , and another used 12 and 23 as the cluster sizes. As a smaller cluster size is preferable to reduce the design effect, the cluster size was taken as 15 for the present study, considering the previous literature and expert opinion. The calculated sample size was 1080 children from 72 preschools, according to a standard formula . The reporting of the study was based on the STROBE guidelines. Statistical methods Data were analyzed using Statistical Product and Service Solutions (SPSS) version 23. The distribution of variables was presented as percentages and 95% confidence intervals. The significance level was taken at 0.05 for statistical significance between the variables considered. Multiple logistic regression analysis was used to assess the relationship between the associates and ECC, because the outcome/dependent variable was the presence or absence of ECC. The Variance Inflation Factor (VIF) was calculated to assess multicollinearity among the variables in the model, and all VIF factors were close to one . In the present study, we used all the important variables in the model instead of univariate/bivariate selection to choose the variables, as described by Sun et al. . The Hosmer-Lemeshow test was performed to assess the goodness of fit of the model. 
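As a worked illustration of the sample-size reasoning described above, the usual single-proportion formula with a design-effect correction for cluster sampling gives figures of the same order as the 1080 children targeted. This is a sketch under stated assumptions: the absolute precision (taken here as 5%) and any non-response allowance are not reported in the text, so the published figure of 72 clusters × 15 = 1080 presumably reflects additional adjustments beyond this calculation.

from math import ceil

z, p, d = 1.96, 0.5, 0.05   # 95% confidence, expected proportion 50% (as stated), assumed 5% absolute precision
m, roh = 15, 0.1            # cluster size and rate of homogeneity (as stated)

n_srs = (z ** 2) * p * (1 - p) / d ** 2   # simple-random-sampling size, about 384
deff = 1 + (m - 1) * roh                  # design effect for clusters of 15, = 2.4
n_cluster = ceil(n_srs * deff)            # about 922 children before any further allowance
print(round(n_srs), round(deff, 2), n_cluster, ceil(n_cluster / m))  # 384, 2.4, 922, 62 clusters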
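The modelling step described above (binary ECC outcome, dichotomised predictors, a VIF check, and adjusted odds ratios with 95% confidence intervals) can be illustrated with a short, self-contained Python sketch. The data below are synthetic and the variable names are hypothetical placeholders; this is not the authors' analysis code, only a minimal example of the approach using statsmodels (the Hosmer-Lemeshow test is omitted, as it is not part of that library).

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Synthetic stand-in for the survey data: binary predictors coded 0/1
rng = np.random.default_rng(0)
n = 1038
df = pd.DataFrame({
    "daily_sweets":        rng.integers(0, 2, n),
    "between_meal_sweets": rng.integers(0, 2, n),
    "brush_twice_daily":   rng.integers(0, 2, n),
    "maternal_caries":     rng.integers(0, 2, n),
})
# Toy outcome loosely following the direction of the reported associations
lin = (-0.5 + 1.0 * df["daily_sweets"] + 0.5 * df["between_meal_sweets"]
       - 0.4 * df["brush_twice_daily"] + 0.55 * df["maternal_caries"])
df["ecc"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

X = sm.add_constant(df.drop(columns="ecc"))
# Variance inflation factors to screen for multicollinearity (values near 1 suggest little collinearity)
vif = {col: variance_inflation_factor(X.values, i) for i, col in enumerate(X.columns) if col != "const"}
print(vif)

model = sm.Logit(df["ecc"], X).fit(disp=False)
ci = model.conf_int()
# Adjusted odds ratios with 95% confidence intervals (exponentiated coefficients)
or_table = pd.DataFrame({"adjusted OR": np.exp(model.params),
                         "2.5%": np.exp(ci[0]),
                         "97.5%": np.exp(ci[1])})
print(or_table)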
A total of 1038 children participated in the study, with a nonresponse rate of 4%. The prevalence of ECC was 56.3% (95% CI: 53.1–59.3). The mean dmft was 2.72 (± 3.41) and the range was 1–0. The percentage of children with dental abscesses was 0.6% (n = 6). As shown in Table , half of the study population (51.0%, 95% CI: 48.1–54.0) were female. Approximately 56.2% (95% CI: 53.7–59.8) were living only with their nuclear family in the house. As shown in Table , 82.3% of the study population (95% CI: 79.8–84.5) had sweets every day, and 10.0% had sweetened drinks every day. Most children used adult fluoridated toothpaste (67.1%, 95% CI: 64.1–69.9). Approximately 54.7% (95% CI: 51.6–57.7) brushed their teeth twice per day. About 59.4% (95% CI: 56.3–62.4) had never visited a dental clinic. The Hosmer and Lemeshow test statistic for the logistic regression analysis was not significant (p = 0.26). According to the multivariable analysis in Table , between-meal sweet consumption (adjusted OR = 1.72, CI: 1.25–2.35), sweet consumption every day (adjusted OR = 2.89, CI: 2.02–4.13), sweetened drink consumption every day (adjusted OR = 1.73, CI: 1.07–2.79), ever having visited a dental clinic (adjusted OR = 1.73, CI: 1.31–2.27), and presence of maternal caries (adjusted OR = 1.74, CI: 1.12–2.69) were the factors positively associated with ECC. On the other hand, use of adult fluoridated toothpaste (adjusted OR = 0.64, CI: 0.48–0.85) and brushing teeth twice per day (adjusted OR = 0.67, CI: 0.51–0.89) were the factors negatively associated with ECC among preschoolers. In the present study, ECC was positively associated with sweet consumption patterns, maternal caries, and dental visits and negatively associated with oral hygiene behaviours. The prevalence of ECC in this study was 56.3%. Several studies conducted among preschoolers in Sri Lanka have shown a similar prevalence . However, the age range in those studies was three to five years, whereas the present study included only three- to four-year-old preschoolers. This implies that preschool children of this age group also have a higher prevalence of ECC than expected. Non-milk extrinsic sugars are the primary substrate for the development of dental caries. The present study revealed that a very high percentage of three- to four-year-old preschoolers consume sweets in some form, and most of them eat sweets between their main meals. The high ECC prevalence among them might have resulted from this unsatisfactory sweet consumption pattern, as the consumption of refined sugar-based food several times a day between main meals has been identified as a strong unhealthy behavioural factor associated with ECC . Further, daily consumption of sweets and sweetened drinks, and between-meal sweet consumption, were strongly associated with the development of ECC according to the multivariable analysis. This finding is consistent with those of other studies . 
Parents and preschool teachers have a responsibility to reduce dietary sugar intake among 3-year-old children. Oral health professionals can support this effort by giving more guidance on nutrition and oral health education to both parents and teachers. Furthermore, they can advocate for the formulation of new policies and programs that promote child nutrition and oral health, both within communities and at schools. Oral hygiene behaviours are other important factors responsible for the prevention of ECC. Brushing teeth twice daily is the best practice for preventing oral diseases. However, only half of the study sample brushed their teeth twice per day, similar to a study conducted in Malaysia . Although most had brushed their teeth under the supervision of an adult, a small percentage (13.3%) of children still had not brushed their teeth under adult supervision, which is similar to the findings of a study conducted in Sweden . Children of this age cannot brush their teeth by themselves, and parents or caregivers are responsible for brushing their children’s teeth. Oral hygiene best practices, especially brushing teeth twice a day, should be encouraged from the eruption of the first tooth in order to reduce oral diseases. In the multivariable analysis, the use of fluoridated toothpaste and twice-daily toothbrushing were identified as factors negatively associated with ECC among preschoolers in the present study. Nanayakkara reported similar results. Although mothers are encouraged by the Ministry of Health to take their children to a dental clinic at the age of one year, around 59.4% of the three- to four-year-old preschoolers had never visited a dental clinic. This implies that mothers do not have the habit of taking their children to dental clinics. However, according to the analysis, dental utilisation was positively associated with ECC. This suggests that most preschool children visited a dental clinic only after developing an oral disease, rather than as a preventive measure, which is why dental attendance emerged as a positively associated factor in the analysis. Several other studies have reported similar findings . Parents are responsible for preschool children’s oral health. According to the present study, the oral health behaviours of preschoolers in the early years are not satisfactory. Parents should be encouraged to improve the oral health behaviours of their children because oral health best practices should be initiated at this early age. The relationship between maternal caries and ECC has been evident in several studies worldwide . Similar to other studies, maternal caries showed a significant association with ECC in the multivariable analysis. Mothers with a high caries experience are more likely to have children with carious teeth. The unhealthy maternal oral health behaviours which have led to the development of caries in mothers might influence the child’s oral health behaviours, leading to ECC development. Furthermore, maternal transmission of cariogenic bacteria to newborns is well established in the literature . The mother is the primary source and is responsible for the vertical transmission of the bacteria to the newborn. Therefore, improving the oral health of expectant mothers is essential for preventing dental caries in their offspring. The present study did not find any significant relationship between the sociodemographic associates considered and ECC according to the logistic regression. 
A previous study reported similar findings . Furthermore, as this study used the presence or absence of ECC as the outcome variable, the statistical power may not have been sufficient to find an association between sociodemographic variables and ECC. This study highlights several implications for ECC prevention that can inform policies and programs at various levels. At the community level, implementing fiscal measures such as sugar taxes is important to discourage excessive sugar consumption. Within the healthcare system, oral health education and screening should be integrated into maternal and child healthcare services so that dental caries can be identified early and timely intervention can mitigate the onset of ECC. Furthermore, daily supervised toothbrushing programs, restricting access to unhealthy foods and sugary drinks, and school-based dental screening should be introduced in preschools. These interventions would promote healthy oral hygiene habits from an early age and reduce the burden of ECC on the healthcare system. According to the Annual Report of the Family Health Bureau of Sri Lanka, the percentage of preschoolers free from Early Childhood Caries (ECC) or calculi was 48% in 2020 and 50.2% in 2021 . Although these data indicate a slight improvement, the overall prevalence of ECC remains a significant concern. This study used a probability sampling method to minimize selection bias. The interviewers were trained and provided with guides to reduce interviewer bias. However, there were several limitations in the study. As oral health behaviour data were gathered via questionnaires, social desirability bias may have occurred. Although the questions regarding oral health behaviours were extracted from the standard questions formulated in the National Oral Health Survey 2015/2016 in Sri Lanka, WHO basic survey methods and previous literature, recall bias might have occurred in assessing the behaviours. Because this study was conducted within one district, the generalisability of the findings to other districts cannot be assured. As the ECC prevalence in the present study was very high, severity could be a better dependent variable than the presence or absence of ECC. However, the presence or absence of ECC is more aligned with the study’s aims than a focus on severity. The goal of many early childhood oral health programs is to prevent ECC onset. This allows for easier identification of factors associated with ECC. The prevalence of ECC is very high among three- to four-year-old preschoolers in the Gampaha District of Sri Lanka. Daily consumption of sweets and sweetened drinks, between-meal sweet consumption, maternal caries, and ever having visited a dental clinic were positively associated with ECC. Using adult fluoridated toothpaste, tooth brushing twice per day, and adult supervision of tooth brushing were found to be negatively associated with ECC among three- to four-year-old preschoolers. Policies and programmes implementing fiscal measures such as sugar taxes at the community level and integrating oral health education and screening into maternal and child healthcare services are essential to mitigate ECC onset at preschool age.
Mind the gut: Navigating the complex landscape of gastroprotection in neurosurgical patients
6f45af9f-9434-46d6-bace-59a98978c6f0
11886513
Surgical Procedures, Operative[mh]
Stress ulcers and associated upper gastrointestinal bleeding (UGIB) have long been known to complicate the disease course in neurosurgical patients, with Harvey Cushing being one of the first to describe this phenomenon. Patients with a spectrum of neurological illnesses such as traumatic brain injury (TBI), spinal cord injury (SCI), intracerebral hemorrhage (ICH), ischemic stroke, central nervous system tumors, and infections have been shown to develop stress ulceration at a significantly higher rate than those without neurological illness. The pathophysiology of stress ulceration in this patient population seems to be multifactorial and can be summarized as an alteration in the balance between protective and destructive factors that regulate the gastric mucosal lining (Table ). Destructive factors include gastric acid and pepsin hypersecretion and splanchnic hypoperfusion, which are themselves consequences of uninhibited vagal activity and reduced sympathetic drive. Furthermore, risk factors such as age over 60 years, presence of the syndrome of inappropriate antidiuretic hormone secretion, pre-existing Helicobacter pylori ( H. pylori ) infection, mechanical ventilation, as well as corticosteroid use have been shown to predispose patients to develop stress ulcers. While stress ulcer prophylaxis (SUP) is clearly indicated in neurocritical patients, the evidence regarding the best strategy is not clear. Studies on pharmacologic strategies involving proton pump inhibitors (PPIs) and/or histamine 2 (H2) antagonists have raised concerns about the safety of routinely using these medications in the intensive care unit (ICU) setting. Notably, there are concerns regarding the potential development of pneumonia and Clostridium difficile infection (CDI). Non-pharmacological strategies aimed at preventing stress ulcers, such as nasogastric decompression, conflict with established principles of post-operative care, particularly the early initiation of enteral feeding, which is known to maintain gut integrity, reduce infection risk, and improve overall recovery outcomes. Despite the clear and present danger that stress ulcers pose for this patient population, there is a palpable lack of consensus on the topic, and the literature regarding gastroprotection in neurocritical patients is vast and riddled with inconsistencies. We aim to summarize some of the major issues and attempt a general synthesis, highlighting the key areas of disagreement, the evidence supporting various prophylactic and therapeutic approaches, and potential pathways for future research and clinical practice improvements in the management of stress ulcers in neurosurgical and neurocritical patients. H2 receptor antagonists (H2RAs) have been known at least since the 1990s to be effective in reducing stress ulcers and related bleeding in critically ill patients. However, their efficacy in the neurosurgical patient cohort was unclear at the time. A prospective randomized trial of critically ill neurosurgical patients by Reusser et al , in which the authors evaluated the development of ulcers and bleeding endoscopically, concluded that the use of ranitidine for routine SUP may not be necessary. Over the next several years, multiple randomized trials evaluating H2RAs in high-risk neurosurgical patients came out with the conclusion that the use of H2RAs was beneficial in preventing stress ulcers and gastrointestinal bleeding. 
PPIs are widely utilized in the hospital setting for the prevention and management of stress ulcers and UGIB. Multiple systematic reviews and meta-analyses have concluded that PPIs are generally more effective than H2RAs in preventing stress ulcers and UGIB in the critical care setting. Despite their demonstrably better efficacy, the safety of PPIs relative to H2RAs is less clear. The same meta-analyses found either increased rates of pneumonia or no difference in patients treated with PPIs compared to H2RAs. A large pharmacoepidemiologic study of over 35,000 patients found that PPIs were associated with significantly higher rates of pneumonia and CDI, raising concern regarding the routine use of PPIs. However, the PEPTIC randomized trial, involving a comparable number of patients, concluded that there was no statistically significant difference in all-cause mortality or CDI in critically ill patients treated with PPIs over H2RAs. When considering the critically ill neurosurgical patient, these issues are magnified further. Not only are these patients at elevated risk, but there is also a paucity of trials evaluating the aforementioned factors in this specific cohort. In our review of the literature, we found only four single-center randomized trials that explicitly compared the safety and efficacy of PPIs vs H2RAs in neurosurgical patients. Three trials found no significant difference between the two classes of medications with respect to mucosal injury, whereas the remaining trial found PPIs to be better. Still, none of the studies found any difference in the rate of adverse events between the evaluated groups. A meta-analysis on the topic found that SUP proved more effective than placebo or no prophylaxis in preventing UGIB and lowering all-cause mortality, without raising the risk of pneumonia. However, the reliability of this finding was affected by “lack of trials with a low risk of bias, sparse data, heterogeneity among trials, and a concern regarding small trial bias”. It is also important to consider the potential adverse effects of the concomitant use of H2RAs with seizure prophylactic agents like phenytoin, as well as PPIs with clopidogrel, as these drugs affect the cytochrome P450 enzyme system and could lead to unpredictable interactions. Therefore, investigating the risk-benefit profiles of these agents is crucial, given their widespread use in everyday practice. It becomes even more relevant in the context of the availability of newer acid suppressive agents such as the recently approved novel potassium-competitive acid blocker, vonoprazan. Vonoprazan has been shown to be more efficacious than PPIs for acid suppression, with a comparable adverse effect profile, for treating H. pylori infection, erosive esophagitis, and gastroesophageal reflux disease. The recent study by Gao et al on the real-world efficacy of a vonoprazan-based regimen in the elderly offers insights. The study, despite being retrospective in nature, highlights the need for investigating acid suppressive therapies and H. pylori eradication in specific populations. Extending this to the neurosurgical patient cohort, we find that there is a paucity of data regarding acid suppressive regimens in different subgroups as well. For instance, age over 60 years has been identified as an independent risk factor for developing stress ulcers and UGIB in neurocritical patients. However, several known risk factors for stress ulcer development remain unexplored, particularly the role of preexisting H. 
pylori infection in the pathogenesis of stress-related mucosal damage. While one prospective study by Maury et al found that H. pylori infection was more common in ICU patients who experienced bleeding compared to controls, the exact nature of the relationship remains uncertain. Additionally, other established risk factors, such as prolonged non-steroidal anti-inflammatory drug therapy, mechanical ventilation, and corticosteroid use, need to be revisited within the context of specific subpopulations. While pharmacological interventions are key in preventing stress ulcers, non-pharmacological strategies play an equally important role in optimizing patient outcomes. Interventions such as nasogastric decompression, early enteral nutrition (EN), and appropriate patient positioning are vital in the management of neurocritical patients. Among these, nasogastric tube (NGT) decompression is a common practice in the critical care setting, helping to relieve gastrointestinal obstruction, manage postoperative ileus, and reduce the risk of aspiration. Although there is limited evidence, NGT decompression is also thought to potentially protect against stress ulcers by maintaining an optimal gastric pH and thus mucosal integrity. More importantly, draining gastric contents through an NGT is crucial for neurocritical patients, as increased intra-abdominal pressure has been shown to elevate intracranial pressure (ICP), making effective decompression essential for preventing further complications. Early EN is a well-established principle in postoperative care, essential for preserving gut integrity and supporting recovery. Despite its proven efficacy, intolerance to early enteral feed initiation occurs in neurocritical patients due to reduced mentation (low Glasgow Coma Scale score, sedation), and such patients are thus maintained on parenteral nutrition (PN) during the critical phases following central nervous system (CNS) insult. PN is not without shortcomings and is known to result in hyperglycemia, higher infection rates, and hepatic steatosis. In balancing the risks and benefits of these nutritional strategies, a combination approach seems to work better than either strategy alone. Several recent studies in the neurocritical population have demonstrated that EN + PN strategies are associated with fewer complications from stress ulcers and aspiration, as well as fewer days of hospitalization and better nutritional status. This raises an important question: If nasogastric decompression is crucial for preventing complications from gastric distension, and early enteral feeding is vital for preserving mucosal integrity and promoting recovery, how can these two seemingly conflicting strategies be balanced? The solution may lie in a combination approach, where an NGT is used for gastric decompression while a naso-intestinal tube is employed for enteral feeding. Recent studies have demonstrated the efficacy of naso-intestinal tubes in the neurocritical patient population, showing their ability to enable enteral feeding while reducing gastric retention, pulmonary aspiration, and the risk of pneumonia. However, further research is needed to better define optimal protocols and assess the long-term outcomes of such combined strategies. Ultimately, continued investigation will be essential to refine these approaches and ensure they provide the best balance between gastric decompression and nutritional support for neurocritical patients. High-quality primary evidence on gastroprotection in the neurocritical care population remains scarce. 
Most available data on the subject are over a decade old (Table ), with only one recent randomized controlled trial published in 2019. Our recommendations for the direction of future research are based on the available systematic reviews and meta-analyses on the topic. A well-designed, large randomized trial is urgently needed to address the issues discussed herein. Patient selection should encompass a broad spectrum of neurocritical illnesses to ensure comprehensive representation. Existing trials have predominantly focused on patients with TBI and ICH, while other conditions such as SCI, subarachnoid hemorrhage, and CNS tumors remain underrepresented. These underrepresented groups account for a substantial proportion of neuro ICU admissions and exhibit risk factors unique to their conditions. For example, corticosteroid use is typically limited to patients with CNS tumors and SCI, potentially resulting in a markedly different ulcer risk profile compared with TBI and ICH patients. When it comes to interventions requiring further study, significant gaps remain in understanding the safety profiles of commonly used gastroprotective agents. For example, the association between PPI use and the incidence of nosocomial pneumonia or CDI has not been definitively established. Furthermore, we found only one trial directly comparing PPIs to no prophylaxis, limiting conclusions to comparisons between PPIs and H2RAs. In addition to addressing these issues, future studies could adopt multi-arm designs to explore combination strategies, such as early EN, combined NGT and naso-intestinal tube interventions, and the efficacy and safety of emerging agents like vonoprazan. Meta-analyses on the topic have also highlighted inconsistencies in endpoint definitions. While stress ulceration and UGIB were the most commonly studied outcomes, the methods used to assess these endpoints varied significantly. Some studies employed endoscopic evaluation of stress ulcers and gastrointestinal bleeding, whereas others relied on indirect measures such as gastric pH, occult blood, or coffee-ground/frank bloody NGT aspirates. To enable meaningful comparisons and conclusions about interventions, standardization of endpoints is essential. One suggested standardized endpoint is clinically significant bleeding, defined as endoscopic evidence of lesions leading to substantial hemodynamic instability, the need for transfusion, or surgical intervention. Given the complex nature of gastroprotection in neurocritical patients, clinical practice must involve a balanced approach that judiciously integrates both pharmacological and non-pharmacological strategies. While PPIs have demonstrated greater efficacy than H2RAs in preventing stress ulcers and associated bleeding, the potential risks, such as pneumonia and CDI, are not yet definitively established and should not cause significant concern until further data become available. An NGT can effectively manage gastric decompression, which has the dual advantage of lowering ICP and preventing aspiration, while a naso-intestinal tube may facilitate early enteral feeding and promote recovery. Early initiation of enteral feeding, in particular, is supported by substantial evidence and should be prioritized, as it appears to be the most effective strategy for preserving gut integrity and minimizing gastrointestinal complications and prolonged hospital stays. Neurocritical patients represent a unique subgroup of critically ill individuals with distinct risks for stress ulcers and GIB.
Consequently, findings from other critically ill cohorts may not directly apply to this vulnerable population due to their differing risk profiles. Current clinical practice guidelines are based on studies that are outdated and lack comprehensive evidence in many areas. There is an urgent need for high-quality studies to evaluate gastroprotective strategies specific to neurocritical patients. As we advance toward more personalized care, tailored approaches are essential to address their unique clinical challenges, optimize outcomes, and minimize complications.
Coupling metabolomics and exome sequencing reveals graded effects of rare damaging heterozygous variants on gene function and human traits
549c213c-3d55-4ed5-9972-e3d7c3cd60e1
11735408
Biochemistry[mh]
A complex interplay of thousands of enzymes and transport proteins is involved in maintaining physiological levels of intermediates and end products of metabolism. Disturbances of their function can result in severe diseases, such as those caused by inborn errors of metabolism (IEMs), or predispose to common metabolic diseases such as type 2 diabetes or gout. While the study of rare, early-onset, autosomal recessive IEMs has uncovered many metabolite-related genes, such studies are limited by the very low number of persons homozygous for the causative variants. Conversely, genome-wide association studies (GWASs) in large populations have revealed thousands of common genetic variants associated with altered metabolite levels – , but these variants’ functional effects are often unknown, and their modest effect sizes limit their direct clinical impact. Gene-based aggregation testing of rare, putatively damaging variants in population studies can address this challenge. Previously, such studies have focused almost exclusively on the circulating metabolome – . We have shown recently that GWASs of paired plasma and urine metabolomes do not only reveal many more associations but also enable specific insights into renal metabolite handling . We therefore aimed to perform gene-based testing of the aggregate effect of rare variants on the levels of 1,294 plasma and 1,396 urine metabolites quantified from 4,737 participants in the German Chronic Kidney Disease (GCKD) study with whole-exome sequencing (WES) data to identify metabolism-related genes and to understand whether the underlying rare, almost exclusively heterozygous variants permit inferences complementary to the ones obtained from the study of IEMs. Patients with IEMs typically show severe symptoms that originate from accumulation or depletion of metabolites, while heterozygous carriers of the causative variants often show milder changes of the same or related metabolic phenotypes . We hypothesized that sex-specific analysis of metabolite-associated, X chromosomal genes as well as knowledge-based, computational modeling based on sex-specific organ-resolved whole-body models (WBMs ; ) of human metabolism can inform on whether heterozygous damaging variants capture the metabolic effects of their unobserved homozygous counterparts. WBMs enable the investigation of homozygous gene defects through deterministic in silico knockout modeling. The resulting virtual IEMs reflect observed IEMs – . We further hypothesized that metabolite-associated rare variants identified in the GCKD study would show associations with related human traits and diseases in very large population studies and that the genetic effects would be proportional to their effects on metabolite levels if the implicated metabolites are molecular readouts of disease-relevant processes. The large UK Biobank (UKB) with WES data and extensive health record linkage permits the systematic study of the aggregated and individual effects of rare, damaging, metabolite-associated variants on a wide variety of traits and diseases. Here, we set out to perform gene-based rare variant aggregation testing to discover genes associated with metabolite levels and to characterize their genetic architecture with respect to the identified variants and across plasma and urine. 
We validate identified genes and variants and the range of their effects through complementary genetic approaches, with a new computational method based on WBMs , and through proof-of-principle experimental studies, and identify traits and diseases for which these metabolites represent molecular readouts. As summarized in Fig. , rare, putatively damaging variants were identified in 16,525 genes based on WES data from 4,737 GCKD study participants (mean age of 60 years, 40% women; Supplementary Table ). Metabolites were determined by nontargeted mass spectrometry and covered a wide variety of superpathways (Metabolon HD4 platform; Supplementary Table ). Exome-wide burden tests for the association between each gene and the levels of each of 1,294 plasma and 1,396 urine metabolites (781 overlapping) were carried out using two complementary ‘masks’ that differed in the selection of qualifying variants (QVs; ) for gene-based aggregation. While the ‘LoF_mis’ mask contained a median of eight QVs per gene predicted to be either high-confidence loss-of-function (LoF) variants or deleterious missense or in-frame nonsynonymous variants, the ‘HI_mis’ mask contained a median of 16 QVs per gene predicted as high-impact consequence (transcript ablation or amplification, splice acceptor or donor, stop-gain, frameshift, start or stop lost) or as deleterious missense variants using additional prediction scores . Both masks assume a LoF mechanism but account for different genetic architectures. Discovery of 192 significant gene–metabolite associations We identified 192 significant gene–metabolite pairs across both plasma ( P value < 5.04 × 10 −9 ) and urine ( P value < 4.46 × 10 −9 ), where 43 associations were detected in both (192 + 43 associations overall; Fig. and Supplementary Table ). These involved 73 unique genes and 179 metabolites, with a comparable number of genes and metabolites identified in plasma and urine. There were 22 and 17 genes with significant associations exclusively in plasma and in urine, respectively. While the majority of associations was detected with both masks, the more inclusive ‘HI_mis’ mask yielded more mask-specific associations than the ‘LoF_mis’ mask (Fig. ). Amino acids and lipids were the dominating pathways among the associated metabolites (Supplementary Fig. ). The higher proportion of implicated lipids in plasma than in urine is consistent with the absence of glomerular filtration of many lipids (Fig. ). Associations detected in both plasma and urine generally affected the levels of the implicated metabolite in the same direction (Fig. ). Sensitivity analyses evaluating additional masks and methods for aggregation testing (LoF only, sequence kernel association test (SKAT) and SKAT- optimal unified test (SKAT-O)) as well as sex-stratified and kidney function-stratified analyses supported the robustness of the main findings ( , Extended Data Figs. – and Supplementary Tables and ). Previous independent studies of associations between sequencing-based rare variants and metabolite levels obtained using comparable technology have focused on plasma and serum , , , . Comparison of the 128 discovered gene–plasma metabolite associations in this study with previous studies , , , showed that 69% (88 of 128) were not reported previously, although 93% (82 of 88) of the new findings involved metabolites analyzed before (Supplementary Table ; detailed description in the and the ). 
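For readers who want to trace where the exome-wide significance thresholds quoted above come from, the short calculation below reproduces them from the numbers given in the Methods (16,525 tested genes and 600 or 679 principal components explaining more than 95% of the plasma or urine metabolite variance, respectively). It is only a worked restatement of the Bonferroni-style correction, not an additional analysis.

```python
# Bonferroni-style thresholds: 0.05 divided by the number of tested genes and by the
# number of principal components explaining >95% of the metabolite variance (Methods).
n_genes = 16525
for matrix, n_effective_metabolites in [("plasma", 600), ("urine", 679)]:
    threshold = 0.05 / n_genes / n_effective_metabolites
    print(f"{matrix}: {threshold:.2e}")
# plasma: 5.04e-09
# urine: 4.46e-09
```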
The 73 unique metabolite-associated genes were strongly overrepresented among genes known to be causative for IEMs (odds ratio = 10.6, P value = 1.9 × 10 −14 ; ), with 28 (38%) of them currently known to harbor causative mutations (Supplementary Table ). The QVs detected in our study of middle-aged and older adults were almost exclusively observed in the heterozygous state (Supplementary Data ). Detailed annotation of QVs in the two masks (Supplementary Table ) showed that 63 unique QVs in 15 genes and 73 unique QVs in 17 genes were listed in ClinVar as ‘pathogenic’ or ‘pathogenic or likely pathogenic’ for a corresponding monogenic disease. These observations support the notion that gene-based aggregation of rare, heterozygous, putatively damaging variants effectively identifies gene–metabolite relationships implicated in human diseases. Validation through independent, complementary approaches Independent replication of our findings is complicated by differences in QVs, metabolite quantification methods and different analytical choices across studies. We therefore validated our findings using four complementary approaches: first, the large UKB permitted analysis of the same rare QVs using the same analytical choices , as in our study for two overlapping metabolites, and showed very similar effect sizes for gene–metabolite associations (Fig. ). Second, the UKB proteomics data contain information on circulating levels of the encoded proteins of 17 genes implicated in our study. Burden tests aggregating protein-truncating and rare damaging variants revealed associations with lower levels of 15 of these proteins (in cis , P value < 1 × 10 −5 ; Fig. ) , potentially explained by nonsense-mediated decay. Third, comparison of our findings to those from a previous study of the plasma metabolome showed highly correlated effect sizes with those from our study, both on the variant level and the aggregated level (Spearman correlation coefficient > 0.8; Fig. and Supplementary Table ). Lastly, we performed a proof-of-concept experimental validation study for an implicated gene–metabolite relationship. The B 0 AT1 transporter, encoded by SLC6A19 , is responsible for the uptake of neutral amino acids across the apical membrane of intestinal and kidney epithelial cells . In addition to associations with the levels of the known substrates asparagine, histidine and tryptophan, we also detected associations with methionine sulfone, not yet reported as a substrate. Transport studies in CHO cells overexpressing human SLC6A19 and its co-chaperone collectrin (CLTRN) in comparison to the control indeed confirmed methionine sulfone to be a substrate of the transporter in vitro, in a similar concentration range as its known substrate isoleucine (Fig. and the ). Specificity was shown by complete inhibition of transport activity upon application of the SLC6A19 inhibitor cinromide (Fig. ). Together, these four complementary lines of evidence all support the validity of the detected associations. Prioritization and characteristics of driver variants We next performed a forward selection procedure to assess the contribution of individual QVs to their gene-based association signals . Plots that visualize the association P value based on the successive aggregation of the most influential QVs (Supplementary Data ) revealed noteworthy differences across genes and metabolites, with examples detailed in the . The inclusion of effectively neutral variants among the QVs may dilute their joint signal. 
We thus prioritized the variants with the strongest individual contributions that resulted in the lowest possible association P value when aggregated for burden testing as ‘driver variants’ . For each significant association signal, we identified at least two and up to 48 driver variants (median of 13; Supplementary Data and Supplementary Tables and ). The proteins encoded by the vast majority of identified genes are directly involved in the generation, turnover or transport of the associated metabolite(s). It is therefore a reasonable assumption that truly functional variants are those with the strongest individual contributions to the association signal with the implicated metabolite. Indeed, the minimum association P value based on only driver variants was often many orders of magnitude lower than the one obtained from all QVs, as exemplified by DPYD and plasma uracil (Supplementary Data ). As expected, the proportion of splice, stop-gain and frameshift variants was higher among driver QVs, whereas nondriver QVs contained a greater proportion of missense variants (Fisher’s exact test, P value = 1.3 × 10 −6 ; Extended Data Fig. ). The median effect of driver variants on metabolite levels increased from missense over start/stop lost, frameshift and stop-gain variants to variants predicted to affect splicing (Extended Data Fig. ). The median effect of drivers also increased with lower minor allele count and differed substantially from the one of nondrivers in each minor allele count bin (Extended Data Fig. ). Lastly, evaluation of the convergence of rare and common variant association signals showed that the associations of rare and common variants in the same region with a given metabolite were independent ( , Supplementary Table and Extended Data Fig. ). Heterozygous variants inform about dose–response effects The identification of known IEM-causing variants such as in CTH , PAH , SLC6A19 and SLC7A9 (Supplementary Table ) in the heterozygous state supports the notion that heterozygous QVs are functional alleles that lead to more extreme metabolic changes when present homozygously. For three genes with a homozygous QV present in more than one individual in our study, homozygous individuals tended to have more extreme metabolite levels than heterozygous ones (Extended Data Fig. ), supporting a dose–response effect. Moreover, we had previously confirmed experimentally that heterozygous sulfate-associated QVs in SLC26A1 detected by aggregate variant testing are indeed LoF alleles and that the encoded protein is an important player in human sulfate homeostasis . However, experimental studies of each of the 2,077 QVs and 73 genes detected here are infeasible, and IEMs are so rare that no homozygous person for a given gene may have been observed yet. We therefore used three orthogonal approaches: examination of hemizygosity, in silico knockout modeling and investigation of variants prioritized through allelic series, to evaluate whether the observed metabolite-associated heterozygous variants captured similar information about a gene’s function as might be derived from homozygous damaging variants in the respective gene. X chromosomal genes as a readout of variant homozygosity Genes in the non-pseudo-autosomal region of the X chromosome offer an opportunity to study differences between heterozygous women and effectively homozygous (that is, hemizygous) men. We therefore investigated sex differences for the two X chromosomal genes identified in our screen, TMLHE and RGN (Supplementary Table ). 
Indeed, male carriers of QVs in TMLHE showed clearly higher urine levels of N 6 , N 6 , N 6 -trimethyllysine, the substrate of the encoded enzyme trimethyllysine dioxygenase, than female carriers as well as markedly lower levels of its product hydroxy- N 6 , N 6 , N 6 -trimethyllysine, especially when focusing on driver variants (Fig. and Supplementary Table ). In plasma, male QV carriers showed 1.15 s.d. lower levels of plasma hydroxy- N 6 , N 6 , N 6 -trimethyllysine than noncarriers ( P value = 6 × 10 −44 ), whereas female QV carriers only showed 0.45 s.d. lower metabolite levels than noncarriers ( P value = 3 × 10 −4 ). A similar tendency was observed for RGN and urine levels of the unnamed metabolite X-23436. Levels were higher among both male and female carriers (Supplementary Table ), suggesting that X-23436 is a metabolite upstream of the reaction catalyzed by the encoded regucalcin. Data from the GTEx Project show no sex differences in gene expression across tissues. Hence, sex-differential effects on metabolite levels likely represent a dose–response effect resulting from heterozygosity versus hemizygosity of the involved QVs. Virtual IEMs mirror the effects of heterozygous variants We next investigated the implicated genes’ LoF by generating virtual IEMs for 24 genes that covered 60 gene–metabolite pairs via in silico knockout modeling ( and Extended Data Fig. ). We compared the maximal secretion flux of the implicated metabolite into blood and/or urine between the wild-type WBM and the gene-knockout WBM. Initially, the direction of the observed gene–metabolite associations was correctly predicted by virtual IEMs with an accuracy of 73.3% in the male WBM and 76.7% in the female WBM, which is significantly better than chance (Fisher’s exact test, P value = 3.3 × 10 −3 (male), P value = 1.5 × 10 −4 (female); Supplementary Table ). After model curation informed by the observed gene–metabolite associations, which included the addition of metabolites (for example, 8-methoxykynurenate) and pathways as well as alteration of constraints (for example, diet; details in the and Supplementary Table ), the number of modeled gene–metabolite associations increased to 67, and accuracy increased to 79.1% (male, P value = 2.1 × 10 −5 ) and 83.58% (female, P value = 2.9 × 10 −7 ). These findings underline the predictive nature of the virtual IEMs for the aggregated effects of heterozygous damaging variants and highlight opportunities to further improve WBMs by curation of the underlying knowledge base. Personalized WBMs capture observed metabolic changes Virtual IEMs as described above only allow for qualitative prediction. To additionally study an equivalent to observed effect sizes, we introduced a second modeling strategy (Extended Data Fig. ) as proof of principle, focusing on the gene KYNU . We successfully generated 569 microbiome-personalized WBMs and calculated the effect size of in silico KYNU knockout on metabolite excretion into urine against the natural variation induced by the personalized microbiomes (Supplementary Table ). Eighteen of 257 metabolites had a modeling P value < 0.05/257, implicating them as potential biomarkers of the corresponding IEM kynureninase deficiency (Supplementary Table ). The in silico effects of these 18 biomarkers, mostly belonging to tryptophan metabolism and the nicotinamide adenine dinucleotide (NAD) + de novo synthesis pathway, were significantly correlated with their observed counterparts (Supplementary Fig. ). 
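The virtual IEMs described above rest on standard constraint-based (flux balance) analysis: an exchange flux is maximized in the wild-type whole-body model and again after all reactions catalyzed by the gene of interest are disabled. The study's own pipeline used sex-specific, organ-resolved WBMs; the sketch below only illustrates the generic knockout logic with the cobrapy Python library, and the model file name and reaction and gene identifiers are placeholders rather than the actual WBM content.

```python
import cobra

# Load a constraint-based metabolic model (file name is a placeholder; the study used
# sex-specific, organ-resolved whole-body models rather than this generic example).
model = cobra.io.read_sbml_model("whole_body_model.xml")

# Objective: maximal secretion flux of the implicated metabolite into urine
# (hypothetical exchange-reaction identifier).
model.objective = "EX_xanthurenate_urine"
wild_type_flux = model.slim_optimize()

# Virtual IEM: knock out the gene, which disables the reactions it catalyzes,
# and re-maximize the same secretion flux; the context manager reverts the knockout.
with model:
    model.genes.get_by_id("KYNU").knock_out()
    knockout_flux = model.slim_optimize()

# Comparing the two fluxes yields the predicted direction of change, which is then
# checked against the direction of the observed gene-metabolite association.
print(f"wild type: {wild_type_flux:.3f}, knockout: {knockout_flux:.3f}")
```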
Whereas two of the three metabolites with particularly large effects in both in silico modeling and the GCKD study, xanthurenate and 3-hydroxykynurenine, are known biomarkers of kynureninase deficiency , 8-methoxykynurenate was not. We therefore measured absolute levels of these metabolites in urine samples from a homozygous patient with kynureninase deficiency and her parents and confirmed that, in addition to xanthurenate and 3-hydroxykynurenine, 8-methoxykynurenate also constituted a biomarker of this IEM (Fig. and Extended Data Fig. ), consistent with the association statistics from aggregate tests of heterozygous variants from the GCKD study. A similar observation was made with regard to the gene PAH (Fig. , Supplementary Fig. and ). Thus, in silico knockout modeling of two proof-of-principle examples faithfully captured metabolic changes observed for heterozygous variants detected in population studies and for the corresponding recessively inherited IEMs. Metabolites represent intermediate readouts of human traits Allelic series describe a dose–response relationship, in which increasingly deleterious mutations in a gene result in increasingly larger effects on a trait or a disease. We hypothesized that genetic effects on metabolite levels should manifest as allelic series if the metabolite represents a molecular readout of an underlying (patho-)physiological process. As proof of principle, we investigated plasma sulfate because of solid evidence for causal gene–metabolite relationships: first, QVs in SLC13A1 showed a significant aggregate effect on lower plasma sulfate levels ( P value = 3 × 10 −18 , lowest possible P value = 2 × 10 −25 ). The observed association is well supported by experimental studies establishing that the encoded Na + –sulfate cotransporter NaS1 (SLC13A1) reabsorbs filtered sulfate at the apical membrane of kidney tubular epithelial cells . Second, we had previously confirmed experimentally that plasma sulfate-associated QVs in SLC26A1 are LoF alleles that lead to reduced sulfate transport , consistent with the aggregate effect of driver variants in SLC26A1 reaching a P value of 2 × 10 −11 for association with plasma sulfate (Extended Data Fig. ). The encoded sulfate transporter SAT1 localizes to basolateral membranes of tubular epithelial cells and works in series with NaS1 to mediate transcellular sulfate reabsorption (Fig. ) , . Based on a growth retardation phenotype in Slc13a1 -knockout mice and an association between SLC13A1 and lower sitting height in the UKB ( P value = 3 × 10 −8 ; Supplementary Tables and ), we investigated relations of six functional driver QVs in SLC13A1 and SLC26A1 with anthropometric measurements in the UKB . Supplementary Table contains traits with which at least two QVs showed nominally significant associations ( P value < 0.05). The genetic effect sizes on plasma sulfate levels in the GCKD study and both sitting and standing heights in the UKB were correlated (Pearson correlation coefficients of 0.57 and 0.70, respectively; Fig. ). These observations support a causal relationship between transcellular sulfate reabsorption and human height and designate plasma sulfate as an intermediate readout. Additionally, we observed significantly lower standing height among carriers of driver variants in SLC13A1 and SLC26A1 than among noncarriers in a subsample of the GCKD study ( N = 3,239) with measured height. 
The aggregated effect size of driver variants in SLC13A1 was −0.54 (corresponding to −5.17 cm when height was not inverse normal transformed, P value = 1.6 × 10 −3 ; Supplementary Fig. ). For SLC26A1 , we obtained even a stronger effect size of −0.73 (corresponding to −6.68 cm, P value = 1.7 × 10 −6 ; Supplementary Fig. ). The first patient homozygous for a LoF stop-gain mutation in SLC13A1 , p.Arg12*, has just been described . Aside from sitting height >2 s.d. below the normal range, the patient featured multiple skeletal abnormalities. Experimental transport studies as well as the patient’s fractional sulfate excretion of almost 100% establish this variant as a complete LoF, resulting in renal sulfate wasting. In this study, we found that, compared with noncarriers of p.Arg12*, heterozygous carriers showed 0.95 s.d. lower plasma sulfate levels (GCKD, 22 carriers, P value = 9.9 × 10 −10 ) and 0.08 s.d. lower sitting height (UKB, 2,480 carriers, P value = 2.2 × 10 −7 ). Plasma sulfate measurements from heterozygous carriers therefore are indicative of more extreme phenotypic changes in homozygous carriers. Variants altering sulfate uptake and musculoskeletal traits Rare LoF variants in SLC13A1 and SLC26A1 have been linked to individual musculoskeletal phenotypes through IEMs and GWASs , – . We further investigated the association between the same six functional, sulfate-associated QVs in SLC13A1 and SLC26A1 and musculoskeletal disorders, fractures and injuries in the UKB, for which at least two carriers with and without disease were present . There were 116 nominally significant ( P value < 0.05) associations with clinical traits and diseases, 113 of which were associated with increased odds of disease (Fig. ). For instance, the odds of various fractures ranged up to 30.7 (closed fracture of the neck, P value = 2.1 × 10 −8 , NaS1 p.Trp48*; Supplementary Table ). While the increased odds support a relationship between LoF variants in sulfate transporters and predisposition to several musculoskeletal disorders, the power to detect decreased odds was limited because of the rareness of the QVs and many of the disorders. UKB participants who carried more than one copy of any of the six QVs were investigated more closely. The rare allele, resulting in the p.Arg272Cys substitution in NaS1, was observed in nine heterozygous carriers in the GCKD study and prioritized because of its location in a splice region, its high impact on plasma sulfate levels and its particularly large effect on human height (Fig. ). In the UKB, we found 294 heterozygous carriers of p.Arg272Cys, four persons who carried both p.Arg272Cys in NaS1 and p.Leu348Pro in SAT1 and a single person homozygous for p.Arg272Cys. Age- and sex-specific z scores for human height showed a clear dose–response effect (Fig. and the ). The stronger effects among the four individuals heterozygous for LoF variants in each of the two transcellular sulfate reabsorption proteins as compared with heterozygous carriers of p.Arg272Cys only support additive effects across the pathway for human growth. Carrier status for NaS1 p.Arg272Cys was associated with increased odds of several musculoskeletal diseases such as back pain and intervertebral disk disorders as well as fractures (Fig. ). Homozygous persons were also identified for NaS1 p.Arg12* and SAT1 p.Leu348Pro, with similar findings (Extended Data Fig. ). 
Together, these findings provide strong support that genetic variants that proxy lower transcellular sulfate reabsorption are associated with human height and several musculoskeletal traits and diseases. Prioritizing variants with strong effects in allelic series for subsequent investigation in larger studies, even if the biomarker association rests on only a few heterozygous alleles, can therefore be an effective strategy to gain insights into the impact of rare damaging variants on human health. Relation of metabolite-associated genes to clinical traits A query of associations between the identified 2,077 QVs and 73 genes with thousands of quantitative and binary health outcomes using data from ~450,000 UKB participants revealed multiple biologically plausible significant and suggestive associations for genes (Supplementary Table ) and QVs (Supplementary Table ) but also less-studied relationships. The genes SLC47A1 , SLC6A19 , SLC7A9 and SLC22A7 were associated with one or more measures of kidney function and encode transport proteins highly expressed in the kidney. Their localization at the apical versus basolateral membrane of tubular epithelial kidney cells corresponded to the matrix (urine versus plasma) in which they left corresponding metabolic fingerprints. This observation illustrates that rare genetic variants associated with clinical markers of organ function can leave specific signatures in organ-adjacent biofluids that reflect their roles in cellular exchange processes.
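As an illustration of the carrier-based disease look-ups described above (for example, the elevated odds of fractures among carriers of sulfate-lowering QVs), the minimal sketch below computes an odds ratio from a two-by-two carrier-by-outcome table with Fisher's exact test. The counts are invented for illustration, and the actual UKB analyses may have used different (for example, regression-based) models.

```python
from scipy.stats import fisher_exact

# Carrier-by-outcome table: rows = carriers / non-carriers of a qualifying variant,
# columns = affected / unaffected for a given diagnosis. Counts are purely illustrative.
table = [[6, 288],          # carriers: affected, unaffected
         [1500, 448000]]    # non-carriers: affected, unaffected
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.1f}, P = {p_value:.2g}")
```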
We performed a comprehensive screen of the aggregate effect of rare, putatively damaging variants on the levels of 1,294 plasma and 1,396 urine metabolites from paired specimens of 4,737 persons. The majority of the 192 identified gene–metabolite relationships have not been reported yet, and include plasma- and urine-exclusive associations that reflect organ function. The findings were validated through primary data analysis for metabolites available in the UKB, investigation of previously published summary statistics from sequencing-based genetic studies of the plasma metabolome, integration of orthogonal plasma proteomics data and proof-of-concept experimental studies that confirmed a new metabolite association with the transport protein encoded by SLC6A19. We show, via several genetic, computational and experimental approaches that the rare, almost exclusively heterozygous metabolite-associated variants in our study capture similar information about a gene's function as can be obtained from the study of rare IEMs but are observed much more frequently and permit insights into graded effects of impaired gene function. First, 38% of identified genes in our study are known to harbor causative mutations for autosomal recessively inherited IEMs that often exhibit concordant but more extreme changes in the implicated metabolite, as exemplified by elevated urine levels of cystine in cystinuria (MIM 220100, SLC7A9 ) or tryptophan in Hartnup disease (MIM 234500, SLC6A19 ). Second, men exhibited significantly larger effects of rare QVs in non-pseudo-autosomal X chromosomal genes on metabolite levels than women. This observation is consistent with male hemizygosity as an approximation of female homozygosity for a given variant and with the known greater penetrance and severity of X-linked disorders in men than in women. Third, in silico knockout in a virtual metabolic human, that is, full loss of gene function, was predictive for observed metabolic changes associated with variant heterozygosity. Predicted changes on metabolite levels upon in silico gene knockout were also reflected in absolute metabolite quantification of patients with IEM homozygous for a LoF mutation in the respective genes, KYNU and PAH.
Thus, deterministic, knowledge-based in silico modeling generated context for better biological interpretation also of heterozygous variants, while genetic screens of metabolite levels in population studies permit the identification of knowledge gaps and errors in WBMs. Our modeling pipeline for generating virtual IEMs, which we make publicly available to substantiate evidence from rare variant aggregation tests, will constitute a valuable resource in particular to scrutinize genes for which an IEM has yet to be observed. Fourth, the presence of different causal QVs affecting a given metabolic reaction or pathway enabled the investigation of allelic series. The resulting dose–response relationships proxy a range of target inhibition, which represents desirable information for drug development and is relevant because enzymes and transporters are attractive drug targets. Plasma sulfate-associated functional QVs in SLC13A1 and SLC26A1 showed a clear dose–response effect between the degree of genetically inferred impaired transcellular sulfate reabsorption and lower human height. This observation is biologically plausible, because defects in genes linked to sulfate biology often result in perturbed skeletal growth and development . In particular, constitutive knockouts of Slc13a1 and Slc26a1 in mice do not only cause hyposulfatemia and renal sulfate wasting , but also general growth retardation in Slc13a1 -knockout mice . Interestingly, the missense variant p.Thr185Met in SAT1 exhibited the largest effect on sulfate. We have previously shown experimentally a dominant negative mechanism of this variant , providing another mechanism of how heterozygous variants may promote insights into an effectively full loss of gene function. Moreover, our findings for the p.Arg272Cys variant in NaS1 show that even very few, heterozygous copies of a metabolite-prioritized QV can give rise to the detection of homozygous individuals and hitherto unreported disease associations in subsequent larger studies. These observations suggest that the importance of impaired transcellular epithelial sulfate transport for musculoskeletal diseases, fractures and injuries deserves additional study and should be further substantiated through conditional or mediation analyses if plasma sulfate levels become available in the UKB. Potential limitations of our study include a focus on participants of European ancestry with moderately reduced kidney function, potential violations of assumptions underlying burden tests, in silico prediction of QV pathogenicity and of whole-body modeling and the use of semi-quantitative rather than absolute metabolite levels. Arguments mitigating each of these concerns are detailed in the . In conclusion, exome-wide population studies of rare, putative LoF variants can reveal potentially causal relationships with metabolites and highlight metabolic biomarkers informative of the degree of impaired gene function that can translate into graded associations with human traits. Study design and participants The GCKD study is an ongoing prospective cohort study of 5,217 participants with moderate chronic kidney disease who were enrolled from 2010 to 2012 and are under regular nephrologist care. Inclusion criteria were an age between 18 and 74 years and an eGFR between 30 and 60 ml min −1 per 1.73 m 2 or an eGFR >60 ml min −1 per 1.73 m 2 with a UACR >300 mg per g or with a urinary protein-to-creatinine ratio >500 mg per g . 
Biomaterials, including blood and urine, were collected at the baseline visit, processed and shipped frozen to a central biobank for storage at −80 °C . Details on the study design and participant characteristics have been published , . The GCKD study was registered in the national registry for clinical studies (DRKS 00003971) and approved by local ethics committees of the participating institutions . All participants provided written informed consent. Whole-exome sequencing and quality control Genomic DNA was extracted from whole blood and underwent paired-end 100-bp WES at Human Longevity, using the IDT xGen version 1 capture kit on the Illumina NovaSeq 6000 platform. More than 97% of consensus coding sequence (CCDS) release 22 (ref. ) had at least 10-fold coverage. The average coverage of the CCDS was 141-fold read depth. Exomes were processed from their unaligned FASTQ state in a custom-built cloud compute platform using the Illumina DRAGEN Bio-IT Platform Germline Pipeline version 3.0.7 at AstraZeneca’s Centre for Genomics Research, including alignment of reads to the GRCh38 reference genome ( https://ftp.ncbi.nlm.nih.gov/genomes/all/GCA/000/001/405/GCA_000001405.15_GRCh38/ ) and variant calling . Sample-level quality control included removal of samples from participants who withdrew consent, duplicated samples, those with an estimated VerifyBamID contamination level >4% , samples with inconsistency between reported and genetically predicted sex, samples not having chromosomes XX or XY, samples having <94.5% of CCDS release 22 bases covered with ≥10-fold coverage , related samples with kinship >0.884 (KING, kinship version 2.2.3) and samples with a missing call rate >0.03. Furthermore, only samples with available high-quality DNA microarray genotype data and without outlying values (>8 s.d.) along any of the first ten genetic principle components from a principal component analysis were kept, for a final sample size of 4,779 samples. Variant-level quality control was performed similar to that in ref. , excluding variants with coverage <10, heterozygous variants with a one-sided binomial exact test P value <1 × 10 −6 for Hardy–Weinberg equilibrium, variants with a genotype quality score <30, single-nucleotide variants with a Fisher’s strand bias score (FS) >60 and insertions and deletions with an FS >200, variants with a mapping quality score <40, those with a quality score <30, variants with a read position rank-sum score <−2, those with a mapping quality rank-sum score <−8, variants that did not pass the DRAGEN calling algorithm filters, heterozygous genotype called variants based on an alternative allele read ratio <0.2 or >0.8 and variants with a missing call rate >10% among all remaining samples. This resulted in 1,038,062 variants across the autosomes and the X chromosome. Variant and gene annotation Variants from WES were annotated using the Variant Effect Predictor (VEP) version 101 (ref. ) with standard settings, including the canonical transcript, gene symbol and variant frequencies from gnomAD version 2.1 ( https://gnomad.broadinstitute.org/ ). VEP plugins were used to add the REVEL (version 2020-5) and CADD (version 3.0) scores and to downgrade LoF variants using LOFTEE (version 2020-8) . Furthermore, we added multiple in silico prediction scores using dbNSFP version 4.1a . For interpretation, genes were annotated for their potential function as enzymes using UniProt ( https://www.uniprot.org/ ) and as transporters using data from Gyimesi and Hediger . 
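To make the variant-level quality-control thresholds listed above more concrete, the sketch below applies a subset of them to a tabular representation of variant calls. The column names are hypothetical, and the real pipeline operated on DRAGEN output within a custom cloud platform, so this is an illustration of the filtering logic rather than the actual implementation.

```python
import pandas as pd

def passes_variant_qc(row: pd.Series) -> bool:
    """Apply a subset of the variant-level thresholds described above to one variant call.
    Column names are hypothetical placeholders for the corresponding VCF/DRAGEN fields."""
    if row["depth"] < 10 or row["genotype_quality"] < 30 or row["mapping_quality"] < 40:
        return False
    if row["quality"] < 30 or row["missing_call_rate"] > 0.10:
        return False
    # The strand-bias (FS) cut-off differs for single-nucleotide variants and indels.
    fs_limit = 60 if row["is_snv"] else 200
    if row["fisher_strand"] > fs_limit:
        return False
    # Heterozygous calls additionally require a balanced alternative-allele read ratio.
    if row["genotype"] == "het" and not (0.2 <= row["alt_allele_ratio"] <= 0.8):
        return False
    return True

# Example usage on a DataFrame of variant calls:
# variants = variants[variants.apply(passes_variant_qc, axis=1)]
```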
Metabolite identification and quantification Metabolite levels were quantified from stored plasma and spot urine as published by Schlosser et al. . In brief, nontargeted mass spectrometry analysis was conducted at Metabolon. Metabolites were identified by automated comparison of the ion features in the experimental sample to a reference library of chemical standards. Known metabolites reported in this study were identified with the highest confidence level of identification of the Metabolomics Standards Initiative , , unless marked with an asterisk. Unnamed biochemicals of unknown structural identity were identified by virtue of their recurrent nature. For peak quantification, the area under the curve was used, followed by normalization to account for interday instrument variation. Data cleaning of quantified metabolites Data cleaning, quality control, filtering and normalization of quantified metabolites in plasma and urine in the GCKD study were performed using an in-house pipeline . Samples and metabolites were evaluated for duplicates; missing and outlying values and metabolites with low variance were excluded. Levels of urine metabolites were normalized using the probabilistic quotient derived from 309 endogenous metabolites with <1% missing values to account for differences in urine dilution. After removing metabolites with <300 individuals with WES data, the remaining 1,294 plasma and 1,396 urine metabolites (Supplementary Table ) were inverse normal transformed before gene-based aggregation testing. Therefore, effect sizes based on effects of aggregated rare variants on the semi-quantitative metabolite measurements have 1 s.d. as a unit. Additional variables Serum and urine creatinine were measured using an IDMS-traceable enzymatic assay (Creatinine Plus, Roche). Serum and urine albumin levels were measured using the Tina-quant assay (Roche–Hitachi Diagnostics). GFR was estimated with the CKD-EPI formula from serum creatinine. UACR was calculated using urinary albumin and creatinine measurements. Full information on WES data, covariates and metabolites was available for 4,713 persons regarding plasma metabolites and for 4,619 persons regarding urine metabolites. Genetic principal components were derived based on principal component analysis on the basis of genotype data using flashpca . Rare variant aggregation testing on metabolite levels We performed burden tests to combine the effects of rare, putatively damaging variants within a gene on metabolite levels assuming a LoF mechanism that results in concordant effect directions on metabolite levels . The selection of high-quality QVs into masks based on their frequency and annotated properties is a state-of-the-art approach in variant aggregation studies . Annotations from VEP version 101 (ref. ) were used to select QVs within each gene for aggregation in burden tests. Because genetic architectures of damaging variants vary across genes, two complementary masks for the selection of QVs were defined. Both masks were restricted to contain only rare variants in canonical transcripts with MAF <1%. All variants that were predicted to be either high-confidence LoF variants or missense variants with a MetaSVM score >0 or in-frame nonsynonymous variants with a fathmm-XF-coding score >0.5 were aggregated into the first mask, termed LoF_mis. 
The second mask, termed HI_mis, contained all variants that were predicted either to have a high-impact consequence defined by VEP (transcript ablation, splice acceptor variant, splice donor variant, stop-gain variant, frameshift variant, start/stop lost variant, and transcript amplification) or to be missense variants with a REVEL score >0.5, a CADD PHRED score >20 or an M-CAP score >0.025. Only genes with an HGNC symbol that were not read-throughs and that contained more than three QVs in at least one of the masks were kept for testing, resulting in 16,525 analyzed genes. Burden tests were carried out as implemented in the seqMeta R package version 1.6.7 (ref. ), adjusting for age, sex, ln(eGFR) and the first three genetic principal components as well as serum albumin for plasma metabolites and ln(UACR) for urinary metabolites, respectively . Genotypes were coded as the number of copies of the rare allele (0, 1, 2) on the autosomes and also on the X chromosome for women. For men, genotypes in the non-pseudo-autosomal region of the X chromosome were coded as (0, 2). Statistical significance was defined as nominal significance corrected for the number of tested genes and principal components that explained more than 95% of the metabolites’ variance (0.05/16,525/600 = 5.04 × 10 −9 in plasma, 0.05/16,525/679 = 4.46 × 10 −9 in urine). For significant gene–metabolite associations, single-variant association tests between each QV in the respective mask and the corresponding metabolite levels were performed under additive modeling, adjusting for the same covariates using the seqMeta R package version 1.6.7 (ref. ). Sensitivity analyses that evaluated all significant gene–metabolite pairs with regard to additional gene-based tests as well as across strata of sex and kidney function are summarized in the and Supplementary Tables and . Assessment of QV contributions and driver variants The investigation of the genetic architecture underlying gene–metabolite associations and the prioritization of QVs according to their contribution to the gene-based association signal were performed using the forward selection procedure from Bomba et al. . First, for each QV v , the P value P v is calculated by performing the burden test aggregating all QVs other than v . Second, for each QV v , the difference Δ v between P v and the total P value of the burden test including all QVs is calculated. Subsequently, QVs are ranked by the magnitude of Δ v . QVs not contributing to the gene signal or even having an opposite effect can provide a negative Δ v . Finally, burden tests are performed by adding the ranked QVs one after the other until the lowest P value is reached, starting with the greatest Δ v . This identified a set of QVs that contained only variants that contributed most to the gene–metabolite association signal (that is, led to a stronger association signal) and did not contain variants that introduced noise (that is, neutral variants or those with a small or even opposite effect on metabolite levels). The resulting set of selected variants that led to the lowest possible association P value was designated ‘driver variants’ for the respective gene–metabolite association. Driver variants within a gene might differ for different associated metabolites. Relation of QVs in SLC13A1 and SLC26A1 to musculoskeletal traits WES and biomedical data of the UKB were used to investigate allelic series of functional QVs in SLC13A1 and SLC26A1 with hypothesized related clinical traits and diseases. 
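Before turning to the UKB analyses of these variants, the following simplified R sketch mimics the gene-based burden aggregation and the Bomba et al. forward selection of driver variants described above. It uses an ordinary lm() fit as a stand-in for the seqMeta implementation; geno, y and covars are hypothetical inputs (a dosage matrix of one gene's QVs, the transformed metabolite and the adjustment covariates), so this is a sketch of the logic rather than the published pipeline.

```r
# Burden test stand-in: regress the metabolite on the summed rare-allele dosage
# of a gene's QVs plus covariates, and return the burden P value.
burden_p <- function(geno, y, covars) {
  dat <- data.frame(y = y, burden = rowSums(geno), covars)
  fit <- lm(y ~ ., data = dat)
  summary(fit)$coefficients["burden", "Pr(>|t|)"]
}

# Forward selection of driver variants: rank QVs by how much leaving them out
# weakens the signal, then keep the smallest prefix that minimizes the P value.
select_drivers <- function(geno, y, covars) {
  p_all <- burden_p(geno, y, covars)
  delta <- sapply(seq_len(ncol(geno)), function(v) {
    burden_p(geno[, -v, drop = FALSE], y, covars) - p_all   # leave-one-out difference
  })
  order_v <- order(delta, decreasing = TRUE)                # greatest contribution first
  p_seq <- sapply(seq_along(order_v), function(k) {
    burden_p(geno[, order_v[1:k], drop = FALSE], y, covars)
  })
  order_v[1:which.min(p_seq)]                               # indices of the driver QVs
}
```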
We focused on SLC13A1 driver variants with experimental validation or a likely severe consequence (stop-gain, splicing) to select truly functional QVs. Among these, the stop-gain variant encoding p.Arg12*, for which a complete LoF has been experimentally validated, the stop-gain substitution p.Trp48*, for which associations with decreased serum sulfate levels and skeletal phenotypes have been reported, and the missense variant encoding p.Arg272Cys, located in a splice region, were available in the UKB. For SLC26A1, we selected driver QVs for which reduced sulfate transport activity had previously been shown, of which p.Leu384Pro, p.Ser358Leu and p.Thr185Met were available in the UKB. All six QVs passed the ‘90pct10dp’ QC filter, which requires that at least 90% of all genotypes for a given variant, independent of variant allele zygosity, have a read depth of at least 10 ( https://biobank.ndph.ox.ac.uk/ukb/ukb/docs/UKB_WES_AnalysisBestPractices.pdf ). Analyses were performed on the UKB Research Analysis Platform. Participants of all ancestries were included in the analysis, excluding strongly related individuals, defined as those excluded from the kinship inference process and those with ten or more third-degree relatives. After individual-level filtering, 468,292 individuals remained for analyses. Of these, ten participants were homozygous for one of the six QVs and 7,280 persons were heterozygous for at least one of the QVs. For these homozygous or heterozygous persons, we determined age- and sex-specific z scores of their quantitative anthropometric measurements, enabling interpretation of their measurements compared with noncarriers of the same age and sex. Age- and sex-specific distributions were inverse normal transformed before calculating z scores. The association of each of the six functional QVs with medical diagnoses defined by International Classification of Diseases version 10 (ICD-10) codes based on UKB field 41202 (primary or main diagnosis codes across hospital inpatient records) was investigated. We selected musculoskeletal diseases (ICD-10 codes starting with ‘M’) and fractures and injuries (ICD-10 codes starting with ‘S’ and containing ‘fracture’, ‘dislocation’ or ‘sprain’ terms). To avoid unreliable estimates, traits were restricted to those with at least two rare variant carriers among both individuals with and without disease. The association was examined using Fisher's exact test under dominant modeling and Firth regression under additive modeling (‘brglm2’ R package). We included sex, age at recruitment, sex × age and the first 20 genetic principal components (UKB field 22009) as covariates in the regression model. The association with quantitative anthropometric traits was assessed after inverse normal transformation via linear regression, additive genotype modeling and adjustment for the same covariates. Gene-based tests for metabolite associations in the UK Biobank We performed gene-based tests for significantly associated metabolites available in the UKB to validate our findings, using the same analysis settings as in our study. Because metabolite levels in the UKB were quantified with Nightingale Health's metabolic biomarker platform, which focuses on lipids, only two (histidine and phenylalanine) of the 122 significantly associated plasma metabolites were available. Histidine and phenylalanine values (UKB data fields 23463 and 23468) were inverse normal transformed.
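Both the anthropometric z scores and the metabolite values above rely on rank-based inverse normal transformation applied within strata or across the sample. A minimal R sketch follows; the 0.5 rank offset is one common convention (the exact offset used is not stated in the text), and value, age_bin and sex are hypothetical input vectors.

```r
# Rank-based inverse normal transformation of a numeric vector.
inverse_normal <- function(x) {
  qnorm((rank(x, na.last = "keep") - 0.5) / sum(!is.na(x)))
}

# Transform within each age/sex stratum, so a carrier's value can be read off
# as a z score relative to participants of the same age and sex.
z_by_stratum <- function(value, age_bin, sex) {
  ave(value, age_bin, sex, FUN = inverse_normal)
}
```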
Sample and variant QC was performed, and covariates were included, as described in the previous paragraph. A total of 260,000 individuals were available for analysis. Association analysis for the two identified gene–metabolite pairs, histidine and HAL as well as phenylalanine and PAH, was performed based on burden tests as implemented in REGENIE version 3.3 in two steps using the HI_mis mask, selecting only QVs that were present in the GCKD study to ensure reproducibility of rare variant effects between the studies. Setup of the whole-body model and mapping The sex-specific and organ-resolved WBM covers 13,543 unique metabolic reactions and 4,140 unique metabolites based on the generic genome-scale reconstruction of human metabolism, Recon3D, and adequate physiological and coupling constraints. Of all observed significant gene–metabolite pairs from the GCKD study, 51 genes and 69 metabolites could be mapped onto Recon3D. For 36 of 51 genes, their associated metabolites could be mapped, resulting in 69 unique gene–metabolite pairs. To investigate perturbations in gene $G$, we first identified all reactions $R_G = \{r_{G_1}, \ldots, r_{G_n}\}$ of the corresponding encoded enzymes or transporters in the WBM. We included in the generation of virtual IEMs those genes (27 of 36) that were exclusively causal for a non-empty set of reactions (that is, for a gene $G$ associated with reactions $R_G = \{r_{G_1}, \ldots, r_{G_n}\}$, no other gene $H$ was associated with any reaction of $R_G$) and whose metabolites had urinary excretion reactions, leading to the exclusion of SLC22A7 and SULT2A1. In silico knockout modeling via linear programming Knockout simulations were based on maximizing the flux of the excretion or demand reaction of the metabolite of interest $M$ under different conditions in a steady-state setting ($Sv = 0$), where $S$ is the stoichiometric matrix (rows, metabolites; columns, reactions) and $v$ is the flux vector through each reaction, adhering to specific constraints ($v_l \le v \le v_u$):

(1) $\max_v \; c^T v$, subject to $Sv = 0$, $v_l \le v \le v_u$.

For simulating a wild-type model for gene $G$, we solved the linear programming (LP) problem stated in equation (1), choosing the linear objective as the sum of all corresponding fluxes of reactions in $R_G$:

(2) $S_G := \max \sum_{k=1}^{n} v_{G_k}$, subject to $Sv = 0$, $v_l \le v \le v_u$.

First, we checked whether $S_G > 10^{-6}$, a criterion implemented in the function checkIEM_WBM of the PSCM toolbox for deciding whether the corresponding reactions could carry any flux. All reactions except the TMLHE-associated reactions passed this criterion. Next, we maximized the flux of two key reactions: the urine excretion reaction (for example, EX_M[u]) and the created unbounded demand reaction (for example, DM_M[bc]), designed to reflect accumulation in the blood compartment. To do so, we first removed the upper bound of the urine excretion reaction and then maximized the corresponding fluxes of metabolite $M$ as the LP problem stated in equation (1) under the additional constraint that $\sum_{k=1}^{n} v_{G_k} = S_G$, providing the maximal urine excretion and the maximal flux into blood given the constraint setting.
Finally, to simulate the complete LoF, we blocked all reactions in all organs catalyzed by gene $G$ by setting $v_{G_1} = \ldots = v_{G_n} = 0$. We derived maximum fluxes as in the wild-type model. Subsequently, we tested whether the knockout resulted in an increase, a decrease or no change in EX_M[u] and DM_M[bc] for each mapped gene–metabolite pair that was significant in the GCKD cohort. From the initial 36 genes mapped onto Recon3D, 24 genes and their mapped metabolites fulfilled all criteria (exclusively causal, reactions of the genes carry flux, urinary excretion reaction present), leading to 60 modeled gene–metabolite pairs. After curation of the male and female models, 26 genes (TMLHE and KYAT1 added) and 67 gene–metabolite pairs could be computed. LP simulations were carried out in Windows 10 using MATLAB 2021a (MathWorks) as the simulation environment, ILOG CPLEX version 12.9 (IBM) as the LP solver, the COBRA Toolbox version 3.4 (ref. ) and the PSCM toolbox. Microbiome personalization of whole-body models Microbiome-personalized WBMs were generated by creating community models based on the genome-scale reconstructions of microbes in the AGORA1 resource. These models have been shown to accurately reflect aspects of the fecal host metabolome. Briefly, from microbe identification and relative abundance data of a metagenomic sample, genome-scale reconstructions of the identified microbes are joined together and connected via a lumen compartment, where they can exchange metabolites to form a microbial community. Each microbial community model is then integrated into the WBM by connecting the microbiota lumen compartment to the large intestinal lumen of the WBM. Microbial community models (n = 616) were based on publicly available metagenomics data from Yachida et al. and then embedded into the male WBM to form 616 personalized WBMs. In silico knockout modeling using quadratic programming While maintaining the same conditions as outlined in equation (1), rather than maximizing a linear objective, we minimized a quadratic objective for each personalized WBM:

(3) $\min_v \; \tfrac{1}{2} v^T Q v$, subject to $Sv = 0$, $v_l \le v \le v_u$.

Here, $Q$ is a diagonal matrix with $10^{-6}$ on its diagonal, a value recommended in the COBRA Toolbox. Because of convexity, equation (3) allows for the calculation of a unique flux distribution. For each solution $v^*$, we obtained the corresponding urine excretion reactions of the measured and mapped metabolites. For knockout simulations, the associated reactions of gene $G$ were set to zero ($v_{G_1} = \ldots = v_{G_n} = 0$). Then, equation (3) was solved if possible. An optimal quadratic programming (QP) solution could be computed for 582 wild-type models, 590 KYNU-knockout WBMs and 588 PAH-knockout WBMs, which led to 569 paired QP–KYNU solutions and 567 paired QP–PAH solutions. We analyzed urine secretion fluxes for 257 metabolites covered in the GCKD urine metabolome data and 272 metabolites covered in the GCKD plasma metabolome data that had non-zero flux values. For KYNU, the urine compartment was analyzed, as biomarker quantification for the corresponding IEM is done in urine. Analogously, for PAH, the blood metabolome data were analyzed as the clinically relevant compartment.
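To make the knockout comparison concrete, the toy example below maximizes the urine excretion flux of a metabolite in a made-up two-metabolite, four-reaction network at steady state, with and without the reaction of a "knocked-out" gene, mirroring the LP setup of equations (1)–(2). It uses the lpSolve R package on invented numbers rather than the Recon3D-based WBM and the COBRA/PSCM toolboxes, and fluxes are restricted to be non-negative for simplicity.

```r
# Toy flux-balance illustration (not the actual WBM pipeline).
library(lpSolve)

# rows: metabolites A, B; columns: reactions
#   R1: uptake -> A, R2: A -> B (gene-encoded enzyme), R3: B -> urine, R4: A -> out
S <- rbind(A = c(1, -1,  0, -1),
           B = c(0,  1, -1,  0))

max_urine_flux <- function(upper_bounds) {
  n <- ncol(S)
  const.mat <- rbind(S, diag(n))                   # Sv = 0 and v <= upper bounds
  const.dir <- c(rep("=", nrow(S)), rep("<=", n))  # fluxes are >= 0 by default in lpSolve
  const.rhs <- c(rep(0, nrow(S)), upper_bounds)
  obj <- c(0, 0, 1, 0)                             # maximize the urine excretion flux (R3)
  lp("max", obj, const.mat, const.dir, const.rhs)$objval
}

wild_type <- max_urine_flux(c(1, 10, 10, 10))      # uptake limited to 1 unit
knockout  <- max_urine_flux(c(1,  0, 10, 10))      # block the gene's reaction R2
c(wild_type = wild_type, knockout = knockout)      # urine flux drops from 1 to 0
```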
The QP simulations were carried out using the high-performance computing facility, called the Brain-Cluster, of the University of Greifswald, employing MATLAB 2019b (MathWorks), ILOG CPLEX version 12.10 (IBM) as the quadratic programming solver and the COBRA Toolbox version 3.4 (ref. ). Statistical analysis of the in silico simulation results The Fisher–Freeman–Halton test was used to determine significance when comparing the in vivo and in silico signs from LP modeling. Statistical analysis of the QP solutions was conducted based on the paired wild-type and knockout fluxes via fixed-effect linear regression for panel data. We used ln(urine secretion flux) as the response variable, the knockout status as the sole predictor (wild type versus knockout) and the personalized microbiome as a fixed effect. Significance thresholds were set to 0.05/257 (KYNU) and 0.05/272 (PAH). Importantly, the entire variance in the regression models had two sources: (1) the knockout and (2) the microbiome personalization. Significance testing of the in silico regression coefficient of the knockout variable therefore delivers a test of whether the knockout explains a substantial amount of variance in comparison to the variance induced by randomly sampled microbiome communities. The in silico regression coefficients were then correlated with the observed burden-derived regression coefficients of gene–metabolite associations from the GCKD study, and significance was determined using the standard test for Pearson correlations. Experiments on transport activity of SLC6A19 Generation of cells Human SLC6A19 (NM_001003841.3 → NP_001003841.1) and human CLTRN (TMEM27) (NM_020665.6 → NP_065716.1) cDNA was synthesized at Life Technologies GeneArt and cloned into a T-REx inducible expression vector. Both vectors were transfected into CHO T-REx cells and selected with neomycin and hygromycin. Mock cells were made by transfecting only the TMEM27 vector and selecting with hygromycin. Stable pools were then selected by measuring doxycycline-inducible uptake of neutral amino acids (for example, isoleucine), assessed as changes in membrane potential using the FLIPR Tetra system. The selected stable cell pools were then serially diluted to generate single-cell clones, which were subsequently selected based on function using the FLIPR assay and on hSLC6A19 and hTMEM27 expression using qPCR. FLIPR membrane potential assay CHO T-REx cells stably expressing doxycycline-inducible hSLC6A19 and hTMEM27 were seeded in a 384-well plate and incubated overnight with 1 µg ml−1 doxycycline.
Data were analyzed and represented in two ways: (1) for data comparison with the mock cell line, transport activity was presented as fold over non-substrate-driven signal with the formula (fluorescence signal − median of fluorescence signal with no substrate)/(median of fluorescence signal with no substrate); and (2) for data comparison with cinromide, transport activity was presented as a percent of maximum substrate-driven fluorescence signal with the formula 100 × (fluorescence signal − median of fluorescence signal with no substrate)/(median of fluorescence signal with substrate).
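A minimal R sketch of these two transport-activity summaries follows; signal, baseline and max_substrate are hypothetical vectors of FMP-Blue fluorescence readings (test wells, no-substrate wells and with-substrate wells, respectively).

```r
# Fold over the non-substrate-driven signal, for comparison with the mock cell line.
fold_over_baseline <- function(signal, baseline) {
  (signal - median(baseline)) / median(baseline)
}

# Percent of the maximum substrate-driven signal, for comparison with cinromide.
percent_of_max <- function(signal, baseline, max_substrate) {
  100 * (signal - median(baseline)) / median(max_substrate)
}
```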
Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Any methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at 10.1038/s41588-024-01965-7. Supplementary Information Supplementary Methods, Results, Discussion, Note and Figs. 1–3 Reporting Summary Peer Review File Supplementary Tables 1–18 Supplementary Data 1 Plasma and urine metabolite levels among carriers and noncarriers of QVs in significantly associated genes. Supplementary Data 2 Contribution of individual QVs to their gene-based association signal with plasma and urine metabolite levels.
When
42a9bdfd-48d5-45ff-a04a-ce2906b272a5
11863445
Forensic Medicine[mh]
Escherichia coli (E. coli) is one of the most prevalent gram-negative bacterial species. The following three broad categories of E. coli strains are of biological significance to mammals: commensal, intestinal pathogenic (InPEC), and extraintestinal pathogenic (ExPEC) strains. Although E. coli is a benign commensal colonizing the mammalian intestine, some strains or pathotypes can cause a variety of intestinal and diarrheal disorders. For example, at least the following six pathotypes have been described: enterohemorrhagic, enteropathogenic, enterotoxigenic, enteroaggregative, diffusely adherent, and enteroinvasive E. coli. Moreover, ExPEC can cause diseases such as urinary tract infections, bacteremia, septicemia, and meningitis. It is unclear how E. coli genetic diversity, virulence, and antimicrobial resistance affect biodiversity and wild animal conservation. Wild animals may be exposed to antimicrobial compounds and antimicrobial-resistant bacteria through interaction with anthropogenic sources such as human waste (garbage and sewage) and polluted waterways, livestock activities, or predation on affected prey, including livestock carcasses. Giraffes (Giraffa camelopardalis) are the tallest living animals and are kept in many zoos worldwide. Despite the keen interest in keeping captive giraffes healthy, health management of the giraffe presents a significant challenge. Although giraffes are routinely bred in zoos, they continue to present problems, particularly with feeding, because of the high risk of maternal rejection and death among both mother-reared and hand-reared calves. Although success rates have increased over time, intensive care therapy of compromised calves remains underdocumented. There are still no definitive feeding standards, expected weight gains, or recommendations for veterinary assistance. In addition, little research has been conducted on diseases affecting giraffes, and the available studies mainly concern the hooves and musculoskeletal system. There are, moreover, few reports of E. coli disease in young giraffes. ExPEC infections are a serious threat to public health worldwide. Urinary tract infections, severe newborn meningitis, major intra-abdominal infections, and, less frequently, pneumonia, intravascular device infections, osteomyelitis, soft tissue infections, or bacteremia are the most troublesome illnesses. Bacteremia can result in sepsis, which is defined as life-threatening organ dysfunction caused by an unregulated immune response to infection. In this study, we describe the case of a giraffe that developed septicemia after an umbilical cord infection caused by E. coli. This case study may serve as a valuable reference and caution for zoo veterinarians. Clinical history The mother of a female giraffe calf died of severe trauma approximately 5 h after delivery; hence, the juvenile giraffe could not receive colostrum and had to be artificially fed milk powder (Holstein milk + 10% colostrum). The juvenile giraffe was able to stand on its own 3 days after birth and was in good condition. However, on the eighth day after birth, the juvenile giraffe began to show clinical signs of loss of appetite, slow walking, and depression. Lactasin (Lactaid®, Johnson & Johnson Inc., Guelph, Canada; 3 caplets given with daily food) was administered orally twice a day for 4 days during the course of the disease, and the treatment was ineffective.
On the 12th day after birth, the juvenile giraffe showed anorexia, tarsal joint swelling of the right hind limb, claudication, unwillingness to move, and a small amount of dirty yellow loose stool around the anus; it eventually became recumbent and died on the 14th day after birth. Necropsy A postmortem examination was performed within 2 h of the animal's death. On gross examination, the umbilicus was dark red and swollen (Fig. A), and a small amount of dirty yellow sticky feces was present on the perianal coat. Serofibrinous arthritis and periarticular serous necrotizing inflammation: the hock joint of the hind limb was swollen, the adjacent subcutaneous tissue contained light yellow gelatinous material due to inflammatory edema, and the local skin was adherent to the subcutaneous tissue and muscle (Fig. B). A cystic necrotic focus had formed at the adhesion site, with a red inflammatory reaction zone at the margin and yellow necrotic tissue in the central area. A large amount of pale yellow translucent inflammatory fluid and yellow flocculent fibrinous exudate had accumulated in the joint cavities of the wrist, hock, and hip joints (Fig. C). Serous omphalitis with severe gelatinous swelling of the umbilical opening was obvious. The umbilical veins and bilateral umbilical arteries were significantly thickened, with black-red adventitia and gelatinous edema of the surrounding connective tissue. The umbilical arteries were filled with dirty dark-red necrotic material, and the intima was rough (Fig. D). Severe serofibrinous pericarditis, pleuritis, and peritonitis: a large amount of pale-yellow translucent fluid and yellow-white flocculent fibrinous exudate in the pericardial, thoracic, and abdominal cavities, and slight adhesion of the local serous membranes, were observed (Fig. E and F). The kidneys and liver were swollen and dark red, with moist and glossy surfaces, and the submucosa of the renal pelvis was thickened and showed yellowish gelatinous edema. The lungs were enlarged, dark red, and covered with flocculent fibrinous exudate, and the interstitium of the pulmonary lobules was generally widened and filled with yellow translucent gelatinous exudate (Fig. A). The transverse diameter of the heart was markedly widened, and a flocculent yellowish-white fibrinous exudate adhered to the epicardium. Hyperemia and edema of the abomasal mucosa and intestinal pneumatosis were observed. Histopathology Serous interstitial pneumonia: the lobular interstitium was significantly widened and filled with homogeneous pink-staining serous fluid (Fig. A). A small amount of fibrin, diffusely infiltrating neutrophils, scattered or clustered small blue-staining bacilli, and large numbers of neutrophils within lymphatic vessels at all levels were observed (Fig. B). Pulmonary hyperemia was present, and sporadic serous fluid, erythrocytes, and neutrophils were found in the alveolar and bronchial lumens near the lobular interstitium (Fig. C and D). Serous necrotizing umbilical arteritis was observed, with hyperemia, edema, and marked thickening of the tunica adventitia of the umbilical artery, which was filled with homogeneous pink serous fluid, scattered or diffusely infiltrating neutrophils, and scattered or clustered small blue-staining bacilli (Fig. E and F). Necrosis of the tunica intima and part of the tunica media, with diffusely infiltrating neutrophils and increasing numbers of blue-staining bacterial clusters of varying sizes, was observed; a large amount of serous fluid, necrotic neutrophils, and erythrocytes was present in the lumen of the artery (Fig. F).
Mild hepatic sclerosis: the hepatic interstitial connective tissue was mildly proliferated and widened, with an increase in small bile ducts; hepatic edema, widened spaces of Disse, incomplete hepatic sinusoid walls, hemolysis, and hepatocytes separated from one another were seen. Mild steatosis and scattered necrosis of hepatocytes in the central area of the hepatic lobule were observed. Renal hyperemia and edema, mild to moderate cellular swelling of the renal tubular epithelium, occasional necrosis of the tubular epithelium in some renal tubules, and increased neutrophil numbers in the renal pelvis were observed. In the adrenal glands, hyperemia and edema, loose capsules with scattered infiltrating neutrophils, and separation of the cells of the zona fasciculata from one another were observed. In the lymph nodes, lymphocyte depletion, fewer lymph nodules with inconspicuous germinal centers, and diffuse hemorrhage of the medulla were observed. In the spleen, hyperemia and edema, significantly reduced lymphocyte numbers, and white-pulp lymphocyte nodules with sparse lymphocytes were observed. Mild to moderate cellular swelling of cardiomyocytes was observed. Serous necrotizing enteritis: significant edema and thickening of the small intestinal wall, a large amount of serous fluid, diffusely infiltrating neutrophils, and a necrotic mucosal layer were observed in the small intestine. The marginal acinar epithelial cells of the thyroid gland were partially necrotic. Blue-staining bacterial clusters of varying sizes or diffusely distributed small blue-staining bacilli were present in the interstitium and serous membranes of most tissues and organs, as well as in small blood vessels and lymphatic vessels (Fig. A), accompanied by scattered or diffusely infiltrating neutrophils; in particular, the lymphatic vessels of affected tissues were filled with neutrophils (lymphatic spread). The endothelial cells were severely separated from the media of small vessels because of edema. Bacterial isolation and molecular identification Pleural fluid, pericardial exudate, ascites, joint fluid, lung, liver, and umbilical artery wall samples were aseptically collected with an inoculation loop, inoculated onto MacConkey and eosin-methylene blue (EMB) media, and cultured at 37 °C for 24 h. Many small pink colonies grew on the MacConkey medium, and many small, round, shiny black colonies characteristic of E. coli grew on the EMB medium. Using an inoculation loop, a small amount of bacterial growth was collected to prepare a smear. Gram staining revealed small gram-negative rods with the same morphology as E. coli (Fig. B). In this study, the 16S rRNA gene of the cultured bacteria was sequenced. We selected ten colonies from each plate (70 colonies in total) for polymerase chain reaction (PCR) detection and sequencing. A general primer set (10Fx: 5′-AGAGTTTGATCCTGGCTCAG-3′; 1509R: 5′-GTTACCTTGTTACGACTTCAC-3′) was used to amplify the 16S rRNA gene from all colonies isolated from the juvenile giraffe samples. For amplification, the following conditions were used: initial denaturation at 95 °C for 3 min; 30 cycles of denaturation (30 s at 94 °C), annealing (30 s at 55 °C), and extension (1.5 min at 72 °C); and a final extension at 72 °C for 5 min. The amplified PCR products were analyzed on 1.5% agarose gels, purified, and sequenced. The sequences were compared with those in the NCBI database through BLAST searches. The results indicated that all 70 colonies were E. coli.
Through BLAST searches, the sequences were compared with those in the NCBI database. The results indicated that all 70 colonies were E. coli ; they also revealed a nucleotide sequence similarity of 99.16–99.79% to strains from human feces (CCFM8332), Yuncheng Salt Lake (YC-LK-LKJ9), poultry droppings (AKP_87), marine environments (CSR-33, CSR-59), a wetland (CH-8), and a wastewater treatment plant (WTPii241) (Fig. C). The phylogenetic group of the E. coli isolate was identified using the PCR-based method developed by Clermont et al., in which E. coli is classified into four main phylogenetic groups (A, B1, B2, and D) based on the presence of three markers (chuA, yjaA, and TSPE4.C2) in the DNA. Crude DNA was extracted from colonies by lysing them in sterile water at 100 °C for 15 min, followed by centrifugation. The lysis supernatant was used for the polymerase chain reaction, following the conditions outlined by Clermont et al. The primers utilized in this investigation are detailed in Supplementary Table 1. PCR analysis of the isolate indicated its classification within phylogenetic group B1 (Fig. A). A total of twenty-five virulence genes were screened, including PAI, pap A, fim H, kps MT III, pap EF, ibe A, fyu A, bma E, sfa / foc DE, iut A, pap G allele III, hly A, rfc , nfa E, pap G allele I, kps MT II, pap C, gaf D, cva C, foc G, tra T, pap G allele I, pap G allele II, afa / dra BC, cnf 1, and sfas . Each virulence gene was amplified using specific primers in PCR; the primers utilized are detailed in Supplementary Table 1. Thermal cycling conditions included an initial denaturation at 94 °C for 2 min, followed by 35 cycles of 94 °C for 1 min, annealing at a gene-specific temperature for 1 min, and extension at 72 °C for 1 min, with a final extension at 72 °C for 2 min. In this strain, 6 virulence genes (PAI, iut A, pap G allele III, cva C, sfas , afa / dra BC) associated with adhesion, toxicity, and environmental response were identified (Fig. B). The E. coli strain was tested for antibiotic susceptibility against 16 antibiotics using the disc diffusion method according to CLSI guidelines. The resistance profile of the strain to the antibiotics tested is outlined in Table , with interpretation of all susceptibility results based on the CLSI guidelines. The strain exhibited resistance to ceftazidime, ceftriaxone, ciprofloxacin, levofloxacin, amoxicillin, and azithromycin, while demonstrating susceptibility to penicillin, oxacillin, lincomycin, clindamycin, ampicillin, and cotrimoxazole.
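For context, the Clermont triplex-PCR scheme used above assigns an isolate to phylogroup A, B1, B2, or D from the presence or absence of the three markers. The minimal sketch below encodes that decision logic; the marker calls passed in are inputs that would normally be read off the gel, and the example pattern shown is hypothetical (it is simply one pattern that reproduces the group B1 assignment reported for this isolate under the classical scheme).

```python
# Clermont et al. triplex-PCR decision tree (classical four-group scheme).
# Inputs are presence/absence calls for the three markers read from the gel.
def clermont_phylogroup(chuA: bool, yjaA: bool, tspE4_C2: bool) -> str:
    if chuA:
        return "B2" if yjaA else "D"
    return "B1" if tspE4_C2 else "A"

# Hypothetical marker pattern; chuA-negative, TspE4.C2-positive reproduces
# the phylogroup B1 assignment reported for this isolate.
print(clermont_phylogroup(chuA=False, yjaA=False, tspE4_C2=True))  # B1
```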
Among neonatal hand-reared giraffes, failure of passive transfer of immunity (FPI) continues to be a problem. The cotyledonary placenta of the giraffe transfers negligible antibodies; therefore, newborns rely on colostrum consumption and the absorption of maternal antibodies across the intestines during the first 24–48 h after birth. FPI increases the risk of diarrhea, enteritis, septicemia, arthritis, omphalitis, and pneumonia in domestic ungulates. Passive immunity transfer during the newborn's first week is crucial for the successful rearing of ruminant neonates. To ensure optimal and steady growth, milk replacers must have a composition similar to that of giraffe milk. Bovine milk and colostrum have been effectively utilized and advised for hand-rearing giraffes, despite the fat and protein contents of cow's milk and milk substitutes being lower than those of giraffe milk. Until solid food is consumed regularly, milk should be given daily in amounts of 7–10% of the body weight (19,000–25,000 kcal/day). In the present study, a hand-fed giraffe calf that did not receive colostrum died of septicemia caused by E. coli . Septic arthritis and phlegmon are caused by trauma or systemic infection; no trauma was recorded in this giraffe calf. Therefore, systemic infection may have contributed to the septic polyarthritis and/or phlegmon observed in this study.
Enteritis, pneumonia, and funisitis are common sources of infection in giraffe calves; enteritis and pneumonia were not recorded in this giraffe calf before the development of arthritis. Furthermore, the lack of immunocompetence might have put the calf at risk of the infection spreading systemically through the umbilical cord. Septic polyarthritis and/or phlegmon may be caused by systemic infection. PCR and sequence analysis confirmed that E. coli was the cause of the bacteremia in the present case. E. coli colonizes the newborn's gastrointestinal tract shortly after birth and typically coexists with its host without causing disease. However, certain strains with specific virulence attributes can cause a range of illnesses in immunocompromised hosts or when gastrointestinal barriers are compromised. Extraintestinal pathogenic E. coli (ExPEC) are characterized primarily by their site of isolation, with the most clinically significant groups being uropathogenic E. coli (UPEC), neonatal meningitis-associated E. coli (NMEC), avian pathogenic E. coli (APEC), and septicemic E. coli (SEPEC). ExPEC strains have the ability to cause infections in various extraintestinal locations. In the present case, the ExPEC strain resulted in pneumonia, umbilical arteritis, hepatitis, nephritis, hemorrhagic lymphadenitis, necrotizing enteritis, and necrotizing thyroiditis in the baby giraffe. There is no doubt that this is a direct result of E. coli bacteremia. In order to initiate bacteremia, an ExPEC strain must successfully infiltrate initial sites of infection or colonization, disseminate throughout the bloodstream, and persist within the blood. ExPEC strains can access the bloodstream through various pathways. Bacteremia lacking a discernible origin is classified as primary, while secondary bacteremia may result from dissemination originating from an existing infection, such as pneumonia or a urinary tract infection, or from contaminated medical equipment. In this case, however, the bacteremia was likely the result of an umbilical cord infection. Improper handling of the umbilical cord presents a potential risk of infection, as it serves as a significant entry point for pathogens in newborns. Therefore, it is strongly advised that veterinarians adhere to proper disinfection, sterilization, isolation, and other hygiene protocols to ensure optimal umbilical cord care when handling neonates. ExPEC uses various factors to cause disease in animals, including adhesins, invasins, protectins, iron acquisition systems, and toxins. These factors help ExPEC adhere, invade, evade the immune system, colonize, proliferate, and spread throughout the body, leading to infection in animals. Other bacterial factors such as secretion systems, quorum sensing systems, transcriptional regulators, and two-component systems also play a role in ExPEC pathogenesis. In this study, virulotyping revealed that the E. coli strain was positive for PAI, iut A, pap G allele III, cva C, sfas , and afa / dra BC. Adhesins are bacterial components that help bacteria stick to other cells or surfaces, increasing their virulence, and specific adhesins are adapted to colonize different environments. Virulence genes linked to adhesion include pap G allele III, sfas , and afa / dra BC. Iron is a crucial micronutrient necessary for the growth and proliferation of bacteria within the host following successful colonization and/or invasion.
Among the most significant plasmids associated with ExPEC virulence are ColV and ColBM, particularly those containing the aerobactin operon ( iut A/ iuc ABCD). This operon codes for high-affinity iron-transport systems that enable bacteria to acquire iron in low-iron environments, such as those found in host fluids and tissues. The isolate in this case was found to possess the iut A gene, which facilitates survival under low-iron conditions. Antibiotics are commonly utilized for the prevention and treatment of ExPEC infections. However, the widespread use of antibiotics has been linked to the development of multidrug-resistant bacteria. The high levels of antibiotic resistance observed in ExPEC strains present a significant risk to human health, as antibiotic-resistant bacteria and resistance genes can be transmitted through the food chain. Previous research has shown that ExPEC isolates exhibit resistance to multiple antibiotics, underscoring the importance of conducting antibiotic susceptibility testing to identify the most effective treatment option. In this particular instance, the E. coli strain exhibited broad-spectrum beta-lactamase production. β-Lactam antibiotics, particularly 3rd generation cephalosporins, are commonly prescribed for the treatment of serious community-onset or hospital-acquired infections caused by E. coli . Regrettably, β-lactamase production in E. coli continues to be a significant factor in the development of resistance to β-lactam antibiotics; β-lactamases are bacterial enzymes that render β-lactam antibiotics ineffective through hydrolysis. This study presents findings on septic polyarthritis and/or septicemia in a juvenile giraffe, potentially attributable to insufficient colostrum intake and E. coli infection via the umbilical cord. Furthermore, the study elucidates the diverse array of virulence factors exhibited by the E. coli strain and underscores the pathogenic significance of these pathogens in animal health. Continued research is warranted to identify additional virulence factors and elucidate the pathogenic mechanisms, ultimately aiding in the development of effective diagnosis and treatment strategies for managing giraffe colibacillosis.
Biological Insights of Fluoroaryl-2,2′-Bichalcophene Compounds on Multi-Drug Resistant
93362117-6822-4de7-be79-a8fa59482767
7795799
Pharmacology[mh]
Several antibiotics have been developed to control bacterial infections since the discovery of penicillin. However, bacterial resistance to many antibiotics has developed and become a major global problem. The rate at which resistance develops is affected by several factors, including the excessive use and abuse of antibiotics. To restrict the spread of resistant bacteria, effective new therapies need to be continuously developed so that medicine does not return to the pre-antibiotic era. Such problems have led to the need for chemically synthesized heterocyclic compounds that can combine wide-spectrum activity with a substantially lower propensity than antibiotics to cause microbial resistance. Most biomass cellulose and related products, various pharmaceutical products, all nucleic acids, and many natural and synthetic dyes are heterocyclic compounds. Fifty-nine percent of US FDA-approved medications contain nitrogen heterocycles. Sulfur, oxygen, and nitrogen are still the most prevalent heteroatoms. Cationic heterocyclic compounds containing two units of thiophene and/or furan are called bichalcophenes. Such compounds have shown a wide variety of biological activities. One bifuran derivative of a bichalcophene series was more effective against methicillin-resistant S. aureus than the antibiotic vancomycin in mice. A series of phenyl bichalcophenes was found to have efficient antimicrobial activity against the Gram-negative bacteria E. coli and P. aeruginosa , the Gram-positive bacteria S. aureus and B. subtilis , and some strains of fungi such as Saccharomyces cerevisiae . Owing to the promising antimicrobial activity of these bichalcophene derivatives, the relationship between the novel bichalcophenes and tetracycline was explored in previous reports. To obtain better pharmacological properties, greater antimicrobial activity, and lower toxicity than the non-fluorinated parent compounds, a series of fluoroarylbichalcophenes has been synthesized and published. These fluoroarylbichalcophene derivatives have been tested for their toxic effect on S. typhimurium TA1535 viability, and all the investigated fluorine-containing bichalcophenes significantly reduced S. typhimurium TA1535 viability at 50 and 100 µM. Moreover, these compounds were found to act as potent antimutagenic and anticancer agents. The presence of fluorine atoms generally increases lipophilicity and therefore biological availability. Fluorine substitution, through microsomal inhibition, was found to suppress the mutagenicity of quinolone. Enzymatic oxidation is usually prevented at the site of F-substitution if the aromatic nucleus is fluorinated, owing to the electronegative nature of fluorine. As a result, F-substitution at the activation site may reduce the carcinogenicity and mutagenicity of aromatic compounds. Quinoline fluorinated at position 3 was deprived of genotoxicity in vitro and in vivo, while other fluoroquinolines were as genotoxic as quinoline. The biological and chemical properties of a compound can be altered by fluorine substitution, and this has led to the development of a large number of novel fluorinated drugs. The high electronegativity of the fluorine substituent affects the metabolism, distribution, and absorption of a molecule by modifying the electron distribution within that molecule.
Pharmaceutical products containing fluorine are used in a variety of fields, including the manufacture of anti-inflammatory and anticancer drugs, cardiac therapies, anti-parasitics, antibiotics, and general anesthetics. Encouraged by reports concerning the biological activity of fluorophenylbichalcophene derivatives, the research reported herein was launched to investigate the antibacterial impact of monocationic fluorinated bichalcophenes on S. aureus at the molecular, cellular ultrastructure, and biosensing levels. 2.1. Antimicrobial Activity of the Tested Aryl-2,2′-Bichalcophene Derivatives The tested bichalcophenes (1a–1g) exhibited a wide range of antimicrobial activity against S. aureus : the maximum inhibition zone diameter of 20 mm was recorded equally with the MA-1115 and MA-1114 compounds, while the zones of inhibition produced by the non-fluorinated parent compounds MA-0944 and MA-0947 were 14 mm and 15.5 mm in diameter, respectively. The MA-1156 compound showed considerable antimicrobial activity, with an inhibition zone diameter of 15 mm. MA-1116 and MA-1113 showed equal antimicrobial activity, with inhibition zone diameters of 16 mm, as shown in and . No growth inhibition was observed around the control disc containing dimethyl sulfoxide (DMSO). The antimicrobial activity of the tested compounds was compared with that of standard antibiotics such as Cefoxitin and Gentamycin using the disc diffusion method (data not shown); the antibiotics exhibited antimicrobial activity very close to that of the parent compounds, but lower than that of the fluoroarylbichalcophenes. 2.2. Minimum Inhibitory Concentrations (MIC) of the Tested Fluoroaryl-2,2′-Bichalcophene Derivatives The MIC value of each tested fluoroarylbichalcophene derivative was recorded as the minimum concentration showing no microbial growth. Compound MA-1156 was the most effective candidate, showing the best MIC value of 16 µM. The MA-1115 compound demonstrated potent antimicrobial activity, with an MIC value of 32 µM. The MA-1116 compound prevented the growth of S. aureus at 64 µM. On the other hand, the MA-1113 and MA-1114 compounds displayed the highest MIC values, at 128 µM, and consequently the lowest activity. 2.3. Structure-Activity Relationship (SAR) The observed antibacterial activity of MA-1156 indicated that the bithiophene linked to the fluorophenyl moiety promotes antibacterial activity. Introducing fluorine into the phenyl group of compound MA-0944 increased its activity from MIC 12–16 μM. However, replacement of the thienyl moiety by a furyl moiety decreased the activity from MIC 16–32 µM. 2.4. Detection of the Tested Fluoroaryl-2,2′-Bichalcophenes-Resistant Variants The compounds MA-1156 , MA-1116 , and MA-1113 effectively prevented S. aureus growth even after seven days at all concentrations. In contrast, one case of resistance developed against the compound MA-1115 after the 3rd day of incubation with a one-fold concentration (1× MIC). Another case of resistance appeared with a one-fold concentration (1× MIC) of the compound MA-1114 after the 2nd day of incubation, whereas the bacteria could not develop any resistance at the higher concentrations (2× and 3× MIC).
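To keep the reported activity data in one place, the inhibition zone and MIC values quoted in Sections 2.1 and 2.2 can be collected and ranked programmatically. The sketch below uses only the figures stated above; compounds for which no MIC is quoted (the parent compounds) are simply skipped in the ranking.

```python
# Inhibition zone diameters (mm) and MIC values (uM) as reported above.
activity = {
    "MA-1156": {"zone_mm": 15,   "mic_uM": 16},
    "MA-1115": {"zone_mm": 20,   "mic_uM": 32},
    "MA-1116": {"zone_mm": 16,   "mic_uM": 64},
    "MA-1113": {"zone_mm": 16,   "mic_uM": 128},
    "MA-1114": {"zone_mm": 20,   "mic_uM": 128},
    "MA-0944": {"zone_mm": 14,   "mic_uM": None},   # non-fluorinated parent
    "MA-0947": {"zone_mm": 15.5, "mic_uM": None},   # non-fluorinated parent
}

# Rank by MIC (lower = more active); compounds without an MIC are skipped.
ranked = sorted((c for c, d in activity.items() if d["mic_uM"] is not None),
                key=lambda c: activity[c]["mic_uM"])
print(ranked[0], activity[ranked[0]])  # MA-1156 is the most active (MIC 16 uM)
```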
2.5. Effect of Fluoroarylbichalcophenes on S. aureus Protein Pattern The changes in the protein banding of S. aureus treated with two sub-MIC concentrations of the tested fluoroarylbichalcophenes are presented in . The total number of bands for MA-1156 , MA-1115 and MA-1116 was 36, distributed as 13 polymorphic, 12 monomorphic, and 11 unique bands ( A). For the control sample, the total number of bands was 22, with no unique bands. The lane of 4 µM MA-1156 recorded the maximum number of bands (27), higher than the control, including eight unique bands. On the other hand, the minimum number of bands (19) appeared in both the 8 and 16 µM MA-1116 lanes. The lane of 2 µM MA-1156 showed 22 total bands, similar to the control. Slight changes compared to the control appeared in the lane of 4 µM MA-1115 , which recorded 21 total protein bands with no unique bands. The lane of 8 µM MA-1115 recorded the same 22 total protein bands as the control. For MA-1113 and MA-1114 , the total number of bands was 30, distributed as 11 polymorphic, 17 monomorphic, and two unique bands ( B). For the control, the total number of bands was 22, with no unique bands. The two lanes of 16 and 32 µM MA-1113 increased the total number of bands to 26, higher than the control. The minimum number of bands (21) appeared in the 16 and 32 µM MA-1114 lanes. 2.6. Changes in Cellular Ultrastructure of S. aureus Treated with One Sub-MIC of Fluoroarylbichalcophenes Based on the results of the antimicrobial activity assays, S. aureus was examined by scanning electron microscopy in the presence of the investigated fluoroarylbichalcophenes. The micrographs of the control untreated S. aureus cells showed normal morphology with the typical grapelike cluster arrangement; the cells were intact, around 1 µm in diameter, and appeared smooth and rounded ( A). On the other hand, after incubation with a sub-MIC, some bacteria exhibited different morphological abnormalities. Treatment with 4 µM of MA-1156 caused random aggregations of sticky cells of smaller size, around 0.7 µm in diameter, with wavy cell walls. SEM micrographs of S. aureus cells treated with 8 µM of MA-1115 and with 16 µM of MA-1116 showed random clusters that appeared smaller in diameter (0.5–0.8 µm) than the control sample. For treatment with 16 µM of MA-1116 , a linear arrangement was observed. In the case of treatment with 32 µM of MA-1113 , the S. aureus cells were fewer in number and smaller (nearly 0.8 µm in diameter), with abnormal shapes, arranged in clusters of reduced width and length. Treatment with 32 µM of MA-1114 showed a decrease in cell number, with cells varying markedly in size and shape; many cells were smaller (0.6–0.9 µm in diameter) than usual .
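The band bookkeeping used in the SDS-PAGE analysis above follows a simple rule: a band present in every lane is monomorphic, a band present in only a single lane is unique, and anything in between is polymorphic. A minimal sketch of that classification is shown below; the band/lane data are hypothetical placeholders for illustration, not the gel shown in the figure.

```python
# Classify protein bands by how many lanes they appear in:
# all lanes -> monomorphic, exactly one lane -> unique, otherwise polymorphic.
def classify_bands(lanes: dict[str, set[float]]) -> dict[float, str]:
    all_bands = set().union(*lanes.values())
    n_lanes = len(lanes)
    labels = {}
    for band in sorted(all_bands):
        count = sum(band in bands for bands in lanes.values())
        if count == n_lanes:
            labels[band] = "monomorphic"
        elif count == 1:
            labels[band] = "unique"
        else:
            labels[band] = "polymorphic"
    return labels

# Hypothetical apparent molecular weights (kDa) per lane, for illustration only.
lanes = {
    "control":        {70.0, 45.0, 25.0},
    "MA-1156 (4 uM)": {70.0, 45.0, 25.0, 18.0},
    "MA-1116 (8 uM)": {70.0, 45.0},
}
print(classify_bands(lanes))  # 70/45 monomorphic, 25 polymorphic, 18 unique
```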
2.7. Cell Viability Determination Using WST-1 Reagent The effect of the tested fluoroarylbichalcophenes on microbial activity was studied by monitoring the growth rate via optical density as a measure of cell number. However, rapid detection of the antibacterial effects of these bichalcophenes on cell viability, rather than cell number, was still needed; therefore, the WST-1 test was implemented. The reduction of the tetrazolium salt WST-1 by viable S. aureus cells results in a water-soluble yellow formazan, which can be easily quantified at a wavelength of 450 nm. As depicted in , S. aureus was most sensitive to MA-1156 , since cell viability was strongly inhibited at all concentrations (viability: 7.4%, 3.5%, 3.2%, 0.8%, 0.6%, and 0%). On the other hand, the sensitivity of S. aureus to MA-1114 was the lowest among the tested compounds, as the treated S. aureus cells remained viable at all concentrations except 128 µM, at which viability dropped to 1.7%. For MA-1115 , all concentrations were lethal to S. aureus cells except the lowest concentration of 4 µM, at which the bacterial cells retained 30% viability. MA-1116 and MA-1113 had the same potent effect on S. aureus cells: the concentrations of 128, 64, and 32 µM were lethal, while cells treated with 16, 8, and 4 µM remained viable at 44%, 27.6%, and 32% for MA-1116 and 58%, 35%, and 17% for MA-1113 , respectively. 2.8. Sensing the Response of S. aureus Biofilms The biofilm formed at the electrode (sensor) surface was used for monitoring S. aureus cell viability. In the obtained bioelectrochemical signals, a high electric current represents high cell viability and fast electron transfer efficiency, whereas a lower electric current reflects defects in the biological functions of the living cells adhered to the electrode surface. Herein, the bioelectrochemical performances of the untreated S. aureus culture and the fluoroarylbichalcophene-treated cultures were recorded and analyzed. The results shown in demonstrate the inhibitory effect of the fluorobichalcophenes, present in the microbial culture during the stage of biofilm formation, on the formation of an electrochemically active biofilm. As shown in , the microbial–electrode interaction of S. aureus with the MnO 2 nano-rods was measured in the presence of one sub-MIC concentration of each fluorobichalcophene. The faradic current of treated S. aureus showed much lower values than that of the control untreated cells: 5 × 10 −5 A for MA-1156 , 2.5 × 10 −5 A for MA-1115 , 6.5 × 10 −5 A for MA-1116 , 5 × 10 −5 A for MA-1113 and 2.5 × 10 −5 A for MA-1114 . In contrast, the untreated cells of S. aureus produced a significantly higher faradic current, reaching about 2.5 × 10 −4 A. This clearly points to the strong sensitivity of S. aureus to the tested fluoroarylbichalcophenes.
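Using the faradaic currents quoted above, the relative suppression of biofilm electroactivity can be expressed as a percentage of the untreated control. The short sketch below simply recomputes that ratio from the reported values; it adds no new data.

```python
# Peak faradaic currents (A) reported above for biofilms grown with one
# sub-MIC concentration of each compound, versus the untreated control.
control_current = 2.5e-4
treated_current = {
    "MA-1156": 5.0e-5,
    "MA-1115": 2.5e-5,
    "MA-1116": 6.5e-5,
    "MA-1113": 5.0e-5,
    "MA-1114": 2.5e-5,
}

for compound, current in treated_current.items():
    suppression = (1 - current / control_current) * 100
    print(f"{compound}: {suppression:.0f}% lower current than control")
# e.g. MA-1115 and MA-1114: 90% lower; MA-1116: 74% lower
```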
The rapid and extensive development of antibiotic resistance in bacteria is a serious worldwide health problem. Therefore, continuous effort to develop novel and effective antimicrobial agents, or to increase the efficacy of the antibiotics currently in use by reducing the development of resistance in bacteria, is highly needed. Synthesized bichalcophene derivatives were found to have efficient antimicrobial activity. Bichalcophenes based on thiophene and furan rings are recognized for their promising biological activity. The reported antimicrobial activity of the bichalcophenes can be attributed to the presence of sulfur and amidine functions.
In a previous study, bichalcophene derivatives were effective against both Gram-positive and Gram-negative bacteria, like various broad-spectrum antibiotics; their mechanism(s) may possibly operate through the inhibition of protein and/or nucleic acid synthesis, or through bacterial DNA degradation. In this study, all tested fluorobichalcophenes showed better antimicrobial activity than their corresponding parent compounds (1a and 1b), as the inhibition zone diameters of compounds 1c to 1g were 15 to 20 mm against S. aureus , whereas the inhibition zone diameters for the parent compounds 1a and 1b were 14 and 15.5 mm, respectively. The compound MA-1156 showed the highest activity, inhibiting S. aureus growth at a minimum inhibitory concentration of 16 µM. A possible explanation for this result is that the antibacterial activity of these compounds may stem from the basic skeleton of the molecules as well as from the nature of the fluorine, sulfur, and/or oxygen heteroatom substituents. This finding was inconsistent with a previous study on some bichalcophenes and their aza-analogs. The biological activity of bichalcophenes could be enhanced by introducing fluorine atoms into the phenyl ring. The addition of an extra thiophene ring is the only difference between compound MA-1156 and the other tested fluoroarylbichalcophenes, and it cannot be denied that this additional thiophene ring is responsible for the impressive antibacterial activity. It was previously reported that compounds with thiophene rings were more active than those containing pyrrole or furan rings. The low activity of MA-1156 in the preliminary experiment using the disc diffusion technique, compared with the other tested compounds, could be explained by MA-1156 having a lower diffusion ability or a higher affinity to agar than the other tested fluoroarylbichalcophenes. It was also reported that substitution of fluorine on the phenyl ring resulted in a significant improvement in antiproliferative activity. Similar findings are reported herein, as the inhibition zone diameter and the MIC value are better when the fluorine atom is next to the bichalcophene moiety, as in compounds MA-1115 and MA-1113 , than in their positional isomers MA-1116 and MA-1114 , respectively, in which the fluorine atom is next to the amidine group, indicating an improved ability of these compounds to inhibit bacterial activity. This goes along with a previous study on compounds MA-1113 and MA-1114 reported by Abousalem et al., which could be explained in terms of the placement of fluorine producing the apparently orthogonal effects of increasing molecular hydrophobicity and local polarity, thus altering their pharmacokinetic properties. During the present research, resistance to the fluoroarylbichalcophenes developed in S. aureus at a one-fold MIC concentration against the compound MA-1115 after three days' incubation and against MA-1114 after two days' incubation, whereas the bacteria could not develop any resistance at higher concentrations. Similar findings for some monocationic pyridyl bichalcophene compounds have been reported previously against E. coli , which could develop resistance to these bichalcophenes and to tetracycline at a one-fold MIC concentration after only one day of incubation, whereas at a three-fold MIC concentration the bacteria could not develop any resistance, even after the 7th day of incubation, against nearly all bichalcophenes.
General protection is provided by abundant multidrug resistance pumps (MDRs) that rely on membrane translocation to extrude toxins from the cell. The preferred substrates of most MDRs are synthetic hydrophobic cations such as quaternary ammonium antiseptics. Mutational events that lead to target alteration or to activation of efflux pump mechanisms are the most reasonable explanation for the observed resistance. In the scanning electron microscopy, the micrograph of untreated S. aureus showed normal morphology and the typical grapelike cluster arrangement of S. aureus cells, whereas cells exposed to each of the fluoroarylbichalcophenes underwent ultrastructural changes such as a decrease in cell number and size (to around 0.5 µm), deformities in the external shape of the cells, and even lysis of some cells. Moreover, the streptococcus-like or linear arrangement of the bacterial cells treated with the MA-1116 and MA-1113 compounds could be attributed to division of the cells in one direction. This goes along with a previous study of staphylococci treated with cloxacillin, in which the progressive separation of the bacteria occurred in a manner fundamentally similar to that seen in streptococci. The SDS-PAGE analysis showed qualitative changes in the protein profile of S. aureus after treatment with the tested fluoroarylbichalcophenes. The disappearance of protein bands has been attributed to the degradation of proteins by the antimicrobial agents, whereas the appearance of new bands and/or the increase in band intensity could be due to the synthesis of responsive proteins that might resist the stress or help the bacteria survive under stressful conditions. Low molecular weight protein bands may evolve from the degradation of high molecular weight proteins. The antimicrobial activity of these bichalcophenes could occur because the synthesis of protein and/or nucleic acids is inhibited. The minimum viability percentages for S. aureus cells treated with MA-1156 were recorded at all tested concentrations, where cells were unable to cleave WST-1 within 1 h of complement-mediated lysis, so no formazan was formed. In contrast, the cells treated with MA-1114 showed the maximum percentage of viability, as S. aureus cells were able to cleave WST-1 and form the formazan product at all concentrations except 128 µM (1.7% viability). WST-1 is cleaved only by living, metabolically active cells, not by dead cells. The amount of formazan generated is directly proportional to the cell number over a wide range in a homogeneous cell population. Activated cells produce more formazan than resting cells, which could allow the measurement of activation. These properties are all consistent with the cleavage of WST-1 only by active cells. Owing to the microbially catalyzed oxidation of degradable organic compounds, electrons are transmitted to a molecule other than oxygen under anaerobic conditions via the electron transport chain (ETC). Accepting the electrons results in the production of an electrical current directly proportional to the total number of viable microbial cells and their activity; dormant or dead cells, in contrast, have no electrochemical activity. In the end, online monitoring of microbial responses and cell viability could enable the living microbes to be differentiated from the dead ones. In this study, the S. aureus cultures treated with the synthesized fluoroarylbichalcophenes produced lower bioelectrochemical signals, ranging from 2.5 to 6.5 × 10 −5 A, than the untreated cells, which recorded 2.5 × 10 −4 A.
This sharp decrease points to the ability of the tested fluoroarylbichalcophenes to inhibit the direct electron transfer from viable bacterial cells to the carbon electrode. S. aureus activity was previously measured electrochemically using a double-mediated bioelectrochemical system, in which electrochemical signals were obtained only from metabolically active cells: an oxidation peak current was observed at about 0.6 V vs. Ag/AgCl, while the metabolically inactive cells showed almost no bioelectrochemical response. In light of the results mentioned above, it could be concluded that the antibacterial activity of bichalcophene derivatives against S. aureus is enhanced by the incorporation of a fluorine atom. 4.1. Tested Bichalcophene Derivatives and Bacteria Two non-fluorinated parent compounds ( 1a and 1b ) and five fluorophenylbichalcophenes ( 1c – 1g ) were available from previous studies and were provided by Professor M. A. Ismail. Their chemical names are: 1a (4-(2,2′-bithiophene-5-yl) benzamidine); 1b (4-(2,2′-bifuran-5-yl) benzamidine); 1c (4-(2,2′-bithiophen-5-yl)-2-fluorobenzamidine); 1d (2-fluoro-4-(5-(thiophen-2-yl) furan-2-yl) benzamidine); 1e (3-fluoro-4-(5-(thiophen-2-yl) furan-2-yl) benzamidine); 1f (4-(2,2′-bifuran-5-yl)-3-fluorobenzamidine); 1g (4-(2,2′-bifuran-5-yl)-2-fluorobenzamidine). The tested compounds were dissolved at a 10 mM concentration in 1 mL of absolute DMSO. The S. aureus strain used in this study was multi-drug resistant: it was resistant to Oxacillin, Ciprofloxacin, Methicillin, Levofloxacin, Ofloxacin, Erythromycin and Ampicillin, and sensitive to Cefoxitin and Gentamicin. 4.2. Detecting the Antimicrobial Activity of the Tested 2,2′-Bichalcophene Derivatives The disc agar diffusion technique described by Bauer et al. was used to evaluate the antimicrobial activity of the bichalcophene compounds 1a – 1g against S. aureus . The tested compounds were dissolved at a 10 mM concentration in DMSO. Mueller-Hinton agar plates were inoculated separately with 10 7 CFU/mL of bacterial culture, spread evenly over the whole surface of each plate. Sterile discs of 5 mm diameter were saturated with 10 µL of the tested bichalcophene compounds and placed on the inoculated plates. The plates were incubated for 24 h at 37 °C, after which the inhibition zones were measured in millimeters and compared with a negative control of 10% DMSO. Each assay in this test was conducted in three replicates. 4.3. Determination of MIC of the Tested 2,2′-Bichalcophene Derivatives The MIC of the bichalcophene derivatives was measured according to the procedures described by the Clinical and Laboratory Standards Institute. Different concentrations (0–128 μM) of the tested bichalcophenes dissolved in DMSO were added independently to previously autoclaved LB broth medium. For this assay, a prepared LB culture of S. aureus was used: a 20 μL seed culture of the tested organism containing nearly 5 × 10 4 colony forming units (≈0.5 OD) was used as the inoculum for testing the bichalcophene derivatives. Respective blanks (culture and broth alone) were maintained. All tubes were incubated overnight at 37 °C and the optical density was measured at 600 nm. Experiments were performed in triplicate.
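Reading an MIC off the broth-dilution series described in Section 4.3 amounts to finding the lowest compound concentration whose overnight OD600 stays at or below the blank level. A minimal sketch of that read-out is shown below; the OD values and the no-growth threshold are entirely hypothetical and serve only to illustrate the rule.

```python
# MIC read-out for a dilution series: the MIC is the lowest tested
# concentration at which no growth is detected (OD600 at or below threshold).
def read_mic(od_by_conc: dict[float, float], no_growth_threshold: float = 0.05):
    inhibited = [c for c, od in od_by_conc.items()
                 if c > 0 and od <= no_growth_threshold]
    return min(inhibited) if inhibited else None  # None -> MIC above tested range

# Hypothetical overnight OD600 readings for one compound (0 uM = growth control).
od600 = {0: 0.92, 4: 0.88, 8: 0.85, 16: 0.03, 32: 0.02, 64: 0.02, 128: 0.01}
print(read_mic(od600), "uM")  # 16 uM under these made-up readings
```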
4.4. Detection of Fluoroaryl-2,2′-Bichalcophenes-Resistant Variants In order to evaluate the development of resistance in S. aureus against the fluoroarylbichalcophenes, growth assays were performed in the presence or absence of each bichalcophene derivative. A 20 mL culture was centrifuged, and the cells were re-suspended in 2 mL of MHB (Mueller-Hinton broth). The inoculum density was adjusted to 10 8 CFU/mL. Different concentrations of each tested bichalcophene derivative (3×, 2×, 1× MIC) were added. All tubes were incubated at 37 °C and the absorbance was recorded at 600 nm daily over a period of seven days. 4.5. SDS-PAGE of S. aureus S. aureus samples treated with the tested fluoroarylbichalcophene derivatives and an untreated sample (control) were cultured in LB broth at 37 °C and 120 rpm for 24 h. The bacterial cells were harvested by centrifugation at 10,000 rpm for 5 min. The pellets were homogenized in phosphate buffer (0.6 M, pH 6.8) using glass beads and a FastPrep ® -24 homogenizer and then centrifuged at 10,000 rpm for 5 min for protein isolation. Ten µg protein samples were boiled in 2× sample buffer (10 mL distilled water, 2.5 mL Tris HCl pH 6.8, 2 mL glycerol, 4 mL of 10% SDS and 1 mL β-mercaptoethanol) for 2 min. Around 20 µL of treated protein was loaded onto the acrylamide gel. The acrylamide gel was prepared according to Laemmli from two layers: a 4% stacking gel on top of a 12% separating gel. After electrophoresis at 100 V for 2 h, the gel was stained overnight in Coomassie brilliant blue R-250 and visualized by soaking in destaining solution on a shaker for several hours. The gel was documented and analyzed using the Gel Analyzer 3 program. 4.6. SEM Cells of S. aureus treated with the sub-MIC of each tested fluoroarylbichalcophene, MA-1156 (4 µM), MA-1115 (8 µM), MA-1116 (16 µM), MA-1113 and MA-1114 (32 µM), were subjected to scanning electron microscopy (JSM-6510 L.V., Tokyo, Japan). The bacteria were incubated for 12 h at 37 °C in LB broth containing the sub-MIC of each fluoroarylbichalcophene, alongside an untreated control. The pellets of bacterial cells were harvested from 10 mL of each sample by centrifugation at 5000 rpm for 10 min and processed according to Hartmann et al. The specimens were coated with gold-palladium membranes and examined with the SEM at 30 kV at the EM Unit, Mansoura University, Egypt. 4.7. Cell Viability Determination Using WST-1 Reagent The cell density of the selected S. aureus strain was quantified by measuring the viability of the cells using WST-1 reagent according to Mosmann. At zero incubation time, 4 mL of sterilized LB medium was amended with different concentrations (0, 4, 8, 16, 32, 64, 128 µM) of each fluoroarylbichalcophene. A 1 mL inoculum seed culture of S. aureus was added and incubated in a shaker at 650 rpm for 30 min at 37 °C. Subsequently, 20 μL of 10 mM WST-1 solution was added per tube and shaken for 1–2 min; afterwards, the absorbance at 450 nm was measured and taken as the zero-time reading. The treated microbial cultures were incubated with the fluorobichalcophene derivatives and WST reagent for another 30 min before the absorbance was re-measured at the same wavelength. The overall cell viability percentage was calculated according to the following equation: Cell viability% = (absorbance value of treated cells)/(absorbance value of untreated cells) × 100
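As a worked illustration of the viability equation above, the sketch below converts a pair of A450 readings into a viability percentage; the absorbance values are hypothetical placeholders, not measurements from this study.

```python
# Cell viability (%) from WST-1 absorbance at 450 nm, as defined above:
# viability% = A450(treated) / A450(untreated) * 100
def wst1_viability_percent(a450_treated: float, a450_untreated: float) -> float:
    return a450_treated / a450_untreated * 100

# Hypothetical A450 readings for illustration only.
print(f"{wst1_viability_percent(0.12, 0.40):.1f}%")  # 30.0%
```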
4.8. Bioelectrochemical Measurements All electrochemical experiments were performed using a three-electrode Gamry Potentiostat/Galvanostat/ZRA G750 system operated in voltammetry mode. The working electrode was electrochemically activated by applying five cyclic scans from −0.4 to 1.0 V vs. Ag/AgCl at a scan rate of 50 mV/s in phosphate buffer (pH 7.4) as the supporting electrolyte. The basic concept of this assay relies on the direct electron transfer from living microbial cells that physically adhere to the electrode surface. To investigate the response of S. aureus cells to the targeted fluorobichalcophenes, a suspension of S. aureus was incubated with one sub-MIC concentration of each compound (4 µM of MA-1156 , 8 µM of MA-1115 , 16 µM of MA-1116 , and 32 µM of both MA-1113 and MA-1114 ) at 37 °C for two weeks to form a biofilm of attached microbial cells on the screen-printed electrode, and the current was measured as the bioelectrochemical response of the treated cells. The cyclic voltammograms were recorded at different time intervals to monitor the online growth of the biofilms on the modified electrode surfaces and to detect the effect of the fluoroarylbichalcophenes on the electron transfer between the adhered bacterial cells and the working electrode surface. The resulting curve is called a voltammogram, in which the electric potential and the electric current are plotted on the X and Y axes, respectively. 4.9. Statistical Analysis For the purposes of this study, all assays were performed in triplicate in three independent experiments. Statistical analysis results were expressed as means ± S.E. (standard error).
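Since the results are reported as means ± S.E. of triplicate assays, the corresponding calculation (standard error = sample standard deviation / √n) can be sketched as follows; the three replicate values are hypothetical.

```python
import statistics

# Mean +/- standard error for replicate measurements (S.E. = SD / sqrt(n)).
def mean_se(values: list[float]) -> tuple[float, float]:
    mean = statistics.mean(values)
    se = statistics.stdev(values) / len(values) ** 0.5  # sample SD / sqrt(n)
    return mean, se

# Hypothetical triplicate readings for one condition.
m, se = mean_se([14.8, 15.2, 15.0])
print(f"{m:.2f} ± {se:.2f}")  # 15.00 ± 0.12
```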
MA-1156 was the most effective monocationic fluoroaryl bichalcophene compound for inhibiting microbial growth and could be used as an effective antimicrobial agent for living tissues.
Comparative analysis of resource utilization in integrative anthroposophic and all German pediatric inpatient departments
a9e856ba-2624-4afd-b04e-69b613e02bb0
7552368
Pediatrics[mh]
Integrative Medicine (IM) is “healing-oriented medicine that takes account of the whole person, including all aspects of lifestyle. It emphasizes the therapeutic relationship between practitioner and patient, is informed by evidence, and makes use of all appropriate therapies” . It considers biological, psychological, social, spiritual, and environmental aspects of health . It is not “a discipline, a group of disorders, or a method of treatment, but an approach, a way of thinking”, it “encourages clinicians and researchers to consider more than one system at a time” and “provides a framework for understanding complex and dynamic challenges” of the human organism . IM is practiced worldwide and varies in its special approaches depending on cultural and national factors . Therefore, IM is of particular interest from both, a public health and health economic perspective. During the last 20 years, the implementation of integrative approaches for children has grown worldwide: IM is used in pediatrics in the USA , in Canada and in Europe in private practices, outpatient- and inpatient-departments . Current literature suggests that 30–50% of parents of children with acute or chronic diseases use IM for their children , while it seems to be used more frequently for children with chronic diseases in the US . IM’s use for children is associated with disease severity and whether parents use IM themselves . IM is established in academic pediatrics and is acknowledged as an important subspecialty to address children’s needs . An IM approach with particular relevance in Europe and Germany is Anthroposophic Medicine (AM). It is a multimodal treatment system founded by Rudolf Steiner and Ita Wegmann in the early 1920s and includes complementary pharmacotherapy, medicinal baths, rhythmical massages, compresses, and embrocation (rhythmic massages with etheric oils ), as well as art therapy, eurythmy, speech therapies, music therapy , and light/ color therapy. Consensus-based guidelines for the anthroposophic therapies for children suffering from general pediatric diseases, such as acute gastroenteritis and bronchitis have already been published. In Europe, AM is integrated into conventional medical services and practiced in inpatient and outpatient settings. There are two anthroposophic hospitals in Germany that offer integrative treatment for children in distinct pediatric departments: The Gemeinschaftskrankenhaus Herdecke (community hospital) near Dortmund and the Filderklinik in Filderstadt near Stuttgart . A recent study of our working group found a large catchment area for these hospitals all over Germany and that parents are willing to travel further distance to get specialized integrative anthroposophic medical care for children with severe and chronic diseases . However, little is known about the impact of integrative anthroposophic pediatric treatment on resource utilization as one element of economic analysis. Considering the growing costs of health care, a better understanding of resource utilization is indispensable to provide clinically effective and financially responsible treatment. Especially within the integrative field, there is a need for research evaluating the resource utilization and benefits for patients and the health care system . The evaluation of resource utilization parameters thus may provide valuable information that can be considered when seeking to optimize integrative strategies in order to lower health care costs and to license and scope health care investment decisions . 
To date, there is a lack of data concerning such resource utilization parameters within integrative pediatrics. Diagnosis-Related Groups (DRG), as well as length of stay, are frequently used to inform health economic and resource utilization analyses in Germany, other European countries and worldwide. Resource utilization analysis based on such data has rarely been used within integrative inpatient care. Therefore, the aim of the present study is to investigate parameters associated with resource utilization within integrative anthroposophic pediatric departments in Germany and to compare them systematically to representative data from all pediatric departments in Germany. Our hypotheses were that: (1) there is no difference in length of stay between integrative anthroposophic pediatric inpatient departments and a) the entirety of all pediatric departments in Germany and b) the DRG-defined mean length of stay, as well as the upper and lower limits; (2) resource utilization indices, such as the effective Case Mix Index, of integrative anthroposophic pediatric inpatient departments are comparable to the entirety of all pediatric departments in Germany; (3) there is no difference in the frequencies of DRG/MDC between integrative anthroposophic pediatric departments and all pediatric departments in Germany. Study design The current study is a post hoc observational study. It was conducted according to the Declaration of Helsinki. It is reported according to the STROBE guidelines for reporting observational cohort studies. Setting In Germany, there are two integrative hospitals focusing on Anthroposophic Medicine with pediatric inpatient departments: the Gemeinschaftskrankenhaus Herdecke (GKH) near Dortmund and the Filderklinik in Filderstadt near Stuttgart. Both hospitals treat children with various diseases ranging from general pediatrics to specialized fields by means of an integrative approach. This approach combines conventional and complementary remedies. The pediatric department of the Filderklinik treats 1245 patients per year on average. Beyond general pediatrics, the Filderklinik specializes in neurology, psychosomatic disorders, neonatology, endocrinology, pulmonology and cardiology for children. In the pediatric ward of the GKH, 1750 patients are treated on average every year. The GKH practices diabetology, oncology, neonatology, rheumatology, psychosomatics and neurology in children alongside general pediatrics. The staff include physicians, nursing staff, pharmacists and therapists who are all trained in integrative medicine. Diagnosis and treatment in both hospitals are in accordance with official pediatric guidelines from scientific societies and furthermore include treatment options from Anthroposophic Medicine. This anthroposophic treatment includes complementary pharmacotherapy, medicinal baths, rhythmical massages, compresses, and embrocation (rhythmic massages with etheric oils), as well as art therapy, eurythmy, speech therapies, music therapy, and light/color therapy. Both hospitals are part of German regular medical care and are thus funded by the statutory health insurers. Data collection Patient data over the last decade (2005–2016) was derived from the standard ward documentation interface Agfa-ORBIS® in all integrative anthroposophic pediatric departments. The Microsoft Excel® output was imported into SPSS 24® (Statistical Package for the Social Sciences, IBM), cleaned, and a plausibility check was performed.
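The cleaning and plausibility check mentioned above can be thought of as a set of simple consistency rules applied to the exported case table. The following sketch is purely illustrative; the column names and rules are assumptions, not the checks actually carried out in SPSS.

```python
# Illustrative plausibility checks on an exported case table (column names and rules are assumed).
import pandas as pd

def plausibility_check(cases: pd.DataFrame) -> pd.DataFrame:
    """Keep only cases that pass a few basic consistency rules."""
    ok = (
        cases["length_of_stay_days"].ge(0)
        & cases["admission_year"].between(2005, 2016)
        & cases["drg_code"].notna()
    )
    return cases.loc[ok].copy()

raw = pd.DataFrame({
    "length_of_stay_days": [3, -1, 12],
    "admission_year": [2007, 2010, 2016],
    "drg_code": ["B80Z", "G67B", None],
})
print(plausibility_check(raw))  # only the first row survives these assumed rules
```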
Furthermore, representative data was derived from the German National Consensus Bureau for all pediatric departments in Germany (2005–2016). Eligibility criteria There were no specific criteria of eligibility in the integrative anthroposophic sample. All patient cases of all integrative anthroposophic pediatric departments in Germany treated between 2005 and 2016 were included in the integrative sample. Outliers were excluded from analysis post hoc. An outlier is an observed value which deviates so much from the other values as to arouse suspicions that it was generated by a different mechanism. In the data of the entirety of pediatric departments, outlier analysis was not possible since we were not able to gather raw data from the German Consensus Bureau. Consequently, exclusion of outliers was not possible. Sample The integrative anthroposophic sample consists of 29,956 patient cases (Gemeinschaftskrankenhaus: n = 17,503 (58.4%); Filderklinik: n = 12,453 (41.6%)). The sample of all pediatric departments in Germany includes 48,670,077 patient cases. Resource utilization parameters In Germany, it is mandatory by law for all hospitals to provide data concerning health resource utilization to the Institute for the Hospital Remuneration System (InEK) and the National Consensus Bureau. These resource utilization parameters include Diagnosis-Related Groups (DRG), Major Diagnosis Categories (MDC), and the effective Case Mix Index (CMI). Therefore, these parameters are considered for comparisons between the integrative anthroposophic and all pediatric departments. Diagnosis related groups and major diagnosis categories DRGs are assigned based on patients' ICD diagnosis, as well as procedures, age, sex, discharge status, and the presence of complications or comorbidities. In 2003 the G-DRG system was established in Germany as an adaptation of the Australian DRG system. It is updated annually by the Institute for the Hospital Remuneration System (InEK). Length of stay The length of stay is measured in days in both samples. In the German DRG system only full days of stay are included for the length of stay. Besides the length of stay, the German DRG system provides a mean length of stay, a minimum, and a maximum length of stay for each diagnosis in the DRG catalogue. The length of stay of a patient can affect the revenue of a DRG. If the length of stay is shorter than the DRG-defined lower limit, a deduction of the revenue is performed. Vice versa, if the length of stay is longer than the upper limit, an additional fee is charged. For each DRG within the integrative anthroposophic sample the mean length of stay, as well as the upper and lower limit for length of stay, were calculated using SPSS' Syntax function. The data source was the DRG case-based lump sum catalogues for the years 2005–2016 derived from the homepage of the Institute for the Hospital Remuneration System. Effective case mix index In the German DRG system (G-DRG), cost weights are used to quantify a hospital's average costs per case in relation to the specific resource utilization. This includes the Case Mix (CM), which is equal to the sum of the cost weights of all DRGs performed over a given time period. The average case weight, which is called the Case Mix Index (CMI), is calculated by dividing the CM by the total number of cases. Consequently, the CMI is equal to the average DRG cost weight for a particular hospital. The CMI is suitable for the comparison of the utilization of health care resources in different hospitals.
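The Case Mix and CMI definitions just given reduce to two lines of arithmetic. The sketch below uses made-up cost weights purely to illustrate them (CM as the sum of per-case cost weights, CMI as CM divided by the number of cases); it is not code or data from the study.

```python
# Illustrative Case Mix (CM) and Case Mix Index (CMI) calculation; the cost weights are hypothetical.

def case_mix(cost_weights):
    """CM = sum of the DRG cost weights of all cases in the period."""
    return sum(cost_weights)

def case_mix_index(cost_weights):
    """CMI = CM divided by the total number of cases."""
    return case_mix(cost_weights) / len(cost_weights)

weights = [0.45, 0.62, 1.30, 0.58, 0.81]  # assumed per-case DRG cost weights
print(f"CM  = {case_mix(weights):.2f}")        # 3.76
print(f"CMI = {case_mix_index(weights):.3f}")  # 0.752
```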
The effective CMI considers the deductions in the case of patient transfer or short-stay outliers, and surcharges for long-stay outliers and thus reflects the effort of a care provider for the treatment of a patient. An effective CMI value greater than 1.0 reflects a more extensive case compared to the average, while a value less than 1.0 indicates a less extensive case. In this way, the effective CMI maps the actual calculated amount for case fees. Hence, in our study, the effective CMI of both samples were used for comparison of resource utilization between integrative anthroposophic and all German pediatric departments. Statistical analysis All statistical analyses are performed using IBM SPSS Version 24 and R Statistics. Mean differences between the integrative anthroposophic sample and all pediatric departments are tested for statistical significance by means of t-tests for independent samples. Because of cumulative testing the level of statistical significance was Bonferroni adjusted to p < .01. Due to the high sample-size, Cohen's d is calculated as a standardized measure of effect independent of the sample size.
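As a sketch of the comparison described in the statistical analysis (independent-samples t-test with a Bonferroni-adjusted threshold of p < .01 and Cohen's d as effect size), the following uses simulated length-of-stay values rather than the study data; whether a Welch or a classical t-test was used is not specified in the text, so the unequal-variance variant is shown here as one possibility.

```python
# Illustrative independent-samples t-test and Cohen's d on simulated length-of-stay data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
integrative = rng.gamma(shape=2.0, scale=2.7, size=2_000)        # simulated stays, mean ~5.4 days
all_departments = rng.gamma(shape=2.0, scale=2.24, size=20_000)  # simulated stays, mean ~4.5 days

t_stat, p_value = stats.ttest_ind(integrative, all_departments, equal_var=False)

def cohens_d(a, b):
    """Cohen's d using the pooled standard deviation."""
    pooled_var = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) / (len(a) + len(b) - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

alpha_bonferroni = 0.01
print(f"t = {t_stat:.2f}, p = {p_value:.3g}, significant at adjusted level: {p_value < alpha_bonferroni}")
print(f"Cohen's d = {cohens_d(integrative, all_departments):.2f}")
```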
Length of stay The mean length of stay in the integrative anthroposophic sample was 5.38 days (SD = 7.31, n = 29,956). Figure illustrates the length of stay in the integrative sample compared to the DRG defined upper and lower limit of length of stay. The length of stay in the integrative anthroposophic sample did not exceed or undercut the DRG defined upper and lower limits for length of stay. Overall, the mean length of stay in the entirety of all pediatric departments was 4.48 days (SD = 7.83; n = 38,724,087). A t-test for independent samples showed a significant mean difference between the integrative anthroposophic and all pediatric departments (t (38,754,041) = 49.41, p < .01; Cohen's d = 0.12). The average length of stay per year in the integrative anthroposophic and in all pediatric departments is shown in Table . The length of stay in the integrative anthroposophic sample was significantly lower (M = 4.74; SD = 6.23) than the mean length of stay defined by DRG (M = 5.8; SD = 4.71; t (28,236) = − 37.74; p < .01; Cohen's d = − 0.07). The mean length of stay in the integrative anthroposophic and all pediatric departments compared to the mean length of stay proposed by DRG are shown in Fig. . Effective case mix index The average effective CMI in the integrative anthroposophic sample is 0.76 (SD = 1.22; n = 29,956). Overall, the average effective CMI in the entirety of all pediatric departments was 0.76 (SD = 1.97; n = 39,159,515). The average effective CMI in the integrative and all German pediatric departments per year is shown in Table . Diagnoses related groups The most frequent DRG in the integrative anthroposophic sample were B80Z ( head injuries ; n = 1933, 6.5%), G67B ( esophagitis, gastroenteritis, gastrointestinal bleeding, ulcer, complex genesis ; n = 1286, 4.3%), P67C ( newborn > 2499 g, without complex diagnosis ; n = 1254, 4.2%), G67C ( esophagitis, gastroenteritis, gastrointestinal bleeding, ulcer, uncomplex genesis ; n = 1158, 3.9%) and P67B ( n = 975, 3.3%, newborn > 2499 g, with complex diagnosis ). In the entirety of all pediatric departments, the most frequent DRGs were G67B ( esophagitis, gastroenteritis, gastrointestinal bleeding, ulcer, complex genesis ; n = 561,552; 8.78%), G67C ( esophagitis, gastroenteritis, gastrointestinal bleeding, ulcer, uncomplex genesis ; n = 440,529; 6.89%), B80Z ( head injuries ; n = 382,762; 5.99%) and D63Z ( otitis media or infections of the upper respiratory tract, age < 3 years; n = 310,283; 4.85%). The 50 most frequent DRG in both groups per year and overall are shown in the supplemental materials and .
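A frequency ranking like the one above can be produced directly from case-level data. The sketch below is illustrative only; the column name and the toy DRG codes are assumptions, not the study data.

```python
# Illustrative ranking of the most frequent DRGs from case-level data (toy codes, assumed column name).
import pandas as pd

cases = pd.DataFrame({"drg": ["B80Z", "G67B", "B80Z", "P67C", "G67C", "B80Z", "G67B", "P67B"]})

counts = cases["drg"].value_counts()
percentages = (counts / len(cases) * 100).round(1)
top_drg = pd.DataFrame({"n": counts, "percent": percentages})

print(top_drg.head(5))  # B80Z leads with n = 3 (37.5%) in this toy table
```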
Major diagnosis categories The most frequent MDC in the integrative sample were Diseases and Disorders of the Nervous System ( n = 5366, 17.90%), Diseases and Disorders of the Respiratory System ( n = 4155, 13.87%), Newborn and other Neonates Perinatal Period ( n = 4068; 13.58%) and Diseases and Disorders of the Digestive System ( n = 4007; 13.38%). In the entirety of all pediatric departments in Germany, the most frequent MDC were Diseases and Disorders of the Digestive System ( n = 1,502,678; 23.50%); Diseases and Disorders of the Respiratory System ( n = 1,066,127; 16.67%); Diseases and Disorders of the Nervous System ( n = 876,894; 13.71%); Diseases and Disorders of the Ear, Nose, Mouth and Throat ( n = 671,922; 10.51%). The percentages of the MDC compared in both samples are presented in Fig. . There were some significant differences in the frequencies of the MDCs between the integrative pediatric departments and all German pediatric departments. Higher frequencies in the integrative sample were observed for the MDC: Newborn and other Neonates Perinatal Period (IPD: 13.88% vs. 0.87%); Alcohol, Drug Use, Induced Mental Disorders (IPD: 8.57 vs. 3.32%); Mental Diseases and Disorders (IPD: 4.27% vs. 1.16%); Diseases and Disorders of the Endocrine, Nutritional and Metabolic System (IPD: 7.43 vs 2.74); Diseases and Disorders of the Nervous System (IPD: 17.90% vs 13.71%). Lower frequencies in the integrative sample were observed for the MDCs: Pregnancy, Childbirth and Puerperium (IPD: 0.0% vs 9.6%); Diseases and Disorders of the Digestive System (IPD: 13.87% vs 23.05%); Diseases and Disorders of Ear, Nose, Mouth and Throat (IPD: 4.69% vs. 10.51%); Diseases and Disorders of the Respiratory System (IPD: 13.87% vs. 16.67%).
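The MDC contrasts above compare proportions between two very large samples. The text does not state which test underlies the reported significance for these frequency differences; as one illustrative way to formalize such a contrast, the sketch below runs a two-proportion z-test on partly hypothetical counts. It is a demonstration of the idea, not the authors' analysis.

```python
# Illustrative two-proportion z-test for one MDC frequency contrast (second sample size is hypothetical).
from math import sqrt
from scipy.stats import norm

def two_proportion_z(x1, n1, x2, n2):
    """z statistic and two-sided p-value for H0: p1 == p2 (pooled standard error)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))

# e.g. Mental Diseases and Disorders: 4.27% of 29,956 integrative cases vs 1.16% of 1,000,000 assumed cases
z, p = two_proportion_z(x1=1_279, n1=29_956, x2=11_600, n2=1_000_000)
print(f"z = {z:.1f}, two-sided p = {p:.2e}")
```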
In this study, we aimed to investigate resource utilization parameters of integrative anthroposophic pediatric departments and to compare them to corresponding data from all pediatric departments in Germany. In accordance with our initial hypothesis, we found no difference between pediatric integrative anthroposophic departments and the entirety of all pediatric departments concerning the effective Case Mix Index. The length of stay in the integrative departments was shorter than the DRG-defined mean length of stay and within the upper and lower limits, which was in line with our hypothesis. Furthermore, we hypothesized that these departments do not differ considering patients' length of stay.
Contrary to this hypothesis, we found that the mean length of stay was significantly longer in the integrative anthroposophic departments compared to all German pediatric departments. Another hypothesis was that the departments do not differ considering the frequency distribution of DRG and MDC. Our data did not support this hypothesis, but rather implied some systematic discrepancies between the integrative anthroposophic pediatric departments and all German pediatric departments. Length of stay The average length of stay in the integrative sample was significantly lower than the mean length of stay defined by DRG. Furthermore, it neither exceeded the DRG-defined upper limit of length of stay nor undercut the DRG-defined lower limit. This result implies that integrative pediatric departments in Germany can provide care within the terms of the DRG-defined conditions concerning length of stay. In contrast to previous studies, we found no indication for less resource utilization in the integrative departments. The mean length of stay in the integrative anthroposophic departments was significantly longer compared to the mean stay in all German pediatric departments. This finding is in line with previous research that found longer lengths of stay in integrative anthroposophic and integrative naturopathic departments. This is most likely due to the large number of time-consuming diagnostic and medical procedures that are associated with integrative anthroposophic treatment. Previous studies found an association of increased length of stay in integrative medical departments with the utilization of additional anthroposophic or naturopathic reimbursement, which requires a longer stay. Considering this, the relative difference of 1 day in the length of stay between the departments is comparatively low. While this difference is statistically significant, the effect size is low. However, in this context, it needs to be stated that outliers with extreme lengths of stay (mainly from the diagnosis spectrum of eating disorders) were excluded prior to analysis. Effective case mix index The mean effective CMIs were identical in the integrative sample and in all German pediatric departments. This finding indicates that integrative pediatric departments have comparable resource utilization management to general pediatric departments, which is in line with comparable cost analyses. Diagnosis related groups and major diagnosis categories In both samples, esophagitis, gastroenteritis, gastrointestinal bleeding, ulcer (G67B; G67C) and head injuries (B80Z) belonged to the most frequent DRGs. The percentages varied between the integrative anthroposophic and all pediatric departments. While newborn > 2499 g, without complex diagnosis was one of the most frequent DRGs in the integrative anthroposophic sample, otitis media or infections of the upper respiratory tract, age < 3 years (D63Z) was more frequent in the entirety of all German pediatric departments. The frequencies of the MDCs in the integrative anthroposophic sample showed some significant differences in comparison to the entirety of all pediatric departments in Germany. Higher frequencies could be obtained for MDCs of the chronic diagnosis spectrum, such as mental, endocrine, and nervous disorders. Lower frequencies were found for acute diseases, such as digestive, respiratory, and ENT disorders. A similar pattern was obtained in the DRG frequencies.
This result pattern is known from a previous study of our working group on the patient characteristics and clinical characteristics of integrative anthroposophic pediatric departments in Germany. Furthermore, this result is in line with other international studies that conclude that the use of integrative medicine seems to be more frequent in children with severe and chronic diseases. The higher frequency of chronic and severe diseases may be another factor influencing the longer length of stay in the integrative pediatric departments. The large difference considering the MDC newborns, neonates and diseases of the perinatal period is most likely due to the specialization of the GKH with its center for neonatology. The absence of pregnancy, childbirth, and puerperium in the integrative anthroposophic sample may be explained by the circumstance that the treatment in this MDC is merely used by the gynecologic department in the integrative anthroposophic hospitals but not by the pediatric department. The higher percentage of this MDC in all German pediatric departments may be caused by teenage pregnancies or mothers who are treated in the pediatric department because their neonate child is treated in the pediatric department. Strengths and limitations The aim of the present study was to contribute to the better understanding of resource utilization, as measured by length of stay, in pediatric integrative medicine in Germany. A big strength of this study is that it is the first systematic investigation of a large sample of integrative pediatric resource utilization data with comparison to representative data of the entirety of all pediatric inpatient departments in Germany. One major limitation of this study is that it is a secondary data analysis. We were not able to gain raw data from the German Federal Statistical Office for the entirety of pediatric hospitals in Germany. Consequently, it was not possible to exclude any outliers in this sample. Future analyses also need to look at the impact on resource utilization in primary and outpatient care, as well as rehabilitation and social care where appropriate, as they may influence the length of stay of in-patients. We also recognize that this analysis only provides one aspect of information required for future economic evaluation; sequential services and the use of resources for the entire episode of care were not addressed in this study. To do so, data on resource use and their costs between integrated pediatric hospitals and other pediatric hospitals will need to be combined with comparative data on outcomes associated with treatment in these settings. This would include analysis for different population sub-groups, for instance by different MDC. Ideally outcomes would be measured in terms of impact on quality of life so that the health economic gold standard of incremental cost per quality adjusted life year (QALY) gained could then be assessed. It would also be important to look at whether there are differences in patterns of rehospitalization as part of any future economic evaluation.
The comparison of resource utilization in integrative anthroposophic pediatric departments to the entirety of pediatric departments in Germany shows a heterogeneous pattern of similarities and differences. The effective Case Mix Indices were identical, indicating an equal resource utilization in integrative anthroposophic and all pediatric departments. Treatment within integrative anthroposophic pediatric departments fits well within the DRG-defined conditions concerning length of stay, even though integrative pediatric patients have an increased length of stay of on average 1 day, which is most likely associated with time-consuming, complex integrative treatment approaches and, to a certain extent, with a higher share of chronic and severe diseases. Future economic evaluations are needed to assess whether integrative anthroposophic pediatric departments are cost-effective. Additional file 1 Supplemental material 1: 50 most frequent DRG in integrative anthroposophic departments in Germany. Additional file 2 Supplemental material 2: 50 most frequent DRGs in the entirety of all pediatric departments in Germany.
An Immunocytochemistry Method to Investigate the Translationally Active HIV Reservoir
05ad8f57-b465-4378-a81a-fb0f88c1c521
11766174
Anatomy[mh]
Progress has been made in understanding the molecular pathogenesis of HIV and has since led to the development of combination antiretroviral therapy (cART). However, cART only suppresses viral replication and cell-to-cell transmission, and its cessation is inevitably followed by viral rebound due to the existence of an HIV reservoir harboring HIV DNA . The development of molecular tools that accurately measure the HIV reservoir size is critical to monitor remission and prognosis of HIV pathogenesis . Currently, several approaches to quantify markers of the HIV reservoir exist, including those that measure the proviral DNA by PCR, the intact proviral DNA by digital PCR (IPDA), the HIV viral RNA by qPCR, the HIV capsid protein (CA) by digital single-molecule assay (Simoa), and the amount of replication-competent virus by quantitative viral outgrowth assay (QVOA) . The value of these assays is limited by the HIV mutation rate and genomic deletions that render the majority of proviruses defective and unable to replicate . Moreover, commonly used methods, such as the gold standard assay, QVOA, are expensive, labor-intensive, require a large volume of blood, and tend to underestimate the size of the replication-competent reservoir . This highlights the need for simpler, less expensive, and time-saving methods to reliably measure the HIV reservoir size. Immunocytochemistry (ICC) is a technique used to assess the presence of a specific protein or antigen in cells by use of an antibody to which it binds . Here, bulk cells from cell lines or isolated from blood were used to quantify HIV CA-containing cells using an automated staining protocol coupled with computational image analysis. Specificity and sensitivity were assayed with cell lines and human PBMCs and compared to assays validated for quantifying viral biomarkers. Application of this method to the detection of CA-expressing cells in humanized mouse and non-human primate models and to the quantification of targeted activator of cell kill (TACK) activity was also assessed. This novel application of ICC, named here CA-ICC, has the potential to provide a simple and sensitive alternative assay to quantify the translationally active HIV reservoir. 2.1. CA-ICC Development and Characterization To detect and quantify the number of HIV CA-containing cells, an immunocytochemistry assay (CA-ICC) was developed with two mouse anti-CA-specific antibodies at 3 µg/mL each. These two monoclonal antibodies (mAbs) recognize different regions of CA, whereby the binding domain of the first anti-CA mAb (ZeptoMetrix, Buffalo, NY, USA) is located at the C-terminus of CA and the second anti-CA mAb (Capricorn) is located in the middle region of CA ( and ). To improve detection sensitivity and specificity, both anti-CA mAbs were used together in this study. The sensitivity of CA-ICC was first evaluated with MOLT IIIB cells, a human T-cell line chronically infected with HIV that constitutively produces HIV proteins in the cell and virus particles in the culture supernatant , and with in vitro infected human PBMCs . As a negative technical control, MOLT IIIB cells were stained with a mouse IgG (mIgG) that does not bind to HIV-CA, demonstrating that none of the cells with positive nuclear staining displayed the CA signal, as expected ( A and ). To the contrary, staining of MOLT IIIB with anti-CA antibodies displayed the intracellular chromogenic signal for 99.9% of cells with positive nuclear staining, as expected ( B and ). 
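The sensitivity and specificity probabilities reported in the following paragraphs are simple proportions of correctly and incorrectly classified cells. A minimal sketch of that arithmetic is given below with illustrative counts; the numbers are placeholders, not the counts reported in the text.

```python
# Sensitivity and specificity as proportions of correctly classified cells (illustrative counts).

def sensitivity_percent(true_pos, false_neg):
    return true_pos / (true_pos + false_neg) * 100

def specificity_percent(true_neg, false_pos):
    return true_neg / (true_neg + false_pos) * 100

# Hypothetical counts from a positive-control and a negative-control staining
print(f"sensitivity = {sensitivity_percent(true_pos=9_990, false_neg=10):.2f}%")   # 99.90%
print(f"specificity = {specificity_percent(true_neg=50_000, false_pos=0):.2f}%")   # 100.00%
```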
Expressed as a probability, the CA-ICC’s sensitivity, calculated as the ratio between the number of true positives and the sum of the number of true positives and false negatives [26,034/(26,034 + 19) × 100], was 99.96%. As negative biological controls, Jurkat cells (which do not express CA) and PBMCs from HIV seronegative individuals did not display any intracellular capsid signals when stained with anti-CA antibodies ( C,D and ). Expressed as a probability, the CA-ICC’s specificity, calculated as the ratio between the number of true negatives and the sum of the number of true negatives and false positives [69,410/(69,410 + 0) × 100], was 100%. Under these conditions, staining of in vitro HIV-infected PBMCs displayed 73.3% cells with the CA intracellular signal ( E and ). All together, these data showed that CA-ICC is sensitive and specific for the detection of intracellular CA in cell lines and in primary cells infected in vitro. Additional experimentation was performed to assess CA-ICC’s sensitivity . For this purpose, chronically infected MOLT IIIB cells were spiked into Jurkat cells starting at 1:5 dilution (200,000 MOLT IIIB cells into 800,000 Jurkat cells) and then at 1:10 serial dilutions up to 1:50,000 (16 MOLT IIIB cells into 800,000 Jurkat cells). On average, 95,293 ± 27,388 cells were scanned per serial dilution ( B). The percentage of CA-positive cells detected by CA-ICC decreased progressively with increasing dilutions, showing frequencies close to the expected values. Specifically, the detection percentages decreased from 15% at the 1:5 dilution to 1.7% at the 1:50 dilution, 0.14% at the 1:500 dilution, 0.04% at the 1:5000 dilution, and 0.015% at the 1:50,000 dilution ( A,B). To validate CA-ICC, HIV CA and RNA markers were quantified by standard ELISA and qPCR assays, respectively, using cell lysates from the same set of MOLT IIIB cells diluted in Jurkat cells. The concentrations of CA measured as picograms per milliliter were 48.959, 4.167, 0.543, 0.022, and 0.006 in the 1:5, 1:50, 1:500, 1:5000, and 1:50,000 dilutions, respectively. Independently, the numbers of HIV-RNA copies per microliter were 1285.4, 124.5, 12.1, 1, and 0.3, respectively. The proportion of CA-positive cells measured by CA-ICC was associated with the quantification of CA in cell lysate by ELISA and with the intracellular viral RNA measured by qPCR. These data indicated that the quantifications made with CA-ICC were consistent with other known biomarkers of intracellular HIV replication, including CA and viral RNA measured with standard assays. Separately, CA-ICC’s reproducibility was evaluated by spiking MOLT IIIB cells into Jurkat cells at 7% dilution. On average, 165,798 ± 12,527 cells were scanned per condition. As shown in , the coefficient of variations (CVs) of intra-day and inter-day were 12.97% (N = 3) and 10.08% (N = 3), respectively. Both CVs were below 20%, indicating that CA-ICC was reproducible when performed three times within the same day or when performed on three consecutive days . To compare CA-ICC’s sensitivity with validated flow cytometric detection of CA, MOLT IIIB cells were spiked into Jurkat cells at different dilutions to obtain expected frequencies ranging from 1 to 100,000 CA-positive cells per million ( C,D). As a negative control, Jurkat cells were used to set the baseline ( C,D; ‘0’ cells condition). These samples were then split and analyzed by CA-ICC and flow cytometry in parallel to compare the sensitivity of the two assays. 
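Before the dilution-series results that follow, it may help to see how nominal spike-in frequencies and observed counts translate into recovery rates. The sketch below pairs the expected frequencies described above with the average observed counts quoted in the next paragraph; the calculation itself is an illustration, not the authors' analysis script.

```python
# Spike-in recovery: expected CA-positive cells per million vs the average observed counts quoted below.
expected_per_million = [1, 10, 100, 1_000, 10_000, 100_000]
observed_per_million = [6, 13, 148, 1_039, 16_773, 172_933]

for expected, observed in zip(expected_per_million, observed_per_million):
    fold = observed / expected
    print(f"expected {expected:>7}/1e6 -> observed {observed:>7}/1e6 ({fold:.1f}-fold of nominal)")
```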
On average, 194,581 ± 40,005 cells per dilution were scanned for CA-ICC analysis. By CA-ICC, no CA-positive cells were detected in the sample containing only Jurkat cells. On average, 6, 13, 148, 1039, 16,773, and 172,933 CA-positive cells were observed in the samples with expected frequencies of 1, 10, 100, 1000, 10,000, and 100,000 per million, respectively. It should be noted that, in these experiments, normalization per million was achieved by extrapolation. With the exception of the condition testing the 1 cell per million frequency, which returned an average value 6-fold greater than expected, CA-ICC’s recovery rates were consistently within 2-fold of the expected frequencies and displayed a linear range throughout the serial dilutions ( C). On the other hand, the average numbers of CA-positive cells observed by flow cytometry in the serial dilutions described above were 498, 415, 110, 1105, 11,021, and 61,148, respectively. In this case, the dilutions equal to and greater than 1000 CA-positive cells per million observed recovery rates within 2-fold of the expected frequencies and a serial linear range. However, the average number of CA-positive cells observed in the dilutions containing 100 or fewer CA-positive cells per million was inconsistent with the expected frequencies. On average, 4 positive cells per million were quantified in the sample where the expected frequency was 0. These discrepancies highlighted the limitations of flow cytometry for the quantification of these samples ( D). Taken together, CA-ICC was demonstrated to be specific, reproducible, and to possess a linear range for the detection of CA-positive cells at expected analyte concentrations. In this regard, CA-ICC was shown to be more specific and sensitive than standard flow cytometric detection of CA. 2.2. CA-ICC of Bulk PBMCs Isolated from Blood Obtained from Animal Models As an application of CA-ICC for the detection of in vivo biomarkers, blood samples collected from a humanized mouse model ( A–C) and a non-human primate model ( D–I) were used. Cells expressing HIV-CA intracellularly were detected in blood samples collected from mice engrafted with in vitro HIV-infected human PBMCs ( B) but not in control animals ( A). On average, 231,197 ± 82,830 PBMCs were scanned by CA-ICC for each of the six mice included in the study. Due to the limited number of cells obtainable from retro-orbital bleeding in mice, CA-ICC quantification was compared to plasma viral loads. This comparison revealed a range of viremia set points and demonstrated an association between the two markers ( C). To evaluate whether CA-ICC could also be used for the detection of SIV p27, a mouse anti-p27-specific mAb was used in addition to an anti-CA antibody that cross binds with both HIV CA and SIV p27. Staining of in vitro SIV-infected rhesus macaque’s PBMCs displayed ~27% cells with the detectable p27 signal by CA-ICC. As a negative control, the same sample was stained with mIgG and did not display any positive cells, suggesting that the p27 signal was specific. To further test the applicability of CA-ICC to a non-human primate model, PBMCs longitudinally isolated from two individuals before SIV infection (day −11), after SIV infection (day 14), and after initiation of antiretroviral treatment (day 84) were evaluated. On average, 226,776 ± 106,614 PBMCs were scanned for each of the two monkeys included in the study ( D–F). 
The percentages of p27-positive cells at day −11, day 14, and day 84 were 0.001 ± 0.002, 0.065 ± 0.015, and 0.001 ± 0.002, respectively ( G). As a comparison, p27 accumulation in PBMCs lysate ( H) and plasma viral loads ( I) were measured in the same blood samples with Simoa-based and qPCR validated assays, respectively. The readouts from the different assays displayed comparable trends across the time course ( G–I). Altogether, the datasets obtained from the two animal models showed that CA-ICC can be used to determine the proportion of CA-expressing cells and that its quantification was associated with other biomarkers of virus replication assessed with previously validated assays. 2.3. Measurement of CA-Expressing Cells in HIV-Infected Human PBMCs Treated with Antiretrovirals To evaluate applications of CA-ICC in drug discovery and development, CA-ICC was used to assess the activity of small-molecule targeted activator of cell kill (TACK) . TACK compounds are non-nucleoside reverse transcriptase inhibitors (NNRTIs) that can induce HIV-selective cytotoxicity through binding to the RT domain of monomeric Gag-Pol and causing premature intracellular viral protease activation. For this purpose, human PBMCs were infected with a replication-incompetent virus expressing a GFP reporter. Twenty-four hours post-infection, the cells were treated with compound for 72 h. Viral biomarkers were quantified by CA-ICC and validated using qPCR-, flow cytometric- and GFP-based assays . As negative controls, an NNRTI that does not possess TACK activity (non-TACK in ) and a co-treatment consisting of TACK along with the HIV protease inhibitor (PI) indinavir (IDV), which is known to block TACK activity, were used. For CA-ICC quantification, an average of 187,798 ± 23,086 cells were scanned across all treatments. The proportion of HIV-positive cells detected by CA-ICC was 76.3% following HIV infection and treatment with vehicle control (DMSO) ( A). Treatment with TACK reduced the number of CA-positive cells by 2.7-fold to 28.1%, while this effect was blocked by the addition of IDV. As expected, the non-TACK compound treatment displayed proportions of CA-positive cells comparable to the vehicle control ( A). To evaluate the hypothesis that the amount of intracellular CA signal is associated with susceptibility to TACK-mediated elimination, CA-positive cells were sorted based on the intensity of the chromogenic signal quantified by computational analysis. In the experiment presented in A, three subgroups of CA-positive cells were arbitrarily defined as strong, moderate, and weak, accounting for average percentages of 8.6, 12.9, and 54.7 of the total CA-positive cells (~76% as described above), respectively. The average percentages of CA-positive cells quantified after TACK treatment in the three subgroups were 2.3, 4.0, and 21.9 of the total CA-positive cells (~28%, as described above), respectively. The reductions in CA-positive cells in the strong, moderate, and weak subgroups were 3.8-, 3.2-, and 2.5-fold, respectively, suggesting that cells with a higher intensity of intracellular CA (and possibly of Gag-Pol) signal were more susceptible to TACK-mediated cell elimination. The same samples, treated under the conditions described above, were used to quantify other markers of virus replication using standard assays. 
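The fold reductions reported above are simple ratios of the vehicle-control and TACK-treated percentages. The short check below recomputes them from the rounded values given in the text; the manuscript’s figures were presumably derived from unrounded percentages, so the last digit can differ slightly (for example, 3.7 versus 3.8 for the strong subgroup).

# Recomputing the fold reductions in CA-positive cells from the percentages
# reported in the text (vehicle control vs. TACK treatment).
vehicle_pct = {"total": 76.3, "strong": 8.6, "moderate": 12.9, "weak": 54.7}
tack_pct = {"total": 28.1, "strong": 2.3, "moderate": 4.0, "weak": 21.9}

for group in vehicle_pct:
    fold = vehicle_pct[group] / tack_pct[group]
    print(f"{group:>8}: {fold:.1f}-fold reduction in CA-positive cells")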
TACK treatment lowered the level of viral RNA-positive cells detected by qPCR by 3.9-fold ( B), CA-positive cells detected by flow cytometry by 2.6-fold ( C), and the number of GFP-positive cells detected by a microplate reader by 2.5-fold ( D), with these last two protein markers’ reductions being comparable to CA-ICC quantification. The data indicated that CA-ICC could quantify the activity of antiretroviral agents that specifically eliminate cells expressing CA, similarly to standard methods. Finally, to evaluate the use of CA-ICC with clinical samples, bulk PBMCs from three PLWHs on cART were stimulated with PMA/Ionomycin (P/I) to reactivate cells from latency and treated with TACK to eliminate CA-expressing cells . In those experiments, approximately 100,000 cells were scanned per donor per condition, and less than 0.1% CA-positive cells were detected by CA-ICC in the vehicle control treatment (DMSO), while P/I stimulation increased the proportion of CA-positive cells to ~1% ( A). Treatment of P/I-reactivated cells with TACK reduced the proportion of capsid expressing cells to ~0.1% ( p < 0.05). As confirmatory evaluation using a standard assay, CA measured by Simoa showed similar treatment-induced patterns in both the cell lysate ( B) and culture supernatant ( C). In addition, the efficacy of TACK treatment, assessed by HIV viral genomic RNA quantification using qPCR, was comparable to that assessed by CA-ICC ( D). Altogether, this validation indicated that CA-ICC can be used to measure the pharmacological reduction of translationally active HIV-infected cells in clinical samples as an alternative to standard assays.
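A useful way to read the clinical-sample numbers above is in terms of how many positive events can be expected for a given scan size; with roughly 100,000 cells scanned per donor per condition, a frequency of 0.1% corresponds to on the order of 100 countable cells. The sketch below uses a hypothetical helper, not part of the study, to make this relationship explicit.

def expected_positive_count(frequency_percent, cells_scanned):
    # Number of CA-positive cells expected to be counted at a given frequency
    return frequency_percent / 100.0 * cells_scanned

for freq in (1.0, 0.1, 0.01, 0.001):  # percent CA-positive
    positives = expected_positive_count(freq, 100_000)
    print(f"{freq:>6}% positive -> ~{positives:.0f} positives expected among 100,000 scanned cells")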
Currently, several assays to quantify viral reservoir markers in bulk cells are available, including those that measure the HIV provirus (DNA), viral genomic RNA, and viral proteins . Traditional assays aimed at reservoir characterization by measuring HIV DNA or RNA are sensitive and scalable but have limitations in distinguishing defective from translation- and replication-competent proviruses . This can inherently lead to an overestimation of the size of the reservoir. In addition, standard cell culture-based assays, such as QVOA, that measure HIV Gag CA in culture medium after expansion in culture for 2–3 weeks, require large sample volumes and have limited throughput and clinical application . It should also be emphasized that the successful translation of Gag does not unequivocally confirm the generation of replication-competent progeny viral particles. Alternatively, the Tat/rev-induced limiting dilution assay (TILDA), which measures cell-associated multiply spliced HIV RNA, offers advantages to QVOA, such as increased throughput and decreased sample requirements, but only measures RNA transcripts . Overall, these assays require cells lysis as end point quantification and preclude analysis of cell morphology and visualization of subcellular biomarkers’ localization. Thus, alternative approaches to quantitate the translationally active reservoir in bulk cells are needed to directly visualize CA at the single cell level. In this work, a novel immunocytochemistry method (CA-ICC), employing antibody-mediated detection of intracellular HIV CA in combination with automated staining and computational image analysis, is described. CA-ICC was validated with HIV-infected cell lines and primary cells, and its application was tested with blood samples collected from animal models. Across multiple experiments, the quantification of CA-positive cells made by CA-ICC was associated with other viral markers quantified with validated methods ( , , and ). Importantly, when compared to standard flow cytometric detection of CA, CA-ICC displayed improved specificity and sensitivity in quantifying CA-positive cells spiked into uninfected Jurkat cells ( C,D). In those experiments, CA-ICC allowed detection within a 2-fold range from expected frequencies, with an improved linear range resolution down to 10 CA-positive cells per million.
However, in samples with an expected frequency of 1 CA-positive cell per million, the average observed quantification by CA-ICC was 6-fold larger than expected. This indicates that while CA-ICC shows promise, further optimization is needed to improve its sensitivity at such low frequencies ( C). Nonetheless, within the same samples, the flow cytometric quantification of CA-positive cells at frequencies below 100 per million was inaccurate, and the background signal posed limitations ( D). Thus, CA-ICC is superior to flow cytometric quantification of intracellular CA, even though the mAbs used for CA-ICC and flow cytometry, as well as the respective methodologies, are different. Overall, this improvement offers a clear advantage of using CA-ICC over flow cytometry for the detection of the translationally active reservoir. It should be noted that while the FISH-Flow assay allows for the detection of about 1 Gag-positive cell per million and it appears to be more sensitive than CA-ICC in that regard, it lacks a microscopic visualization of CA in the cell and requires a more laborious procedure than CA-ICC . In this regard, CA-ICC is an advantageous method because it uses a relatively simpler protocol based on commercially available reagents and instrumentation. Compared to QVOA, another methodology for the detection of the translationally active reservoir, CA-ICC is favorable as its protocol is two days long, as opposed to several weeks. Furthermore, CA-ICC requires smaller blood volumes than QVOA, which is critical for preserving precious clinical samples. Importantly, one autostainer can stain 30 slides in one run, and two experiments can be performed per day, thereby expanding the throughput of CA-ICC to 60 samples per day. As a disadvantage, the CA-ICC’s capacity, as determined by the number of cells that can be loaded on a microscopy slide, is often limited to 0.5 million cells per specimen. Thus, samples with larger numbers of cells need to be separately loaded onto multiple slides. However, using two loading zones within the same microscopy slide is an adjustment currently under evaluation. Among the factors contributing to the sensitivity of CA-ICC is the use of a combination of two mAbs that bind to different epitopes in CA to minimize the detection of false negatives. Furthermore, according to the manufacturer’s description, the BOND Polymer Refine Red detection kit employs a controlled polymerization technique to create polymeric AP-linker antibody conjugates. This method eliminates the need for traditional streptavidin and biotin processes, thus significantly reducing nonspecific staining. In addition, the commercially available automated staining system used for the CA-ICC procedure, as described in and , is typically employed in digital pathology workflows designed to detect signals in tissue specimens. These specimens are inherently more challenging to stain than bulk cell preparations due to factors such as specimen fixation, dehydration, and permeabilization. It should be noted, however, that the requirement for specialized equipment, including an autostainer and digital pathology scanner, is a disadvantage of the CA-ICC protocol. Nonetheless, it is not more costly than flow cytometry. To demonstrate the applicability to clinical samples, CA-ICC was used to quantify the activity of TACK to induce cell death of either in vitro infected PBMCs or ex vivo treated PBMCs from PLWHs on cART ( and ). 
TACK compounds are antiretrovirals belonging to the NNRTI class that can trigger the premature activation of HIV protease followed by CARD8-mediated activation of the inflammasome and pyroptosis in the infected cell, which provides a novel mechanism to reduce the translationally active reservoir. CA-ICC quantification of TACK’s efficacy was comparable to previously validated assays based on quantitative PCR, flow cytometry, and GFP fluorescence, further showing that CA-ICC is a valid alternative to those methods. Although at this stage CA-ICC is limited to detection of the intracellular CA marker and analyses were conducted with modest sample sizes, studies are ongoing to multiplex the simultaneous detection of additional markers and to apply this technique to cells obtained from dissociated tissue. Coupling the detection of CA with other viral and host markers, along with the ICC’s capability to visualize their subcellular distribution at the single cell level, will enhance the complexity of characterizing the translationally active reservoir. These additional applications should improve the potential of CA-ICC to become a simple and rapid tool to profile the translationally active reservoir in pre-clinical and clinical samples. 4.1. Antibodies and Cells The antibodies to HIV CA or SIV p27 used in this study are listed in . The binding domain of the ZeptoMetrix anti-HIV CA mAb is located within the p24 amino acid position 46–75 (GATPQDLNTMLNTVGGHQAAMQMLKETINE), as determined by mass spectrometry. The Capricorn anti-HIV CA mAb binding domain is instead located at p24 amino acid position 188–197 (TLLVQNANPD), as determined by peptide mapping. When aligned with 11,677 complete HIV-1 sequences in the Los Alamos National Laboratory database (2021 version), the amino acid sequences of the binding sites for the ZeptoMetrix and Capricorn mAbs were conserved at rates of 31.9% and 87.2%, respectively. In addition, as shown in a previous report, the combination of these two mAbs efficiently detects p24 from multiple HIV-1 subtypes . The sequence of the ABL anti-SIV p27 mAb binding domain is located at the p27 amino acid position 64–84 (AMQIIRDIINEEAADWDLQH), as determined by peptide mapping. The MOLT IIIB cell line carrying an integrated HIV provirus was obtained from the AIDS Research Program . Human PBMCs isolated from leukapheresis were obtained from Lonza (Basel, Switzerland). Jurkat cell line clone E6-1 was obtained from ATCC. Non-human primate (NHP) rhesus macaque PBMCs were isolated from whole ethylenediaminetetraacetic acid (EDTA) anticoagulant blood by Ficoll density gradient centrifugation. Mouse PBMCs were isolated from mouse whole blood, collected in an EDTA tube, spun down, and the cell pellet was treated with red blood cell lysis buffer (Biolegend, San Diego, CA, USA). 4.2. CA-ICC Slide Preparation and Workflow Cell pellets from either cell lines or PBMCs were washed once with PBS and fixed with 4% paraformaldehyde (PFA) for 2 h at room temperature. Fixed cells were resuspended to a 1 × 10 6 cells/mL density in PBS, and a 0.5 mL cell mixture (0.5 × 10 6 cells) was loaded into a cytospin funnel cassette (Thermo-Fisher, Carlsbad, CA, USA) with a cytospin single slide (Thermo-Fisher). Cells were spun at 800 rpm for 10 min at room temperature (RT) in a CytoSpin 4 Cytocentrifuge. Slides were air dried for 30 min at RT and immersed in 50% ethanol (EtOH) for 5 min, followed by immersions in 70% EtOH for 5 min and 100% EtOH for 5 min. 
The slides were then stored in 100% EtOH for up to 1 month at −20 °C until staining. 4.3. CA-ICC Slides were stained by a Leica Bond RX autostainer (Leica Biosystems, Nussloch, Germany) via a BOND Polymer Refine Red Detection kit (Leica Biosystems). For the detection of HIV-p24, two anti-CA mAbs (anti-CA HIV-018-48304 and anti-CA 801136; ) were combined to increase the staining sensitivity. For the detection of SIV-p27, two antibodies (anti-SIV p27 4324 and anti-HIV CA HIV-018-48304; ) were combined. Antibodies were diluted with Da Vinci Green Diluent buffer (Biocare Medical, Concord, CA, USA), each to final concentration of 3 µg/mL. A secondary alkaline phosphatase (AP)-labeled anti-mouse IgG detection antibody and AP red substrate were included in the kit. All of the staining procedures were performed according to the manufacturer’s instructions with an automated protocol. After staining, the slides were washed once with distilled water for 1 min, then dehydrated twice with 100% EtOH for 5 min, followed by two immersions in HistoPrep xylene (Fisherbrand, Pittsburgh, PA, USA), for 5 min each. The slides were mounted with one drop of EcoMount (Biocare Medical) and a coverslip, then dried overnight at RT. The prepared slides were scanned using a Zeiss Axioscan (Oberkochen, Germany) and, after acquisition, the images were analyzed and rendered with a digital pathology imaging software (HALO, v3.6.4134, Indica Labs, Albuquerque, NM, USA) . 4.4. Preparation of Mouse and NHP Bulk PBMCs For CA-ICC validation, PBMCs were collected from two animal models, a mouse HIV model and a SIV-infected rhesus macaque model. The mouse HIV model utilized intraperitoneal (IP) injection of HIV-infected human PBMCs. For this, fresh human PBMCs were activated with phytohemagglutinin (PHA) at a concentration of 5 µg/mL in complete RPMI-1640 (cRPMI) medium with 10% fetal bovine serum (FBS) and 20 U/mL Interleukin-2 (IL-2) for 3 days. After activation, PBMCs were infected with the R8 wild-type replication-competent HIV laboratory strain with a multiplicity of infection (MOI) of 0.1 for 4 h. Following infection, the input virus was washed away with cRPMI three times. The infected PBMCs were then cultured in T175 flasks with fresh cRPMI. Three days after infection, the cultured PBMCs were collected. A concentration of 30 × 10 6 PBMCs/mL in phosphate-buffered saline (PBS) was injected into immunodeficient NOD-SCID-IL-2ry−/− mice. Mouse blood was collected by retro-orbital bleeding, both before and 5 weeks post-injection of HIV-infected human PBMCs, for both CA-ICC and plasma viral load analyses. For the SIV-infected non-human primate model, PBMCs were isolated from rhesus macaques pre-infection (day −11), post infection (day 14) with SIVmac239m , and post-treatment (day 84; 28 days post-treatment initiation with daily subcutaneous 2.5 mg/kg dolutegravir, 40 mg/kg emtricitabine, 5.1 mg/kg tenofovir disoproxil fumarate) . The isolated PBMCs were used for CA-ICC, p27 Simoa, and SIV viral RNA measurements. In addition, for in vitro characterization of p27 CA-ICC, PBMCs were isolated from uninfected rhesus macaques’ blood, activated with 5 µg/mL Concanavalin A in cRPMI for 3 days, washed and resuspended in culture media with 20 U/mL IL-2. PBMCs at a ~9.6 × 10 6 /mL density were then infected with SIV SF162p3 in a 50 mL bio-reactor tube with spin inoculation at 2000× g for 2 h at RT. 
Infected PBMCs were then cultured overnight, washed three times with cRPMI by centrifuging at 200× g for 5 min, and fixed with 4% PFA for p27 CA-ICC. 4.5. Plasma Viral Load Measurement Murine blood was collected by retro-orbital bleeding before and after injection of HIV-infected PBMCs. Plasma was obtained by centrifuging the blood at 2000× g for 5 min in an Eppendorf tube. Quantitative PCR (qPCR) was performed using a primers/probe set targeting the HIV integrase sequence, similarly to our previous procedure . To determine the standard curve, HIV RNA was extracted from the MOLT IIIB cell line and quantified using the QuantStudio 3D digital real-time PCR system (QuantStudio 3D Analysis Suite Cloud Software, 15 April 2016, Thermo Fisher Scientific, Carlsbad, CA, USA). The plasma HIV viral loads (copies/mL) were calculated based on the standard curve. 4.6. Single Molecule Array (Simoa) Supernatant obtained from the cell culture was centrifuged at 10,000× g for 5 min at RT to remove insoluble material before measuring CA levels on the Quanterix analyzer (Quanterix, Bullerica, MA, USA). The assay reagents and reaction conditions for the CA measurements were previously described . The concentration of CA was determined using a calibration curve fit to a four-parameter logistic model. 4.7. Quantitative Reverse-Transcription Polymerase Chain Reaction (qPCR) PBMCs were isolated from PLWH on cART using either leukapheresis or whole blood via Ficoll-gradient centrifugation. CD4+ T cells were then isolated from PBMCs using negative selection with the EasySep Kit (STEMCELL Technologies, Vancouver, BC, Canada). CD4+ T cells were treated with either 0.1% DMSO or a combination of 10 ng/mL PMA (phorbol 12-myristate 13-acetate) and 1 μg/mL ionomycin in culture medium for 48 h. The cells and culture medium were collected by centrifugation, and RNA was isolated from the cell pellet using the RNeasy kit (Qiagen, Hilden, Germany). qPCR was performed using the TaqMan Fast Virus 1-Step Master Mix (Thermo Fisher) . A total of 2 µL of purified RNA was used as template. Gene expression assays for the primers/probe specific to HIV or SIV gag were obtained from Thermo Fisher. qPCR was carried out using the QuantStudio 12K Flex system (ThermoFisher Scientific, Carlsbad, CA, USA). 4.8. Flow Cytometry PBMCs were stained as previously described , with modifications. Briefly, cells were stained for viability at 4 °C for 20 min using BD Horizon Fixable Viability Stain 700 (BD Biosciences, Franklin Lakes, NJ, USA) and blocked for 10 min at RT with Human TruStain FcX (Biolegend) prior to surface staining. CD4+ T cells were then stained for surface markers at 4 °C for 30 min with the following antibodies in PBS, including 1% FBS and Brilliant Stain Buffer Plus (BD Biosciences): CD3-BUV496 (UCHT-1, BD Biosciences), CD4-BV786 (SK3, BD Biosciences), and CD8-BUV737 (SK1, BD Biosciences). Fixation and permeabilization of cells were performed at 4 °C for 20 min using BD Cytofix/Cytoperm Buffer (BD Biosciences), followed by intracellular staining for 45 min at RT with anti-CA-KC57-PE and anti-CA 28B7-APC antibodies . Cells were then fixed in BD Stabilizing Fixative (BD Biosciences) and acquired on a BD FACSymphony A3 cytometer (BD Biosciences). 4.9. Quantification of HIV-Infected Cell Elimination Human PBMCs were cultured in cRPMI with 5 µg/mL PHA for 72 h. 
PHA-stimulated PBMCs were infected with the vesicular stomatitis virus glycoprotein G (VSV-G) pseudotyped HIV-1 virus engineered with a green fluorescent protein (GFP) reporter (VSV-G/pNLG1-P2A-∆Env), as previously described . Briefly, PBMCs were incubated with virus at an MOI of 0.4 for 4 h at 37 °C. After incubation, the PBMCs were washed three times with cRPMI by centrifugation at 200× g for 5 min. Infected PBMCs were then resuspended at 5 × 10 6 cells/mL in complete medium with 10 U/mL IL-2 and incubated for 24 h before treatment with compounds. TACK and non-TACK compounds were tested with or without 250 nM IDV. HIV-infected PBMCs were washed once, and either 100 nM TACK or non-TACK compound was added in four treatment conditions, including TACK alone, non-TACK alone, TACK with IDV, and non-TACK with IDV, then cultured for 72 h. The half maximal cytotoxic concentration (CC 50 ) of both TACK and non-TACK compounds exceeded 40.5 µM. Moreover, cell viability under the aforementioned conditions, assessed using a ViCell XR Cell Viability Analyzer (Beckman Coulter, Brea, CA, USA), was greater than 92%. After incubation with compounds, cells were collected for measurement of cellular CA by CA-ICC and flow cytometry and viral RNA by qPCR with the methods described above. For the GFP-based microplate assay, cells were seeded in plates and analyzed with an Acumen X3 imager (SPT Labtech Ltd, Melbourn, UK) (488-nm laser) to count the number of GFP-positive cells. 4.10. Data Analysis Statistical analyses were performed on log-transformed data, and results were reported after transforming back to the original scale. The data were presented as the mean ± standard deviation. Graphs and figures were prepared using GraphPad Prism 10 (GraphPad Software, Inc., La Jolla, CA, USA). Statistical significance in group comparisons was denoted conventionally by *, p < 0.05 using the Student’s t test.
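As a rough illustration of the approach described in the Data Analysis section, the sketch below applies a Student's t test to log-transformed values and back-transforms the log-scale means to the original scale as geometric means. The example values are hypothetical, and whether a paired or unpaired test was used for any given comparison is not specified in the text, so this is only an assumed workflow, not the study's actual analysis script.

import numpy as np
from scipy import stats

control = np.array([1.05, 0.87, 1.21])   # hypothetical replicate measurements
treated = np.array([0.11, 0.09, 0.14])

log_control = np.log10(control)
log_treated = np.log10(treated)

# Student's t test on log-transformed data (paired here, by assumption)
t_stat, p_value = stats.ttest_rel(log_control, log_treated)

# Back-transform the log-scale means to the original scale (geometric means)
geo_control = 10 ** log_control.mean()
geo_treated = 10 ** log_treated.mean()

print(f"geometric means: control = {geo_control:.2f}, treated = {geo_treated:.2f}")
print(f"p = {p_value:.3f} (paired t test on log-transformed values)")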
Perceived Exertion: Revisiting the History and Updating the Neurophysiology and the Practical Applications
02b8266d-fc6f-460d-bd62-bd197c166f72
9658641
Physiology[mh]
The sensations produced during exercise have intrigued scientists from several areas for more than one century . Over time, studies investigating exercise-evoked sensations addressed essential topics, including construct validity, measurement properties, neurophysiological mechanisms, and practical applications . The studies carried out by the Swedish psychologist Gunnar Borg about the perceived exertion construct from the early sixties onward are a landmark in exercise physiology and sport science . These studies culminated with the creation of two scales extensively used to measure perceived exertion and other exercise-evoked sensations . Obtaining perceived exertion is relatively easy, but practitioners often neglect some critical methodological issues extensively addressed when the scales are developed, which can subsequently compromise the validity of perceived exertion assessment . In addition, several theoretical models have proposed that perceived exertion plays a role in explaining endurance exercise performance . These models rely on assumptions about the origin of the neural signals responsible for generating the perceived exertion . Although the scientific knowledge about central and peripheral signals involved in the perceived exertion genesis has notably progressed in the last decade, the scenario is complex, and some caveats remain, requiring an integrative physiological interpretation to advance the field further . Lastly, practitioners have extensively applied the perceived exertion to prescribe exercise intensity . However, the practical application of perceived exertion assessment has advanced since its inception . For the reasons above, the objectives of the present narrative review are (1) to revisit the history of the perceived exertion construct and scales development; (2) to present available definitions of perceived exertion; (3) to describe potential neurophysiological mechanisms involved in the perceived exertion genesis during exercise, exploring them from an integrative viewpoint; (4) to highlight essential methodological aspects that practitioners should take into account when obtaining perceived exertion; (5) to demonstrate practical applications of perceived exertion assessment during exercise, either in sport or exercise applied to health promotion and rehabilitation programs. Psychophysics is a psychology discipline that typically investigates the relationship between physical stimuli and sensory responses . Psychophysics researchers frequently use two experimental approaches: (1) ratio production and (2) magnitude estimation . In the ratio production method, a subject must produce a physical stimulus proportional (e.g., double or half) to a previously presented reference physical stimulus . In the magnitude estimation method, a subject must estimate the magnitude of the sensation generated by a physical stimulus, choosing any number that best represents that sensation . For example, a subject can choose the number 10 and another 100 for the same physical stimulus. Using physical stimuli of different magnitudes, it is then possible to establish, with both experimental approaches, the mathematical function that better describes the relationship between physical stimuli and sensory responses . The psychologist Stanley Stevens and his collaborators developed these ratio-scaling methods in the middle of the last century at Harvard University . Later, other researchers have widely used them, leveraging the research in the psychophysics field . 
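For readers less familiar with magnitude estimation, the power function underlying these ratio-scaling methods is usually estimated by a linear fit on the log-log scale, where the slope gives the exponent. The sketch below uses hypothetical data constructed to follow an exponent of about 1.7, in line with the values discussed in the following paragraphs; it is only meant to show the fitting procedure, not any of the original data.

import numpy as np

# Hypothetical magnitude-estimation data: stimulus (e.g., workload in watts)
# and the corresponding magnitude estimates reported by a subject.
stimulus = np.array([50, 100, 150, 200, 250])
response = np.array([4.0, 13.0, 26.0, 42.0, 61.0])

# Fit R = c * S**n by linear regression on the log-log scale; the slope is n.
slope, intercept = np.polyfit(np.log10(stimulus), np.log10(response), 1)
print(f"fitted power function: R = {10 ** intercept:.4f} * S^{slope:.2f}")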
Gunnar Borg used the previously presented psychophysics methods to investigate the perceived exertion during exercise. Motivation for this investigation arose from practical observations reported to Gunnar Borg by Hans Dahlström, a colleague of Gunnar Borg at Umeå University. Hans Dahlström noted that his patients reported a loss of 50% in physical work capacity, although the patients’ performance in a cycle ergometer test had decreased by only 25%. The initial studies conducted by Gunnar Borg and Hans Dahlström did not focus specifically on the perception of the reduction in physical work capacity over time. People did not perceive a decline in their physical work capacity as such, but rather an increased effort to perform the same workload. Considering that a given workload can generate an overload on skeletal muscles, joints, and the cardiorespiratory system that is proportional to each person’s maximum physical work capacity, Gunnar Borg speculated that the signals coming from the involved sensory receptors would generate a perception of effort proportional to each person’s maximum physical work capacity (i.e., relative exercise intensity). This hypothesis provided an essential theoretical framework for developing scales to measure perceived exertion. Then, in 1959 and 1960, Gunnar Borg and Hans Dahlström investigated the perceived exertion during short-duration (30 s) exercise on a stationary bicycle using the ratio production method. In summary, different workloads were applied, and subjects were subsequently asked to produce half of each of these workloads according to their perceived exertion. Thus, it was possible to establish a power function that mathematically described the relationship between the experimentally imposed workload (physical stimulus) and the perceptually produced workload (sensory response). The exponent of the relationship between imposed workload and perceptually produced workload averaged approximately 1.7. Therefore, this power function would explain the difference between people’s perceived loss and the actual loss in physical work capacity reported by Hans Dahlström to Gunnar Borg. Of note, similar exponent values for the relationship between workload and perceived exertion were obtained in studies using the ratio production method in different types of exercise (e.g., handgrip) and, later, during relatively more prolonged exercise (4–6 min) on a cycle ergometer using the magnitude estimation method. These initial studies helped describe the concept of perceived exertion, but the ratio production and magnitude estimation methods had significant limitations. These techniques did not allow for estimating the absolute level of perceived exertion. For example, a child and a weightlifter can both recognize that an object weighs twice as much as another. However, this information says nothing about the absolute effort used to lift the object, which could differ between the child and the weightlifter. Another crucial aspect was the validity of these psychophysical mathematical functions. One way to investigate the validity of psychophysical functions would be to test their correspondence with the physiological functions behind the sensory modality in question. However, in the case of perceived exertion assessed without a specific scale, the ensuing psychophysical function showed very low correlations with heart rate.
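Before turning to the category scales, a brief numerical illustration may help clarify why an exponent of roughly 1.7 helps account for the mismatch between perceived and actual losses in work capacity described above. The calculation below is only illustrative and not taken from the original studies; the exact correspondence depends on the individual data behind the original observations.

# Back-of-the-envelope illustration: if perceived exertion grows roughly as
# workload**1.7, a modest loss of actual work capacity is experienced as a
# considerably larger perceived loss.
exponent = 1.7

actual_loss = 0.25                                    # 25% reduction in workload
perceived_loss = 1 - (1 - actual_loss) ** exponent
print(f"a {actual_loss:.0%} drop in workload is perceived as roughly a {perceived_loss:.0%} drop")

perceived_halving = 0.5                               # perceived exertion cut in half
workload_factor = (1 - perceived_halving) ** (1 / exponent)
print(f"halving perceived exertion corresponds to about {workload_factor:.0%} of the original workload")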
In an attempt to overcome these limitations, Gunnar Borg then began to use a category scale with descriptors that anchored the perceived exertion between a minimum and a maximum value . He assumed that most people share a similar perception of “maximum effort” (Borg’s range model), even though the absolute physical work capacity achieved in this “maximum effort” was different . Additionally, evidence at that time showed similar between-people exponents (around 1.6 and 1.7) of the psychophysical relationship between workload and perceived exertion . Consequently, the perceived exertion for a given workload would be proportional to the maximum work capacity of each person , allowing comparison between individuals . It is also worth noting that Gunnar Borg carefully chose adjectives and adverbs to characterize scale descriptors according to their quantitative semantics properties in such a way that the verbal expressions contained in the descriptors facilitated identifying a level of intensity that converged with the numbers on the scale . In the early studies, Gunnar Borg used a 7-point category scale with simple verbal expressions . Later, as some subjects carried out five to seven loads in bicycle ergometer tests, Gunnar Borg increased the numbers on the scale to 21, thus allowing people to have more options to classify the perceived exertion between successive loads . The 21-point scale, however, exhibited a slightly negative accelerating power function with the workload, which made comparisons with heart rate difficult. Then, a practical observation led to the development of a new scale. Gunnar Borg observed that, on average, a perceived exertion of 17 corresponded to a heart rate of 170 bpm. This coincidence made Gunnar Borg generate a new scale starting on six and ending on 20, corresponding to the resting (60 bpm) and maximum (200 bpm) heart rate of young adults, respectively. The descriptors of the 21-point scale were then mathematically adjusted for the new 15-point scale (from 6 to 20). This new 15-point scale had equidistant intervals so that the effort ratings grew linearly ( ; Panel A), allowing comparison with objective exercise-intensity measurements such as heart rate and oxygen consumption . Thus, a category scale with interval property emerged: the rating of perceived exertion (RPE) scale. Following the RPE scale, Gunnar Borg was interested in developing a new scale that would establish the absolute magnitude of sensory response (category rating method) and the mathematical relationship between a physical stimulus and sensory response (ratio scaling method). The development of this new scale had the 7-point category scale as a starting point, as occurred in the creation of the RPE scale. Psychophysics studies at that time showed that category and ratio scales generated different nonlinear growth functions in the sensory responses to physical stimuli of progressive magnitude . Category scales produced a negatively accelerating growth function, whereas ratio scales exhibited a positively accelerating growth function . Such features allowed Gunnar Borg to mathematically change the verbal descriptors from the 7-point category scale to a ratio scale containing 10 points ( ; Panel B), thus emerging the category-ratio 10 (CR10) scale. The CR10 scale allows reporting decimal numbers (e.g., 0.3) to more finely grade the magnitude of perceived stimuli. 
The CR10 scale also enables reporting values greater than 10 in case the magnitude of the perceived stimulus is higher than the maximum previously experienced, thus avoiding a ceiling effect. Other scales were later developed, such as the category-ratio scale of 100 points, but a more detailed description of these scales is beyond the scope of the present narrative review. Researchers have recently discussed the meaning of perceived exertion, which may have implications for defining and applying the construct in the practical context. Gunnar Borg proposed that sensory information from skeletal muscles, joints, the cardiorespiratory system, and any other organ would generate sensations such as pain, fatigue (weakness), strain, and breathlessness. Together, these sensations would form the perceived exertion, a kind of gestalt (i.e., a whole inexplicable by its parts individually) related to the exercise requirement. Past experiences, expectations about exercise performance, psychological features, environmental conditions, exercise characteristics, and emotions associated with the exercise-evoked sensations would also weigh on the reported perceived exertion. Based on these assumptions, Gunnar Borg defined perceived exertion as a "feeling of how heavy, strenuous, and laborious the exercise is" according to the sensation of strain and fatigue in the skeletal muscles and breathlessness or aches in the chest. Robert Robertson and Bruce Noble defined perceived exertion as a "subjective intensity of effort, strain, discomfort, and/or fatigue that is experienced during physical exercise". This definition somewhat agrees with Gunnar Borg's idea that somatic information from different organs would generate sensations that together would form the perceived exertion. However, people can differentiate several bodily sensations arising during exercise. For example, if clear instructions are given, it is possible to discriminate the sense of effort to command skeletal muscles from the sensations of force, pain, or discomfort evoked by muscle contractions. It is also possible to differentiate the sensations of respiratory effort from the feelings of "air hunger" (insufficient inspiration), breathlessness, and chest tightness. Such differentiations possibly occur because different neurophysiological mechanisms are involved in the genesis of each of these sensations. Thus, considering all exercise-evoked somatic sensations together could hinder the accuracy of perceived exertion ratings. Samuele Marcora suggested defining perceived exertion as a "conscious sensation of how hard, heavy, and strenuous a physical task is". This sensation, however, would depend mainly on the sense of effort to command the involved limbs during the physical task and the feeling of heavy breathing. Given that people can accurately differentiate the sense of effort to command skeletal muscles (locomotor and respiratory) from other exercise-evoked somatic sensations, such as tension, force, pain, discomfort, and breathlessness, the definition proposed by Samuele Marcora seems more accurate for classifying the perceived exertion. It is also worth noting that Marcora's definition enables quantifying perceived exertion according to the descriptors of the RPE and CR10 scales, and several studies using Marcora's definition have shown that perceived exertion is sensitive to different physiological and psychological manipulations.
Two theories are frequently used to describe the origin of the neural signals responsible for the genesis of perceived exertion during exercise. One of them, known as the afferent feedback theory, holds that sensory brain areas produce the perceived exertion proportionally to mechanical and metabolic signals detected by receptors in the skeletal muscles and cardiorespiratory system. The same neural signals are also crucial for cardiorespiratory responses to exercise. Thus, presumably, perceived exertion and cardiorespiratory responses should be tightly associated. Indeed, several studies have shown high correlations between perceived exertion, heart rate, and pulmonary ventilation responses to exercise. Some researchers, however, argue that the available evidence does not support the afferent feedback theory. For instance, beta-blockers or mental fatigue can dissociate perceived exertion and heart rate responses to exercise. Moreover, information from mechano- and chemo-receptors in the respiratory system (airways, lungs, and chest wall) does not seem to be involved in generating the respiratory effort sensation (i.e., heavy breathing), which is an essential component of perceived exertion during whole-body exercise. Finally, experimental studies that partially blocked group III and IV muscle afferents typically do not show changes in perceived exertion during exercise compared to a control condition. Nevertheless, a cautious interpretation is required, considering that some of the mentioned studies were not specifically designed to investigate the neurophysiological mechanisms behind the perceived exertion during exercise. In contrast to the afferent feedback theory, the corollary discharge theory proposes that the signals that generate the perceived exertion come from efferent copies associated with the motor command to locomotor muscles and the central drive to respiratory muscles. Specifically, outputs from the supplementary motor area and the medullary respiratory center are sent directly to sensory areas. These signals are parallel (i.e., corollary discharges) and somewhat independent of the signals sent to locomotor and respiratory muscles. In support, experimental findings have shown that perceived exertion accompanies the changes in motor-related cortical potential (a proxy for central motor command) induced by manipulations that do not alter afferent signals (e.g., use of caffeine or eccentric exercise-induced force reduction). Therefore, these findings indicate that corollary discharges, rather than afferent signals, are vital to the generation and modulation of perceived exertion. However, researchers opposed to the afferent feedback theory have not considered that the redundancy and interaction between neurophysiological mechanisms are crucial for cardiovascular and respiratory adjustments to exercise. Such a phenomenon likely contributes to the formation of perceived exertion as well. Thus, next, we propose how both theories might physiologically operate together, which should be taken into account by future studies. Recent evidence has suggested that the genesis of perceived exertion during high-intensity exercise can indirectly involve the afferent feedback from the skeletal muscles and cardiorespiratory system. For example, the activation of group III and IV afferent receptors in the locomotor muscles by metabolite accumulation reduces the excitability of the primary motor cortex, hindering muscle recruitment.
In this case, the supplementary motor area has to increase the signals to the primary motor cortex to preserve the muscle power output, which provides additional corollary discharges for the genesis of the perceived exertion. In addition, the elevated respiratory work may generate metabolite accumulation in the respiratory muscles, which also activates the underlying group III and IV afferent fibers. The activation of respiratory muscle afferents leads to sympathetically-mediated vasoconstriction that impairs the oxygen supply to the locomotor muscles, exacerbating metabolite accumulation in both locomotor and respiratory muscles and ultimately inducing primary motor cortex inhibition. Again, enhanced activation of the supplementary motor area would be required to sustain the muscle power output, potentially increasing corollary discharges. Supporting evidence is the increase in diaphragmatic muscle activation (i.e., EMG response) associated with a decline in evoked transdiaphragmatic twitch pressure during incremental exercise. In this scenario, an increase in medullary corollary discharges would also contribute to the rise in the perceived exertion. Some methodological issues are fundamental to quantifying the perceived exertion during exercise accurately. One of these issues is using the original versions of the scales, regardless of whether it is Borg's scale or not. The previous section showed that the psychophysical properties of the RPE and CR10 scales were carefully verified and validated over several years. It is, therefore, inappropriate to alter the RPE and CR10 scales by using figures, colors, other non-tested verbal descriptors, or verbal descriptors for all scale numbers. Considering that the scales were developed in English, if using the scales in another language, it is also strongly recommended to verify whether the translated version has passed a thorough transcultural validation process. Translated versions of the RPE and CR10 scales in various languages are available from the website of the Swedish company licensed to distribute Borg's scales ( https://borgperception.se , accessed on 1 November 2022). Moreover, practitioners should ideally obtain perceived exertion during exercise. If that is not possible, an option is to obtain values immediately after exercise. However, practitioners should remind the tested individual to report values referring to the exercise performed. Providing written instructions when obtaining perceived exertion was a methodological procedure originally recommended by Gunnar Borg, which a review article recently reinforced. Given that these instructions were developed in English, transcultural adaptation to other languages should also be considered. Similar to Borg's scales, translated versions of the instructions are also available ( https://borgperception.se , accessed on 1 November 2022). However, practitioners should be aware that the instructions were developed using Borg's definition of perceived exertion. As we pointed out in the previous paragraphs, contemporary studies support that it is essential to distinguish the effort sensation to command skeletal muscles (locomotor and respiratory) from other exercise-evoked somatic sensations (e.g., pain or breathlessness). Clear instructions differentiating the sensations can be critical for accurate perceived exertion quantification. In the written instructions, individuals should be oriented first to read the descriptors and then to quantify the perceived exertion.
In the case of the CR10 scale, individuals should be encouraged to report decimal values (e.g., 0.5), thus grading the magnitude of perceived exertion more finely. Providing an example of maximal perceived exertion to the individuals is strongly recommended. This anchoring process can be done based on the individual's memory or on the individual's experience with a performed exercise. In the case of the CR10 scale, values above 10 are possible if the current perception is more intense than previous experiences. The anchoring process and a clear construct definition are essential procedures for a valid measure of perceived exertion. Lastly, it is worth reminding the individuals to be as honest as possible and to avoid comparing themselves with others. Evaluators should also avoid judgments about exercise intensity that could lead the tested individual to underestimate or overestimate the reported perceived exertion. Maximal oxygen uptake and peak exercise intensity are frequently used to assess cardiorespiratory fitness and individualize exercise prescription, respectively. It is possible to estimate these parameters through the perceived exertion when it is undesirable to push incremental exercise testing until the subject's voluntary exhaustion. It is only necessary to extrapolate the submaximal relationship between perceived exertion and oxygen uptake or exercise intensity to a theoretical endpoint (i.e., 19 or 20) on the RPE scale. It is also possible to estimate the time to exhaustion during constant-load exercise tests, given the linear relationship between perceived exertion and exercise time. Additionally, the product of perceived exertion and the remaining distance fraction (i.e., the hazard score) can predict the subsequent running speed change during time-trial tests (e.g., completing 5 km in the shortest time). Hazard scores below 1.5 and above 3 arbitrary units are associated with a reduction and an increase in the running speed, respectively. Critical power delimits the transition between the heavy and severe exercise-intensity domains. In exercise above the critical power (i.e., the severe-intensity domain), fatigue-related metabolites (e.g., inorganic phosphate and hydrogen ions) accumulate over time in the skeletal muscle, limiting the capacity to sustain the exercise for a prolonged time. Therefore, critical power is a valuable tool for continuous or interval endurance training prescription, and it is considered an important indicator of performance in endurance sports. It is possible to approximate the critical power through perceived exertion slopes obtained from three or more constant-load exercise tests in the severe-intensity domain. The intercept of the linear relationship between the perceived exertion slopes and exercise intensity (i.e., the exercise intensity at which the predicted slope is zero) approximates the critical power. Importantly, this method of critical power approximation does not require exercise until exhaustion, as the perceived exertion slopes can be obtained from intermediate levels (11–14 on the RPE scale) of perceived exertion. Monitoring athletes' responses to training (i.e., the training effect) can provide valuable information to refine the training process, maximizing the chances of improving sports performance and minimizing the risk of injury, illness, nonfunctional overreaching, or overtraining. It is possible to track physical fitness changes by quantifying the workload during incremental exercise testing corresponding to a specific level of perceived exertion (e.g., 15 or 17 on the RPE scale).
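A minimal sketch of the submaximal extrapolation approach described above is given below; the workload and RPE values are illustrative only, and the sketch assumes an approximately linear RPE-workload relationship over the submaximal range.

import numpy as np

# Illustrative (hypothetical) submaximal data from an incremental cycling test:
workload_watts = np.array([80.0, 120.0, 160.0, 200.0])
rpe            = np.array([9.0, 11.0, 13.0, 15.0])   # 6-20 RPE scale

# Fit RPE as a linear function of workload over the submaximal range.
slope, intercept = np.polyfit(workload_watts, rpe, 1)

def workload_at_rpe(target_rpe: float) -> float:
    """Workload (W) predicted by the submaximal RPE-workload line for a given RPE."""
    return (target_rpe - intercept) / slope

# Extrapolate to the theoretical endpoint (RPE 19-20) to estimate peak workload
# without taking the test to voluntary exhaustion.
print(f"estimated peak workload ≈ {workload_at_rpe(20):.0f} W")   # 300 W with these toy values

# The same fitted line supports monitoring: tracking the workload corresponding to a
# fixed RPE (e.g., 15) across repeated tests indicates whether fitness has changed
# (a higher workload at the same RPE suggests a positive training effect).
print(f"workload at RPE 15 ≈ {workload_at_rpe(15):.0f} W")        # 200 W with these toy values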
For example, an increased workload at the same level of perceived exertion represents an improvement in physical fitness—i.e., a positive training effect. In addition, when athletes are in a state of accumulated fatigue due to an imbalance between training loads and recovery periods, a lower heart rate accompanies a higher perceived exertion for a given workload. Heart rate reduction for the same exercise intensity most often indicates positive training adaptations. Therefore, measuring perceived exertion simultaneously with heart rate during constant-load exercise tests seems to permit more accurate monitoring of the athletes' responses to training. The aforementioned practical applications are based on measuring perceived exertion during an externally imposed exercise intensity (estimation approach). An alternative approach is self-regulating exercise intensity while maintaining a given perceived exertion over time (production approach). For example, evidence suggests that incremental exercise testing self-regulated by perceived exertion can produce similar values of maximal oxygen uptake and ventilatory threshold compared with traditional protocols, but it seems unfeasible to determine the respiratory compensation point. Moreover, several studies have also shown that self-regulation of exercise intensity by perceived exertion produces cardiorespiratory and metabolic responses similar to those obtained during incremental exercise testing at the same perceived exertion. It is, therefore, possible to use the perceived exertion corresponding to percentages of maximal oxygen uptake or maximal heart rate obtained in incremental testing (e.g., 60% and 80%) to control exercise intensity during training sessions, which has important practical implications when prescribing exercise for health or throughout rehabilitation programs. The early studies conducted by Gunnar Borg on the perceived exertion construct used psychophysics methods (ratio production and magnitude estimation). Given the limitations of these methods (the impossibility of between-individual comparisons and low correlations with physiological variables), Gunnar Borg developed, in subsequent studies, a scale with categorical descriptors anchored between a minimum and a maximum limit for perceived exertion judgment (Borg's range model). These later studies gave rise to the RPE and CR10 scales. Perceived exertion should be defined as a conscious perception of how hard, heavy, and strenuous the exercise is, emphasizing that perceived exertion depends mainly on the sense of effort to command the limbs and the feeling of heavy breathing (respiratory effort). This contemporary definition is related to the neurophysiological mechanisms involved in the genesis of perceived exertion. Regarding neurophysiological mechanisms, efferent copies of the motor command and respiratory drive (i.e., corollary discharges) appear directly linked with the generation of perceived exertion. On the other hand, feedback from group III and IV muscle (locomotor and respiratory) afferents might indirectly participate in the genesis of perceived exertion during high-intensity exercise, modulating the magnitude of corollary discharges. Some methodological issues are fundamental to quantifying the perceived exertion accurately. One of these issues is using the updated definition of perceived exertion proposed by Samuele Marcora. Thus, other exercise-evoked sensations would not hinder the accuracy of perceived exertion assessment.
We strongly recommend using the original scale versions, whether they are Borg's scales or not, since the psychophysical properties of the RPE and CR10 scales (and others) were carefully verified and validated over several years of research. Another essential issue is that the evaluator should provide the assessed person with an example of maximal perceived exertion. This anchoring process can be done based on the individual's memory or on the individual's experience with a performed exercise. When these procedures are carefully applied, exercise and sports science practitioners can use the perceived exertion during incremental, constant-load, and time-trial exercise testing to assess cardiorespiratory fitness, prescribe individualized exercise intensities, predict endurance exercise performance, and monitor athletes' responses to physical training. Therefore, determining perceived exertion during exercise has important implications for health promotion, rehabilitation programs, and high-performance sports.
Revolutionizing Playing with Skeleton Atoms: Molecular Editing Surgery in Medicinal Chemistry
7d2d08d4-b20b-43d9-9103-ffc900349f75
11851142
Pharmacology[mh]
INTRODUCTION Organic chemistry appears to beginners and the uninformed to be a bewildering display of Egyptian hieroglyphs and a whirlwind of zigzags and hexagons spinning over the page. Since my first undergraduate years in the Faculty of Pharmacy (Mansoura University, Egypt), I have been dreaming of establishing a new branch of pharmaceutical chemistry comprising unique organic chemistry techniques concerned with every imagined aspect and vision of substances at the atomic level. One of my dreams was to reach each atom in the molecule at its position, interact with it, and decide its planned destiny, i.e., to change the organic reaction game from the external molecular level to the internal atomic level. Later on, as a medicinal chemist, the ultimate goal of this dream was to increase the ability of the relevant chemists to accurately edit chemical molecules by replacing, adding, or deleting single atoms in their cores/scaffolds through extremely delicate chemical surgeries or reactions. In general, normal classical organic reactions occur at the peripheries (e.g., substituents) of interacting molecules to form new larger molecules without affecting the internal backbones or cores (traditional peripheral molecular editing) (Fig. ). Our revolutionary dream, on the other hand, is to play with and modify the major cores of molecules through designed chemical nanosurgeries or, sometimes, picosurgeries, even using robo-chemists and drug-maker machines for these types of precise and difficult organic syntheses (revolutionary skeletal molecular editing or, simply, skeletal editing; I suggest officially abbreviating it in the coming days as "SKED", e.g., the SKED strategy) (Fig. ). These molecular nanosurgeries partially depend on the "moonshot" pathway concept. As an explosion in organic reaction methodologies, this type of magical molecular nanosurgery may completely alter how organic chemists create compounds and may greatly speed up the drug development/discovery process. In the last few years, there have been some successful, if unintended, attempts to synthesize new molecules using this new technique; however, most of these reactions were not effective with all chemical skeleton (core) types, could not bypass the interference caused by many peripheral fragments, and were also largely unworkable with less simple molecules. Although skeletal or core editing was on the rise in both 2021 and 2022, it only became an emerging independent branch capable of standing alone in pharmaceutical organic chemistry in the previous year, 2023. This was achieved by accumulating the minimum sufficient body of data, information, characteristics, and features needed to systematically build up this new organic/medicinal chemistry branch, which effectively complements the other branches of organic/medicinal chemistry. PREVIOUS ORGANIC SYNTHETIC ATTEMPTS In the last decade (2015-2024), my research teams and I carried out many successful syntheses involving primary skeletal molecular editing (together with those of conventional peripheral molecular editing). Though most of these reactions showed direct actions on the principal scaffold moieties of the reactants (mainly one- or two-atom modifications), they are still considered initial attempts at combined classic and advanced organic synthesis.
A famous example of skeletal molecular editing is the substitution or replacement of the core oxygen atom of the 1,3,4-oxadiazole ring in 2,5-disubstituted-1,3,4-oxadiazoles, either with a core sulfur atom of the 1,3,4-thiadiazole ring in 2,5-disubstituted-1,3,4-thiadiazoles or with a core nitrogen atom of the 1,2,4-triazole ring in 3,5-disubstituted-4-amino-1,2,4-triazoles, depending on the reaction conditions (Fig. ). Another known example (observed in several optimized drug design trials) is the replacement of the core oxygen atom of the 4 H -benzo[ d ][1,3]oxazine ring in 2-substituted-4 H -benzo[ d ][1,3]oxazin-4-ones with another core nitrogen atom of the 3,4-dihydroquinazoline ring in 2,3-disubstituted-3 H -quinazolin-4-ones (Fig. ). In both examples, skeletal editing could also be called skeletal substitution. TYPES OF SKELETAL OR CORE EDITING IN STRUCTURAL CHEMISTRY The motif of atom replacement, deletion, or insertion in any chemical skeleton is a very dreamy, magical idea in organic reactions and structural chemistry. Numerous methods for intramolecularly transferring individual atoms into and out of the skeletons of molecules have been developed by organic chemists. However, the current condensed examples (displayed in general style) may only apply to particular types of organic structures or require specific reaction circumstances, i.e., skeletal editing in organic chemistry is not yet effective for all compounds and is still under progressive development, as previously stated. These chemical edits are carried out on molecules mainly to ease or enable certain types of organic chemical reactions (from a chemical point of view, in organic chemistry) and mainly to address pharmacokinetic, pharmacological, metabolic, and toxicological issues (from a clinical point of view, in medicinal chemistry). Three major categories of skeletal editing reactions are currently present in organic chemistry laboratories, as follows: 3.1 Atom Replacement (Skeleton Analogism/Isosterism) This type of skeletal editing comprises various kinds of atom replacement "substitution"/rearrangement reactions. Mainly, one carbon atom in the organic molecule's scaffold is replaced by either an isotopic carbon atom, a nitrogen atom, an oxygen atom, or a sulfur atom (the reverse is also possible for the four atoms). Sometimes, two atoms are replaced by two other atoms, e.g., from NN to CC. Another major example is the swapping of an oxygen atom for either one nitrogen atom or one sulfur atom. This type of reaction was developed and applied to several kinds of late-stage modifications of drugs/drug candidates and their derivatives (e.g., tropicamide, loratadine, stanolone, indomethacin, and probenecid), as in the case of the pyridine-editing strategy (Fig. ). 3.2 Atom Deletion (Skeleton Narrowing/Straitening) This type of skeletal editing was primarily developed to enable drug discovery chemists to delete "remove" at least one carbon atom (or other specified atoms) from the drug/drug candidate molecule's skeleton to effectively interconvert this molecule to another, more successful one during structure-activity relationship (SAR) studies, e.g., to decrease the molecular weight, contract the skeleton chain length, and/or adjust certain pharmacokinetic properties. An example of this skeleton contraction and scaffold-hopping strategy is the interconversion between the two chemical classes, quinolines and indoles (Fig. ).
Both of them are very common core motifs in drug molecules, and they differ only by a single carbon atom in their ring structures, making this ring-editing strategy between them very useful during the SAR studies, e.g. , when comparing the two members of the statin class of pharmaceuticals, the cholesterol-lowering therapeutics pitavastatin (the quinoline derivative) and fluvastatin (the indole analog) . This strategy is also applied when comparing pyridine drugs and their skeleton-contracted pyrrole/pyrazole analogs, e.g. , the two antiinflammatory drugs etoricoxib and celecoxib; the same approach is also obvious when comparing compounds in a certain drug development series, e.g. , 5,10-dideazafolic acid (an inactive molecule) and pemetrexed (an active chemotherapeutic molecule) . 3.3 Atom Insertion (Skeleton Widening/Expanding) This type of skeletal editing is almost the reverse of the previous atom-removal strategy, and it also has the same importance in drug design and discovery tactics. This strategy enables medicinal chemists to insert "add" carbon, nitrogen, and/or oxygen atom(s) (or other specified atoms) to the drug/drug candidate molecule's skeleton to interconvert this molecule to another more effective one. This strategy is specifically very useful in the chemical retrosynthesis of bioactive natural products and their analogs from simple and available small-molecule precursors . Known examples of this skeleton expansion strategy are the pyrrole-to-pyridine, pyrazole-to-pyrimidine, indole-to-quinoline, indole-to-quinazoline, and indazole-to-quinazoline molecular editing conversions, e.g. , the total organic synthesis of the natural pyridine-containing alkaloids complanadine A and lycodine, as well as simplified derived analogs, from a simple starting material containing a five-membered pyrrole ring in its chemical structure (pyrrole-to-pyridine molecular editing strategy) (Fig. ) .
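As a purely in-silico aside, the single-atom skeletal relationships named in the three categories above can be made explicit with a short cheminformatics sketch using the open-source RDKit toolkit. The snippet below compares a few of the unsubstituted cores mentioned in this section; the SMILES strings are illustrative, and this is a sketch of the skeletal relationships and their computed properties, not a synthetic procedure.

from rdkit import Chem
from rdkit.Chem import Descriptors

# Unsubstituted cores related by single skeletal-atom edits (SMILES are illustrative).
cores = {
    "1,3,4-oxadiazole":  "o1cnnc1",          # parent core
    "1,3,4-thiadiazole": "s1cnnc1",          # atom replacement: ring O -> S
    "quinoline":         "c1ccc2ncccc2c1",   # quinoline -> indole is a one-carbon
    "indole":            "c1ccc2[nH]ccc2c1", #   ring contraction (atom deletion)
    "pyrrole":           "c1cc[nH]c1",       # pyrrole -> pyridine is a one-carbon
    "pyridine":          "c1ccncc1",         #   ring expansion (atom insertion)
}

for name, smiles in cores.items():
    mol = Chem.MolFromSmiles(smiles)
    ring_atoms = "".join(sorted(atom.GetSymbol() for atom in mol.GetAtoms() if atom.IsInRing()))
    print(f"{name:18s} ring atoms: {ring_atoms:11s} "
          f"MW = {Descriptors.MolWt(mol):6.1f}  H-bond acceptors = {Descriptors.NumHAcceptors(mol)}")

Listing the ring atoms side by side makes the single-atom difference between each pair visible, and the accompanying descriptors hint at how even one skeletal edit shifts properties relevant to drug-likeness.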
MEETING THERAPEUTIC NEEDS One of the major objectives of organic chemistry is to pour compounds of bioactive potential into the pool of therapeutic or medicinal chemistry. One could be tempted to compare skeletal editing to the highly popular gene-editing method CRISPR in biotechnology. In reality, the similarity is limited. CRISPR only needs to handle the four nucleotides of DNA or RNA. Skeletal editing, on the other hand, is much more generalizable, since the editing techniques here are far more broadly applicable and work on millions of diverse organic molecules (including organometallic compounds). Interestingly, organic chemical skeletal editing could be considered a modern pathway for modifying pharmaceutically relevant bioactive molecules, mainly by replacing a carbon atom with an atom of nitrogen, oxygen, sulfur, selenium, or another element (and vice versa), i.e., mainly interconversions of arenes and heteroarenes. The produced analogs and derivatives might open up very important tracks of research in medicinal chemistry drives. These highly analogous derivatives might provide substantial solutions to pharmacodynamics/pharmacokinetics-related issues of the original medicines, e.g., unfavorable pharmacokinetic profiles (including the metabolic pathways), low drug-likeness scores, severe side effects, and/or considerable toxic behaviors.
The development of much simpler and cheaper one-pot substitution (interconversion) procedures, mainly between the previously mentioned atoms (carbon, nitrogen, oxygen, sulfur, and selenium, in addition to phosphorus, silicon, boron, and arsenic), i.e., intended or targeted chemical mutations, would continue to push the limits of pharmaceutical organic chemistry, revealing chemical mechanisms that might serve as the foundation for upcoming substitution procedures of skeletal or core editing as a generally attractive synthetic tactic for exchanging influential atoms in medicinal chemistry. Continuous and diverse contributions from the entire synthetic organic chemistry community worldwide could make it very possible to interchange influential atoms at will inside the fundamental cores or scaffolds of pharmaceutically significant compounds. The dream of having extreme-technology chemico-editing machines (supposed to be composed of two principal parts: a software part, a highly complicated programmed electronic dry-lab system, connected to a hardware part, a high-quality chemical-reactor wet-lab system) with the controllable capacity to directly and precisely edit and modify organic molecules at exactly their central cores, generating closely related new molecules possessing, for instance, improved biological activities and reduced adverse effects, is gradually approaching reality. In my opinion, artificial intelligence (AI) methodologies will certainly play a considerable role in this growing new technology of molecular editing. Synthetically, the skeletal editing strategies are based on the concept that compounds with unsaturated open-chain, cyclic, or heterocyclic scaffolds (as well as their saturated analogs) can be interconverted to each other, or even to the same scaffold type with a different number of skeleton atoms, providing time-saving and site-directed drug discovery approaches. Last but not least, in the era of molecular femtosurgeries, the applicability and practicality of the hundreds of methods within the various strategies for modern skeletal editing of heterocyclic (mainly heteroarene), cyclic (mainly arene), and open-chain (mainly alkane and alkene) cores will be of very special importance in the coming years in the fields of drug discovery and medicinal chemistry, for example, in approaches for late-stage modifications and site-selective functionalizations in drug development, approaches for specific fragment coupling of complex molecules, approaches for detecting and revealing a drug's mechanism of action (e.g., atom knockout approaches for deciphering the mechanistic chemicobiological interactions of drugs in the human body), approaches for SAR expansions and improvements, approaches for targeted drug repurposing, and approaches for adjusting the drug-likeness behaviors and profiles of therapeutics to gain clinical benefits and solve clinical issues, concerns, and dangers. Undoubtedly, this new and effective gate for drug design and development will provide drug discoverers and medicinal chemists with many successful tactics for editing organic molecules in lab flasks exactly as on note paper. This exclusive minireview/perspective paper discusses the scope and principles of the emerging new science of skeletal editing (SKED) in molecular design, optimization, modeling, and modification as a new practicable branch in medicinal chemistry.
The paper specifically sheds light on the revolutionary methods designed to improve the drug-likeness profiles of targeted drug molecules by performing certain chemical femtosurgeries on skeletal atoms. Further, the paper highlights the relevant general synthetic strategies, demonstrates the three main core-editing techniques, and, finally, discusses the future therapeutic needs and scenarios from the viewpoint of one of the founders of this organic chemistry branch.
Estrogen receptor-α expressing neurons in the ventrolateral VMH regulate glucose balance
8ff5d429-fedc-4712-a07b-44ce35fdc09a
7195451
Physiology[mh]
Severe hypoglycemia is a life-threatening problem for diabetic patients on intensive insulin therapy. While normal subjects can correct isolated hypoglycemic events through defense mechanisms, this protection is often impaired in diabetic patients. Eliminating hypoglycemia from the lives of diabetic patients and maintaining long-term euglycemia will require critical fundamental insights into the mechanisms that defend against hypoglycemia. Although all neurons need glucose as a basic fuel for neuronal viability and function, not all neurons rapidly change their firing activity and membrane potential in response to glucose fluctuations, a feature named glucose sensing. Glucose-sensing neurons are found in several brain regions, including the ventromedial hypothalamic nucleus (VMH, also known as VMN), the arcuate nucleus (ARH), the paraventricular nucleus of the hypothalamus (PVH), the nucleus of the solitary tract (NTS), and the medial amygdala. In particular, the VMH has been well documented in the regulation of glucose balance. Many VMH neurons are glucose sensing, being excited by high glucose levels (glucose-excited, GE) or inhibited by high glucose (glucose-inhibited, GI). Local glucopenia produced by infusion of 2-deoxy-D-glucose (2-DG, a glucose metabolism antagonist) into the rat VMH significantly increases circulating glucagon levels, associated with elevated blood glucose, whereas infusion of glucose directly into the VMH blocks glucagon release despite systemic hypoglycemia. Mice with genetic loss of glutamatergic neurotransmission only in VMH neurons display impaired responses to hypoglycemia. UCP2-dependent mitochondrial fission in VMH neurons has recently been reported to mediate glucose-induced neuronal activation and therefore regulate whole-body glucose metabolism. Abundant neurons in the dorsomedial subdivision of the VMH (dmVMH) express glucokinase, and activation of these neurons increases blood glucose in mice; deletion of glucokinase reduces hypoglycemia-induced glucagon secretion. Further, neurons in the dmVMH and those in the central subdivision of the VMH (cVMH) receive neuronal inputs from glucose-sensing neurons in the parabrachial nucleus (PBN) to defend against hypoglycemia. Together, these findings strongly support an essential role of dmVMH/cVMH neurons in the regulation of glucose balance. However, neurons in another VMH subdivision, the ventrolateral VMH (vlVMH), also respond to glucose fluctuations, but their functions in glucose metabolism have not been specifically investigated. Estrogen receptor-α (ERα) is abundantly expressed in the vlVMH but largely absent from the dmVMH and the cVMH. In the present study, we systematically characterized the glucose-sensing properties of these ERα-expressing neurons in the vlVMH (ERα vlVMH neurons). We used fiber photometry, optogenetics, and CRISPR-Cas9 approaches to identify the ionic and circuitry mechanisms by which these neurons sense glucose fluctuations and regulate blood glucose levels. All tested ERα vlVMH neurons sense glucose fluctuations We have recently generated and validated a new ERα-ZsGreen mouse strain, in which expression of a fluorescence reporter, ZsGreen, is driven by the mouse ERα promoter. As we reported, ZsGreen is selectively expressed in ERα-expressing neurons, including those in the vlVMH (Fig. ).
We used female ERα-ZsGreen mice to record the glucose-sensing properties of ERα vlVMH neurons under the current-clamp mode in response to a 5→1→5 mM extracellular glucose fluctuation protocol. Strikingly, all the tested ERα vlVMH neurons (576 neurons from 65 mice) were glucose-sensing neurons (defined as >2 mV depolarization or hyperpolarization in response to the glucose fluctuations). We also examined the adjacent non-ERα vlVMH neurons (ZsGreen(−) neurons) and found that 83% of these neurons were glucose sensing, while the remaining 17% of non-ERα vlVMH neurons did not respond to glucose fluctuations (Fig. ; P < 0.0001 compared to ERα vlVMH neurons, χ2 test). Interestingly, only 47% of dmVMH neurons and 49% of cVMH neurons (labeled by the SF1 promoter) were found to be glucose sensing using the same recording protocol (Fig. ; P < 0.0001 compared to ERα vlVMH neurons, χ2 tests). Since scattered ERα neurons are also present in the dmVMH and cVMH (Fig. ), we examined the glucose-sensing properties of these neurons, and found that only 50% of ERα dmVMH neurons and 46% of ERα cVMH neurons responded to glucose fluctuations (Fig. ; P < 0.0001 compared to ERα vlVMH neurons, χ2 tests). Thus, ERα vlVMH neurons represent a unique subpopulation with remarkably strong glucose-sensing capability. Among ERα vlVMH neurons, 43% depolarized and increased their firing frequency in response to hypoglycemia (5→1 mM glucose), while recovery of the glucose level to 5 mM restored the activities of all these neurons (Fig. ); these neurons were identified as GI-ERα vlVMH neurons. The remaining 57% of female ERα vlVMH neurons hyperpolarized and decreased their firing frequency in response to hypoglycemia, and then recovered at the 5 mM glucose condition (Fig. ); these neurons were identified as GE-ERα vlVMH neurons. We exposed female ERα vlVMH neurons to 5→2.5→1→2.5→5 mM glucose fluctuations and found that GE- and GI-ERα vlVMH neurons changed their membrane potential in a concentration-dependent manner (Supplementary Fig. ). We repeated the same 5→1→5 mM extracellular glucose fluctuation protocol in the presence of a cocktail of synaptic blockers (TTX, CNQX, D-AP5, and bicuculline) and found similar hypoglycemia-induced depolarization in GI-ERα vlVMH neurons and hyperpolarization in GE-ERα vlVMH neurons (Supplementary Fig. ), indicating that the glucose sensing of ERα vlVMH neurons is independent of synaptic inputs. GI- and GE-ERα vlVMH neurons use distinct ionic conductances We used the Patch-seq approach to further explore the mechanisms by which female GI- and GE-ERα vlVMH neurons sense and respond to glucose fluctuations. We first used patch-clamp recordings to identify single GI- and GE-ERα vlVMH neurons from ERα-ZsGreen female mice, and then manually collected these neurons for transcriptome experiments (RNA-seq). Our analysis revealed 372 genes differentially expressed in GE- vs. GI-ERα vlVMH neurons ( P < 0.05 and |log2(fold change)| > 2; Supplementary Tables and , Supplementary Data ). Our functional enrichment analysis revealed 23 Gene Ontology (GO) terms that are statistically enriched in these genes (Supplementary Data ). Among them, "ATP binding" (GO: 0005524 ) was the second most enriched GO Molecular Function term (Supplementary Data ). Other GO terms included "regulation of response to external stimulus" (GO: 0032101 ), "plasma membrane region" (GO: 0098590 ), and "transporter activity" (GO: 0005215 ; Supplementary Data ).
These results suggested that differentially expressed genes involved in ATP binding, transporters, and membrane regions may account for the opposite electrophysiological responses of GI- and GE-ERα vlVMH neurons to glucose fluctuations. Among these differentially expressed genes, we paid attention to those known to encode ion channels as potential targets. In particular, expression of the anoctamin 4 gene ( Ano4 , encoding a calcium-activated chloride channel protein) was significantly higher in GI-ERα vlVMH neurons than in GE-ERα vlVMH neurons ( P = 0.0133, log2(fold change) = 3.102, Supplementary Fig. ). Our qPCR assay further confirmed that expression of Ano4 is significantly higher in GI-ERα vlVMH neurons than in GE-ERα vlVMH neurons (Fig. , primer sequences seen in Supplementary Table ). Consistently, we detected robust rectifying currents in GI-ERα vlVMH neurons that were blocked by CaCCinh-A01 (1 µM), an anoctamin inhibitor, confirming that these were Ano currents (Fig. ). Importantly, these Ano currents in GI-ERα vlVMH neurons were significantly potentiated by exposure to low glucose (1 mM) compared to high glucose (5 mM), whereas such currents were minimal in GE-ERα vlVMH neurons regardless of glucose concentration (Fig. ). Further, CaCCinh-A01 abolished the responsiveness of GI-ERα vlVMH neurons to glucose fluctuations, but it had no effect on GE-ERα vlVMH neurons (Fig. ). To further confirm the role of Ano4, we used a CRISPR-Cas9 approach to knock out Ano4 specifically in ERα vlVMH neurons. Briefly, we designed sgRNAs targeting exon 4 and exon 11 of the Ano4 gene, respectively, screened 19 sgRNAs, and identified two sgRNAs that effectively induced indel mutations in each exon in HEK293 cells (Supplementary Fig. ). These two sgRNAs were constructed into an AAV vector followed by a Cre-dependent FLEX-tdTOMATO sequence (Supplementary Fig. ). Female Esr1-Cre mice received stereotaxic injections of AAV-FLEX-scCas9 (Vector Biolabs, #7122) and AAV-Ano4/sgRNAs-FLEX-tdTOMATO into one side of the vlVMH to disrupt expression of Ano4 selectively in ERα vlVMH neurons. As a control, the other side of the vlVMH received AAV-Ano4/sgRNAs-FLEX-tdTOMATO and the AAV-GFP (no Cas9) virus (Fig. ). Compared to the control side (GFP + Ano4/sgRNA), the combination of Cas9 and Ano4/sgRNA diminished the GI population without affecting the GE population, and robustly reduced Ano currents in TOMATO-labeled ERα vlVMH neurons that were not GE (Fig. ). Thus, our results indicate that Ano4 is required for GI-ERα vlVMH neurons to respond to glucose fluctuations. The Patch-seq analysis also revealed that expression of Abcc8 (which encodes the Sur1 protein, one subunit of the KATP channel) was substantially higher in GE-ERα vlVMH neurons than in GI-ERα vlVMH neurons ( P = 0.0088, log2(fold change) = 4.597, Supplementary Fig. ). Our qPCR analyses further confirmed that Abcc8 mRNAs were abundant in GE-ERα vlVMH neurons but below the detection threshold in GI-ERα vlVMH neurons (Fig. , primer sequences seen in Supplementary Table ). Consistently, we showed that KATP channel-mediated outward currents in female GE-ERα vlVMH neurons were significantly elevated by hypoglycemia and were blocked by 200 µM tolbutamide, a KATP channel inhibitor (Fig. ). On the other hand, such KATP channel-mediated outward currents were barely detectable in female GI-ERα vlVMH neurons (Fig. ).
In addition, treatment with tolbutamide (200 µM) blocked the hypoglycemia-induced inhibition in female GE-ERα vlVMH neurons but had no effect on GI-ERα vlVMH neurons (Fig. ). To further confirm the function of Abcc8, we designed and identified two sgRNAs that efficiently induced indel mutations in exon 2 and exon 5 of the Abcc8 gene (Supplementary Fig. ). Both sgRNAs were constructed into one AAV vector followed by a Cre-dependent FLEX-tdTOMATO sequence (AAV-Abcc8/sgRNAs-FLEX-tdTOMATO; Supplementary Fig. ). Female Esr1-Cre mice received stereotaxic injections of AAV-FLEX-scCas9 and AAV-Abcc8/sgRNAs-FLEX-tdTOMATO into one side of the vlVMH. As controls, the other side of the vlVMH of the same mice received AAV-Abcc8/sgRNAs-FLEX-tdTOMATO and AAV-GFP (no Cas9; Fig. ). Compared to the control side (GFP + Abcc8/sgRNA), the combination of Cas9 and Abcc8/sgRNA diminished the GE population without affecting the GI population and robustly reduced KATP currents in TOMATO-labeled ERα vlVMH neurons that were not GI (Fig. ). Thus, our results indicate that hypoglycemia opens the KATP channel in female GE-ERα vlVMH neurons to inhibit these neurons. We further examined the functions of Ano4 and Abcc8 in ERα vlVMH neurons in the regulation of glucose balance in vivo using intracerebroventricular (icv) injections of 2-DG to induce central glucopenia. In control female mice (wild-type mice receiving stereotaxic injections of AAV vectors that express scCas9, Ano4/sgRNAs, and Abcc8/sgRNAs into both sides of the vlVMH; Fig. ), icv 2-DG significantly elevated blood glucose compared to icv saline in the same mice (Fig. ). On the other hand, the 2-DG-induced glucose elevations were largely attenuated in female mice with simultaneous disruption of Ano4 and Abcc8 in ERα vlVMH neurons (female Esr1-Cre mice receiving stereotaxic injections of AAV vectors that express scCas9, Ano4/sgRNAs, and Abcc8/sgRNAs into both sides of the vlVMH; Fig. ). Together, these results indicate that Ano4 and Abcc8 mediate the glucose-sensing functions of GI- and GE-ERα vlVMH neurons, respectively, and that their functions are required for normal glucose balance in female mice. It is worth noting that this study did not directly address the downstream hormonal and/or neural signals mediating these glucose-regulatory effects, which warrants future investigation. GI- and GE-ERα vlVMH neurons recruit different circuits In order to examine the projection sites of ERα vlVMH neurons, we stereotaxically injected Ad-iN/WED into the vlVMH of female Esr1-Cre mice. These mice expressed GFP-tagged wheat germ agglutinin (GFP-WGA, an anterograde transsynaptic tracer) specifically in ERα vlVMH neurons (Fig. ). Abundant WGA-labeled neurons were detected in a few brain regions implicated in the regulation of energy and glucose homeostasis, including the medioposterior part of the arcuate hypothalamic nucleus (mpARH), the lateral hypothalamus (LH), the medial parabrachial nucleus (MPB), and the dorsal raphe nucleus (DRN; Fig. , Supplementary Fig. ). These results suggest that ERα vlVMH neurons project to these distant brain regions. Since WGA may also travel retrogradely, the WGA-labeled neurons may not all be synaptic targets of ERα vlVMH neurons. To further confirm the synaptic connectivity between ERα vlVMH neurons and mpARH neurons, we stereotaxically injected Ad-iN/WED and AAV-EF1α-DIO-hChR2(H134R)-EYFP into the vlVMH of female Esr1-Cre mice to express both channelrhodopsin-2 (ChR2)-EYFP and GFP-WGA specifically in ERα vlVMH neurons.
Angular brain slices (containing both the vlVMH and mpARH) were prepared from these mice to perform electrophysiological recordings in WGA-GFP-labeled mpARH neurons (Fig. ). We detected light-evoked excitatory postsynaptic currents (EPSCs) in 10 out of 13 WGA-GFP-labeled neurons (average 65.97 ± 12.52 pA; latency 7.26 ± 1.16 ms; n = 10). These evoked EPSCs were blocked by the NMDA and AMPA glutamate receptor antagonists, 30 µM D-AP5 and 30 µM CNQX (Fig. ), confirming the glutamatergic nature of these evoked EPSCs. Importantly, all evoked EPSCs persisted in the presence of 400 µM 4-AP and 1 µM TTX (Fig. ), indicating that these neurotransmissions are likely monosynaptic. We performed similar electrophysiological recordings to demonstrate functional connectivity between ERα vlVMH neurons and WGA-GFP-labeled neurons in the LH (Supplementary Fig. ), the DRN (Supplementary Fig. ), and the MPB (Supplementary Fig. ). Then, to further determine whether these projections originate primarily from GI-ERα vlVMH neurons or from GE-ERα vlVMH neurons, we stereotaxically injected a retrograde CAV2-Cre virus into each of these four sites in female ERα-ZsGreen/Rosa26-TOMATO mice. In this case, CAV2-Cre retrogradely infected upstream neurons that project to the injection site (e.g., the mpARH) and induced TOMATO (red fluorescence) expression in these mpARH-projecting neurons (Fig. ). Thus, in the vlVMH, those neurons double-labeled by TOMATO and ZsGreen were identified as mpARH-projecting ERα vlVMH neurons, which accounted for ~29.7% of the ERα vlVMH population (Supplementary Fig. ). Similarly, CAV2-Cre injected into the LH, DRN, or MPB induced TOMATO expression in ~22.6%, 15.0%, or 25.7%, respectively, of ERα vlVMH neurons (Supplementary Fig. ). We further determined that ~71–84% of the TOMATO neurons retrogradely labeled by CAV2-Cre injected into one of the four respective regions were ZsGreen ERα vlVMH neurons (Supplementary Fig. ). We then performed electrophysiological recordings in these TOMATO/ZsGreen neurons and determined whether they were GI or GE neurons (Fig. ). Interestingly, we found that the majority of mpARH-projecting ERα vlVMH neurons are GI neurons, while the majority of DRN-projecting ERα vlVMH neurons are GE neurons (Fig. ). On the other hand, ERα vlVMH neurons that project to the LH and the MPB are mixtures of approximately equal numbers of GE and GI neurons (Fig. ). To further confirm these findings in vivo, we stereotaxically injected a retrograde virus, HSV-hEF1α-LS1L-GCaMP6f, into the mpARH of female Esr1-Cre mice and implanted a photodetector to target the vlVMH, which allowed fiber photometry recordings of the mpARH-projecting ERα vlVMH neurons in functional mice (Fig. ). We showed that the activity of mpARH-projecting ERα vlVMH neurons was significantly reduced by hyperglycemia (i.p. 1 g per kg glucose) but increased by hypoglycemia (i.p. 1.5 U per kg insulin), confirming that these neurons were largely GI neurons (Fig. ). We used a similar approach to monitor the activity of DRN-projecting ERα vlVMH neurons, and found that these neurons displayed GE properties: activated by hyperglycemia but inhibited by hypoglycemia (Fig. ). We further examined the glucose-regulatory effects of mpARH-projecting ERα vlVMH neurons. To this end, we stereotaxically injected AAV-EF1α-DIO-hChR2(H134R)-EYFP into the vlVMH of female Esr1-Cre mice and implanted an optic fiber to target the mpARH (Fig. , Supplementary Fig. ).
Blue light pulses (473 nm, 5 ms per pulse, 40 pulses per 1 s for 1 h) were applied to this region to selectively activate the ERα vlVMH →mpARH projection (Supplementary Fig. ), which mimicked the hypoglycemia-induced activation of this circuit. Interestingly, such activation resulted in significant increases in blood glucose (Fig. ). Importantly, as a control, yellow light pulses (589 nm, 5 ms per pulse, 40 pulses per 1 s for 1 h) applied to the same mice did not significantly alter blood glucose levels (Fig. ). On the other hand, the DRN-projecting ERα vlVMH neurons are inhibited during a hypoglycemic event. Thus, we stereotaxically injected AAV-EF1α-DIO-eNpHR3.0-EYFP into the vlVMH of female Esr1-Cre mice and implanted an optic fiber to target the DRN (Fig. , Supplementary Fig. ). Yellow light pulses were applied to the DRN to selectively inhibit the ERα vlVMH →DRN projection (Supplementary Fig. ), which also resulted in significant increases in blood glucose, while blue light pulses (as negative controls) showed no effect on glucose levels (Fig. ). Thus, these results support a feedback model in which hypoglycemia activates GI-ERα vlVMH neurons but inhibits GE-ERα vlVMH neurons, which in turn engage distinct downstream neural circuits to increase blood glucose and therefore prevent severe hypoglycemia. Photostimulation of ERα vlVMH neurons has been shown to evoke social investigation and occasional mounting in female mice when they are group housed. In addition, activation of ERα vlVMH neurons triggers aggressive behavior in Swiss Webster female mice, although this phenotype cannot be evoked in C57 females. It is important to note that the current study measured glucose levels in singly housed mice, while examinations of social, sexual, and aggressive behaviors require animals to encounter an intruder or the opposite sex. Nevertheless, we examined whether photostimulation of the ERα vlVMH →DRN circuit evokes similar social behaviors in singly housed female mice (all on the C57BL6/J background), but found no such behaviors (Supplementary Movie ).
Thus, ERα vlVMH neurons represent a unique subpopulation with remarkably strong glucose-sensing capability. Among ERα vlVMH neurons, 43% depolarized and increased their firing frequency in response to hypoglycemia (5→1 mM glucose), while recovery of the glucose level to 5 mM restored the activity of all of these neurons (Fig. ); these neurons were identified as GI-ERα vlVMH neurons. The remaining 57% of female ERα vlVMH neurons hyperpolarized and decreased their firing frequency in response to hypoglycemia, and then recovered when glucose was returned to 5 mM (Fig. ); these neurons were identified as GE-ERα vlVMH neurons. We exposed female ERα vlVMH neurons to a 5→2.5→1→2.5→5 mM glucose fluctuation and found that GE- and GI-ERα vlVMH neurons changed their membrane potential in a concentration-dependent manner (Supplementary Fig. ). We repeated the same 5→1→5 mM extracellular glucose fluctuation protocol in the presence of a cocktail of synaptic blockers (TTX, CNQX, D-AP5, and bicuculline) and found similar hypoglycemia-induced depolarization in GI-ERα vlVMH neurons and hyperpolarization in GE-ERα vlVMH neurons (Supplementary Fig. ), indicating that glucose sensing by ERα vlVMH neurons is independent of synaptic inputs.

vlVMH neurons use distinct ionic conductances

We used the Patch-seq approach to further explore the mechanisms by which female GI- and GE-ERα vlVMH neurons sense and respond to glucose fluctuations. We first used patch-clamp recordings to identify single GI- and GE-ERα vlVMH neurons from ERα-ZsGreen female mice, and then manually collected these neurons for transcriptome experiments (RNA-seq). Our analysis revealed 372 genes differentially expressed in GE- vs. GI-ERα vlVMH neurons (P < 0.05 and |log2(fold change)| > 2; Supplementary Tables and , Supplementary Data ). Our functional enrichment analysis revealed 23 Gene Ontology (GO) terms that are statistically enriched in these genes (Supplementary Data ). Among them, “ATP binding” (GO: 0005524) is the second most enriched GO Molecular Function term (Supplementary Data ). Other GO terms included “regulation of response to external stimulus” (GO: 0032101), “plasma membrane region” (GO: 0098590), and “transporter activity” (GO: 0005215; Supplementary Data ). These results suggested that differentially expressed genes involved in ATP binding, transporter activity, and membrane regions may account for the opposite electrophysiological responses of GI- and GE-ERα vlVMH neurons to glucose fluctuations. Among these differentially expressed genes, we focused on those known to encode ion channels as potential targets. In particular, expression of the anoctamin 4 gene (Ano4, encoding a calcium-activated chloride channel protein) was significantly higher in GI-ERα vlVMH neurons than in GE-ERα vlVMH neurons (P = 0.0133, log2(fold change) = 3.102, Supplementary Fig. ). Our qPCR assay further confirmed that expression of Ano4 is significantly higher in GI-ERα vlVMH neurons than in GE-ERα vlVMH neurons (Fig. ; primer sequences in Supplementary Table ). Consistently, we detected robust rectifying currents in GI-ERα vlVMH neurons that were blocked by CaCCinh-A01 (1 µM), an anoctamin inhibitor, confirming that these were Ano currents (Fig. ). Importantly, these Ano currents in GI-ERα vlVMH neurons were significantly potentiated by exposure to low glucose (1 mM) compared to high glucose (5 mM), whereas such currents were minimal in GE-ERα vlVMH neurons regardless of glucose concentration (Fig. ).
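As an aside, the pharmacological isolation implied here reduces to a simple subtraction: the anoctamin-mediated component is the blocker-sensitive fraction, i.e., the current recorded before CaCCinh-A01 minus the current remaining in its presence. The sketch below illustrates that bookkeeping only; the array names and numbers are placeholders, not data from this study, and the full voltage-step protocol is given in the Methods.

```python
# Illustrative isolation of the anoctamin (Ano) current by blocker subtraction:
# Ano current = current before blocker - current remaining in CaCCinh-A01.
# All values are hypothetical currents (pA) at matched voltage steps.
import numpy as np

i_total = np.array([120.0, 180.0, 260.0])    # before the blocker
i_blocked = np.array([40.0, 55.0, 70.0])     # in CaCCinh-A01

i_ano = i_total - i_blocked                  # blocker-sensitive (Ano) component
print(i_ano)                                 # -> [ 80. 125. 190.]
```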
Further, CaCCinh-A01 abolished the responsiveness of GI-ERα vlVMH neurons to glucose fluctuations, but it had no effect on GE-ERα vlVMH neurons (Fig. ). To further confirm the role of Ano4, we used a CRISPR-Cas9 approach to knock out Ano4 specifically in ERα vlVMH neurons. Briefly, we designed sgRNAs targeting exon 4 and exon 11 of the Ano4 gene, screened 19 sgRNAs, and identified two sgRNAs that effectively induced indel mutations in each exon in HEK293 cells (Supplementary Fig. ). These two sgRNAs were constructed into an AAV vector followed by a Cre-dependent FLEX-tdTOMATO sequence (Supplementary Fig. ). Female Esr1-Cre mice received stereotaxic injections of AAV-FLEX-scCas9 (Vector Biolabs, #7122) and AAV-Ano4/sgRNAs-FLEX-tdTOMATO into one side of the vlVMH to disrupt expression of Ano4 selectively in ERα vlVMH neurons. As a control, the other side of the vlVMH received AAV-Ano4/sgRNAs-FLEX-tdTOMATO and the AAV-GFP (no Cas9) virus (Fig. ). Compared to the control side (GFP + Ano4/sgRNA), the combination of Cas9 and Ano4/sgRNA diminished the GI population without affecting the GE population, and robustly reduced Ano currents in TOMATO-labeled ERα vlVMH neurons that were not GE (Fig. ). Thus, our results indicate that Ano4 is required for GI-ERα vlVMH neurons to respond to glucose fluctuations. The Patch-seq analysis also revealed that expression of Abcc8 (which encodes the Sur1 protein, one subunit of the K ATP channel) was substantially higher in GE-ERα vlVMH neurons than in GI-ERα vlVMH neurons (P = 0.0088, log2(fold change) = 4.597, Supplementary Fig. ). Our qPCR analyses further confirmed that Abcc8 mRNAs were abundant in GE-ERα vlVMH neurons but below the detection threshold in GI-ERα vlVMH neurons (Fig. ; primer sequences in Supplementary Table ). Consistently, we showed that K ATP channel-mediated outward currents in female GE-ERα vlVMH neurons were significantly elevated by hypoglycemia, and these currents were blocked by 200 µM tolbutamide, a K ATP channel inhibitor (Fig. ). On the other hand, such K ATP channel-mediated outward currents were barely detectable in female GI-ERα vlVMH neurons (Fig. ). In addition, treatment with tolbutamide (200 µM) blocked hypoglycemia-induced inhibition in female GE-ERα vlVMH neurons but had no effect on GI-ERα vlVMH neurons (Fig. ). To further confirm the function of Abcc8, we designed and identified two sgRNAs that efficiently induced indel mutations in exon 2 and exon 5 of the Abcc8 gene (Supplementary Fig. ). Both of these sgRNAs were constructed into one AAV vector followed by a Cre-dependent FLEX-tdTOMATO sequence (AAV-Abcc8/sgRNAs-FLEX-tdTOMATO; Supplementary Fig. ). Female Esr1-Cre mice received stereotaxic injections of AAV-FLEX-scCas9 and AAV-Abcc8/sgRNAs-FLEX-tdTOMATO into one side of the vlVMH. As controls, the other side of the vlVMH of the same mice received AAV-Abcc8/sgRNAs-FLEX-tdTOMATO and AAV-GFP (no Cas9; Fig. ). Compared to the control side (GFP + Abcc8/sgRNA), the combination of Cas9 and Abcc8/sgRNA diminished the GE population without affecting the GI population and robustly reduced K ATP currents in TOMATO-labeled ERα vlVMH neurons that were not GI (Fig. ). Thus, our results indicate that hypoglycemia opens the K ATP channel in female GE-ERα vlVMH neurons to inhibit these neurons. We further examined the functions of Ano4 and Abcc8 in ERα vlVMH neurons in the regulation of glucose balance in vivo, using intracerebroventricular (icv) injections of 2-DG to induce central glucopenia.
In control female mice (wild-type mice receiving stereotaxic injections of AAV vectors expressing scCas9, Ano4/sgRNAs, and Abcc8/sgRNAs into both sides of the vlVMH; Fig. ), icv 2-DG significantly elevated blood glucose compared to icv saline in the same mice (Fig. ). On the other hand, the 2-DG-induced glucose elevations were largely attenuated in female mice with simultaneous disruption of Ano4 and Abcc8 in ERα vlVMH neurons (female Esr1-Cre mice receiving stereotaxic injections of AAV vectors expressing scCas9, Ano4/sgRNAs, and Abcc8/sgRNAs into both sides of the vlVMH; Fig. ). Together, these results indicate that Ano4 and Abcc8 mediate the glucose-sensing functions of GI- and GE-ERα vlVMH neurons, respectively, and that their functions are required for normal glucose balance in female mice. It is worth noting that this study did not directly address the downstream hormonal and/or neural signals mediating these effects on glucose, which warrants future investigation.

vlVMH neurons recruit different circuits

To examine the projection sites of ERα vlVMH neurons, we stereotaxically injected Ad-iN/WED into the vlVMH of female Esr1-Cre mice. These mice expressed GFP-tagged wheat germ agglutinin (GFP-WGA, an anterograde transsynaptic tracer) specifically in ERα vlVMH neurons (Fig. ). Abundant WGA-labeled neurons were detected in a few brain regions implicated in the regulation of energy and glucose homeostasis, including the medioposterior part of the arcuate hypothalamic nucleus (mpARH), the lateral hypothalamus (LH), the medial parabrachial nucleus (MPB), and the dorsal raphe nucleus (DRN; Fig. , Supplementary Fig. ). These results suggest that ERα vlVMH neurons project to these distant brain regions. Since WGA may also travel retrogradely, the WGA-labeled neurons may not all be synaptic targets of ERα vlVMH neurons. To further confirm the synaptic connectivity between ERα vlVMH neurons and mpARH neurons, we stereotaxically injected Ad-iN/WED and AAV-EF1α-DIO hChR2(H134R)-EYFP into the vlVMH of female Esr1-Cre mice to express both channelrhodopsin-2 (ChR2)-EYFP and GFP-WGA specifically in ERα vlVMH neurons. Angular brain slices (containing both the vlVMH and mpARH) were prepared from these mice to perform electrophysiological recordings in WGA-GFP-labeled mpARH neurons (Fig. ). We detected light-evoked excitatory postsynaptic currents (EPSCs) in 10 out of 13 WGA-GFP-labeled neurons (average 65.97 ± 12.52 pA; latency 7.26 ± 1.16 ms; n = 10). These evoked EPSCs were blocked by the NMDA and AMPA glutamate receptor antagonists D-AP5 (30 µM) and CNQX (30 µM) (Fig. ), confirming their glutamatergic nature. Importantly, all evoked EPSCs persisted in the presence of 400 µM 4-AP and 1 µM TTX (Fig. ), indicating that these connections are likely monosynaptic. We performed similar electrophysiological recordings to demonstrate functional connectivity between ERα vlVMH neurons and WGA-GFP-labeled neurons in the LH (Supplementary Fig. ), the DRN (Supplementary Fig. ), and the MPB (Supplementary Fig. ). Then, to further determine whether these projections originate primarily from GI-ERα vlVMH neurons or from GE-ERα vlVMH neurons, we stereotaxically injected a retrograde CAV2-Cre virus into each of these four sites in female ERα-ZsGreen/Rosa26-TOMATO mice. In this case, CAV2-Cre retrogradely infected upstream neurons that project to the injection site (e.g., mpARH) and induced TOMATO (red fluorescence) expression in these mpARH-projecting neurons (Fig. ).
Thus, in the vlVMH, those neurons double labeled by TOMATO and ZsGreen were identified as mpARH-projecting ERα vlVMH neurons, which accounted for ~29.7% of ERα vlVMH population (Supplementary Fig. ). Similarly, CAV2-Cre injected into the LH, DRN, or MPB induced TOMATO expression in ~22.6%, 15.0%, or 25.7%, respectively, of ERα vlVMH neurons (Supplementary Fig. ). We further determined that ~71–84% TOMATO neurons, retrogradely labeled by CAV2-Cre injected into one of the four respective regions, were ZsGreen ERα vlVMH neurons (Supplementary Fig. ). We then performed electrophysiology recordings in these TOMATO/ZsGreen neurons and determined whether they were GI or GE neurons (Fig. ). Interestingly, we found that the majority of mpARH-projecting ERα vlVMH neurons are GI neurons, while the majority of DRN-projecting ERα vlVMH neurons are GE neurons (Fig. ). On the other hand, ERα vlVMH neurons that project to the LH and the MPB are mixtures of approximately equal numbers of GE and GI neurons (Fig. ). To further confirm these findings in vivo, we stereotaxically injected a retrograde virus HSV-hEF1α-LS1L-GCaMP6f into the mpARH of female Esr1-Cre mice and implanted a photodetector to target the vlVMH, which allowed fiber photometry recordings of the mpARH-projecting ERα vlVMH neurons in functional mice (Fig. ). We showed that activity of mpARH-projecting ERα vlVMH neurons was significantly reduced by hyperglycemia (i.p. 1 g per kg glucose) but increased by hypoglycemia (i.p. 1.5 U per kg insulin), confirming that these neurons were largely GI neurons (Fig. ). We used a similar approach to monitor activity of DRN-projecting ERα vlVMH neurons, and found that these neurons displayed GE properties: activated by hyperglycemia but inhibited by hypoglycemia (Fig. ). We further examined the glucose regulatory effects of mpARH-projecting ERα vlVMH neurons. To this end, we stereotaxically injected AAV-EF1α-DIO hChR2(H134R)-EYFP into the vlVMH of female Esr1-Cre mice and implanted an optic fiber to target the mpARH (Fig. , Supplementary Fig. ). Blue light pulses (473 nm, 5 ms per pulse, 40 pulses per 1 s for 1 h) were applied to this region to selectively activate the ERα vlVMH →mpARH projection (Supplementary Fig. ), which mimicked hypoglycemia-induced activation of this circuit. Interestingly, such activation resulted in significant increases in blood glucose (Fig. ). Importantly, as a control, yellow light pulses (589 nm, 5 ms per pulse, 40 pulses per 1 s for 1 h) applied to the same mice did not significantly alter blood glucose levels (Fig. ). On the other hand, the DRN-projecting ERα vlVMH neurons are inhibited during a hypoglycemic event. Thus, we stereotaxically injected AAV-EF1α-DIO-eNpHR3.0-EYFP into the vlVMH of female Esr1-Cre mice and implanted an optic fiber to target the DRN (Fig. , Supplementary Fig. ). Yellow light pulses were applied to the DRN to selectively inhibit the ERα vlVMH →DRN projection (Supplementary Fig. ), which also resulted in significant increases in blood glucose, while blue light pulses (as negative controls) showed no effect on glucose levels (Fig. ). Thus, these results support a feedback model that hypoglycemia activates GI-ERα vlVMH neurons but inhibits GE-ERα vlVMH neurons, which in turn engage distinct downstream neural circuits to increase blood glucose, and therefore prevent severe hypoglycemia. Photostimulation of ERα vlVMH neurons has been shown to evoke social investigation and occasional mounting in female mice when they are grouped housed . 
In addition, activation of ERα vlVMH neurons triggers aggressive behavior in Swiss Webster female mice, although this phenotype cannot be evoked in C57 females. It is important to note that the current study measured glucose levels in singly housed mice, whereas examinations of social, sexual, and aggressive behaviors require the animal to encounter an intruder or a mouse of the opposite sex. Nevertheless, we examined whether photostimulation of the ERα vlVMH →DRN circuit evokes similar social behaviors in singly housed female mice (all on the C57BL6/J background), but found no such behaviors (Supplementary Movie ).

Our results identified ERα vlVMH neurons as one key glucose-sensing neural population that can detect glucose fluctuations and prevent severe hypoglycemia, at least in female mice. We further identified two key ion channels, namely the Ano4 channel and the K ATP channel, which respectively mediate the opposite GI and GE responses during a hypoglycemic challenge. Interestingly, subsets of GI- and GE-ERα vlVMH neurons preferentially project to the mpARH and the DRN, respectively. Through these segregated downstream neural circuits, the opposite neural responses in these GI- and GE-ERα vlVMH subsets are coordinated to synergistically elevate blood glucose, and therefore prevent severe hypoglycemia (Fig. ). Early work indicated that VMH neurons play essential roles in brain glucose sensing and whole-body glucose balance. Using genetic tools, recent research efforts have started to reveal that distinct glucose-sensing subgroups within the VMH regulate glucose balance through diverse mechanisms. For example, the Friedman group showed that activation of glucokinase-expressing neurons in the dmVMH increases blood glucose. Further, the Heisler group demonstrated that another subgroup in the dmVMH and cVMH indirectly senses glucose fluctuations via neuronal inputs from glucose-sensing neurons in the PBN. Consistently, the Morton group recently showed that dmVMH/cVMH neurons (marked by the adult SF1 promoter) relay PBN synaptic inputs to defend against hypoglycemia. It is important to note that while the entire VMH shares a common SF1 lineage, the vlVMH further differentiates into a neuronal cluster devoid of SF1 during adulthood. Thus, studies using adult SF1-Cre mice (e.g., receiving ChR2 virus injections) only target dmVMH and cVMH neurons, but not those in the vlVMH. Our studies used ERα as a molecular marker and demonstrated that ERα-expressing VMH neurons are a unique glucose-sensing population. Anatomically, these ERα neurons are highly concentrated within the vlVMH, but barely expressed in the dmVMH or cVMH. In addition, we showed that ERα vlVMH neurons project to the mpARH, LH, MPB, and DRN. On the other hand, dmVMH/cVMH neurons (marked by the adult SF1 promoter) were reported to project to the anterior bed nucleus of the stria terminalis, PVH, and central amygdala. The uniqueness of ERα vlVMH neurons also lies in their strong glucose-sensing capability. It has been previously reported that, as a whole, ~50% of VMH neurons are capable of altering their firing activities in response to glucose fluctuations. In line with these earlier findings, we found that 47–50% of dmVMH neurons and 46–48% of cVMH neurons are glucose sensing. Strikingly, 100% of the ERα vlVMH neurons (in both male and female mice) that we tested are glucose sensing. Thus, ERα vlVMH neurons represent a unique subpopulation with strong glucose-sensing capability.
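For readers who want to see the kind of χ2 comparison used for these proportions in concrete form, a minimal sketch is shown below. The ERα vlVMH column reflects the reported result (all 576 recorded neurons responded); the counts for the comparison group are hypothetical placeholders chosen only to be consistent with a ~47% responder rate, not the study's raw data.

```python
# Illustrative chi-square test comparing the fraction of glucose-sensing neurons
# between two populations. Comparison-group counts are hypothetical placeholders.
from scipy.stats import chi2_contingency

# rows: populations; columns: [glucose-sensing, non-responsive]
table = [
    [576, 0],   # ERα vlVMH: all 576 recorded neurons responded (as reported)
    [47, 53],   # dmVMH: hypothetical counts consistent with ~47% responders
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")
```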
It is intriguing that GI- and GE-ERα vlVMH neurons displayed exactly opposite neural responses to hypoglycemia. In particular, GI neurons rapidly increase their activities in response to low glucose, despite the fact that glucose is a basic fuel for neuronal viability and functions. In searching for ionic mechanisms that regulate activities of GI-ERα vlVMH neurons, we found that GI-ERα vlVMH neurons, but not GE-ERα vlVMH neurons, express high levels of Ano4 (a chloride channel) and display robust rectifying Ano currents. Interestingly, hypoglycemia can potentiate these currents that likely account for activation of GI-ERα vlVMH neurons. Supporting this notion, both pharmacological blockade of Ano and CRISPR-mediated disruption of Ano4 abolish responses of GI-ERα vlVMH neurons, but have no effect on GE-ERα vlVMH neurons. Together, these results identified Ano4 as a key ion channel that mediates the activation of GI-ERα vlVMH neurons evoked by hypoglycemia. Notably, NTS neurons marked by GLUT2, another GI population, respond to hypoglycemia through leak potassium conductances but not chloride conductances , suggesting that diverse ionic mechanisms exist for GI populations in different brain regions. On the other hand, the ion channel mediating neural responses in GE neurons appear to be conserved among various GE populations. In particular, the K ATP channel has been reported to mediate glucose-sensing functions of GE neurons in the VMH, in the NTS and in the supraoptic nuclei , – . Consistent with this notion, we found that GE-ERα vlVMH neurons express high levels of Abcc8 (encoding the K ATP channel subunit, Sur1) and display elevated K ATP currents in response to hypoglycemia; blockade of K ATP currents abolishes the hypoglycemia-induced inhibition in these neurons. Notably, GI-ERα vlVMH neurons express minimal Abcc8, and hypoglycemia does not enhance K ATP currents in these neurons. We further demonstrated the specific functions of Abcc8 in GE-ERα vlVMH neurons. Thus, CRISPR-mediated disruption of Abcc8 reduces the K ATP current and diminishes glucose-sensing capability of GE-ERα vlVMH neurons, but does not affect GI-ERα vlVMH neurons. Together, our results indicate that distinct ionic conductances mediate the opposite neural responses to glucose fluctuations in GI- vs. GE-ERα vlVMH neurons. While our data indicate that the glucose-sensing activity of ERα vlVMH neurons is important for preventing hypoglycemia, we cannot exclude the possibility that ERα vlVMH neurons or other VMH neurons may relay glucose-sensing signals coming from outside of the VMH (or even outside of the brain) to regulate glucose balance. Interestingly, subsets of GI- and GE-ERα vlVMH neurons have different projection patterns. The mpARH-projecting ERα vlVMH neurons are largely GI, whereas the DRN-projecting ERα vlVMH neurons are largely GE in nature. This segregation was observed by both ex vivo slice electrophysiology and by in vivo fiber photometry. Then, we used the optogenetic approach to specifically activate the ERα vlVMH →mpARH projection or to inhibit the ERα vlVMH →DRN projection, which mimic their natural responses to hypoglycemia. Interestingly, these optogenetic manipulations both lead to significant increases in blood glucose. 
Thus, these results indicate that while the activities of GI-ERα vlVMH -originated and GE-ERα vlVMH -originated neural circuits are oppositely regulated by hypoglycemia, the functions of these neural circuits are coordinated to provide a synergistic response that restores glucose balance. Notably, in addition to regulating metabolic balance, ERα vlVMH neurons have been implicated in multiple social behaviors, including investigation, mating, and aggression. While we did not detect these social behaviors in singly housed mice with the ERα vlVMH →DRN circuit stimulated, our results cannot fully exclude the possibility that effects of ERα vlVMH neurons on social behaviors may have influenced the overall outcome on glucose homeostasis.

Mice

Several transgenic mouse lines, including ERα-ZsGreen, ERα-ZsGreen/Rosa26-TOMATO, and Esr1-Cre, were maintained on a C57BL6/J background. Esr1-Cre mice, which express Cre recombinase selectively in ERα-expressing neurons, including those in the vlVMH, were purchased from the Jackson Laboratory (#017911). In addition, some C57BL6/J mice were purchased from the mouse facility of Baylor College of Medicine. Mice were housed in a temperature-controlled environment at 22–24 °C using a 12 h light/12 h dark cycle. The mice were fed standard chow (6.5% fat, #2920, Harlan-Teklad, Madison, WI). Water was provided ad libitum. Further information about resources and reagents can be found in Table .

Electrophysiology

ERα-ZsGreen or ERα-ZsGreen/Rosa26-TOMATO mice were used for electrophysiological recordings. Mice were deeply anesthetized with isoflurane and transcardially perfused with a modified ice-cold sucrose-based cutting solution (pH 7.3) containing 10 mM NaCl, 25 mM NaHCO3, 195 mM sucrose, 5 mM glucose, 2.5 mM KCl, 1.25 mM NaH2PO4, 2 mM Na-pyruvate, 0.5 mM CaCl2, and 7 mM MgCl2, bubbled continuously with 95% O2 and 5% CO2 (ref. ). The mice were then decapitated, and the entire brain was removed and immediately submerged in the cutting solution. Slices (250 µm) were cut with a Microm HM 650 V vibratome (Thermo Scientific). Three brain slices containing the VMH were obtained for each animal (bregma −2.06 mm to −1.46 mm; interaural 1.74–2.34 mm). The slices were recovered for 1 h at 34 °C and then maintained at room temperature in artificial cerebrospinal fluid (aCSF, pH 7.3) containing 126 mM NaCl, 2.5 mM KCl, 2.4 mM CaCl2, 1.2 mM NaH2PO4, 1.2 mM MgCl2, 5.0 mM glucose, and 21.4 mM NaHCO3, saturated with 95% O2 and 5% CO2 before recording. Slices were transferred to a recording chamber and allowed to equilibrate for at least 10 min before recording. The slices were superfused at 34 °C in oxygenated aCSF at a flow rate of 1.8–2 ml/min. ZsGreen and/or TOMATO-labeled neurons in the VMH were visualized using epifluorescence and IR-DIC imaging on an upright microscope (Eclipse FN-1, Nikon) equipped with a movable stage (MP-285, Sutter Instrument). Patch pipettes with resistances of 3–5 MΩ were filled with intracellular solution (pH 7.3) containing 128 mM K-gluconate, 10 mM KCl, 10 mM HEPES, 0.1 mM EGTA, 2 mM MgCl2, 0.05 mM Na-GTP, and 0.05 mM Mg-ATP. Recordings were made using a MultiClamp 700B amplifier (Axon Instruments), sampled using a Digidata 1440A, and analyzed offline with pClamp 10.3 software (Axon Instruments). Series resistance was monitored during the recording, and the values were generally <10 MΩ and were not compensated. The liquid junction potential was +12.5 mV and was corrected after the experiment. Data were excluded if the series resistance increased dramatically during the experiment or if action potentials lacked overshoot. Currents were amplified, filtered at 1 kHz, and digitized at 20 kHz. Current-clamp recordings were used to test neural firing frequency and resting membrane potential in the baseline 5 mM glucose aCSF and in 1 mM glucose aCSF. The values for resting membrane potential and firing frequency were averaged within a 2-min bin at the 5 mM or 1 mM glucose condition. A neuron was considered depolarized or hyperpolarized if the change in membrane potential was at least 2 mV in amplitude.
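To make that classification rule concrete, the sketch below labels a recorded neuron as glucose-inhibited (GI), glucose-excited (GE), or non-responsive from its membrane potential at 5 mM and 1 mM glucose, using the 2 mV criterion described above. It is a minimal illustration, not the authors' analysis code; the function name and example values are assumptions.

```python
# Minimal sketch of the GE/GI classification for the 5 -> 1 -> 5 mM glucose protocol.
# Inputs are membrane potentials (mV) averaged over a 2-min bin at each condition;
# recovery at the return to 5 mM glucose would be checked separately.

def classify_glucose_sensing(vm_5mM: float, vm_1mM: float, threshold_mV: float = 2.0) -> str:
    """GI neurons depolarize by >threshold in low glucose; GE neurons hyperpolarize."""
    delta = vm_1mM - vm_5mM        # positive = depolarization in low glucose
    if delta > threshold_mV:
        return "GI"
    if delta < -threshold_mV:
        return "GE"
    return "non-responsive"

# Hypothetical example values (mV), for illustration only:
print(classify_glucose_sensing(-55.0, -50.5))  # depolarized by 4.5 mV -> "GI"
print(classify_glucose_sensing(-52.0, -58.0))  # hyperpolarized by 6 mV -> "GE"
```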
To measure Ano currents, the pipette solution contained (in mM): CsCl 130, NaH2PO4 1.2, Na2HPO4 4.8, EGTA, MgCl2 1.0, D-glucose 5.0, and ATP 3.0 (pH adjusted to 7.2). Total Ano current was recorded under voltage clamp by holding the membrane potential at −60 mV in 5 mM glucose or 1 mM glucose aCSF. At intervals, neurons were voltage clamped from −50 mV to +50 mV in steps of 10 mV for 1 s (ref. ). The neurons were then treated with 100 μM CaCCinh-A01 (an Ano blocker) for 3 min (ref. ). The Ano current was calculated by subtracting the current remaining in the presence of the Ano blocker from the total current recorded without the blocker. For the K ATP current, slices were perfused with an external solution that contained 140 mM NaCl, 5 mM KCl, 2 mM CaCl2, 1 mM MgCl2, 5 mM glucose, and 10 mM HEPES (pH 7.4). The pipette (intracellular) solution contained 130 mM potassium gluconate, 20 mM HEPES, 10 mM EGTA, 1 mM MgCl2, 2.5 mM CaCl2, 1.0 mM Mg-ATP, and 0.3 mM Tris-GTP (pH 7.2). The membrane potential was held at −60 mV in voltage-clamp mode while the K ATP current was recorded and glucose was changed from 5 mM to 1 mM. The K ATP current was calculated as the tolbutamide-sensitive component, i.e., by subtracting the current recorded in the presence of the K ATP blocker tolbutamide (200 µM) from the current recorded in its absence.

Patch-seq and data analysis

To examine the gene expression profiles of GI-ERα vlVMH and GE-ERα vlVMH neurons, we first performed electrophysiological recordings to identify GI-ERα vlVMH or GE-ERα vlVMH neurons. Four GI-ERα vlVMH neurons and four GE-ERα vlVMH neurons (from two different mice) were manually collected and suspended in PBS buffer. Cell lysis, first-strand cDNA synthesis, and cDNA amplification were performed according to the manufacturer’s instructions for the SMART-Seq v4 Ultra Low Input RNA Kit (Clontech). Amplified cDNA was purified with the Agencourt AMPure XP Kit (Beckman Coulter). The Nextera Library Prep Kit (Illumina) was used to prepare libraries for sequencing (100 bp paired-end, RNA-seq) on a HiSeq 2500 platform (Illumina). RNA-seq raw data files were trimmed using TrimGalore (version 0.4.1) and aligned against the mouse reference genome assembly (GRCm38.p6) using the STAR aligner (version 2.5.3a). One sample (“GIneuron3”) was removed from the subsequent analysis due to low sequence read counts (Supplementary Table ). Next, gene expression was assessed using featureCounts (version 1.6.0). To improve the identification of differentially regulated genes, unwanted variation between samples was removed using RUVseq. Then, DESeq2 was used to determine differential gene expression. Significance of differential expression was assessed by requiring both P < 0.05 and |log2(fold change)| > 2. We found a total of 372 differentially expressed genes (Supplementary Data , Supplementary Fig. ). Gene set enrichment analysis was performed using the online tool WebGestalt (version 2019). Only gene sets with sizes in the range of 5–2000 were considered, and enrichment P-values were corrected for multiple testing by the Benjamini–Hochberg procedure, as implemented in the tool.
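For clarity, the differential-expression cutoff above amounts to a simple filter on the DESeq2 output table. The sketch below shows that step only; the file name is a placeholder, and the column names (`pvalue`, `log2FoldChange`) follow DESeq2's standard result fields rather than anything specific to this study.

```python
# Minimal sketch of the thresholding applied to DESeq2 results
# (P < 0.05 and |log2 fold change| > 2). "deseq2_results.csv" is a placeholder.
import pandas as pd

res = pd.read_csv("deseq2_results.csv", index_col=0)
de_genes = res[(res["pvalue"] < 0.05) & (res["log2FoldChange"].abs() > 2)]
print(f"{len(de_genes)} differentially expressed genes")  # the study reports 372
```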
Real-time RT-qPCR analyses

To confirm the expression of Abcc8 and Ano4 in GE-ERα vlVMH and GI-ERα vlVMH neurons, respectively, we manually picked up identified GE or GI ZsGreen-labeled vlVMH neurons from female ERα-ZsGreen mice (at diestrus). To this end, the mouse brain was removed and immediately submerged in ice-cold sucrose-based cutting solution (adjusted to pH 7.3) containing (in mM) 10 NaCl, 25 NaHCO3, 195 sucrose, 5 glucose, 2.5 KCl, 1.25 NaH2PO4, 2 Na-pyruvate, 0.5 CaCl2, and 7 MgCl2, bubbled continuously with 95% O2 and 5% CO2. Slices (250 μm) were cut with a Microm HM 650 V vibratome (Thermo Scientific), recovered for 1 h at 34 °C, and then maintained at room temperature in artificial cerebrospinal fluid (aCSF, pH 7.3) containing 126 mM NaCl, 2.5 mM KCl, 2.4 mM CaCl2, 1.2 mM NaH2PO4, 1.2 mM MgCl2, 11.1 mM glucose, and 21.4 mM NaHCO3, saturated with 95% O2 and 5% CO2 before recording. Slices were transferred to a chamber, and ZsGreen-labeled neurons were visualized using epifluorescence and IR-DIC imaging on an upright microscope equipped with a movable stage (MP-285, Sutter Instrument). These neurons were first subjected to electrophysiological recordings to determine whether they were GE or GI neurons, as described above. Single neurons were then manually picked up with the pipette and subjected to RNA extraction and reverse transcription using the Ambion Single-Cell-to-CT Kit (Ambion, Life Technologies) according to the manufacturer’s instructions. Briefly, 10 μl of Single Cell Lysis solution with DNase I was added to each sample, and the supernatant after centrifugation was used for cDNA synthesis (25 °C for 10 min, 42 °C for 60 min, and 85 °C for 5 min). The cDNA samples were amplified on a CFX384 Real-Time System (Bio-Rad) using SsoADV SYBR Green Supermix (Bio-Rad), and data were collected using Bio-Rad CFX Manager (3.1). Results were normalized against the expression of a housekeeping gene (cyclophilin). Primer sequences are listed in Supplementary Table .

CRISPR-Cas9 deletion of Ano4 and Abcc8

AAV vectors carrying sgRNAs targeting mouse Ano4 or Abcc8 were designed and constructed by Biocytogen (Wakefield, MA). For Ano4, exon 4 and exon 11 were chosen to be targeted by CRISPR-Cas9. A total of 19 sgRNAs were designed, with seven targeting exon 4 and 12 targeting exon 11. These sgRNAs were selected using the CRISPR tool ( https://www.sanger.ac.uk/htgt/wge/ ) to minimize potential off-target effects. All 19 sgRNAs were screened for on-target activity using a Universal CRISPR Activity Assay (UCA TM, Biocytogen). Briefly, a plasmid carrying Cas9 and the sgRNA and another plasmid carrying the target sequence cloned inside a luciferase gene were co-transfected into HEK293 cells. A stop codon and the CRISPR/Cas9 target sites were located within the luciferase gene: the stop codon terminated translation of the luciferase, whereas Cas9 cutting at the sgRNA target site triggered single-strand annealing and recombination of the complementary sequences, rescuing a complete luciferase coding sequence. The luciferase signal was then detected to reflect the DNA editing efficiency of the sgRNA. We used the pCS(puro)-positive plasmid, which expresses a validated positive-control sgRNA, as the positive control (Supplementary Fig. ).
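As a simple illustration of one sanity check that can be run on candidate protospacers (a hypothetical helper, not part of the published screening pipeline, which relied on the WGE tool and the luciferase activity assay above), the sketch below confirms that a 20-nt protospacer is immediately followed by an NGG PAM, as is the case for the sequences selected in the next paragraph.

```python
# Illustrative check that a candidate SpCas9 protospacer is followed by an NGG PAM.
# Hypothetical helper for illustration only.
import re

def has_ngg_pam(protospacer_plus_pam: str) -> bool:
    """True if the string is a 20-nt protospacer immediately followed by an NGG PAM."""
    seq = protospacer_plus_pam.replace(" ", "").upper()
    return bool(re.fullmatch(r"[ACGT]{20}[ACGT]GG", seq))

# The Ano4 sgRNA#2 sequence quoted in the text (protospacer + AGG PAM):
print(has_ngg_pam("GCACTTCGGAGGACACCAGC AGG"))  # -> True
```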
The sgRNA#2 (GCACTTCGGAGGACACCAGC AGG) and sgRNA#9-A (GTACTTGTACCACACGCCCC AGG) were selected to target exon 4 and exon 11 of the Ano4 gene, respectively, because of their relatively high on-target activity and low off-target potential. Similarly, for Abcc8, exon 2 and exon 5 were chosen to be targeted by CRISPR-Cas9. A total of 14 sgRNAs were designed, with seven targeting exon 2 and seven targeting exon 5. These sgRNAs were selected using the CRISPR tool ( https://www.sanger.ac.uk/htgt/wge/ ) to minimize potential off-target effects. All 14 sgRNAs were screened for on-target activity using the UCA TM assay (Supplementary Fig. ). The sgRNA#5 (TGAAGGTAAGGATCCAGCGC AGG) and sgRNA#11 (GCAGCTTCCCGATGGCCCGC AGG) were used for the next step. We constructed an AAV-U6-sgRNA-tdTomato vector containing two sgRNAs targeting Ano4 (AAV-Ano4/sgRNAs-FLEX-tdTOMATO) or two sgRNAs targeting the Abcc8 gene (AAV-Abcc8/sgRNAs-FLEX-tdTOMATO), respectively (Supplementary Fig. ). Briefly, the U6 promoter–sgRNA and CAG promoter–FLEX-tdTomato–bGH polyA cassettes were cloned into the Addgene plasmid #61591 vector ( http://www.addgene.org/61591/ ) and verified by full sequencing. The two viruses were packaged by the Baylor IDDRC Neuroconnectivity Core. To validate AAV-Ano4/sgRNAs-FLEX-tdTOMATO in mice, female Esr1-Cre mice (12 weeks of age) received stereotaxic injections of AAV-FLEX-scCas9 (80 nl, 5.3 × 10 12 GC per ml) and AAV-Ano4/sgRNAs-FLEX-tdTOMATO (160 nl, 1.4 × 10 12 GC per ml) into one side of the vlVMH (knockout side), and received AAV-Ano4/sgRNAs-FLEX-tdTOMATO (160 nl) and the AAV-GFP virus (80 nl, 5.6 × 10 12 GC per ml, no Cas9) into the other side of the vlVMH (control side). After a 4-week recovery, these mice were subjected to electrophysiology recordings for glucose-sensing properties and Ano currents, as described above. Since we did not find any GI neurons on the knockout side, we measured Ano currents in non-GE neurons from the knockout side (which were likely original GI neurons) and compared these currents to Ano currents in identified GI neurons from the control side. Similarly, to validate AAV-Abcc8/sgRNAs-FLEX-tdTOMATO in mice, female Esr1-Cre mice received stereotaxic injections of AAV-FLEX-scCas9 (80 nl) and AAV-Abcc8/sgRNAs-FLEX-tdTOMATO (160 nl, 1.3 × 10 12 GC per ml) into one side of the vlVMH (knockout side), and received AAV-Abcc8/sgRNAs-FLEX-tdTOMATO (160 nl) and the AAV-GFP virus (80 nl, no Cas9) into the other side of the vlVMH (control side). After a 4-week recovery, these mice were subjected to electrophysiology recordings for glucose-sensing properties and K ATP currents, as described above. Since we did not find any GE neurons on the knockout side, we measured K ATP currents in non-GI neurons from the knockout side (which were likely original GE neurons) and compared these currents to K ATP currents in identified GE neurons from the control side.

Icv 2-DG assay

To determine the functions of Ano4 and Abcc8 in ERα vlVMH neurons in vivo, female wild-type and Esr1-Cre littermates (12 weeks of age) received stereotaxic injections of AAV-FLEX-scCas9 and AAV-Ano4/sgRNAs-FLEX-tdTOMATO into both sides of the vlVMH (1.60 mm posterior, 0.70 mm lateral, and 5.90 mm ventral to the bregma, based on the Franklin and Paxinos Mouse Brain Atlas). During the same surgeries, an indwelling icv guide cannula (#62003, Plastics One) was stereotaxically inserted to target the lateral ventricle (0.34 mm posterior, 1.00 mm lateral, and 2.30 mm ventral to the bregma). One week after surgery, cannulation accuracy was validated by central administration of 10 ng angiotensin II (A9525, Sigma), which increased drinking and grooming behavior. Mice were handled weekly to adapt them to the stress associated with icv injections. Four weeks after the surgeries, mice were fasted for 3 h starting at 9 a.m. At 12 p.m., mice received icv injections of saline or 2-DG (1 mg in 2 μl saline). Blood glucose was then measured at 0, 30, 60, and 120 min after injection.
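One simple way to summarize such a blood-glucose time course is a baseline-subtracted area under the curve over the 0/30/60/120-min sampling grid. The sketch below is illustrative only: the trapezoid-rule summary is an assumed analysis choice, and the glucose values are made up, not the study's measurements.

```python
# Illustrative summary of an icv 2-DG blood-glucose time course:
# baseline-subtracted area under the curve (trapezoid rule).
# Time points match the sampling grid in the text; glucose values are hypothetical.
import numpy as np

t_min = np.array([0, 30, 60, 120])
glucose_saline = np.array([130.0, 135.0, 132.0, 128.0])  # mg/dL, hypothetical
glucose_2dg = np.array([128.0, 190.0, 210.0, 170.0])     # mg/dL, hypothetical

def baseline_auc(t, y):
    """AUC of (y - y[0]) over t, in (mg/dL)*min, via the trapezoid rule."""
    y0 = y - y[0]
    return float(np.sum((y0[1:] + y0[:-1]) / 2 * np.diff(t)))

print(f"saline AUC: {baseline_auc(t_min, glucose_saline):.0f}")
print(f"2-DG AUC:   {baseline_auc(t_min, glucose_2dg):.0f}")
```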
WGA anterograde tracing

To map the downstream neurons of ERα vlVMH neurons, 12-week-old Esr1-Cre female mice were anesthetized with isoflurane and received stereotaxic injections of Ad-FLEX-WGA-EGFP into the vlVMH (200 nl, 6.1 × 10 12 VP per ml; 1.60 mm posterior, 0.70 mm lateral, and 5.90 mm ventral to the bregma, based on the Franklin and Paxinos Mouse Brain Atlas). Because Ad-iN/WED is a Cre-dependent virus, WGA-GFP was exclusively expressed in ERα vlVMH neurons and traveled anterogradely along the fibers, crossed the synapse, and filled the downstream neurons innervated by ERα vlVMH terminals. Four weeks after injections, mice were perfused with 10% formalin, and brain sections were cut at 25 μm (5 series). The sections were incubated at room temperature in primary goat anti-WGA antibody (1:1000, #AS-2024, VectorLabs) overnight, followed by biotinylated donkey anti-goat secondary antibody (1:1000; #705-065-003, Jackson ImmunoResearch) for 2 h. Sections were then incubated in avidin–biotin complex (1:500, ABC; Vector Elite Kit) and subsequently in 0.04% 3,3′-diaminobenzidine and 0.01% hydrogen peroxide. After dehydration through graded ethanol, the slides were immersed in xylene and coverslipped. Images were analyzed using a brightfield Leica microscope equipped with Leica MM AF acquisition and analysis software (#11640901).

ChR2-assisted circuit mapping

We performed WGA-ChR2-assisted circuit mapping to demonstrate the connectivity between ERα vlVMH neurons and their downstream target neurons in various brain regions. Briefly, 12-week-old Esr1-Cre female mice were anesthetized with isoflurane and received stereotaxic injections of Ad-iN/WED (200 nl, 6.1 × 10 12 VP per ml) and AAV-EF1α-DIO hChR2(H134R)-EYFP (200 nl, 6.2 × 10 12 VP per ml) into the vlVMH. After a 4-week recovery, mice were sacrificed and brain slices (containing both the vlVMH and one of its target regions, e.g., LH, mpARH, DRN, or MPB) were prepared from these mice to perform electrophysiological recordings in WGA-GFP-labeled neurons in the target region. Neurons were patched using electrodes with tip resistances of 3.0–5.0 MΩ. Recording pipettes were routinely filled with a solution containing the following (in mM): 125 K-gluconate, 15 KCl, 10 HEPES, 8 NaCl, 4 Mg-ATP, 0.3 Na-GTP, 10 Na2-phosphocreatine, 2 EGTA, pH 7.30. The holding potential for voltage-clamp recordings was −60 mV, and responses were digitized at 10 kHz. All experiments were performed in the presence of the GABA A receptor antagonist bicuculline (50 μM). EPSCs were evoked by a 473 nm laser (C.N.I) stimulating ChR2-expressing fibers every 20 s. D-AP5 (30 μM; an NMDA receptor antagonist) and CNQX (30 μM; an AMPA receptor antagonist) were added to confirm whether the light-evoked currents were glutamatergic synaptic currents. TTX (1 μM) and 4-AP (400 μM) were added to the aCSF to determine whether the response was monosynaptic.
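The evoked-EPSC summary statistics quoted in the Results (e.g., 65.97 ± 12.52 pA from 10 of 13 connected neurons) are a mean with its standard error plus a connection probability. The sketch below shows that convention only; the amplitude values are placeholders, not the recorded data.

```python
# Illustrative mean +/- SEM summary for light-evoked EPSC amplitudes.
# Amplitudes are placeholders; the Results report 65.97 +/- 12.52 pA (n = 10),
# with 10 of 13 recorded WGA-GFP neurons showing evoked responses.
import numpy as np

amplitudes_pA = np.array([22.0, 35.5, 41.0, 50.2, 58.7, 66.3, 74.9, 88.1, 102.4, 120.6])

mean = amplitudes_pA.mean()
sem = amplitudes_pA.std(ddof=1) / np.sqrt(amplitudes_pA.size)
connection_probability = 10 / 13

print(f"EPSC amplitude: {mean:.2f} +/- {sem:.2f} pA (n = {amplitudes_pA.size})")
print(f"connection probability: {connection_probability:.0%}")
```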
CAV2-Cre retrograde tracing and electrophysiology

To identify glucose-sensing neurons that send projections from the vlVMH to downstream brain regions, 12-week-old ERα-ZsGreen/Rosa26-TOMATO female mice were anesthetized with isoflurane and received unilateral stereotaxic injections of CAV2-Cre (200 nl, 3.7 × 10 12 VP per ml) into one of the following sites: LH (1.06 mm posterior, 1.20 mm lateral, and 5.00 mm ventral to the bregma), mpARH (2.70 mm posterior, 0.25 mm lateral, and 5.60 mm ventral to the bregma), MPB (5.20 mm posterior, 1.25 mm lateral, and 3.80 mm ventral to the bregma), or DRN (4.65 mm posterior, 0 mm lateral, and 3.60 mm ventral to the bregma). The CAV2 virus traveled retrogradely from the injection site to brain regions that project to the LH, mpARH, MPB, or DRN, and Cre recombinase induced TOMATO expression in these infected cells. Two weeks later, unfixed brain slices containing the VMH (150 µm in thickness) were prepared from these mice and subjected to fluorescence microscopy to visualize and quantify ZsGreen, TOMATO, and ZsGreen/TOMATO neurons in the vlVMH. About 500–600 neurons were counted from each mouse, and two mice were included for each injection site (LH, mpARH, DRN, or MPB). In parallel, some of the CAV2-Cre-injected mice were used for electrophysiology. Briefly, whole-cell patch-clamp recordings were performed on identified dual-fluorescent neurons (ZsGreen and TOMATO) in brain slices containing the VMH. Current-clamp recordings were used to test neural firing frequency and resting membrane potential in 5 mM and 1 mM glucose aCSF, as described above, in order to identify the neurons as GE or GI. The composition of GE and GI neurons for each projection site was then calculated.

Fiber photometry

For the fiber photometry experiments, Esr1-Cre female mice (12 weeks of age) were anesthetized with isoflurane and received stereotaxic injections of HSV-hEF1α-LS1L-GCaMP6f (200 nl per site, 3 × 10 9 VP per ml) into the DRN or into the mpARH. During the same surgery, an optical fiber (core = 400 μm; 0.48 NA; M3 thread titanium receptacle; Doric Lenses) was implanted over the vlVMH (1.60 mm posterior, 0.70 mm lateral, and 5.70 mm ventral to the bregma, based on the Franklin and Paxinos Mouse Brain Atlas). Fibers were fixed to the skull using dental acrylic, and mice were allowed 3 weeks for recovery, followed by acclimatization to investigator handling for 1 week before experiments. The fiber photometry recordings started 4–6 weeks after surgeries to allow adequate recovery and GCaMP6f expression to stabilize. All recordings were done in the home cage of the singly housed experimental animal. Mice were allowed to adapt to the tethered patchcord for 2 days prior to experiments and given 5 min to acclimate to the tethered patchcord prior to any recording. A continuous <20 μW blue LED at 465 nm and a UV LED at 405 nm served as excitation light sources, driven by a multichannel hub (Doric Lenses) and modulated at 211 Hz and 330 Hz, respectively. The light was delivered to a filtered minicube (FMC5, Doric Lenses) before connecting through optic fibers to a rotary joint (FRJ 1 × 1, Doric Lenses) to allow for movement. GCaMP6f calcium-dependent (GFP) signals and UV autofluorescence signals were collected through the same fibers back to the dichroic ports of the minicube and into a femtowatt silicon photoreceiver (2151, Newport). The digital signals were then amplified, demodulated, and collected through a lock-in amplifier (RZ5P, Tucker-Davis Technologies). The fiber photometry data were collected using Synapse 2.0 (Tucker-Davis Technologies) and downsampled to 8 Hz. We derived the fluorescence change (ΔF/F0) by calculating (F465 − F405)/F465 (ref. ).
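A minimal sketch of that ΔF/F0 computation is given below. Only the (F465 − F405)/F465 formula and the 8 Hz target come from the text; the raw sampling rate, signal shapes, and the block-averaging used for downsampling are assumptions made for illustration.

```python
# Illustrative fiber-photometry processing: downsample to 8 Hz, then
# deltaF/F0 = (F465 - F405) / F465, as described in the text.
import numpy as np

def downsample(x: np.ndarray, fs_in: float, fs_out: float = 8.0) -> np.ndarray:
    """Block-average x from fs_in to fs_out (simple decimation by averaging)."""
    factor = int(round(fs_in / fs_out))
    n = (len(x) // factor) * factor
    return x[:n].reshape(-1, factor).mean(axis=1)

def dff(f465: np.ndarray, f405: np.ndarray) -> np.ndarray:
    """Fluorescence change deltaF/F0 = (F465 - F405) / F465."""
    return (f465 - f405) / f465

# Hypothetical raw traces sampled at 1 kHz, for illustration only:
fs_raw = 1000.0
t = np.arange(0, 10, 1 / fs_raw)
f465 = 1.0 + 0.05 * np.sin(2 * np.pi * 0.2 * t)   # calcium-dependent channel
f405 = 0.9 + 0.01 * np.random.randn(t.size)       # isosbestic control channel

signal = dff(downsample(f465, fs_raw), downsample(f405, fs_raw))
print(signal.shape)  # 80 samples at 8 Hz for a 10-s trace
```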
Optogenetic in vivo studies

Esr1-Cre female mice (12 weeks of age) were anesthetized with isoflurane and received stereotaxic injections of a Cre-dependent AAV expressing ChR2-YFP (AAV-EF1α-DIO hChR2(H134R)-YFP, 6.2 × 10 12 VP per ml) or eNpHR3.0-EYFP (AAV-EF1α-DIO-eNpHR3.0-EYFP, 3 × 10 12 VP per ml) into the vlVMH (200 nl; 1.60 mm posterior, 0.70 mm lateral, and 5.90 mm ventral to the bregma). Simultaneously, an optic fiber (0.2 mm in diameter with a numerical aperture of 0.22) was implanted to target the mpARH (2.70 mm posterior, 0.25 mm lateral, and 5.45 mm ventral to the bregma) for ChR2-YFP-injected mice or the DRN (4.65 mm posterior, 0.00 mm lateral, and 3.25 mm ventral to the bregma) for eNpHR3.0-EYFP-injected mice. Importantly, the mpARH is very caudal (posterior) to the typical ARH where most POMC and AgRP neurons are located. As shown in Fig. , in the mpARH-containing coronal sections that have WGA-labeled neurons, the third ventricle has narrowed to a small opening, the mammillary recess of the third ventricle. According to the mouse brain atlas, the mpARH is ~1.2 mm posterior to the vlVMH. After a 7-day recovery, mice were fasted for 3 h from 9 a.m. to ensure an empty stomach. At 12 p.m., blue or yellow light stimulation (473 nm or 589 nm, 5 ms per pulse, 40 pulses per 1 s for 1 h) was used to activate the ERα vlVMH →mpARH circuit or inhibit the ERα vlVMH →DRN circuit, as described by others. Briefly, light intensity was applied at 21 mW per mm2 for photostimulation or 10 mW per mm2 for photoinhibition to reach appropriate light power exiting the fiber tip in the brain, corresponding to 8 mW and 4 mW for activation and inhibition, respectively (web.stanford.edu/group/dlab/cgi-bin/graph/chart.php). Blood glucose was measured at three time points: prior to the start of photostimulation/inhibition, at the end of the 1-h photostimulation/inhibition, and 1 h afterward. To validate accurate and sufficient infection of ChR2-YFP or eNpHR3.0-EYFP in ERα vlVMH neurons, all mice were perfused with 10% formalin. Brain sections were cut at 25 μm (5 series) and subjected to histological validation. Only those mice with YFP in the vlVMH and the fiber tract in the mpARH or DRN were included in analyses.

Statistical analyses

For electrophysiology recordings, the investigator was not blinded to the animals’ genotypes but was blinded to the treatments (e.g., virus injections) the animals had received. For the glucose measurements, investigators were blinded to the animals’ genotypes and to the surgeries the animals had received. For the Patch-seq study, the investigator was blinded to the nature of the neurons (GE or GI). The data are presented as mean ± SEM (standard error of the mean). Statistical analyses were performed using GraphPad Prism 7.0 to evaluate normal distribution and variations within and among groups. Methods of statistical analysis were chosen based on the design of each experiment and are indicated in the figure legends or main text. P < 0.05 was considered statistically significant.

Study approval

Care of all animals and all procedures were approved by the Baylor College of Medicine Institutional Animal Care and Use Committee.

Reporting summary

Further information on research design is available in the Reporting Summary linked to this article.
Several transgenic mouse lines, including ERα-ZsGreen, ERα-ZsGreen/Rosa26-TOMATO, and Esr1-Cre were maintained on a C57BL6/J background. Esr1-Cre mice were purchased from Jackson Laboratory (#017911) that express Cre recombinase selectively in ERα-expressing neurons, including those in the vlVMH . In addition, some C57Bl6J mice were purchased from the mouse facility of Baylor College of Medicine. Mice were housed in a temperature-controlled environment at 22–24 ˚C using a 12 h light/12 h dark cycle. The mice were fed standard chow (6.5% fat, #2920, Harlan-Teklad, Madison, WI). Water was provided ad libitum. Further information about resources and reagents can be found in Table . ERα-ZsGreen or ERα-ZsGreen/Rosa26-TOMATO mice were used for electrophysiological recordings. Mice were deeply anesthetized with isoflurane and transcardially perfused with a modified ice-cold sucrose-based cutting solution (pH 7.3) containing 10 mM NaCl, 25 mM NaHCO 3 , 195 mM sucrose, 5 mM glucose, 2.5 mM KCl, 1.25 mM NaH2PO 4 , 2 mM Na-pyruvate, 0.5 mM CaCl 2 , and 7 mM MgCl 2 , bubbled continuously with 95% O 2 and 5% CO 2 (ref. ). The mice were then decapitated, and the entire brain was removed and immediately submerged in the cutting solution. Slices (250 µm) were cut with a Microm HM 650 V vibratome (Thermo Scientific). Three brain slices containing the VMH were obtained for each animal (bregma −2.06 mm to −1.46 mm; interaural 1.74–2.34 mm). The slices were recovered for 1 h at 34 °C and then maintained at room temperature in artificial cerebrospinal fluid (aCSF, pH 7.3) containing 126 mM NaCl, 2.5 mM KCl, 2.4 mM CaCl 2 , 1.2 mM NaH2PO 4 , 1.2 mM MgCl 2 , 5.0 mM glucose, and 21.4 mM NaHCO 3 ) saturated with 95% O 2 and 5% CO 2 before recording. Slices were transferred to a recording chamber and allowed to equilibrate for at least 10 min before recording. The slices were superfused at 34 °C in oxygenated aCSF at a flow rate of 1.8–2 ml/min. ZsGreen and/or TOMATO-labeled neurons in the VMH were visualized using epifluorescence and IR-DIC imaging on an upright microscope (Eclipse FN-1, Nikon) equipped with a movable stage (MP-285, Sutter Instrument). Patch pipettes with resistances of 3–5 MΩ were filled with intracellular solution (pH 7.3) containing 128 mM K-gluconate, 10 mM KCl, 10 mM HEPES, 0.1 mM EGTA, 2 mM MgCl 2 , 0.05 mM Na-GTP, and 0.05 mM Mg-ATP. Recordings were made using a MultiClamp 700B amplifier (Axon Instrument), sampled using Digidata 1440 A and analyzed offline with pClamp 10.3 software (Axon Instruments). Series resistance was monitored during the recording, and the values were generally <10 MΩ and were not compensated. The liquid junction potential was +12.5 mV, and was corrected after the experiment. Data were excluded if the series resistance increased dramatically during the experiment or without overshoot for action potential. Currents were amplified, filtered at 1 kHz, and digitized at 20 kHz. Current clamp was engaged to test neural firing frequency and resting membrane potential at the baseline 5 mM glucose aCSF and 1 mM glucose aCSF. The values for resting membrane potential and firing frequency averaged within 2-min bin at the 5 mM glucose or 1 mM glucose aCSF condition. A neuron was considered depolarized or hyperpolarized if a change in membrane potential was at least 2 mV in amplitude. To measure Ano currents, the pipette solution contained (in mM): CsCl 130, NaH 2 PO 4 1.2, Na 2 HPO 4 4.8, EGTA, MgCl 2 1.0, D-glucose 5.0, and ATP 3.0 (pH adjusted to 7.2). 
Total Ano current was recorded under voltage-clamp by holding the membrane potential at −60 mV in 5 mM glucose or 1 mM glucose aCSF. At intervals, neurons were voltage clamped from −50 mV to +50 mV in steps of 10 mV for 1 s (ref. ). Then the neurons were treated with 100 μM CaCCinh-A01 (an Ano blocker) for 3 min (ref. ). The Ano current was calculated by subtracting the left current in the presence of Ano blocker from total current without the blocker. For the K ATP current, slices were perfused with an external solution that contained 140 mM NaCl, 5 mM KCl, 2 mM CaCl 2 , 1 mM MgCl 2 , 5 mM glucose, and 10 mM HEPES (pH 7.4). The pipette (intracellular) solution contained 130 mM potassium gluconate, 20 mM HEPES, 10 mM EGTA, 1 mM MgCl 2 , 2.5 mM CaCl 2 , 1.0 mM Mg-ATP, and 0.3 mM Tris-GTP (pH 7.2) . The neural membrane potential was hold at −60mV in voltage-clamp model when K ATP current was recorded and when glucose was changed from 5 mM to 1 mM. K ATP current was calculated by subtracting currents recorded in the absence and the presence of a K ATP blocker, tolbutamide (200 µM) . In order to examine gene expression profiling of GI-ERα vlVMH or GE-ERα vlVMH neurons, we first performed electrophysiological recordings to identify GI-ERα vlVMH or GE-ERα vlVMH neurons. Four GI-ERα vlVMH neurons and four GE-ERα vlVMH neurons (from two different mice) were manually collected and suspended in PBS buffer. Cell lysis, first-strand cDNA synthesis and cDNA amplification were performed according to the manufacturer’s instructions of SMART-Seq v4 Ultra Low Input RNA Kit (Clontech). Amplified cDNA was purified by Agencourt AMPure XP Kit (Beckman Coulter). The Nextera Library Prep Kit (Illumina) was used to prepare libraries for sequencing (100 bp paired-end, RNA-seq) on a HiSeq 2500 platform (Illumina). RNA-seq raw data files were trimmed using TrimGalore (version 0.4.1) and aligned against the mouse reference genome assembly (GRCm38.p6) using the STAR aligner (version 2.5.3a) . One sample (“GIneuron3”) was removed from the following analysis due to low sequence read counts (Supplementary Table ). Next, gene expression was assessed using featureCounts (version 1.6.0) . To improve the identification of differentially regulated genes, unwanted variation between samples was removed using RUVseq . Then, DESeq2 was used to determine differential gene expression . Significance of differential expression was assessed by requiring both P < 0.05 and |log 2 (fold change)| > 2. We found a total of 372 differentially expressed genes (Supplementary Data , Supplementary Fig. ). Gene set enrichment analysis was performed using the online tool WebGestalt (version 2019) . Only the gene sets whose size was in a range of 5–2000 were considered and enrichment P -values were corrected for multiple testing by Benjamini–Hochberg procedure, as implemented in the tool. To confirm the expression of Abcc8 and Ano4 in GE-ERα vlVMH and GI-ERα vlVMH neurons, respectively, we manually picked up identified GE or GI ZsGreen-labeled vlVMH neurons from female ERα-ZsGreen mice (at diestrus). To this end, the mouse brain was removed and immediately submerged in ice-cold sucrose-based cutting solution (adjusted to pH 7.3) containing (in mM) 10 NaCl, 25 NaHCO 3 , 195 sucrose, 5 glucose, 2.5 KCl, 1.25 NaH 2 PO 4 , 2 Na-pyruvate, 0.5 CaCl 2 , and 7 MgCl 2 bubbled continuously with 95% O 2 and 5% CO 2 . 
The slices (250 μm) were cut with a Microm HM 650 V vibratome (Thermo Scientific) and recovered for 1 h at 34 °C, and then maintained at room temperature in artificial cerebrospinal fluid (aCSF, pH 7.3) containing 126 mM NaCl, 2.5 mM KCl, 2.4 mM CaCl 2 , 1.2 mM NaH 2 PO 4 , 1.2 mM MgCl 2 , 11.1 mM glucose, and 21.4 mM NaHCO 3 saturated with 95% O 2 and 5% CO 2 before recording. Slices were transferred to a chamber, and ZsGreen-labeled neurons were visualized using epifluorescence and IR-DIC imaging on an upright microscope equipped with a moveable stage (MP-285, Sutter Instrument). These neurons were first subjected to electrophysiological recordings to determine whether they were GE neurons or GI neurons, as described above. Single neurons were then manually picked up by the pipette and subjected to RNA extraction and reverse transcription using the Ambion Single-Cell-to-CT Kit (Ambion, Life Technologies) according to the manufacturer’s instruction. Briefly, 10 μl Single Cell Lysis solutions with DNase I was added to each sample, and the supernatant after centrifuge were used for cDNA synthesis (25 °C for 10 min, 42 °C for 60 min, and 85 °C for 5 min). The cDNA samples were amplified on a CFX384 Real-Time System (Bio-Rad) using SsoADV SYBR Green Supermix (Bio-Rad), and data were collected using Bio-Rad CFX Manager (3.1). Results were normalized against the expression of house-keeping gene (cyclophilin). Primer sequences were listed in Supplementary Table . AAV vectors carrying sgRNAs targeting mouse Ano4 or Abcc8 were designed and constructed by Biocytogen (Wakefield, MA). For Ano4, exon 4 and exon 11 were chosen to be targeted by CRISPR-Cas9. A total of 19 sgRNAs were designed with seven targeting exon 4 and 12 targeting exon 11. These sgRNAs were selected using the CRISPR tool ( https://www.sanger.ac.uk/htgt/wge/ ) with minimal potential off-target effects. All 19 sgRNAs were screened for on-target activity using a Universal CRISPR Activity Assay (UCA TM , Biocytogen) . Briefly, the plasmid carrying Cas9 and sgRNA, and another plasmid carrying the target sequence cloned inside a luciferase gene were co-transfected into HEK293. Stop codon and CRISPR/Cas9 targeting sites were located within the luciferase gene. Stop codon induced the translational termination of the luciferase gene, while sgRNA targeting site cutting induced DNA annealing based on single-strand annealing and the complementary sequence recombination thereby occurred to rescue a complete coding sequence of the luciferase. The luciferase signal was then detected to reflect the DNA editing efficiency of the sgRNA. We used the pCS(puro)-positive plasmid, which expressed a proven positive-sgRNA, as the positive control (Supplementary Fig. ). The sgRNA#2 (GCACTTCGGAGGACACCAGC AGG) and sgRNA#9-A (GTACTTGTACCACACGCCCC AGG ) were selected to target exon 4 and exon 11 of the Ano4 gene, respectively, due to their relatively high on-target activity and low off-target potentials. Similarly for Abcc8, exon 2 and exon 5 were chosen to be targeted by CRISPR-Cas9. A total of 14 sgRNAs were designed with seven targeting exon 2 and 7 targeting exon 5. These sgRNAs were selected using the CRISPR tool ( https://www.sanger.ac.uk/htgt/wge/ ) with minimal potential off-target effects. All 14 sgRNAs were screened for on-target activity using the UCA TM Assay (Supplementary Fig. ). The sgRNA#5 (TGAAGGTAAGGATCCAGCGC AGG) and sgRNA#11 (GCAGCTTCCCGATGGCCCGC AGG ) were used for next step. 
We constructed an AAV-U6-sgRNA-tdTomato vector containing two sgRNAs targeting Ano4 (AAV-Ano4/sgRNAs-FLEX-tdTOMATO) or two sgRNAs targeting Abcc8 gene (AAV-Abcc8/sgRNAs-FLEX-tdTOMATO), respectively (Supplementary Fig. ). Briefly, U6 promoter-sgRNAs, CAG promoter-flex-tdTomato-bGH polyA cassettes were cloned into the Addgene plasmid #61591 vector ( http://www.addgene.org/61591/ ) and further verified by full sequencing. The two viruses were packaged by the Baylor IDDRC Neuroconnectivity Core. To validate AAV-Ano4/sgRNAs-FLEX-tdTOMATO in mice, female Esr1-Cre mice (12 weeks of age) received stereotaxic injections of AAV-FLEX-scCas9 (80 nl, 5.3 × 10 12 GC per ml) and AAV-Ano4/sgRNAs-FLEX-tdTOMATO (160 nl, 1.4 × 10 12 GC per ml) into one side of the vlVMH (knockout side), and received AAV-Ano4/sgRNAs-FLEX-tdTOMATO (160 nl) and the AAV-GFP (80 nl, 5.6 × 10 12 GC per ml, no Cas9) in the other side of the vlVMH virus (control side). After a 4-week recovery, these mice were subjected to electrophysiology recordings for glucose-sensing properties and Ano currents, as described above. Since we did not find any GI neurons from the knockout side, we measured Ano currents in non-GE neurons from the knockout side (which were likely original GI neurons) and compared these currents to Ano currents in identified GI neurons from the control side. Similarly, to validate AAV-Abcc8/sgRNAs-FLEX-tdTOMATO in mice, female Esr1-Cre mice received stereotaxic injections of AAV-FLEX-scCas9 (80 nl) and AAV-Abcc8/sgRNAs-FLEX-tdTOMATO (160 nl, 1.3 × 10 12 GC per ml) into one side of the vlVMH (knockout side), and received AAV-Abcc8/sgRNAs-FLEX-tdTOMATO (160 nl) and the AAV-GFP (80 nl, no Cas9) in the other side of the vlVMH virus (control side). After a 4-week recovery, these mice were subjected to electrophysiology recordings for glucose-sensing properties and K ATP currents, as described above. Since we did not find any GE neurons from the knockout side, we measured K ATP currents in non-GI neurons from the knockout side (which were likely original GE neurons) and compared these currents to K ATP currents in identified GE neurons from the control side. To determine the functions of Ano4 and Abcc8 in ERα vlVMH neurons in vivo, female wild type and Esr1-Cre littermates (12 weeks of age) received stereotaxic injections of AAV-FLEX-scCas9 and AAV-Ano4/sgRNAs-FLEX-tdTOMATO into both sides of the vlVMH (1.60 mm posterior, 0.70 mm lateral, and 5.90 mm ventral to the bregma, based on Franklin and Paxinos Mouse Brain Atlas). During the same surgeries, an indwelling icv guide cannula (#62003, Plastics One) was stereotaxically inserted to target the lateral ventricle (0.34 mm posterior, 1.00 mm lateral, and 2.30 mm ventral to the bregma). One week after surgery, the cannulation accuracy was validated by central administration of 10 ng angiotensin II (A9525, Sigma), which induced the increase of drinking and grooming behavior. Mice were subjected to weekly handling to adapt to stress associated with icv injections. Four weeks after the surgeries, mice were fasted for 3 h from 9 a.m. in the morning. At 12 p.m., mice received icv injections of saline or 2-DG (1 mg in 2 μl saline). Blood glucose was then measured at 0, 30, 60, and 120 min after injections. 
In order to map the downstream neurons of ERα vlVMH neurons, 12-week-old Esr1-Cre female mice were anesthetized by isoflurane and received stereotaxic injections of Ad-FLEX-WGA-EGFP into the vlVMH (200 nl, 6.1 × 10 12 VP per ml; 1.60 mm posterior, 0.70 mm lateral, and 5.90 mm ventral to the bregma, based on Franklin and Paxinos Mouse Brain Atlas). Because Ad-iN/WED is Cre-dependent virus, WGA-GFP was exclusively expressed in ERα vlVMH neurons and anterogradely traveled along the fibers, passed the synapse, and filled the downstream neurons that were innervated by ERα vlVMH terminals. Four weeks after injections, mice were perfused with 10% formalin, and brain sections were cut at 25 μm (5 series). The sections were incubated at room temperature in primary goat anti-WGA antibody (1:1000, #AS-2024, VectorLabs) overnight, followed by biotinylated donkey anti-goat secondary antibody (1:1000; #705-065-003, Jackson ImmunoResearch) for 2 h. Sections were then incubated in the avidin–biotin complex (1:500, ABC; Vector Elite Kit), and incubated in 0.04% 3, 3′-diaminobenzidine and 0.01% hydrogen peroxide. After dehydration through graded ethanol, the slides were then immersed in xylene and coverslipped. Images were analyzed using a brightfield Leica microscope equipped with the Leica MM AF Acquisition and Analysis (#11640901). We performed the WGA-ChR2-assisted circuit mapping – in order to demonstrate the connectivity of ERα vlVMH neurons and their downstream target neurons in various brain regions. Briefly, 12-week-old Esr1-Cre female mice were anesthetized by isoflurane and received stereotaxic injections of Ad-iN/WED (200 nl, 6.1 × 10 12 VP per ml) and AAV-EF1α-DIO hChR2(H134R)-EYFP (200 nl, 6.2 × 10 12 VP per ml) into the vlVMH. After a 4-week recovery, mice were sacrificed and brain slices (containing both the vlVMH and one of its target regions, e.g., LH, mpARH, DRN, or MPB) were prepared from these mice to perform electrophysiological recordings in WGA-GFP-labeled neurons in the target region. Neurons were patched using electrodes with tip resistances at 3.0–5.0 MΩ. Recording pipettes were routinely filled with a solution containing the following (in mM): 125 K-gluconate, 15 KCl, 10 HEPES, 8 NaCl, 4 Mg-ATP, 0.3 Na-GTP, 10 Na 2 -phosphocreatine, 2 EGTA, pH 7.30. The holding potential for voltage-clamp recordings was −60 mV, and responses were digitized at 10 kHz. All experiments were performed in the presence of GABA A receptor antagonist bicuculline (50 μM). EPSCs were evoked by a 473 nm laser (C.N.I) to stimulate ChR2-expressing fibers every 20 s. D-AP5 (30 μM; an NMDA receptor antagonist) and CNQX (30 μM; an AMPA receptor antagonist) were added to confirm whether the light-evoked currents were glutamatergic synaptic currents. TTX (1 μm) and 4-AP (400 μM) were added to the aCSF in order to determine whether the response was monosynaptic. In order to identify glucose-sensing neurons that send projections from the vlVMH to downstream brain regions, 12-week-old ERα-ZsGreen/Rosa26-TOMATO female mice were anesthetized by isoflurane and received unilateral stereotaxic injections of CAV2-Cre (200 nl, 3.7 × 10 12 VP per ml) into one of the following sites: LH (1.06 mm posterior, 1.20 mm lateral, and 5.00 mm ventral to the bregma), mpARH (2.70 mm posterior, 0.25 mm lateral, and 5.60 mm ventral to the bregma), MPB (5.20 mm posterior, 1.25 mm lateral, and 3.80 mm ventral to the bregma), or DRN (4.65 mm posterior, 0 mm lateral, and 3.60 mm ventral to the bregma). 
CAV2 virus retrogradely traveled from the initial site to the brain region that project to the LH, mpARH, MPB, or DRN, and Cre recombinase induced TOMATO expression in these infected cells. Two weeks later, unfixed brain slices containing the VMH (150 µm in thickness) were prepared from these mice, and were subjected to fluorescent microscopy to visualize and quantify ZsGreen, TOMATO and ZsGreen/TOMATO neurons in the vlVMH. About 500–600 neurons were counted from each mouse and two mice were included for each injection site (LH, mpARH, DRN, or MPB). In parallel, some of CAV2-Cre-injected mice were used for electrophysiology. Briefly, whole-cell patch-clamp recordings were performed on identified dual fluorescent neurons (ZsGreen and TOMATO) in the brain slices containing the VMH. Current clamp was engaged to test neural firing frequency and resting membrane potential at the 5 mM glucose aCSF and 1 mM glucose aCSF, as described above, in order to identify them as GE or GI neurons. The composition of GE and GI neurons for each projecting site was calculated. For the fiber photometry experiments, Esr1-Cre female mice (12 weeks of age) were anesthetized by isoflurane and received stereotaxic injections of HSV-hEF1α-LS1L-GCaMP6f (200 nl per site, 3 × 10 9 VP per ml) into the DRN or into the mpARH. During the same surgery, an optical fiber (fiber: core = 400 μm; 0.48 NA; M3 thread titanium receptacle; Doric Lenses) was implanted over the vlVMH (1.60 mm posterior, 0.70 mm lateral, and 5.70 mm ventral to the bregma, based on Franklin and Paxinos Mouse Brain Atlas). Fibers were fixed to the skull using dental acrylic and mice were allowed 3 weeks for recovery before acclimatization investigator handling for 1 week before experiments. The fiber photometry recordings started 4–6 weeks after surgeries to allow for adequate recovery and GCaMP6f expression to stabilize. All recordings were done in the home cage of the singly housed experimental animal. Mice were allowed to adapt to the tethered patchcord for 2 days prior to experiments and given 5 min to acclimate to the tethered patchcord prior to any recording. Continuous <20 μW blue LED at 465 nm and UV LED at 405 nm served as excitation light sources, driven by a multichannel hub (Doric Lenses), modulated at 211 Hz and 330 Hz, respectively. The light was delivered to a filtered minicube (FMC5, Doric Lenses) before connecting through optic fibers to a rotary joint (FRJ 1 × 1, Doric Lenses) to allow for movement. GCaMP6f calcium GFP signals and UV autofluorescent signals were collected through the same fibers back to the dichroic ports of the minicube into a femtowatt silicon photoreceiver (2151, Newport). The digital signals were then amplified, demodulated, and collected through a lock-in amplifier (RZ5P, Tucker-Davis Technologies) . The fiber photometry data was collected using Synapse 2.0 (Tucker-Davis Technologies) and down sampled to 8 Hz. We derived the values of fluorescence change (Δ F / F 0) by calculating ( F 465 − F 405 )/ F 465 (ref. ). Esr1-Cre female mice (12 weeks of age) were anesthetized with isoflurane and received stereotaxic injections of Cre-dependent AAV expressing ChR2-YFP (AAV-EF1α-DIO hChR2(H134R)-YFP, 6.2 × 10 12 VP per ml) or eNpHR3.0-EYFP (AAV-EF1α-DIO-eNpHR3.0-EYFP, 3 × 10 12 VP per ml) into the vlVMH (200 nl; 1.60 mm posterior, 0.70 mm lateral, and 5.90 mm ventral to the bregma). 
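Returning to the fiber photometry analysis described above, the fluorescence change is defined as ΔF/F0 = (F465 − F405)/F465 after down-sampling to 8 Hz. The sketch below illustrates that arithmetic on synthetic traces; the block-averaging down-sampling step, the raw sampling rate, and the array names are assumptions for illustration only (the actual pipeline used Synapse 2.0 and the cited reference's procedure).

```python
import numpy as np

def downsample_mean(signal: np.ndarray, factor: int) -> np.ndarray:
    """Down-sample by averaging non-overlapping blocks of `factor` samples."""
    n = (len(signal) // factor) * factor
    return signal[:n].reshape(-1, factor).mean(axis=1)

def delta_f_over_f(f465: np.ndarray, f405: np.ndarray) -> np.ndarray:
    """Fluorescence change as defined in the text: (F465 - F405) / F465.

    F465 carries the calcium-dependent GCaMP6f signal; F405 is the
    calcium-independent control channel recorded through the same fiber.
    """
    return (f465 - f405) / f465

# Synthetic traces at an assumed raw rate, down-sampled to ~8 Hz as in the text.
fs_raw, fs_target = 1017, 8
factor = fs_raw // fs_target
t = np.arange(0, 60 * fs_raw) / fs_raw
f465 = 1.0 + 0.05 * np.sin(2 * np.pi * 0.1 * t) + 0.01 * np.random.randn(t.size)
f405 = 1.0 + 0.01 * np.random.randn(t.size)
dff = delta_f_over_f(downsample_mean(f465, factor), downsample_mean(f405, factor))
print(dff.shape, dff.mean())
```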
Simultaneously, an optic fiber (0.2 mm in diameter with a numerical aperture of 0.22) was implanted to target mpARH (2.70 mm posterior, 0.25 mm lateral, and 5.45 mm ventral to the bregma) for ChR2-YFP-injected mice or the DRN (4.65 mm posterior, 0.00 mm lateral, and 3.25 mm ventral to the bregma) for eNpHR3.0-EYFP-injected mice. Importantly, the mpARH is very caudal (posterior) to the typical ARH where most of POMC and AgRP neurons are located. As shown in Fig. , in the mpARH-containing coronal section that have WGA-labeled neurons, the third ventricle already becomes a small hole, namely mammillary recess of the third ventricle. According the mouse brain atlas, the mpARH is ~1.2 mm posterior to the vlVMH. After a 7-day recovery, mice were fasted for 3 h from 9 a.m. in the morning to ensure empty stomach. At 12 p.m., blue or yellow light stimulation (473 nm or 589 nm, 5 ms per pulse, 40 pulses per 1 s for 1 h) was used to activate ERα vlVMH →mpARH or inhibit ERα vlVMH →DRN neural circuits as described by others . Briefly, light intensity was applied at 21 mW per mm2 for photostimulation or 10 mW per mm 2 for photoinhibition to reach appropriate light power exiting the fiber tip in the brain corresponding to 8 mW and 4 mW for activation and inhibition, respectively (web.stanford.eduper group/dlab/cgi-bin/graph/chart.php). Blood glucose was measured at three time points: prior to the start of photostimulation/inhibition, at the end of 1-h photostimulation/inhibition, and 1 h afterward. To validate accurate and sufficient infection of ChR2-YFP or eNpHR3.0-EYFP in ERα vlVMH neurons, all mice were perfused with 10% formalin. Brain sections were cut at 25 μm (5 series) and subjected to histological validation. Only those mice with YFP in the vlVMH, and the fiber tract in the mpARH or DRN were included in analyses. For electrophysiology recordings, the investigator was not blinded for animals’ genotypes, but he was blinded with the treatments (e.g., virus injections) the animals were subjected to. For the measurement of glucose, investigators were blinded for animal’s genotypes or the surgeries the animals were subjected to. For Patch-seq study, the investigator was blinded with the nature of neurons (GE or GI). The data are presented as mean ± SEM (standard error of the mean). Statistical analyses were performed using GraphPad Prism 7.0 to evaluate normal distribution and variations within and among groups. Methods of statistical analyses were chosen based on the design of each experiment and are indicated in figure legends or main text. P < 0.05 was considered to be statistically significant. Care of all animals and procedures were approved by the Baylor College of Medicine Institutional Animal Care and Use Committee. Further information on research design is available in the linked to this article. Supplementary Information Description of Additional Supplementary Items Supplementary Data 1 Supplementary Data 2 Supplementary Movie 1 Reporting summary
SARS-CoV-2 Vaccines and Adverse Effects in Gynecology and Obstetrics: The First Italian Retrospective Study
2278f970-1c73-4b0f-b69a-e55e7c159946
9603573
Gynaecology[mh]
The Italian Medicines Agency (AIFA) published data relating to the surveillance of COVID-19 vaccines on the general population from 27 December 2020 to 26 April 2021, from which it emerged that 309 reports have been entered for every 100,000 doses administered, regardless of vaccine type and dose . The vaccines administered to date are: the Pfizer/BioNTech mRNA vaccine called Comirnaty (43%), the Moderna mRNA vaccine called COVID-19 Modern vaccine (32%), and the AstraZeneca recombinant viral vector vaccine, now called Vaxzevria (25%) . Reports mainly concern the Pfizer/BioNTech Comirnaty vaccine (75%), which was the most used (70.9% of the doses administered), and only in a small part the Vaxzevria vaccine (ex COVID-19 Vaccine AstraZeneca; 22%) and the Moderna vaccine (3%) . Furthermore, adverse effects following the vaccine appear numerically higher in a manner directly proportional to the number of doses . For these vaccines administered, adverse events that have been notified and reported by AIFA and the Medicines and Healthcare Products Regulatory Agency (MHRA) are fever, fatigue, headache, muscle/joint pain, injection site pain, chills, and nausea . Investigating all the events that appear after a vaccination serves to collect as much information as possible and to increase the possibility of identifying the suspicious events . It is not always easy to assess whether there is a causal link with vaccination . They could be a symptom of another disease, or they could be associated with another product taken by the person who got vaccinated. Please note that adverse event reports from AIFA represent a snapshot of the reports present in the National Pharmacovigilance Network at the time of data extraction and may change over time . Focusing on the gynecological field, in our clinical practice, shortly after vaccination, transitory period alteration and anomalies vaginal bleeding was reported by a growing number of patients after vaccination . Similarly, from 2 September 2021, these types of reactions have been reported in the literature through all COVID-19 vaccines at MHRA. MHRA had previously analyzed the reports of post-vaccine menstrual disorders, concluding that there was no causal link between vaccines and menstrual cycle alterations. In consideration of further reports on menstrual disorders, the MHRA has decided to investigate the incidence of such cases and to carry out a new analysis of all available data, currently underway . Moreover, these menstrual cycle changes after vaccination seem to return to normal the following cycle. The mechanism for such adverse reactions has not been sufficiently explored. In reality, trials are underway with case control groups (vaccinated and not vaccinated), for which US National Institutes of Health is investing a lot of resources . Pending the results of these ongoing studies, we carried out a survey in the vaccinated population of our outpatient services to evaluate retrospectively, over a year, if the vaccinated patients had reported adverse reactions. The scientific study is configured as a unicentric and non-profit retrospective investigation. From April 2021 to April 2022 we recruited 100 women, with a middle age of 33 years (range 18–45), who had completed the SARS-CoV-2 vaccination course for 18 months, and who had access to the Gynecology and Obstetrics Unit of the “San Paolo” Hospital for menstrual irregularities and with normal cycle lengths for three consecutive cycles before the first vaccine dose. 
The average BMI of these women was 27.2. We excluded 259 patients (72%) since they were in menopause, were minors, immune-depressed, pregnant, affected by oncological diseases or by previously recognized gynecological pathologies (fibromatosis or endometrial and ovarian anomalies, polycystic ovary syndrome), in therapy with corticosteroids or contraceptives, or had ongoing vaccines for HPV or other vaccine prophylaxis. Through our data archive, we contacted these patients and asked them to complete a questionnaire consisting of 12 multiple-choice questions . Women who agreed to participate to the study signed an informed consent form. All procedures performed in this study were in accordance with the Declaration of Helsinki, as revised in 2013. The clinical data of the patients, together with the results obtained from the questionnaires, were anonymized, reported in a specific database, and subsequently analyzed. Results are presented in percentages. The data analysis was exploratory, and aimed at describing the information collected in a concise form. The characteristics of the respondents and the responses were summarized by descriptive statistics. We recruited 100 patients aged 18–45 years (average of 33 years), related to outpatient services of the UOC of Gynecology and Obstetrics for menstrual irregularities; they reported having had vaccination for SARS-CoV-2 (with verification of green-pass) and we invited them to complete a questionnaire consisting of 12 questions about the type of menstrual changes they had had and the history of this from the date of the vaccine. From the analysis of these questionnaires, it appears that our patients had received the Pfizer/BioNTech mRNA vaccine (Comirnaty) in 43% of cases, the Moderna mRNA vaccine (COVID-19 Modern Vaccine) in 32% of cases, and the Astrazeneca recombinant viral vector vaccine (Vaxzevria) in 25% of the remaining cases ( and ). In addition, 37% of them received three doses and 63% received two doses of vaccines. Of all 100 women we recruited (ADV), the average onset of menstrual irregularity was 13 days (1–18) from inoculation of the vaccine. Of these, 90% were after the second dose, and 10% were after the third dose for an average duration of 45.5 days; 15% of the total of these women reported irregularities already after the first dose of vaccine, which reappeared after the second/third dose. In our patient group, 23% had menstrual delay and 77% had abnormal uterine bleeding (AUB), of which 47% had metrorrhagia, 30% had menometrorrhagia, and 23% had menorrhagia ( and ). In cases of AUB, only 28% (27/77) needed therapy with tranexamic acid. In addition, 32% (10/32) of patients contacted the gynecologist/family doctor by short routes without performing a clinical-instrumental assessment, 58% (58/100) had an outpatient specialist examination, and 10% (10/100) had an access to the emergency room ( and ). Among patients undergoing medical consultations, 38% (12/32) of women reported written documentation. Of these: 57% (7/12) had no ultrasound changes, 30% (3/12) had a fluid effusion layer in Douglas (9 mm average), and 13% (2/12) reported a hemorrhagic corpus luteum. Clinically, 98% (11/12) had abnormal uterine bleeding at the time of the examination. 
Additionally, 98% (11/12) of the patients who reported reactions underwent a blood chemistry evaluation with CBC and coagulation testing, with reported mean values of D-dimer = 650 ng/mL (normal value: less than 500), PT = 64%, PTT = 31″, fibrinogen = 331 ng/mL, Hb = 11.9 g/dL, hematocrit = 37%, PLT = 263 × 1000/µL, and WBC = 17.000,06 × 1000/mL. No patient required iron therapy. In addition, as for the reproductive outcomes after the last dose of vaccine: 25% (25/100) reported being pregnant, 5% (5/100) reported a threatened preterm birth, 10% (10/100) had an abortion, 13% (13/100) had a voluntary termination of pregnancy (IVG), 10% (10/100) tried to conceive without success, and 37% (37/100) did not try to conceive . Finally, it should also be noted that the total number of accesses for menstrual irregularities between April 2021 and April 2022 was 359, compared with 273 in the period April 2018–April 2019 in the same hospital for the same reasons (+86 patients, 31.5%). Several questions have arisen about the impact of SARS-CoV-2 vaccination and SARS-CoV-2 infection on future fertility . Regarding male fertility in particular, the absence of SARS-CoV-2 in the semen and prostatic secretions of infected patients has been reported in the literature ; the unlikelihood of sexual transmission through semen at about 1 month after first detection, in patients with a recent infection or those recovering from COVID-19, has also been reported , and SARS-CoV-2 RNA was not detected in semen either shortly after infection or at a later time . There are, however, indirect viral signs, such as testicular injury and inflammatory infiltration, viral orchitis, scrotal discomfort, and altered semen parameters (such as the number of spermatozoa with DNA fragmentation). SARS-CoV-2 may lead to infertility through its main receptor, ACE2, which is widely distributed in the testis, including the Leydig and Sertoli cells . Further studies must investigate these aspects and the impact of “long Covid” on male reproduction . On the other hand, even after considering different types of vaccines (mRNA or viral vector), it has been reported that COVID-19 vaccination does not damage the sperm quality or fertilization capacity of men (particularly those undergoing ART treatments) and should be considered safe for men’s reproductive health . Regarding females, SARS-CoV-2 may invade target ovarian cells by binding to ACE2, altering female fertility . ACE2 is widely expressed in the ovaries, uterus, vagina, and placenta, where it regulates angiotensin II (Ang II) levels to exert physiological functions such as follicular development and ovulation, corpus luteum angiogenesis and degeneration, and endometrial tissue growth . Therefore, ovarian reserve function should be evaluated in order to analyze the impact of COVID-19 on female fertility . On the other hand, a low incidence of severe morbidity among pregnancies affected by COVID-19 was described until recent publications reported severe morbidity and mortality among pregnant women affected by the emerging variants of the SARS-CoV-2 virus . Consequently, the Centers for Disease Control and Prevention added pregnancy to the list of high-risk conditions prioritized for vaccination, and the American College of Obstetricians and Gynecologists recommends vaccination at any stage of pregnancy . Concerning women eager for offspring, COVID-19 vaccination does not seem to impact fertility, since in clinical trials adverse pregnancy outcomes occurred at similar rates in vaccinated and unvaccinated groups .
Moreover, in assisted reproduction clinics, fertility measures and pregnancy rates are similar in vaccinated and unvaccinated patients . There have been many reports of people with menstrual disturbances following COVID-19 vaccination, including alterations in the frequency, duration, regularity, and volume of menstruation . Similarly, in our clinical practice we wondered whether we could evaluate the presence of a connection between changes to menstrual periods and COVID-19 vaccines in our population. For this reason, we decided to use a simple questionnaire, which, despite the limits of a retrospective assessment, can represent an attestation of the adverse event and a starting point for further analysis. From the results of the questionnaire, it emerged that symptoms such as delayed menstruation and abnormal uterine bleeding (metrorrhagia, menometrorrhagia, and menorrhagia) were generally reported within the first three weeks after vaccination, especially after the second dose, with percentages of 23% and 77%, respectively. However, only a limited share of these patients had to resort to emergency room access, and in none of these cases was an objective pathology documented. It can therefore be inferred that the disturbance triggered by the vaccine was minor and without sequelae. A minority of clinicians (10%) considered it appropriate to treat these alterations and to request blood tests. Among these, the interesting result is the finding of D-dimer values above the threshold, which, although similar to those present in the general population following the vaccine, would in our opinion deserve further study. In addition, in the presence of blood loss, only a limited percentage of patients received therapy with tranexamic acid, thus strengthening the hypothesis of an occasional and transient phenomenon. Indeed, in the limited works reported in the literature, it has been described that menstrual alterations after COVID-19 vaccination appear to resolve spontaneously and rapidly, which is generally reassuring for subsequent fertility . These studies recruited a non-uniform sample that includes both individuals using hormonal contraception and individuals with natural menstrual cycles . Moreover, it is not simple to know whether these disturbances are a direct effect of the vaccine itself, or what mechanisms cause these effects, as this can change from person to person . Indeed, changes in menstruation may be due to stress, since the female reproductive system is designed to momentarily down-regulate to prevent pregnancy and preserve energy . This mechanism could explain some of the menstrual irregularity detected during the COVID-19 pandemic or after vaccination . On the other hand, COVID-19 vaccination triggers an immune response, and the subsequent inflammation may transitorily disturb ovarian hormone production over one or two cycles, with consequent anomalous menstrual bleeding . Concerning this hypothesis, a recent study assessed ovarian involvement in the immune reaction to COVID-19 vaccination . This research revealed the presence of anti-SARS-CoV-2 IgG in the serum and follicular fluid of recently vaccinated patients versus non-vaccinated, non-infected women who were candidates for in vitro fertilization . This research showed that follicular steroidogenesis had similar and normal rates of estrogen and progesterone production among groups . Moreover, the evaluation of the follicular response to the LH/hCG trigger showed a normal and similar response in the different groups .
Therefore, despite the evidence of close follicular immune exposure after SARS-CoV-2 infection or following the BNT162b2 mRNA vaccine, the maturation of the oocyte and its hormonal milieu did not show any measurable modification compared with non-exposed patients . Our study is a pilot experience that has the limitation of being retrospective but can represent a milestone for further study. In particular, the period of observation and the size of the sample should be extended with a multicenter trial in order to provide a greater number of reports and evidence and to implement the acquisitions of the AIFA. In fact, for lack of an obvious causal link between the disorder and the COVID-19 vaccine, to date the number of official reports is low relative to both the number of people vaccinated and the general prevalence of menstrual disorders . Only the implementation of this research could lead to a real understanding of the mechanism of a hypothesizable association between COVID-19 vaccines and menstrual changes. However, it is important to underline, particularly for women eager for offspring, that according to today's knowledge these effects on menstrual symptoms do not raise concerns, since they are transitory, resolve spontaneously, and are much less severe than those associated with COVID-19 infection. Therefore, patients who are called for the vaccine should not be discouraged from obtaining it. Limitations of the Study The work reports the analysis of data collected retrospectively, so it can lend itself to bias due to retrospective collection. It was not possible to consider a control group because in Italy, during the period under review, unvaccinated women could not access the clinic. However, a comparison can be made with the general population. We saw incidences of menstrual irregularities in women of childbearing age (in the absence of organic gynecological pathology, for which an unknown or dysfunctional cause is attributable), in whom menstrual irregularities occur in an estimated 14% to 25% of women of childbearing age . We had to deal with a large number of patients with fibroids and climacteric disorders, which are among the main conditions prompting access to the clinic for menstrual irregularities. Patients were not grouped by age and BMI, because the variability of these factors in the indicated period was almost comparable to that of patients visited in the same hospitals before the pandemic.
Our research is the first Italian pilot study that investigates and identifies some changes in the menstrual cycle after vaccination with COVID-19 vaccines. Although preliminary, our data, while describing a purely transitory effect of the vaccines examined, represent, in our opinion, an important source of information in order to collect and implement the reports to be submitted to AIFA, both to monitor this phenomenon and for the drafting of new guidelines dedicated to it. Furthermore, from what emerged, the data could have a greater impact at a national and international level. For this reason, our group is already designing more robust research on larger case studies in order to further validate the data that have emerged so far. The future project for this study is to expand it and make it multicentric.
Ophthalmology practice in COVID-19 pandemic: Performance of rapid antigen test versus real time-reverse transcription polymerase chain reaction in a tertiary eye care institute in South India
adcbc476-1f3a-4f7c-9e91-64ac86654c0e
9333034
Ophthalmology[mh]
This is a retrospective descriptive hospital data collection study conducted in a tertiary eye care hospital in South India from October 2020 to April 2021. The institutional ethics committee’s approval was obtained for the study, and it adhered to the tenets of the declaration of Helsinki. Inclusion criteria included patients undergoing emergency ocular surgeries, elective ocular surgeries, and admission for ocular medical emergencies. Any symptomatic patient for COVID-19 during the initial screening was excluded from the study. Sociodemographic data such as name, age, and gender; systemic comorbidities; ocular diagnosis; and surgery/treatment details were recorded for each patient. As a part of preoperative/pre-admission workup, all patients had undergone nasopharyngeal and throat swabs. Rapid antigen test (RAT) was performed using nasopharyngeal swabs, while RT-PCR for SARS-CoV-2 viral RNA was performed using both nasopharyngeal and throat swabs. RAT was performed as a point-of-care (PoC) test in the Ophthalmology OPD or ward by the ophthalmology residents. All tests were performed under strictly controlled isolated conditions, with the residents wearing all recommended personal protective equipment. The microbiologist initially trained ophthalmology residents to collect swabs and perform the PoC-RAT. The ophthalmology residents initially analyzed and reported the test results. The images of the test cards were shared with the microbiologist. The diagnosis was then further confirmed and authorized by the microbiologist. Nasal swab and nasopharyngeal swabs for RT-PCR were also collected simultaneously and transported to the microbiology laboratory in a viral transport medium (VTM) tube without any delay while maintaining the cold chain conditions. PoC-RAT was performed using Accucare™ COVID-19 Antigen Card Test (Mylab Discovery solutions), which was approved by the Indian Council of Medical Research (ICMR) in July 2020. All tests were performed and interpretations were made according to the manufacturer’s recommendations. The sensitivity and specificity stated by the manufacturer were 84% and 100%, respectively, for this test kit. For real-time RT-PCR of SARS-CoV-2 viral RNA, samples were extracted using the magnetic bead extraction method (Thermo scientific viral isolation kit—5XMagMAX, Thermo Fischer scientific Baltics UAB, Vilnius, Lithuania) in Kingfisher platform by Thermofisher as per the manufacturer’s instructions. All recruited patients were screened at the OPD and emergency ward entry points for symptoms of COVID-19 (fever, fatigue, myalgia, sore throat, cold, cough, difficulty in breathing), travel history (international travel or travel to other cities in the last 2 weeks), contact history with suspected or confirmed COVID-19 case in previous 2 weeks, etc. as per AIOS operational guidelines and preferred practice patterns (PPPs). All patients also underwent a mandatory thermal screening. All emergency procedures were performed based on the RAT results, and RT-PCR reports were obtained subsequently. On the contrary, all elective procedures were performed only after obtaining the RT-PCR reports. RAT reports in such cases were used for initial screening for COVID-19 in asymptomatic patients. All surgeries were performed as daycare unless the patient required admission for associated medical conditions as per AIOS guidelines. Surgeries were performed by the most experienced surgeons of the OT team to ensure the quickest surgeries. 
RAT tests for elective procedures were done a day prior during preoperative evaluation. Non-contact tests were performed for all patients for preoperative evaluation, including IOP evaluation by non-contact tonometer, fundus evaluation by indirect ophthalmoscopy, and ocular biometry with IOL master as per AIOS guidelines. The sample for RT-PCR was sent to the laboratory simultaneously, expecting the report to be available on the next day before the scheduled surgery. Any RAT positive report was considered confirmatory for COVID-19 as per ICMR advisory on strategy for COVID-19 Testing in India. Appropriate PPE was used during the procedures, and all infection prevention and control measures were followed as per MoHFW guidelines. Any patient who had tested positive for COVID-19 via RAT or RT-PCR was advised home isolation or admission in COVID-19 hospital based on MoHFW guidelines. The risks and benefits of emergency surgery carried out in a COVID-19-positive patient were discussed, and appropriate consent was taken. Under these circumstances, surgery was offered with strict isolation of the operating facilities and full personal protective equipment (PPE) for the involved staff. The data collected were entered and analyzed using the IBM-SPSS program (SPSS version 20.0); SPSS Inc., Chicago, IL. Continuous variables were expressed as mean ± SD, while categorical variables were expressed as frequency and percentages. We analyzed the details of 629 patients tested for SARS-CoV-2 by using both RAT and RT-PCR during the study period. The mean age of the patients was 52.37 + 18.11 (range: 0–83) years. Gender distribution was: males-384 (62%) and females-245 (38%). Six hundred (95.4%) patients underwent either elective ocular surgical procedures or admission from the OPD for medical management. The remaining 29 patients were admitted from the emergency. Seventeen cases required surgical management, while 12 needed medical management . Out of the 629 cases, one patient tested positive for SARS-CoV-2 with both RAT and RT-PCR, while two patients tested positive with only RT-PCR after an initial negative RAT. Percent agreement or proportion of agreement observed between the two tests was 99.68% (627/629), while Cohen’s kappa coefficient value was 0.49. The positivity rate for RT-PCR was 0.47% (3/629). The positivity rate for RAT was 0.15% (1/629). Sensitivity of RAT compared to RT-PCR in our study was 33.33%, specificity was 100%, positive predictive value was 100%, and negative predictive value was 99.68%. Clinical details of three positive cases are as below: Case 1: Planned for cataract surgery; asymptomatic for COVID-19; RAT negative but RT-PCR positive. The patient was advised of home isolation. Case 2: Planned for e mergency therapeutic keratoplasty; asymptomatic for COVID-19; RAT negative but RT-PCR positive. The patient was advised of home isolation. Case 3: Case of open globe injury planned for evisceration; asymptomatic for COVID-19; RAT positive, further confirmed by RT-PCR. The patient was admitted in the COVID-19 ward and subsequently underwent emergency surgery maintaining all protocols. For the emergency surgical procedures done in 17 cases, the mean time gap between the initial assessment at the emergency ward to the initiation of surgery using RAT results was 5.06 + 3.1 h. This included testing time, preoperative planning and evaluation, and preparation of the operation theatre (OT). Test results were available within 30 min of performing the PoC-RAT in all cases. 
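The agreement statistics reported above follow directly from the 2 × 2 table implied by the counts (1 true positive, 2 false negatives, 0 false positives, and 626 true negatives out of 629 patients). The sketch below recomputes percent agreement, sensitivity, specificity, predictive values, and Cohen's kappa from those counts; it is an illustrative recalculation, not the study's original analysis output.

```python
# Counts taken from the results: RAT vs RT-PCR in 629 asymptomatic patients.
tp, fn, fp, tn = 1, 2, 0, 626
n = tp + fn + fp + tn  # 629

sensitivity = tp / (tp + fn)        # 1/3
specificity = tn / (tn + fp)        # 626/626
ppv = tp / (tp + fp)                # 1/1
npv = tn / (tn + fn)                # 626/628
percent_agreement = (tp + tn) / n   # 627/629

# Cohen's kappa: chance-corrected agreement between the two tests.
p_observed = percent_agreement
p_rat_pos, p_pcr_pos = (tp + fp) / n, (tp + fn) / n
p_expected = p_rat_pos * p_pcr_pos + (1 - p_rat_pos) * (1 - p_pcr_pos)
kappa = (p_observed - p_expected) / (1 - p_expected)

print(f"sensitivity={sensitivity:.2%}, specificity={specificity:.2%}")
print(f"PPV={ppv:.2%}, NPV={npv:.2%}, agreement={percent_agreement:.2%}, kappa={kappa:.2f}")
```

Computed this way, kappa comes out at about 0.50, close to the reported 0.49; the small difference is most likely a matter of rounding or software conventions.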
However, the mean time needed for obtaining the RT-PCR results for these cases was 12.88 + 1.45 h. Therefore, PoC-RAT provided the advantage of earlier preoperative planning and preparation in an emergency situation. Following the nationwide lockdown during the first and the second wave of the COVID-19 pandemic, all the eye care hospitals started services in a staggered manner. To minimize the spread of COVID-19 infection, all organizations had implemented protocols in the management of routine OPD services and diagnostic and surgical services. With the relaxation of the COVID-19 restrictions and reducing COVID-19 infection rates across the country, all the organizations had ramped up the routine health care services, including elective surgical procedures. However, due to the reluctance of resuming elective procedures during the first and the second waves of the COVID-19 pandemic, the backlog for elective ophthalmic surgical procedures has raised to a paramount level across the country. OPD attendance has, therefore, increased proportionately with the simultaneous increase in the risk of contracting the infection from asymptomatic patients. Ophthalmic surgeons have an added risk of acquiring nosocomial COVID-19 infection due to the proximity of the surgical field to the patient’s nose and face. The respiratory droplets from the cough or sneeze can spread up to 6 m by exhaled air, which puts all the ophthalmic surgeons at an increased risk. Recent studies have also shown that tears and conjunctival secretions contain the SARS-CoV-2 virus in systemically asymptomatic patients. Hence, to prevent the spread of COVID-19 infection and protect the health care team, it is essential to confirm that the patient is not actively infected during an elective or emergency surgical procedure. Testing strategies employed to detect SARS-CoV-2 include rapid antigen tests (RAT) and real-time RT-PCR. RATs are quick tests for detecting SARS-CoV-2 viral antigen with a rapid turnaround time of 15-30 min, thereby offering a considerable advantage of quick detection of cases, prompt isolation, and treatment. RAT has an added advantage of point-of-care (PoC) testing. This is very useful as a screening tool in a high volume and quick surgical turnover setup such as ophthalmic surgeries. However, RAT has the disadvantage of variable sensitivity and specificity as compared to the gold standard RT-PCR in diagnosing COVID-19 infection. Test-track-treat is the strategy adopted by the government to reduce COVID-19 transmission. ICMR has also recommended PoC-RAT to be used in combination with RT-PCR in the hospitals for asymptomatic patients planned for elective procedures. As newer antigen detection test kits for COVID-19 infection are getting introduced, the overall concern of their performance compared to RT-PCR in terms of sensitivity and specificity is also increasing. As per ICMR guidelines, the minimum acceptance criteria of sensitivity and specificity of RAT kits is ≥50% and ≥95%, respectively, when used as a PoC test without transport to a laboratory setup. When it is validated in a laboratory setup with samples collected in viral transport medium (VTM), sensitivity is ≥70% and specificity is ≥99%. We also need to consider few other points while using RAT as a screening tool. If PoC-RAT is done in the acute stage of the disease or during the first week of the symptom onset, the sensitivity and specificity will be high and comparable to RT-PCR. 
However, the sensitivity of RAT will be lower late in the disease course. In addition, the sensitivity of PoC-RAT will be higher in patients with high viral load. From a practical perspective, knowledge of these factors regarding PoC-RAT is very important to apply it on various scenarios such as screening for asymptomatic health care workers, patients planned for elective or emergency surgical procedures, and in large-scale testing. In our study, PoC-RAT was performed for all patients along with the gold standard RT-PCR. All patients were asymptomatic in our study. Percent agreement or proportion of agreement observed between the two tests was 99.68%, which indicates a very high degree of agreement. Cohen’s kappa coefficient value was 0.49, which indicates only a moderate agreement. Sensitivity of RAT in comparison to RT-PCR in our study was 33.33%, while specificity was 100%. Cohen’s kappa coefficient and sensitivity values calculated in our study were low because of the low positivity rates of RAT (0.15%) and RT-PCR (0.47%). The plausible explanations of the low positivity rates can be that all the patients in our study were asymptomatic for COVID-19 infection. All patients were thoroughly screened at different entry points of the hospital, and multitiered safety protocols were employed at various levels after entry into the hospital to identify any symptomatic COVID-19 patient. Furthermore, the overall positivity rate among the general population in the catchment region for the institute during the study period time was low. RT-PCR remains the gold standard test for diagnosing SARS-CoV-2 infections. Still, it is a laboratory-based procedure that needs cutting-edge technology and equipment, trained health care workers, transportation of the samples, and finally, the communication of the results. Timing of the clinical sampling and testing with RT-PCR is also critical as the shedding of the virus mainly happens during the early phase of the disease course. Due to a large number of people needing testing and limited recourses, the delay in procuring the RT-PCR results is also an expected issue. In addition, the majority of the care centers in India are standalone hospitals that do not have easy access to National Accreditation Board for Laboratories (NABL)-accredited laboratories to conduct an RT-PCR. Procuring an RT-PCR result in remote parts of India can be as delayed as 48–72 h, which can decide the outcome in an emergency situation. In our study, the initial assessment of surgery using RAT results was 5.06 ± 3.1 h, while the mean time for obtaining the RT-PCR results for these cases was 12.88 ± 1.45 h. Therefore, we had a mean lead time of around 7 h for surgery in these cases, which is critical in any emergency. Recently, some studies have shown that RAT is sensitive enough to be comparable with RT-PCR in the diagnosis of SARS-CoV-2 infection. Kumar et al . conducted a similar study where they compared RAT with RT-PCR in asymptomatic patients planned for elective ophthalmic procedures. In their study, only two among 204 patients tested positive with both RAT and RT-PCR, while 10 patients tested negative with RAT but were found to be positive with RT-PCR. However, RAT was done in the laboratory in this study, whereas we performed a POC-RAT test in our study. Tripathy et al . performed POC- RAT in a study on asymptomatic preoperative ophthalmic elective procedure patients. The results showed only 7/224 patients (3.1%) to be COVID-19 positive. Chiamayo et al . 
in their comparative analysis of RAT versus RT-PCR observed that RAT was capable of detecting COVID-19 infection in various samples with comparable sensitivity and specificity (98.33% and 98.73%, respectively). In a study by Porte et al . the diagnostic accuracy of the fluorescence immunochromatographic SARS-CoV-2 antigen test (Bioeasy Biotechnology Co., Shenzhen, China) was compared with SARS-CoV-2 RT-PCR among 127 COVID-19-suspected patients. The sensitivity and specificity of the SARS-CoV-2 antigen test were 93.9% and 100%, respectively, with a diagnostic accuracy of 96.1% and a Kappa coefficient of 0.9. The sensitivity of the antigen detection test was significantly higher in samples with high viral loads. Yamayoshi et al . in their study compared four RAT kits, and their sensitivity and specificity were compared with RT-PCR using different clinical samples from COVID-19-suspected patients. The results were quite similar for all four test kits and showed a higher chance of detecting COVID-19 in individuals who were shedding a large amount of SARS-CoV-2. Cormon et al . conducted a study in which they compared seven commercial SARS-CoV-2 rapid point-of-care antigen tests (AgPOCT) using 138 clinical samples that had previously tested positive for SARS-CoV-2 by RT-PCR. Following the study, they concluded that the sensitivity range of most AgPOCTs overlaps with the SARS-CoV-2 viral loads typically seen in the first week of symptoms. Thus, AgPOCTs can diagnose COVID-19 infection faster and can help in decision-making in various areas of health care and public health. Another study on preoperative COVID-19 testing for elective vitreoretinal surgeries by Kannan et al . in a tertiary eye care center in South India concluded that 1 among 45 asymptomatic patients may become positive on RT-PCR. Strength of the study Rapid antigen test for COVID-19 infection was done as a point-of-care (POC-RAT) test for all patients, thereby facilitating preoperative screening for COVID-19. Double validation of our RAT results was done initially by the ophthalmology resident and was subsequently confirmed by the microbiologist. RAT results were compared with RT-PCR results in all cases. High specificity of our study results. Study limitations Cohen's kappa coefficient and sensitivity values calculated in our study were low because of the low positivity rates of RAT (0.15%) and RT-PCR (0.47%) in asymptomatic preoperative patients. This limits the degree of extrapolation of our study results. Therefore, we need further multicentric large-scale prospective studies in the future in asymptomatic preoperative patients. Corresponding viral load or cycle threshold (Ct) values were not estimated; thus, a correlation for the cases missed on RAT could not be determined.
We understand that the safety of the health care team and patients is of the highest importance. Rapid antigen detection tests may not be able to diagnose every case of SARS-CoV-2 infection, as they are not as sensitive as RT-PCR. However, these false-negative cases are less infective, and the chances of transmission are lower. The sensitivity and Cohen's kappa coefficient in our study were low, but that can be attributed to the overall low positivity rates with both RAT and RT-PCR. However, the percent agreement observed between the two tests was 99.68%. We therefore recommend that all emergency ophthalmic procedures can be performed with only rapid antigen tests by following strict safety protocols and wearing PPE. We have just crossed the second wave of COVID-19 infection, and currently the positivity rate in the general population has also reduced. Performing RT-PCR for every patient who needs an elective ophthalmic procedure is not practical or economically feasible. Initial screening of all patients for COVID-19 symptoms at the OPD entry, followed by RAT before performing any elective procedure, can be the way forward. The surgical team should use adequate PPE and strictly follow all the protocols irrespective of the patient's infection status to reduce the risk of transmission of SARS-CoV-2. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.
Identification of different blood concentration of domestic cat (
a83bdb5c-00f1-49f2-b989-e90455205e7b
11560260
Forensic Medicine[mh]
People have begun to realize the importance of animal welfare. Cases of cruelty can occur in wild animals, livestock, and pet animals. Cruelty to animals is a violation of animal welfare. Animal violence can range from active abuse to omission, from teasing to torture, and from intentional acts to the consequences of negligence, and it also includes fighting animals, confining animals, and abandoning animals . Cruelty to cats was chosen as the background of this research because many cases of cruelty to cats go unreported although the number of cases remains high . Garda Satwa Indonesia (an animal rescue organization, especially for dogs and cats) stated in BBC News on June 20, 2018, that every day they rescue at least 10 cats from persecution in major cities in Indonesia, in partnership with various animal lover communities. The increasing number of animal lover communities is a change that provides support for enforcing the law on animal crime. Crime scene investigation (CSI) is the first step in revealing crime cases. CSI, according to , is a series of investigations in which investigators, along with supporting elements from crime laboratories and forensic medicine, try to reveal cases that have occurred based on the evidence obtained at the crime scene. Forensic evidence is used to support or refute the results of investigations or investigative findings such as alibis and witness statements, and it helps to reconstruct cases and develop investigations . Evidence of a crime can help to determine the alleged cause of death or injury, the mechanism of death, and how the death or injury occurred . Cat cruelty cases usually take the form of mutilation, rubber banding, hit-and-run, or gunshot victims. These forms of cruelty can cause bleeding and leave bloodstains at the crime scene. It has been stated that blood identification at the crime scene is used to ensure that the stain is blood; if the stain is blood, the species can then be determined, and the stain can subsequently be used for DNA analysis . Disclosure of evidence in the form of bloodstains, according to , can be done by conducting a screening (presumptive test), a confirmation examination, and other additional examinations. Bloodstain examination in this study comprised a screening (presumptive) test and a confirmation test. This study used Leucomalachite Green (LMG) as the reagent in the presumptive test and Takayama reagent in the confirmation test. The presumptive test with LMG is sensitive for detecting blood, while the confirmation test with Takayama reagent is more specific for blood. False positives can occur in the LMG test because of contamination with strong oxidizers, such as sodium hypochlorite, so the confirmation test with Takayama reagent is needed to make sure that the stain is blood . LMG reagent has been commonly used as a reagent for the blood presumptive test in the investigation of human crime scenes, and the positive reaction with LMG reagent is easy to see with the naked eye . Takayama reagent has a strong ability to detect human blood even when bloodstains have been exposed to non-carbolic domestic floor cleaning agents, powder detergents, and washing machine powder detergents . This research chose LMG and Takayama reagents to identify bloodstains because both reagents are rarely used for bloodstain analysis in animal cases, especially for crimes against cats in forensic veterinary cases. In this experiment, cat bloodstains were made with various dilutions.
This was an adaptation to the conditions of actual cases, in which bloodstains may fade because the perpetrator tries to remove the stain with water or the stain is naturally washed away by rainwater. Sample collection This research was conducted in July and August 2023 at the Clinical Pathology Veterinary Laboratory, Universitas Airlangga. This study used the whole blood of three cats, taken from the cephalic vein. The blood dilution agent was 0.9% NaCl. The reagent for the presumptive test was LMG and the reagent for the confirmation test was Takayama reagent. Cat blood collection The Institutional Animal Care and Use Committee recommends that the cat be restrained before blood collection, especially if the cat is fierce. It is best done by two people, one restraining the cat and one taking the blood. Blood collection should be done from the cephalic vein, as much as 3 ml per cat, and the blood kept in an EDTA tube. According to , the amount of blood that can be taken is 6%–8% of the cat's body weight. The United States Department of Agriculture, cited in , has stated that the maximum amount that can be taken from a cat is 66 ml/kg body weight. After blood collection, the needle was withdrawn from the vein, pressure was applied to the puncture site for a few minutes to avoid hematoma formation, and disinfectant was applied to the blood collection area . Blood dilution The cat blood was diluted using aquadest at ratios of 1:10, 1:100, 1:1,000, 1:10,000, and 1:100,000. The first dilution was made by mixing 1 ml of blood with 9 ml of 0.9% NaCl. The second dilution was made by taking 1 ml from the first dilution and mixing it with 9 ml of 0.9% NaCl. The third dilution was made by taking 1 ml from the second dilution and mixing it with 9 ml of 0.9% NaCl. The fourth dilution was made by taking 1 ml from the third dilution and mixing it with 9 ml of 0.9% NaCl. The fifth dilution was made by taking 1 ml from the fourth dilution and mixing it with 9 ml of 0.9% NaCl. From the last dilution in the series that was still detected as positive, higher dilutions were made in two-fold steps to approach the dilution level that was detected as negative. These dilutions were made by mixing 5 ml from the last dilution in the series that tested positive on the LMG test with 5 ml of 0.9% NaCl, and so on, until the previous dilution level that tested negative on the LMG test was approached. These dilution series were used for the LMG test. Bloodstain preparation Bloodstains were made by placing two drops of each dilution on filter paper using a Pasteur pipette . In the blood drop method, blood is dropped from a distance of 5 cm above the filter paper and then allowed to dry at room temperature for at least 1 hour . Ten replicates were made for each blood dilution. Presumptive test of bloodstains using LMG reagent The presumptive test with LMG was performed by dropping the LMG reagent onto the bloodstain, followed by a drop of H2O2. Each stain on the filter paper received one drop of LMG reagent followed by one drop of H2O2. The color changes that occurred were observed; a color change was the indicator of a positive reaction. The color shift was captured using a Sony Cyber-Shot DSC-W570 at ISO 200 from a distance of 10 cm from the object. All images were taken from the same position and under the same lighting. The digital images were processed with Adobe Photoshop® to change the background and to emphasize the bluish-green color as the center of attention of the image. The adjustments made in Adobe Photoshop® were brightness 50%, details 100%, and contrast 50%.
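Before turning to the image analysis, the dilution scheme described above (five ten-fold dilutions, then two-fold steps taken from the last LMG-positive dilution toward the first negative one) can be summarized numerically. The sketch below only illustrates that arithmetic; it assumes, as in the results, that 1:10,000 was the last ten-fold dilution still detected as positive.

```python
from fractions import Fraction

# Ten-fold series: 1:10 ... 1:100,000 (blood volume fraction at each step).
tenfold = {f"1:{10**k:,}": Fraction(1, 10**k) for k in range(1, 6)}

# Two-fold series bridging the last positive ten-fold dilution (1:10,000)
# toward the first negative one (1:100,000): 1:20,000, 1:40,000, 1:80,000.
last_positive = Fraction(1, 10_000)
twofold = {}
dilution = last_positive / 2
while dilution > Fraction(1, 100_000):
    twofold[f"1:{int(1 / dilution):,}"] = dilution
    dilution /= 2

for label, frac in {**tenfold, **twofold}.items():
    print(f"{label:>10}  blood fraction = {float(frac):.6f}")
```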
The images were then analyzed for red, green, and blue (RGB) values using the Measure function of the NIH ImageJ software, version 1.52a. The image adjustment was based on , who wrote that several image adjustment techniques can be used to eliminate unnecessary artifacts or image details, improve image content for the disclosure of cases, and change the background of the image to emphasize objects as the center of attention. also wrote that image files can be manipulated using image processing programs. Confirmation test of bloodstains using the Takayama reagent The hemochromogen crystal examination procedure used the Takayama test adapted from . The procedure was as follows: place one drop of diluted blood that reacted positively on the LMG test onto a glass slide; add 1 drop of Takayama reagent (a mixture of 7 ml of aquadest, 3 ml of pyridine, 3 ml of NaOH, and 3 ml of glucose) and cover with a cover glass; heat the bloodstained slide at a temperature of ±65°C over a Bunsen burner for 10–15 seconds; and, after cooling, observe the formation of crystals under the microscope at 400× magnification. The result of this test was positive when hemochromogen crystals were seen under the microscope at 400× magnification, meaning that the sample contained blood. In a negative reaction, nothing is seen under the microscope at 400× magnification, meaning that blood was not detected in the sample. Data analysis The scores of the color shifts in the LMG test reaction were analyzed with the Statistical Package for the Social Sciences (SPSS) 20 program for Windows using repeated measures ANOVA, followed by the Greenhouse–Geisser test to determine the differences between groups if the result showed a significant difference ( p < 0.05). The Takayama test reaction to the blood dilutions was analyzed descriptively. Ethical approval Cats were obtained from healthy, intensively cared-for animals, with prior approval from the ACUC, Faculty of Veterinary Medicine, Universitas Airlangga, Surabaya. Certificate number: 1.KEH.118.09.2022.
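For the RGB analysis above, ImageJ's Measure command reports mean channel intensities over a selected region. A minimal Python analogue is sketched below; the file name, the rectangular region of interest, and the choice to average the three channel means into a single RGB value are assumptions made for illustration, since the exact ImageJ workflow is not detailed in the text.

```python
import numpy as np
from PIL import Image

def mean_rgb(path: str, box: tuple) -> float:
    """Mean intensity over a rectangular stain region, averaged across R, G, B.

    `box` is (left, upper, right, lower) in pixels. Lower values indicate a
    stronger bluish-green reaction than the white (255) filter-paper background.
    """
    region = Image.open(path).convert("RGB").crop(box)
    channels = np.asarray(region, dtype=float)        # shape: (H, W, 3)
    per_channel_mean = channels.reshape(-1, 3).mean(axis=0)
    return float(per_channel_mean.mean())

# Hypothetical usage on one photographed stain (file name and ROI are invented):
# value = mean_rgb("stain_A1_d4.jpg", box=(120, 80, 220, 180))
# print(round(value, 3))  # a faint reaction would give a value just below 255
```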
LMG test The results of the LMG reaction on bloodstains prepared from the dilution series of the three cats' blood (A, B, and C) were as follows; cats A and B were female, while cat C was male. In all three cat blood samples, all 10 repetitions at dilutions of 1:10 (d1), 1:100 (d2), and 1:1,000 (d3) showed a color change to bluish-green. The bluish-green color expression was converted into RGB values with the NIH ImageJ software, version 1.52a. At a dilution of 1:10,000 (d4), a color change to bluish-green occurred in samples A1, A3, A5, A8, B7, B9, C5, C7, C8, C9, and C10, with an average RGB value of 253.970 ± 0.418. At a dilution of 1:100,000 (d8), there was no color change to bluish-green in any sample, with an average RGB value of 255 . Based on these results, a further series of two-fold dilutions between 1:10,000 and 1:100,000 was prepared: 1:20,000 (d5), 1:40,000 (d6), and 1:80,000 (d7). At a dilution of 1:20,000, a color change to bluish-green occurred in samples A1, A3, A6, B1, B10, C6, and C8, with an average RGB value of 254.633 ± 0.207. At a dilution of 1:40,000, a color change was obtained only in sample C10, with an RGB value of 253.386; the average RGB value of sample C at this dilution was 254.946 ± 0.054. At a dilution of 1:80,000, there was no color change to bluish-green in any sample (RGB value 255). The color changes of the bloodstains from all samples and repetitions on filter paper are shown in . Furthermore, the mean scores for all dilutions in each sample are shown in . The data in were processed with SPSS 20 for Windows using repeated measures ANOVA, followed by the Greenhouse-Geisser correction to determine differences between groups when a significant difference was found (p < 0.05). The Greenhouse-Geisser test of within-subjects effects showed significant differences (p < 0.05), meaning that the RGB values of the LMG reaction differed significantly between stains made from blood at different dilution levels. The RGB values at d1 differed significantly (p < 0.05) from those at d4, d5, d6, d7, and d8. The RGB values at d2 differed significantly (p < 0.05) from those at d3. The RGB values at d5 differed significantly (p < 0.05) from those at d7 and d8. The RGB values at d6 differed significantly from those at d1, d2, d3, d4, d7, and d8. shows the steepness of the average increase in RGB values across the dilution levels of cat blood, from the lowest dilution (1:10) to the highest dilution (1:100,000). places the plots of the RGB values for the 1:10,000, 1:20,000, and 1:40,000 blood dilutions on almost the same line as the plots of the RGB values for the 1:80,000 and 1:100,000 dilutions. Takayama reagent test The results of the Takayama reagent test on bloodstains prepared from the dilution series of the three cats' blood (A, B, and C), observed under the microscope, can be seen in . All samples gave a positive result at dilutions of 1:10, 1:100, and 1:1,000. All samples at the 1:10,000 dilution gave a negative result. The expression of the Takayama reagent test can be seen in , , and . Blood dilutions of 1:10, 1:100, and 1:1,000 were detected positively by the Takayama reagent.
Blood dilutions of 1:10,000 and higher were not detected by the Takayama reagent under the microscope at 400× magnification. Based on this study, the most dilute bloodstain that could still be detected by the Takayama reagent was the 1:1,000 dilution. The hemochromogen crystals that appear under the microscope at 400× magnification are pink, needle-shaped crystals. The Takayama reagent is a pyridine solution added to the bloodstain; if blood is present, pink crystals of a pyridine-haem complex appear as the slide is warmed.
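The repeated-measures analysis reported above was run in SPSS; the sketch below shows how an equivalent repeated-measures ANOVA with a Greenhouse-Geisser correction could be set up in Python, assuming the pingouin package is available. The data-frame layout, column names, and values are hypothetical placeholders, not the study data.

```python
# Minimal sketch of a repeated-measures ANOVA with Greenhouse-Geisser
# correction, analogous to the SPSS analysis described above (illustrative
# data only: 10 replicate stains measured at 8 dilution levels, d1-d8).
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
levels = [f"d{i}" for i in range(1, 9)]
records = []
for rep in range(1, 11):                           # hypothetical replicate stains
    for i, level in enumerate(levels):
        rgb = 250 + i * 0.6 + rng.normal(0, 0.3)   # fake RGB scores near 255
        records.append({"replicate": rep, "dilution": level, "rgb": rgb})
df = pd.DataFrame(records)

# correction=True applies the Greenhouse-Geisser sphericity correction.
aov = pg.rm_anova(data=df, dv="rgb", within="dilution",
                  subject="replicate", correction=True)
print(aov)
```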
LMG test The change in bloodstain color to bluish-green is the result of an oxidation reaction of LMG, catalyzed by the peroxidase-like activity of hemoglobin: H2O2 is broken down, the substance in the mixture is oxidized, and color is produced . The darker the bluish-green color, the lower the RGB value, and a lower RGB value indicates that more hemoglobin is present in the stain. A white color, which in this study indicates that no hemoglobin was detected, gives the highest RGB value, 255. Sample C was the only sample that remained LMG-positive at the 1:40,000 blood dilution, with an average RGB value of 254.946 ± 0.054, meaning that sample C still contained detectable hemoglobin at a 1:40,000 dilution. This may be because sample C was taken from a male cat, whereas samples A and B were taken from female cats. Research has shown that male cats have a higher hemoglobin concentration than female cats: 10.18 ± 1.52 g/dl in males versus 9.30 ± 1.46 g/dl in females. Testosterone, the male sex hormone, increases the ability of the kidney to produce erythropoietin, a glycoprotein hormone that stimulates erythrocyte formation, so hemoglobin levels are higher in males than in females . LMG is initially a colorless reagent when first applied to a stain . The discoloration of LMG to bluish-green after H2O2 is added occurs when blood is detected, as a result of LMG oxidation catalyzed by the heme of hemoglobin. A stain that tests positive in a presumptive test sometimes requires confirmation that it really is blood. There are several categories of bloodstain confirmation tests, including microscopic tests, crystal tests, spectroscopic methods, and immunological tests . In this study, the most dilute bloodstain that could still be detected by LMG was the stain made from the 1:40,000 blood dilution. At the 1:80,000 and 1:100,000 dilutions there was no color change in any sample or repetition, with an RGB value of 255, so no hemoglobin was detected. This result differs from , whose analysis of human bloodstains with LMG reagent on filter paper, on cotton cloth, and in diluted blood solution gave a positive reaction up to a 1:5,000 dilution and no positive reaction at a 1:10,000 dilution. This result is also different from that reported in , where a 1:10,000 cat blood dilution gave no positive reaction. Meanwhile, quote LMG as having a sensitivity limit of a 1:100,000 dilution of human blood. This research had the same result as : LMG reacted at a dilution factor of 1:10,000, and the LMG reagent did not show a positive reaction at dilution factors of 1:100,000 and higher.
The differences between these results may be affected by differences in the storage temperature of the LMG reagent, the time at which the bloodstains were made, the time at which the LMG reagent was prepared, the way the bloodstains or reagents were prepared, or the different amounts of hemoglobin in different species and individuals. Takayama reagent test Hemochromogen crystals can be obtained from blood, hematin, and other hemoglobin derivatives using pyridine, piperidine, and a number of other nitrogenous compounds, with ammonium sulfide, hydrazine hydrate, or ammonium sulfide in NaOH as reductants . The Takayama test is one of the confirmation tests for blood, based on hemochromogen crystal formation produced by heating a dried bloodstain after adding pyridine and glucose under alkaline conditions. The confirmation test with the Takayama reagent is positive when hemochromogen crystals appear. The crystals, obtained from the reaction of small amounts of blood or stain fragments with the Takayama reagent, take the form of shallow salmon-pink rhomboids . stated that the Takayama reagent is sensitive down to a 1:1,000 dilution of human blood, and the results of matched the result of this research. This similarity may arise because humans and cats have roughly similar hemoglobin concentrations: reported that cat hemoglobin lies between 9.3 and 15.9 g/dl, and reported cat hemoglobin of 11.63 ± 2.05 g/dl, while normal values for human hemoglobin according to are 13–18 g per 100 ml of blood (g/dl) in adult males and 12–16 g/dl in adult females. Benefits of both tests LMG and Takayama reagents are both frequently used in human bloodstain testing . The ability of the LMG reagent to detect blood up to a high dilution makes it reliable for testing for the presence of bloodstains. However, the LMG reagent has the limitation that false positives can still occur if the stain is mixed with chemicals such as bleach or body fluids such as semen . Meanwhile, the Takayama reagent is more specific in detecting blood, because the result obtained is an image of hemochromogen crystals, but its sensitivity is low compared with LMG . In cases of violence against cats and other animals, these two reagents can be relied on to assist the process of proof, so that those who commit violence against cats and other animals can be prosecuted in accordance with applicable laws, especially those concerning animal welfare. It is hoped that the results of this research can help ensure animal welfare in society.
Based on this research, cat bloodstains can be detected with the LMG reagent up to a 1:40,000 dilution, whereas the Takayama reagent can detect cat bloodstains only up to a 1:1,000 dilution.
Lockdown effects on a patient receiving immunosuppression for unilateral HLA- B27 associated uveitis during COVID-19 pandemic
98a25a34-1437-49be-829c-0d2776698cbf
8186619
Ophthalmology[mh]
The authors certify that they have obtained all appropriate patient consent forms. In the form the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed. Nil. There are no conflicts of interest.
Real-time vs. static ultrasound-guided needle cricothyroidotomy: a randomized crossover simulation trial
dc9d0fd8-914b-432e-9b13-97a1b336eb08
11890604
Surgical Procedures, Operative[mh]
Cricothyroidotomy is the last resort in a "cannot intubate, cannot oxygenate" (CICO) situation. However, this procedure is not easy for anesthesiologists. A survey conducted in the United Kingdom (the Fourth National Audit Project: NAP4) reported that the success rate of emergency needle cricothyroidotomy performed by anesthesiologists was less than 50%, whereas surgical cricothyroidotomy performed by surgeons was almost universally successful . Therefore, many guidelines recommend surgical cricothyroidotomy over needle cricothyroidotomy in CICO situations – . However, surgical cricothyroidotomy may be associated with a risk of bleeding from the arteries around the cricothyroid membrane (CTM), namely the superior thyroid and cricothyroid arteries. Percutaneous needle cricothyroidotomy may help prevent complications associated with heavy bleeding if the CTM can be punctured accurately. Accurate identification of the CTM and adequate needle-handling skills are essential for successful percutaneous needle cricothyroidotomy. Many studies have shown that ultrasound examination is more accurate than the digital palpation technique for identifying the CTM , . However, it remains unclear whether accurate identification of the CTM leads to successful needle puncture. In particular, patients with CICO often have obesity or concomitant anatomical abnormalities of the neck (malposition, deviation, and rotation of the trachea), and it is doubtful whether accurate CTM identification leads to accurate CTM puncture in such patients. Ultrasound-guided needle puncture can be classified into two types: static and real-time (or dynamic) ultrasound techniques. In the static ultrasound technique, ultrasound is used to identify the target location, and the needle puncture is then performed using this information. By contrast, in the real-time ultrasound technique, the entire needle puncture is performed under ultrasound observation. In this simulation study, we examined whether the static or the real-time ultrasound technique is more useful for needle cricothyroidotomy in patients with cervical anatomical abnormalities. This study was conducted in accordance with the ethical principles that have their origins in the Declaration of Helsinki and its subsequent amendments, and was approved by the local ethics committee on (Faculty of Medicine Research Ethics Committee; approval number, 1044). The study was registered in the University Hospital Medical Information Network Center Clinical Registration System (identification number UMIN000032106; first trial registration on 10/04/2018). This was a randomized, prospective, crossover simulation study. Participants were recruited from among junior residents, anesthesia residents, and anesthesiologists. Exclusion criteria were previous experience of performing cricothyroidotomy and refusal to participate (Fig. ). Written informed consent was obtained from all participants. Participants were recruited and data were collected between July 2020 and December 2022 . Anatomically abnormal neck simulator for CTM puncture Three types of anatomically abnormal neck simulators were created for this study (Alfa Bio Co., Gumma, Japan) (Fig. ). These simulators reproduced the anatomically abnormal necks of three clinically encountered patients whose tracheas were deviated and rotated. The simulators included the thyroid, cricoid, and tracheal cartilages and were filled with a thick ultrasound-transparent gel.
Therefore, it is difficult to palpate the CTM from the surface, but the cartilages and trachea can be observed with ultrasound (Fig. ). Three simulators were used to ensure that information about the laryngeal deviation was not communicated to other participants (see Supplementary Fig. S1 online). Education for needle cricothyroidotomy The participants received a lecture reviewing the relevant anatomy and the details of the needle cricothyroidotomy procedure, based on an educational video (Cricothyroidotomy Technique, Medscape) . Hands-on training for ultrasound-guided CTM identification The ultrasound machine used in this study was a SonoSite Edge II with an HFL38xi probe (13-6 MHz, SonoSite, Inc., Bothell, WA, USA). For ultrasound-guided CTM identification, we used a transverse ultrasound view, called the TACA method , in which the neck is scanned from cephalad to caudal and then back to cephalad to identify the thyroid cartilage (T), airline (A), cricoid cartilage (C), and airline (A); the airline indicates the CTM (Fig. ). After the lecture on CTM identification, an ultrasound expert (HN) demonstrated identification of the CTM using the TACA method. The participants were then trained in ultrasound-guided CTM identification for 30 min. The instructor (HN) confirmed that all participants could identify the CTM under ultrasound guidance. Ultrasound-guided CTM puncture This study compared two types of ultrasound-guided CTM puncture: the static and the real-time (dynamic) ultrasound technique. Which technique was performed first was determined randomly using a sealed envelope system. The order of the CTM punctures across the three simulators was randomized using a computer-generated random number table. In the static ultrasound technique (Fig. a), the participants attempted to identify the CTM using the TACA method and marked the location of the CTM on the simulator's neck skin. After marking, the CTM was punctured with a 23G needle (cathelin needle, length 70 mm) attached to a 2.5 ml syringe. In the real-time ultrasound technique (Fig. b), the participants identified the CTM using the TACA method and then performed the CTM puncture with an out-of-plane approach under ultrasound guidance. The participants could confirm whether the CTM puncture was successful by pulling the plunger of the syringe to aspirate air. The participants attempted the CTM puncture until they achieved a successful puncture within 180 s. The success or failure of the CTM puncture was assessed using a fiberscope placed in the trachea. Success was defined as placement of the needle tip in the trachea within 180 s, measured from the start of CTM identification to the completion of the puncture. Failure was defined as the needle tip not being placed in the trachea within the 180 s time limit. Punctures that were deemed successful were divided into two categories: "high accuracy," defined as a puncture from which a cricothyroidotomy would be expected to proceed without complications, and "low accuracy," defined as the needle tip being placed in the trachea but in a position from which a catheter would not be easily inserted. The accuracy of the puncture was evaluated on a 2-point scale according to the puncture position (Fig. ). The success rate, puncture accuracy, and procedure time were recorded. Procedure time was defined as the time from first handling the ultrasound probe to the end of the CTM puncture. The examinations were set up and handled by an examiner (HW), and all data were collected and evaluated by the instructor (HN).
Participants performed the puncture using the assigned technique first and then attempted the other technique (a crossover design). Statistical analysis Numerical values were expressed as ratios, as the mean ± standard deviation for normally distributed data, or as the median [interquartile range] for non-normally distributed data. The success rate and accuracy of the puncture were evaluated using Fisher's exact test. The presence or absence of carryover effects was evaluated using one-way ANOVA. Statistical significance was set at p < 0.05. All data were analyzed using GraphPad Prism ver. 7.0 (GraphPad Software Inc., Boston, USA). Power analysis To date, no study has compared ultrasound-guided CTM puncture techniques. Therefore, assuming a clinically significant difference, we set α = 0.05 and estimated the sample size needed to obtain 80% power to be 40 participants.
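As a simple illustration of the primary comparison described above (success rates compared with Fisher's exact test), the following Python sketch runs the test on a 2×2 table. The counts are hypothetical placeholders chosen for illustration, not the trial's results.

```python
# Minimal sketch of the Fisher's exact test used to compare success rates
# between the real-time and static ultrasound techniques.
# The counts below are hypothetical placeholders, not the study data.
from scipy.stats import fisher_exact

#                   success  failure
real_time_counts = [29, 19]   # e.g., 29/48 successful punctures (hypothetical)
static_counts    = [18, 30]   # e.g., 18/48 successful punctures (hypothetical)

odds_ratio, p_value = fisher_exact([real_time_counts, static_counts])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```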
Twenty-seven junior residents, twelve anesthesia residents, and nine anesthesiologists participated in the study (Fig. ). The success rate of CTM puncture was significantly higher with the real-time ultrasound technique than with the static ultrasound technique (Table ). The high-accuracy puncture rate was also significantly higher with the real-time ultrasound technique (Table ). The procedure time was shorter for the real-time ultrasound technique than for the static ultrasound technique (Table ). No carry-over effect, which would have occurred if performing either the real-time or the static ultrasound technique first had influenced the results, was observed (real-time ultrasound technique, first vs. second trial: p = 0.13; static ultrasound technique, first vs. second trial: p = 0.38). Subgroup analysis In junior residents, there was no significant difference in the success rate between the static and real-time techniques (Table ). Among the anesthesia residents, there were no significant differences in the success rate, high-accuracy rate, or procedure time. There was no significant difference in success rate among the anesthesiologists. There were no significant differences between the three types of simulators in terms of success rate, high-accuracy rate, or procedure time (see Supplementary Table S1 online). No carryover effects, which would have occurred if performing either the real-time or the static ultrasound technique first had influenced the results, were observed (Supplementary Table S2 online).
It is debated whether percutaneous or surgical cricothyroidotomy is superior for emergency front-of-neck airway access in CICO cases , - . The Difficult Airway Society of the United Kingdom recommends surgical cricothyroidotomy (scalpel-bougie-tube cricothyroidotomy) . The American Society of Anesthesiologists does not recommend a specific approach . The Japanese Society of Anesthesiologists and the Canadian Anesthesiologists' Society support both needle and surgical cricothyroidotomy . The risk of bleeding is a major concern during surgical cricothyroidotomy. A recent cadaveric report described thyroid ima arteries with multiple branching patterns running over the trachea, which may pose a high risk of excessive bleeding during surgical cricothyroidotomy . Surgical cricothyroidotomy has been reported to cause more bleeding than percutaneous cricothyroidotomy in animal studies using pigs . In patients with tracheal deviation, misidentification of the CTM may cause unexpected hemorrhage. Therefore, ultrasound identification of the CTM should be more useful than the digital palpation technique for needle cricothyroidotomy in patients with anatomically abnormal necks. Among the anesthesiologists, the success rate of CTM puncture did not differ between the static and real-time ultrasound techniques, and the same trend was observed among the anesthesia residents. Although peripheral nerve blocks are usually performed under ultrasound guidance, epidural and spinal anesthesia are still usually performed with the anatomical landmark technique, either blindly or relying on spatial ability. Anesthesiologists therefore routinely train not only in ultrasound guidance but also in spatial orientation. Junior residents do not have this training background and would benefit greatly from real-time ultrasound, which allows them to see the ultrasound-guided needle trajectory immediately. We conclude that real-time ultrasound is more useful than static ultrasound for CTM puncture. However, the success rate of the real-time technique was only 60%, which suggests that ultrasound-guided CTM puncture may not be suitable for emergency cricothyroidotomy. In the subgroup analysis, however, the anesthesiologists achieved a success rate of 78% with the real-time ultrasound technique. Considering that the simulators were designed to represent difficult cricothyroidotomy cases, ultrasound-guided needle cricothyroidotomy performed by an anesthesiologist may be clinically feasible. After needle cricothyroidotomy, oxygen is typically supplied via low-pressure ventilation or jet ventilation . Alternatively, a cannula can be inserted using the Seldinger technique.
Puncture of the limbus of the CTM can result in difficult insertion of the cannula into the trachea and possible misplacement outside the trachea. In this study, the procedure in which a puncture needle was placed at the center of the simulated trachea was defined as highly accurate. The accuracy rate of the real-time ultrasound technique was significantly higher than that of the static ultrasound technique. In the subgroup analysis, both junior residents and anesthesiologists had a significantly higher rate of accuracy with the real-time ultrasound technique than with the static ultrasound technique. In this study, cannula insertion was not performed after puncture. Even if the puncture is performed correctly, ease of cannula insertion may depend on the quality of the puncture kit. Therefore, further studies are needed before the results of this study can be directly applied in clinical practice. Some guidelines state that awake intubation should be selected in cases with a high risk of difficult mask ventilation and tracheal intubation. In patients with anticipated difficult airways, it is important to plan to secure the airway safely. Usually, awake intubation or surgical front-of-neck access is used for difficult airways. In particular, awake intubation will require a backup plan called “double setup airway intervention,” which involves preparing simultaneously for both awake intubation and surgical cricothyroidotomy . If the results of this study were to be integrated into the current guidelines, ultrasound would be used for CTM identification before surgical cricothyroidotomy, and guidewire would be inserted through a thin puncture needle using ultrasound guidance. This guidewire will increase the success rate of cricothyroidotomies. Three subsets (junior residents, anesthesiology residents, and anesthesiologists) were enrolled in this study. It is undeniable that this heterogeneity may have affected the results. In particular, the inexperience of the residents may have contributed to the lower overall success rate. However, cricothyroidotomy in patients with cervical abnormalities is a challenging procedure for anyone, not just junior residents. Therefore, it is essential to find a way to improve its safety using ultrasound guidance. Another problem is that there is currently no commercially available cricothyroidotomy kit suitable for real-time ultrasound technique. In order to take advantage of the results of this study, a new cricothyroidotomy kit needs to be developed. In conclusion, CTM puncture using the real-time ultrasound technique may be more useful than the static ultrasound technique, especially for naïve operators. However, further research is required to translate these results into clinical practice. Below is the link to the electronic supplementary material. Supplementary Material 1
Effectiveness of Anti-Gravity Treadmill Exercise After Total Knee Arthroplasty: Protocol for a Randomized Controlled Trial
ffcbdd12-5094-4f56-8cd0-6ac6c42da66a
11862769
Surgical Procedures, Operative[mh]
Overview Pre- and postoperative rehabilitation for total knee arthroplasty (TKA) varies worldwide . There are guidelines for postoperative physiotherapy after TKA , and physiotherapy guidance for self-directed home exercises is recommended . In Finland, the current care guidelines provide guidance for pre- and postoperative rehabilitation , but the practices associated with the rehabilitation vary among different hospitals. Previous studies have shown that both inpatient and home-based rehabilitation are effective after TKA and that resistance training in water is a feasible mode of rehabilitation with a wide range of positive effects on patients undergoing TKA . Reportedly, a sedentary lifestyle is rather common among patients who have undergone TKA, with pain or discomfort while standing being the greatest barrier to increasing physical activity . Pain limits postoperative walking training and activities of daily living . Anti-gravity exercises could help address these challenges. They enable a more objective analysis of walking, showing the entire picture of a patient’s walking problems. AlterG (AlterG, Inc., Fremont, CA), a patented compressed air technology (NASA differential air pressure technology), can be used to lighten the user’s body weight and the load of gravity with 1% accuracy, thereby enabling a less painful walking exercise compared to normal land-based training . While there is limited research on the effects of AlterG training, randomized controlled trials (RCTs) with small sample sizes have been reported, particularly among patients with neurological disorders, such as cerebral palsy and stroke . These studies demonstrated that AlterG training positively affected walking speed and dynamic balance and reduced the risk of falls . Similar results were reported in studies investigating the effects of AlterG training in rehabilitation after lower limb fractures . Precisely, it was found that AlterG training increased muscle strength in the hip area and enabled better walking . A pilot study on AlterG training after TKA revealed that it increased functional ability and is thus overall a safe, useful, and effective rehabilitation method after TKA . Although previous researchers concluded that while functional outcomes improved over time with the use of anti-gravity gait training, further studies with a larger sample size are required to define the role of this device as an alternative or adjunct to established rehabilitation protocols . Furthermore, AlterG training in the acute phase of postoperative knee rehabilitation after knee surgeries, such as TKA and anterior cruciate ligament reconstruction, demonstrated a positive effect on balance in patients experiencing increased pain in weight-bearing postures . AlterG training also decreased pain, enhanced joint function, improved quality of life (QoL), and maintained thigh muscle strength gains in patients with knee osteoarthritis . However, as previously mentioned, not much research has been conducted on postoperative rehabilitation including AlterG training after TKA , thereby warranting further studies in the future to obtain more knowledge regarding its effects on walking and functional capacity. Aims and Objectives The exposures under investigation were the effects of anti-gravity treadmill training in postoperative rehabilitation following TKA and the added value it offered compared to traditional exercise. 
We hypothesized that AlterG training after hospitalization leads to faster rehabilitation, better walking quality, improved QoL, improved physical activity, and enhanced balance management compared to traditional rehabilitation methods with instructions, where patients perform the exercises independently at home. In addition, we hypothesized that the differences, in terms of the above factors, between the groups in the study—intervention group and control group—were larger in the early phase of the rehabilitation but became smaller over time. The study has been registered on ClinicalTrials.gov (NCT03904030). This study aimed to determine the effectiveness of the AlterG anti-gravity treadmill in postoperative rehabilitation after TKA. Primary outcomes were perceived pain, walking ability, and QoL. To this end, AlterG rehabilitation and traditional postoperative rehabilitation with instructions were compared. In detail, we aimed to measure the effects of AlterG training on: a patient's walking ability and walking distance after TKA, a patient's perceived QoL and functional ability after TKA, a patient's perceived pain, and lower limb and step symmetry during gait and if it normalizes the patient's stepping and walking.
Participant Selection and Sampling Strategy Participants for this RCT were recruited from two hospitals in the capital region of Finland: Orton Orthopaedic Hospital and Peijas Hospital, both of which are part of the HUS Helsinki University Hospital. Patients with grade 3 or 4 primary knee osteoarthritis and a scheduled unilateral TKA were included in the study. Patients with rheumatoid arthritis, a hip or knee arthroplasty within the last year, or a BMI >40 kg/m² were excluded. All eligible patients who came for knee arthroplasty surgery at Orton Orthopaedic Hospital between 2018 and 2021 and at Peijas Hospital between 2020 and 2021 were asked to participate in the study. The patients were recruited through a nurse's preoperative visit or a phone call made to the patients attending the surgery. The nurse then checked whether they met the inclusion and exclusion criteria; those who met the criteria were provided with written information regarding the study.
The included patients were asked to provide signed consent forms , which they forwarded to the research assistant. When the patient had consented to participate in the trial, the randomization envelope was opened after the surgery. However, due to the nature of the intervention under study (rehabilitation intervention), it was not possible to blind the patients. The patients were randomly allocated either to the treatment group or the control group using a random number generator (StatTrek) by an expert who did not execute the study in practice. All the patients had free access to the available health care services during the study. Overall, 62 patients (31 in each group) were recruited for the study. Study Procedure Patients in both groups underwent initial measurements 1-2 weeks before the surgery, which included questionnaires and functional tests performed by a physiotherapist. All the measurements were taken at Orton Orthopaedic Hospital. First, the patients were asked to complete the questionnaires, and functional tests were then performed. The questionnaires used were a visual analogue scale (measures perceived pain) , painDETECT , Tampa Scale of Kinesiophobia , RAND 36-item Health Survey 1.0 (RAND-36) , Oxford Knee Score , Western Ontario and McMaster Universities Osteoarthritis Index , Beck Depression Inventory , and State-Trait Anxiety Inventory . Functional tests performed were knee range of motion (ROM) , knee swelling , thigh circumference , single-leg stance test , Timed Up and Go test (TUG) , stair climbing test , and 6-minute walk test (6MWT) . Patients were asked to complete the same questionnaires 6-8 weeks after surgery. The same questionnaires and functional tests were administered in the same order 4 and 12 months after the surgery at Orton Orthopaedic Hospital. This data can help obtain information regarding short-term (4 months) and long-term (12 months) changes. Then, 6 months after the surgery, the patients were sent a 6-month questionnaire regarding possible rehabilitation sessions, use of medication, and possible complications after TKA. The purpose of this questionnaire was to obtain information regarding their use of health care services and how their rehabilitation and recovery from the surgery have progressed. After the surgery, the researcher informed each patient over the phone or face-to-face about their assigned group, either the intervention or control group. After that, the researcher booked training times on the AlterG for each member of the intervention group. Patients were sent a reminder through an SMS text message on the time reserved for them to ensure that the risk of forgetting was lower. If the time did not suit the patient, the physiotherapist was informed, and a new time was reserved accordingly. The patients in the intervention group exercised 10 times on the AlterG under the supervision of a registered physiotherapist. Both groups performed traditional postoperative exercises after TKA. The follow-up period was 12 months. Exercises began in the third week after surgery. In the third and fourth weeks after TKA, patients in the intervention group exercised twice a week on the AlterG and thrice a week in the fifth and sixth weeks considering individual variations. The AlterG training was recorded 3 out of 10 times—the first, fifth, and 10th training sessions. The recording began after the patient found a suitable walking speed and lightening. On average, the length of one recording was 2 minutes. 
The data obtained from the AlterG training was saved to a memory stick and then transferred to an electronic folder. The patient schedule of enrollment, interventions, and assessments are shown in . The study procedure is outlined in . The protocol has been developed using the SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) checklist . Postoperative Exercises Furthermore, all the patients (intervention and control groups) in the study independently performed the postoperative exercises instructed by the hospital’s physiotherapists. The exercises were aimed to improve knee ROM and quadriceps and hamstring muscle strength and to stimulate venous circulation. Patients also received guidance on walking with crutches. They were instructed to perform home exercises for approximately 2 months after surgery. However, the researchers do not have information on how actively the patients were engaged with home exercises. The home exercises are described in . Outcome Assessment and Measurements The primary outcomes were walking ability measured with the 6MWT, health-related QoL measured with RAND-36, and perceived pain measured with VAS. All outcomes were measured before the TKA and 4 and 12 months after the TKA. In addition to this, the patients were asked to complete the same questionnaires 6-8 weeks after the TKA. The initial measurements were performed 1-2 weeks before the TKA. Depending on the patient, it took a total of 1.5-2 hours to complete the questionnaires and execute the functional tests. The same measurements were performed 4 and 12 months after the surgery, and the time required for these was the same as in the initial measurements (1.5-2 hours). Regarding the functional tests, the test scores were not calculated within the testing situation; however, the physiotherapist noted the measurement results on the test form, which were then saved in an electronic format. The results obtained from the tests were compared throughout the study between the patient’s own results and between the intervention and control groups; no comparisons were made with the general reference values of the tests. The knee ROM and swelling, thigh muscle circumference, TUG, stair climbing test, and 6MWT were performed only once. The tests were performed twice only in the static balance test, and the best test result was recorded (the maximum time was 60 seconds). All the measures of the study are presented in . During the AlterG training, the weight-bearing symmetry, step length, stance time symmetry, and cadence during walking were measured . AlterG measurements. Weight-bearing symmetry Weight-bearing symmetry gives information about the symmetry of the stance phase. In pathological situations, the stance phase is shorter on the weaker side . Step length symmetry Step length is the distance from the back of one heel to the back of the other heel . Stance phase symmetry Stance phase is the period when the foot is on the ground . Cadence (steps/min) Cadence is the number of steps taken per minute . Data Collection and Statistical Analysis We estimated that a difference of 30 m in the 6MWT between the two groups would represent a clinically relevant difference . To identify such a difference with 2-sided testing (α=.05 and power of 85%), the study required 31 participants in each group, with the assumption of 20% loss to follow-up. The analysis will use the intention-to-treat principle, which will include all randomized patients. 
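The sample size reasoning above (a 30 m between-group difference in the 6MWT, two-sided α=.05, 85% power, and 20% loss to follow-up) can be reproduced approximately with a standard two-sample t test power calculation. The sketch below is illustrative only; the assumed standard deviation of the 6MWT is a placeholder we chose, since the protocol does not state it.

```python
# Illustrative sample-size sketch for a two-sample t test on the 6MWT.
# The 30 m difference, alpha, power, and 20% dropout come from the protocol;
# the 35 m standard deviation is a hypothetical placeholder.
import math
from statsmodels.stats.power import TTestIndPower

diff_m = 30.0          # clinically relevant 6MWT difference (from the protocol)
sd_m = 35.0            # assumed SD of the 6MWT (placeholder, not from the protocol)
effect_size = diff_m / sd_m

n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.85, alternative="two-sided"
)
n_with_dropout = math.ceil(n_per_group / 0.8)   # inflate for 20% loss to follow-up
print(f"n per group: {math.ceil(n_per_group)}; with 20% dropout: {n_with_dropout}")
```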
Summary statistics will be described using mean and SD, median and IQR, or numbers and percentages. Statistical comparison between the groups will be performed using the t test, Mann-Whitney test, χ2 test, or Fisher-Freeman-Halton test, as appropriate. A linear mixed model or a generalized estimating equation model with an appropriate distribution and link function will be used for the analysis of repeated measurements. In case of a violation of the test assumptions, a bootstrap-type or Monte Carlo method will be used. Normality of the distributions will be evaluated graphically and with the Shapiro-Wilk W test. Stata 18 (StataCorp LP) will be used for the analysis. Access to the data will be granted to the research team of this study. Ethical Considerations The study protocol was approved by the ethics committee of the Hospital District of Helsinki and Uusimaa in 2017 (HUS/3117/2017). Helsinki University Hospital, Peijas Hospital, was included in the study in 2020, and updated ethical and research permissions were received from HUS (HUS 234/2020). Good research ethics practices were maintained in the study in accordance with the Declaration of Helsinki. Before participating in the study, every patient was given an information letter to help them decide whether they wanted to participate; once the decision was made, they were asked to sign a written consent form. The patients had the right to discontinue the trial at any time without giving any reason; however, the data collected before discontinuation can be used for research purposes. No vulnerable groups, such as children, prisoners, or individuals with mental disability, were included. No participant reimbursement was provided, to prevent economic factors from influencing recruitment. Any changes to the protocol were reported to the ethics committee and other relevant parties. Ensuring Data Quality Anonymity and confidentiality were ensured by using numerical codes for the participants. Only the research group members had access to the participants' names. Data protection and storage security were ensured by storing the participant information and questionnaires in a locked cabinet at Orton Orthopaedic Hospital. Data were securely stored with electronic passwords on the hospital's server.
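As a rough illustration of the repeated-measures analysis outlined in the statistical plan above: the protocol specifies Stata 18, so the Python sketch below is only an approximate translation of the group-by-time mixed model, run on simulated placeholder data; the variable names and numbers are assumptions, not study data.

# Rough illustration of the planned group x time analysis of the 6MWT using a
# linear mixed model with a random intercept per patient. The study itself will
# use Stata 18; the data below are simulated placeholders, not trial data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
groups = np.repeat(["control", "intervention"], 31)      # 62 patients, 31 per group
timepoints = ["baseline", "4_months", "12_months"]

rows = []
for pid, grp in enumerate(groups):
    base = rng.normal(400, 60)                            # simulated baseline 6MWT (m)
    for t, gain in zip(timepoints, [0, 40, 60]):          # simulated average change
        extra = 20 if (grp == "intervention" and t != "baseline") else 0
        rows.append({"patient": pid, "group": grp, "time": t,
                     "sixmwt_m": base + gain + extra + rng.normal(0, 25)})
df = pd.DataFrame(rows)

# The group x time interaction estimates whether the groups change differently over time.
model = smf.mixedlm("sixmwt_m ~ C(group) * C(time)", data=df, groups=df["patient"])
print(model.fit().summary())

A generalized estimating equation (the alternative named in the plan) could be sketched analogously with statsmodels' GEE class and an exchangeable working correlation.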
The data collection began in 2018 and concluded in 2022. This study aimed to obtain valuable information on the effect of AlterG training after TKA. AlterG, along with traditional exercises, could be an effective form of rehabilitation that can be performed at home. We hypothesized that AlterG training leads to faster rehabilitation, better walking quality, improved QoL, improved physical activity, and improved overall functioning. At baseline, there were 62 participants in the study, with 31 in each group. Of these, 35 (56%) were women, and 27 (44%) were men, with a mean age of 66 (SD 7) years. The results of this study will be analyzed in 2025 and 2026. Results from this study will be submitted for publication in peer-reviewed international scientific journals and presented at scientific meetings. Expected Findings The expected findings were related to walking ability, health-related QoL, and perceived pain. The expectations were that pain would decrease, walking distance would be longer, and health-related QoL would improve. In addition, we expected that the differences between the groups would be larger in the short term and eventually level out in 12 months. Comparisons With Prior Work Studies on postoperative rehabilitation after TKA including AlterG training are scarce. To our knowledge, Bugbee et al were the only ones who investigated the effects of AlterG training in patients who underwent TKA in an RCT.
Hence, the present study was conducted to further investigate and obtain more information on the effects of AlterG training in patients who underwent TKA. Limitations and Strengths of the Protocol One of the limitations of this study was the risk of dropouts when patients heard that they were not included in the intervention group. Second, not all patients were committed to the 12-month follow-up. Third, the implementation of the study coincided with the COVID-19 pandemic, impacting the progress of the study and leading to dropouts. However, this study also has some strengths. First, this is an RCT study. Second, to the best of our knowledge, there has been only one pilot and feasibility study that directly focused on this issue thus far. With this relatively novel study, it is possible to collect more valuable information on how anti-gravity exercise can be used in rehabilitation after TKA. Third, it investigated aspects of a patient's physical functioning, perceived QoL, and pain rather extensively, thereby providing an opportunity to control various sources of bias such as state of depression and neuropathic pain. Lastly, the study design was planned in a multi-professional manner. Study Significance and Feasibility The results of this study provided information on how AlterG can be used in rehabilitation after TKA. With this knowledge, hospitals may potentially develop and enhance the rehabilitation program for patients who undergo knee arthroplasty. It is also possible to use the research results more widely with other patient groups, such as those with lower limb problems and athletes. Conclusion The results of this study provided information on how AlterG training can be used in rehabilitation after TKA. This information may enable the enhancement and development of a rehabilitation program for patients undergoing TKA.
Prevention and management of intra‐operative complications in maxillary sinus augmentation: A review
ab269672-958c-4134-9719-69f7d2da88e5
11789845
Dentistry[mh]
Intraoperative complications may lead to postoperative infections. Transcrestal approach is less invasive compared to the lateral approach. Lateral approach allows better management of sinus membrane perforations. The transcrestal approach, being a blind technique, requires special care in preoperative planning to avoid complications. INTRODUCTION Maxillary sinus floor elevation, also known as sinus lift, is a commonly performed procedure in oral implantology to increase the available bone height in the posterior maxilla, facilitating successful implant placement. The two most common surgical approaches for maxillary sinus lift are the lateral approach (the first to be proposed over 40 years ago) and the transcrestal approach, aimed at limiting the invasiveness of the conventional procedure while seeking to achieve similar clinical results. The lateral approach involves the creation of a bony window on the maxillary sinus lateral wall, providing direct access to the sinus cavity for membrane elevation and subsequent graft placement. The main advantage of this technique is that the surgeon has the opportunity to directly control the most delicate part of the procedure, namely, the detachment and elevation of the sinus membrane. This allows for optimal lifting of the Schneiderian membrane from the bony walls of the sinus regardless of the size of the cavity and the creation of an adequate sub‐sinus space, which allows for the placement of the graft even in contact with the medial wall of the sinus, optimizing new bone formation. , Transcrestal sinus floor elevation, also known as the osteotome technique, is a minimally invasive approach to sinus augmentation. Unlike the lateral approach, transcrestal sinus floor elevation entails making a minimal antrostomy on the alveolar crest, performed using osteotomes, piezoelectric inserts, or specially designed burs. , , This opening is frequently utilized for implant placement following the elevation of the sinus membrane and grafting procedure. The main limitation of the transcrestal approach is that membrane elevation occurs indirectly, lifted hydrodynamically using saline solution or by granular or injectable biomaterial (paste or gel). , The surgeon lacks direct control over the direction and extent of membrane elevation: in transcrestal sinus floor elevation, predictable exposure of both lateral and medial bone walls occurs primarily in narrow maxillary sinuses. While advancements in biological understanding and surgical techniques have improved the predictability and success rate of maxillary sinus augmentation, intraoperative complications remain a concern for clinicians. In both lateral and transcrestal approaches, they can range from minor issues to more severe events, potentially leading to surgical failure or increased postoperative morbidity. Preventing and managing possible intraoperative complications is a crucial point for improving patient safety and treatment success. Clinicians should anticipate and address potential risks through strategic planning and execution and, at the same time, be aware that effective management of complications is essential for minimizing adverse effects and enhancing surgical procedure success. , Throughout this review, we will examine various aspects of intraoperative complications in both lateral and transcrestal maxillary sinus augmentation, including anatomical considerations, surgical techniques, graft materials, and patient‐related factors. 
By addressing each of these components comprehensively, we aim to offer practical guidance and recommendations that can inform clinical decision‐making and, ultimately, improve clinical outcomes. INTRAOPERATIVE COMPLICATIONS 2.1 Sinus membrane perforation Sinus membrane perforation is the most frequent complication in maxillary sinus augmentation with lateral approach, with an average frequency ranging from 15.7% to 23.1%, , but with significant variations across individual studies (0%–60%) depending on the surgeon's experience, surgical technique, anatomical conditions, and patient‐related factors. , , , Mean perforation rate reported for the transcrestal approach is lower (3.1%–6.4%), , , , but it should be noted that a significant number of perforations may not have been detected, given the blind nature of this technique. Indeed, when endoscopy was employed to confirm sinus membrane perforation during transcrestal sinus floor elevation in cadaver specimens, this rate increased significantly to 40%. Perforation of the sinus membrane can result in dissemination of grafting material into the sinus cavity, potentially compromising the patency of the ostiomeatal complex and triggering local inflammation often leading to postoperative sinusitis. , , Recognizing and addressing risk factors associated with membrane damage is essential to achieving successful outcomes in sinus augmentation procedures. 2.1.1 Risk factors and prevention Antrostomy technique Lateral approach Even if some clinical studies reported no difference in membrane perforation risk between rotary and piezoelectric instruments, , , systematic reviews and meta‐analyses concluded that membrane perforations during lateral sinus augmentation may be significantly reduced applying piezoelectric devices. , However, in studies comparing the use of piezoelectric bone surgery for creating a lateral access to the sinus, outcomes varied depending on the surgical approach. When bone window outlining and reflection into the sinus were performed, perforation prevalence was 17.6%, similar to rotary instruments. However, the prevalence decreased significantly to 4.7% when ultrasonic instruments were used to thin the lateral wall before opening the window. This approach allows, especially when the lateral wall of the sinus is thick, to have better surgical control, to perceive the proximity to the membrane thanks to the change in color (the thinned area becomes darker because the sinus cavity is visible through it in transparency), and to clearly identify the position of Underwood septa, if present. Overall, within the limitations of the available studies, it appeared that thinning the lateral wall with ultrasonic instruments or bone scrapers reduces the incidence of accidental Schneiderian membrane perforations during antrostomy. Transcrestal approach There are no studies in the literature directly comparing the various techniques used to create crestal access to the maxillary sinus (manual or electric‐driven osteotomes, piezosurgery, various burs designed for this specific application) with regard to perforation prevention. In the absence of sufficient evidence, it is suggested to apply the technique in which the operator is experienced and has achieved a good learning curve to limit the risk of perforation at this stage of the surgical protocol. 2.1.2 Anatomy Membrane thickness The thickness of the sinus membrane has long been a subject of debate. 
While some consider a thick membrane as indicative of resilience and view a thin membrane as fragile, , studies on cadaver specimens have shown that membrane thickness does not consistently correlate with resistance to tearing. Similarly, it has been demonstrated that sinus membranes measuring 1–1.5 mm in thickness exhibit greater resilience compared to those that are thinner or thicker. Since the literature reports contradictory information, it is difficult to predict the risk of perforation based solely on the thickness of the membrane observed on CBCT, which is often overestimated. Gingival phenotype This parameter may also be considered during pre‐surgical planning to aid in assessing the risk of membrane perforation. It has been noted that a thick gingival phenotype is correlated with a thick sinus membrane, and vice versa. , However, as discussed earlier, establishing a direct correlation between phenotype and the risk of perforation presents challenges. Sinus width The width of the sinus cavity is an important factor to consider in the risk assessment for Schneiderian membrane perforations. , , This parameter can be assessed by measuring the angle between the buccal and palatal walls on CBCT cross‐section images. Normally, this angle is narrower in the most mesial area of the cavity as we approach the anterior wall, while it tends to become wider as we move in the distal direction. In the lateral approach, when this angle is <30°, the prevalence of perforation exceeds 60%, decreasing as the angle becomes wider. This could be due to the difficulty in finding the correct cleavage plane for membrane detachment and therefore using manual instruments appropriately in such a narrow space. Therefore, the anterior part of the sinus cavity, being a narrow zone, represents a high‐risk area when performing lateral approach , (Figure ). Consequently, the surgical window should be positioned as anteriorly as possible to allow for direct visualization and dissection of the membrane in the most critical area. Interestingly, the width of the sinus cavity seems to have completely different effects on influencing the risk of membrane perforation in the transcrestal approach. A recent multicenter study conducted on 430 patients showed a significant correlation between bucco‐palatal sinus width and membrane perforation, with an extremely low perforation rate observed in narrow sinuses and a much higher incidence in wide sinuses. This finding is a clinical confirmation of a principle described by Pommer et al. (2009), demonstrating that the force needed for membrane detachment during transcrestal sinus elevation increases with the size of the elevated area. When this force surpasses the sinus membrane elastic properties, perforation can occur. In narrow sinuses, where the elevated area is smaller compared to wide sinuses, higher elevation heights can typically be achieved before membrane tearing occurs. It is noteworthy that this situation is the exact opposite of what happens in sinus floor elevation with a lateral approach, creating an interesting complementarity between the two surgical techniques. Palato‐nasal recess The palato‐nasal recess, located between the roof of the hard palate and the lateral wall of the nasal cavity, can be found at various heights on the medial wall of the maxillary sinus. The average height of this recess gradually decreases from the premolar to the molar sites. 
Its position and angulation can impact the level of difficulty encountered during membrane elevation on the medial wall, in both lateral and transcrestal approaches: the sharper this angle, the higher the risk of perforation. Taking into account both the location and angulation, ~15% of premolar sites may exhibit an acute-angled palato-nasal recess, complicating membrane elevation. In contrast, this condition is observed in only about 2% of molar sites within the surgical area of sinus floor elevation. Thickness of the buccal wall The thickness of the lateral wall plays a role in the risk of perforation only during the lateral approach. As noted previously, ultrasonic erosion of the lateral wall is the safest approach to avoid membrane perforation. However, if the lateral bone wall is thick, erosion can be time-consuming, and complete removal may be preferred. Nevertheless, complete removal carries a risk of perforation due to limited surgical control, especially if the membrane is strongly attached and in the presence of Underwood septa. To prevent tearing, the bony lid can be divided into several pieces and detached one by one (Figure ). This approach reduces the pull-out force but is contraindicated if there is an intraosseous passage of the alveolo-antral artery. Clearly, the thickness of the buccal wall has no influence on the risk of sinus membrane perforation during a transcrestal approach. Septa The identification of Underwood septa underscores the importance of thorough 3D preoperative imaging to meticulously assess the internal sinus anatomy, given its significant impact on the surgical approach. The anticipated risk of perforation is lower with medio-lateral septa than with antero-posterior septa, necessitating a specific technique tailored to the anatomy. As an example, when the sinus is divided into three different cavities by two full medio-lateral septa, it is preferable to plan the creation of three windows. If an antero-posteriorly oriented septum is encountered, it should be removed using ultrasonic osteotomy at its base after elevating the membrane to expose the septum. Alternatively, it is possible to combine the buccal approach with a palatal window without the need to remove the septum. An innovative approach has been suggested when the sinus presents complex septa and sinus floor convolutions. This technique involves partially cutting and removing the septa, followed by mucosal elevation without graft placement. After a few months, the sinus mucosa, thickened by scarring, allows for more predictable membrane elevation. Also in the transcrestal approach, it is essential to precisely identify the position and orientation of Underwood septa to assess any potential impact on the regenerative procedure. If a medio-lateral septum is present at the augmentation site, sinus crestal access and subsequent grafting should be planned either mesially or distally to it (Figure ). When the septum orientation is antero-posterior, crestal osteotomy and graft insertion should be carried out medially or laterally to it. However, this last option is feasible only if the sinus cavity is wide enough in the medio-lateral direction to allow proper implant placement at the crestal level. Residual bone height (RBH) Regarding this parameter, due to conflicting evidence in the literature, reaching a definitive conclusion is challenging for the lateral approach. An exception may arise when the residual bone height (RBH) is <4 mm and implants are placed simultaneously.
To reduce the risk of fracture of the residual bone crest due to implant insertion in undersized sites, the surgical window should be moved apically. , However, in such cases, the detachment of the membrane from the lower border of the window to the sinus floor will be performed blindly and may pose a higher risk of tearing, particularly in the presence of septa. Regarding the transcrestal approach, the available evidence in the literature does not demonstrate a significant direct influence of residual bone crest height on the risk of membrane perforation. On the contrary, from a clinical perspective, a very low residual crestal height may improve surgical visibility, allowing for better control by the operator over membrane integrity and the initial stages of biomaterial insertion. Location and type of edentulism In the context of the lateral approach, a study indicates a higher incidence of perforations (41.2%) in premolar–molar edentulous areas compared to premolar sites (16.7%) and molar sites (26.2%). However, these findings somewhat conflict with the findings regarding the influence of sinus width on membrane perforation during lateral sinus augmentation. Managing a single missing tooth appears to present greater challenges. There is no evidence supporting direct associations between the risk of membrane perforation, location, and type of edentulism in transcrestal sinus lift procedures. 2.1.3 Patient‐related factors Surgical access In the lateral approach, the surgical access can pose a risk due to the necessity of working perpendicularly to the lateral wall for maximum effectiveness. This requirement can be met more or less easily depending on the morphological biotype of the patient. Achieving this access can be more challenging in brachycephalic patients compared to dolichocephalic patients. In transcrestal sinus augmentation, membrane perforation risk increases in regions with a sloped sinus floor: in these cases, the critical moment is during the execution of the crestal access, both with osteotomes and specific burs. The instrument initially contacts the sinus membrane at the point where the remaining bone crest is at its lowest height. As the antrostomy procedure progresses, the instrument continues to actively engage with the membrane at this location, thereby increasing the risk of tears. , Smokers Several clinical studies and meta‐analyses have demonstrated a strong association between the prevalence of membrane perforations during sinus floor elevation with lateral approach and smoking, , , although it was not possible yet to correlate risk with the number of cigarettes smoked per day. Although it seems reasonable to assume that membrane changes induced by smoking similarly affect the risk of perforation in both the lateral and transcrestal approaches, insufficient scientific evidence is currently available to confirm this hypothesis. 2.1.4 How to diagnose a perforation? Timely identification of perforations is critical for effective management. Although the Valsalva maneuver is commonly used, it has limitations and may not always be the most reliable method, especially showing a significant number of false negatives, particularly in transcrestal approaches. A simpler alternative involves injecting sterile saline solution into the sub‐antral space and observing if the patient feels fluid flowing into their nose, indicating a breach. 
The use of endoscopy is undoubtedly the most accurate and reliable method for detecting perforations in both lateral and transcrestal approaches, but this equipment is typically not readily available in routine clinical practice. Given the challenge of accurately assessing sinus membrane resistance capacity, the use of a more comprehensive tool to evaluate the level of difficulty of the specific case, as proposed by Testori and colleagues, can be beneficial. Based on the score obtained from considering anatomical and patient‐related parameters, we can determine whether to adopt the lateral approach, which enables precise visualization and treatment of perforations. The impact of perforations on new bone formation and implant success is still a subject of debate, , although recent meta‐analyses have indicated a negative effect of intraoperative membrane tearing on these outcomes. , , However, proper management of the complication and appropriate repair of the membrane tear are fundamental elements to ensure the success of the regenerative procedure. , , 2.2 Management of membrane perforations Perforations often occur during the creation of the lateral window, especially when the osteotomy line grazes the edge, making the breach partially hidden (Figure ). Adjusting the shape of the window to fully expose the perforation (Figure ) is an important initial step, as achieving optimal intraoperative visibility of the area is essential for a correct management of this complication. Starting the detachment of the membrane from the side opposite the perforation (Figure ) can exploit membrane elasticity to reduce and partially close the tear (Figure ). The goal is to work on the membrane, continuing to detach it without causing an increase in the size of the perforation and achieving a secure seal to prevent graft material migration into the sinus cavity. Repair using a collagen membrane is the most commonly utilized method for fixing perforations and maintaining airtightness, with diverse techniques proposed across various studies. , However, even with collagen membrane repair, the integrity of the Schneiderian membrane and the potential for new bone formation could be compromised. , , This challenge arises from the difficulty in assessing membrane resistance to the pressure applied during graft condensation. To address this issue, it is important to compact the graft material against solid bone surfaces, avoiding direct pressure on the membrane to prevent re‐opening of the treated perforation or the creation of new tears. Previous reports showed a higher incidence of sinusitis (31.4%) in cases of membrane perforation, despite attempts to close the perforation with resorbable membranes, indicating that membrane stability during graft placement may not always be guaranteed. To address this issue, techniques such as the Loma Linda Pouch Technique involve the use of a large resorbable membrane that is folded into the sinus to fully enclose the graft material positioned at its center. However, during the membrane repair procedure, it is important not only to contain the graft material but also to preserve the blood and cellular supply to the grafted area, facilitating new bone formation. From a biological point of view, it is important to note that the collagen membrane used in the Loma Linda pouch technique completely isolates the graft material from the vascular and cellular supply originating from the sinus walls during the early phases of healing. 
An alternative that allows for stabilizing the membrane without hindering the biological processes necessary for new bone formation is the Tattone Technique. This approach involves shaping the membrane appropriately and then fixing it with titanium pins on the medial wall and, if necessary, on the buccal wall (Figure ). This technique is particularly useful when the tear is located near the medial wall, a frequent occurrence in anatomies where an acute-angled palato-nasal recess is present (Figure ). Another possibility proposed in the literature to manage membrane perforations is to suture them with 6-0 or 7-0 resorbable thread. This option may be valid only when dealing with thick membranes; attempting to suture a perforation in a thin membrane could be detrimental and could potentially widen the perforation. Another promising strategy to mitigate complications related to granule loss through a perforation could involve covering the perforation with PRF membranes, either with PRF also used as the grafting material or without any grafting material, with reported implant survival rates of 100% and 98.7%, respectively. In cases of large perforations, a preferable course of action may involve aborting the procedure and closing the flap, allowing a waiting period of 4-8 weeks before resuming the procedure using a split-thickness approach. In the transcrestal approach, Tavelli and colleagues (2020) suggested a classification guiding the clinical management of intraoperative perforations (summarized schematically in the sketch at the end of this section). If the perforation occurs during the creation of the crestal antrostomy (Type 1), the clinician must assess whether it is possible to place a short implant, based on the height of the residual bone crest. If this is feasible, the implant should be inserted after protecting the sinus membrane with a collagen sponge, without placing any graft material. If the height of the residual alveolar crest does not allow for the placement of a short implant, it is suggested to proceed with creating a lateral window to manage the perforation. The same treatment approach should be adopted in cases of perforations occurring during membrane elevation or graft insertion (Type 2). If the perforation occurs during implant placement (Type 3), characterized by a diffuse radiopacity appearance on clinical examination, the patient should be closely monitored with frequent follow-ups. If symptoms of sinusitis develop (such as chronic nasal drainage, pain, or the presence of a mucosal fistula), the patient should be treated with medical therapy, and, in consultation with the otolaryngologist, partial or complete removal of the graft should be considered. 2.2.1 Delamination Partial damage to the Schneiderian membrane may occur either during the lateral antrostomy or while detaching and elevating the membrane, leading to a tear in its periosteal component (Figure ). Such damage is likely to evolve into a full perforation during the continuation of the surgical procedure or postoperatively, owing to the fragility of the pseudostratified respiratory epithelium. These lesions require a treatment approach similar to that used for an actual perforation. Unfortunately, due to the limited intraoperative visibility of the transcrestal approach, detecting membrane delamination with this technique is virtually impossible.
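Purely as a schematic summary of the transcrestal perforation classification described above, the decision logic can be restated as a small function; this is not a clinical decision tool, and the stage labels and inputs below simply paraphrase the prose.

# Schematic restatement of the transcrestal perforation management logic
# described above (Tavelli et al. 2020, as summarized in the text).
# Not a clinical decision tool; inputs and wording are simplified.
def manage_transcrestal_perforation(perforation_stage: str,
                                    rbh_allows_short_implant: bool = False,
                                    sinusitis_symptoms: bool = False) -> str:
    if perforation_stage == "crestal_antrostomy":                         # Type 1
        if rbh_allows_short_implant:
            return ("Place a short implant after protecting the membrane "
                    "with a collagen sponge; do not place graft material.")
        return "Convert to a lateral window to manage the perforation."
    if perforation_stage in ("membrane_elevation", "graft_insertion"):    # Type 2
        return "Convert to a lateral window to manage the perforation."
    if perforation_stage == "implant_placement":                          # Type 3
        if sinusitis_symptoms:
            return ("Medical therapy and, in consultation with the otolaryngologist, "
                    "consider partial or complete graft removal.")
        return "Monitor closely with frequent follow-ups."
    raise ValueError(f"unknown stage: {perforation_stage}")

print(manage_transcrestal_perforation("crestal_antrostomy", rbh_allows_short_implant=True))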
Sinus membrane perforation Sinus membrane perforation is the most frequent complication in maxillary sinus augmentation with lateral approach, with an average frequency ranging from 15.7% to 23.1%, , but with significant variations across individual studies (0%–60%) depending on the surgeon's experience, surgical technique, anatomical conditions, and patient‐related factors. , , , Mean perforation rate reported for the transcrestal approach is lower (3.1%–6.4%), , , , but it should be noted that a significant number of perforations may not have been detected, given the blind nature of this technique. Indeed, when endoscopy was employed to confirm sinus membrane perforation during transcrestal sinus floor elevation in cadaver specimens, this rate increased significantly to 40%. Perforation of the sinus membrane can result in dissemination of grafting material into the sinus cavity, potentially compromising the patency of the ostiomeatal complex and triggering local inflammation often leading to postoperative sinusitis. , , Recognizing and addressing risk factors associated with membrane damage is essential to achieving successful outcomes in sinus augmentation procedures. 2.1.1 Risk factors and prevention Antrostomy technique Lateral approach Even if some clinical studies reported no difference in membrane perforation risk between rotary and piezoelectric instruments, , , systematic reviews and meta‐analyses concluded that membrane perforations during lateral sinus augmentation may be significantly reduced applying piezoelectric devices. , However, in studies comparing the use of piezoelectric bone surgery for creating a lateral access to the sinus, outcomes varied depending on the surgical approach. When bone window outlining and reflection into the sinus were performed, perforation prevalence was 17.6%, similar to rotary instruments. However, the prevalence decreased significantly to 4.7% when ultrasonic instruments were used to thin the lateral wall before opening the window. This approach allows, especially when the lateral wall of the sinus is thick, to have better surgical control, to perceive the proximity to the membrane thanks to the change in color (the thinned area becomes darker because the sinus cavity is visible through it in transparency), and to clearly identify the position of Underwood septa, if present. Overall, within the limitations of the available studies, it appeared that thinning the lateral wall with ultrasonic instruments or bone scrapers reduces the incidence of accidental Schneiderian membrane perforations during antrostomy. Transcrestal approach There are no studies in the literature directly comparing the various techniques used to create crestal access to the maxillary sinus (manual or electric‐driven osteotomes, piezosurgery, various burs designed for this specific application) with regard to perforation prevention. In the absence of sufficient evidence, it is suggested to apply the technique in which the operator is experienced and has achieved a good learning curve to limit the risk of perforation at this stage of the surgical protocol. 2.1.2 Anatomy Membrane thickness The thickness of the sinus membrane has long been a subject of debate. While some consider a thick membrane as indicative of resilience and view a thin membrane as fragile, , studies on cadaver specimens have shown that membrane thickness does not consistently correlate with resistance to tearing. 
Similarly, it has been demonstrated that sinus membranes measuring 1–1.5 mm in thickness exhibit greater resilience compared to those that are thinner or thicker. Since the literature reports contradictory information, it is difficult to predict the risk of perforation based solely on the thickness of the membrane observed on CBCT, which is often overestimated. Gingival phenotype This parameter may also be considered during pre‐surgical planning to aid in assessing the risk of membrane perforation. It has been noted that a thick gingival phenotype is correlated with a thick sinus membrane, and vice versa. , However, as discussed earlier, establishing a direct correlation between phenotype and the risk of perforation presents challenges. Sinus width The width of the sinus cavity is an important factor to consider in the risk assessment for Schneiderian membrane perforations. , , This parameter can be assessed by measuring the angle between the buccal and palatal walls on CBCT cross‐section images. Normally, this angle is narrower in the most mesial area of the cavity as we approach the anterior wall, while it tends to become wider as we move in the distal direction. In the lateral approach, when this angle is <30°, the prevalence of perforation exceeds 60%, decreasing as the angle becomes wider. This could be due to the difficulty in finding the correct cleavage plane for membrane detachment and therefore using manual instruments appropriately in such a narrow space. Therefore, the anterior part of the sinus cavity, being a narrow zone, represents a high‐risk area when performing lateral approach , (Figure ). Consequently, the surgical window should be positioned as anteriorly as possible to allow for direct visualization and dissection of the membrane in the most critical area. Interestingly, the width of the sinus cavity seems to have completely different effects on influencing the risk of membrane perforation in the transcrestal approach. A recent multicenter study conducted on 430 patients showed a significant correlation between bucco‐palatal sinus width and membrane perforation, with an extremely low perforation rate observed in narrow sinuses and a much higher incidence in wide sinuses. This finding is a clinical confirmation of a principle described by Pommer et al. (2009), demonstrating that the force needed for membrane detachment during transcrestal sinus elevation increases with the size of the elevated area. When this force surpasses the sinus membrane elastic properties, perforation can occur. In narrow sinuses, where the elevated area is smaller compared to wide sinuses, higher elevation heights can typically be achieved before membrane tearing occurs. It is noteworthy that this situation is the exact opposite of what happens in sinus floor elevation with a lateral approach, creating an interesting complementarity between the two surgical techniques. Palato‐nasal recess The palato‐nasal recess, located between the roof of the hard palate and the lateral wall of the nasal cavity, can be found at various heights on the medial wall of the maxillary sinus. The average height of this recess gradually decreases from the premolar to the molar sites. Its position and angulation can impact the level of difficulty encountered during membrane elevation on the medial wall, in both lateral and transcrestal approaches: the sharper this angle, the higher the risk of perforation. 
Taking into account both the location and angulation, ~15% of premolar sites may exhibit an acute‐angled palato‐nasal recess, complicating membrane elevation. In contrast, this condition is observed in only about 2% of molar sites within the surgical area of sinus floor elevation. , Thickness of the buccal wall The thickness of the lateral wall plays a role in the risk of perforation only during lateral approach. , , As said previously, the ultrasonic erosion of the lateral wall is the safest approach to avoid membrane perforation. However, if the lateral bone wall is thick, erosion can be time‐consuming, and complete removal may be preferred. , Nevertheless, complete removal carries a risk of perforation due to limited surgical control, especially if the membrane is strongly attached and in the presence of Underwood septa. To prevent tearing, the bony lid can be divided into several pieces and detached one by one (Figure ). This approach reduces the pull‐out force but is contraindicated if there is an intraosseous passage of the alveolo‐antral artery. Clearly, the thickness of the buccal wall has no influence on the risk of sinus membrane perforation during a transcrestal approach. Septa The identification of Underwood septa underscores the importance of thorough 3D preoperative imaging to meticulously assess the internal sinus anatomy, given its significant impact on surgical approach. The anticipated risk of perforation is lower with medio‐lateral septa compared to antero‐posterior septa, necessitating the implementation of a specific technique tailored to the anatomy. As an example, when the sinus is divided into three different cavities by two full medio‐lateral septa, it is preferable to plan the creation of three windows. If encountering an antero‐posteriorly oriented septum, it should be removed using ultrasonic osteotomy at its base after elevating the membrane to expose the septum. Alternatively, it is possible to combine the buccal approach with a palatal window without the need to remove the septum. An innovative approach has been suggested when the sinus presents complex septa and sinus floor convolutions. This technique involves partially cutting and removing septa followed by mucosal elevation without graft placement. After a few months, the sinus mucosa thickened due to scarring, allows for more predictable membrane elevation. Also in the transcrestal approach, it is essential to precisely identify the position and orientation of Underwood septa to assess any potential impact on the regenerative procedure. If a medio‐lateral septum is present at the augmentation site, sinus crestal access and subsequent grafting should be planned either mesially or distally to it (Figure ). When the septum orientation is antero‐posterior, crestal osteotomy and graft insertion should be carried out medially or laterally to it. However, this last option is feasible only if the sinus cavity is wide enough in medio‐lateral direction to allow a proper implant placement at crestal level. Residual bone height (RBH) Regarding this parameter, due to conflicting evidence in the literature, reaching a definitive conclusion is challenging for the lateral approach. , , An exception may arise when the residual bone height (RBH) is <4 mm and implants are placed simultaneously. To reduce the risk of fracture of the residual bone crest due to implant insertion in undersized sites, the surgical window should be moved apically. 
, However, in such cases, the detachment of the membrane from the lower border of the window to the sinus floor will be performed blindly and may pose a higher risk of tearing, particularly in the presence of septa. Regarding the transcrestal approach, the available evidence in the literature does not demonstrate a significant direct influence of residual bone crest height on the risk of membrane perforation. On the contrary, from a clinical perspective, a very low residual crestal height may improve surgical visibility, allowing for better control by the operator over membrane integrity and the initial stages of biomaterial insertion. Location and type of edentulism In the context of the lateral approach, a study indicates a higher incidence of perforations (41.2%) in premolar–molar edentulous areas compared to premolar sites (16.7%) and molar sites (26.2%). However, these findings somewhat conflict with the findings regarding the influence of sinus width on membrane perforation during lateral sinus augmentation. Managing a single missing tooth appears to present greater challenges. There is no evidence supporting direct associations between the risk of membrane perforation, location, and type of edentulism in transcrestal sinus lift procedures. 2.1.3 Patient‐related factors Surgical access In the lateral approach, the surgical access can pose a risk due to the necessity of working perpendicularly to the lateral wall for maximum effectiveness. This requirement can be met more or less easily depending on the morphological biotype of the patient. Achieving this access can be more challenging in brachycephalic patients compared to dolichocephalic patients. In transcrestal sinus augmentation, membrane perforation risk increases in regions with a sloped sinus floor: in these cases, the critical moment is during the execution of the crestal access, both with osteotomes and specific burs. The instrument initially contacts the sinus membrane at the point where the remaining bone crest is at its lowest height. As the antrostomy procedure progresses, the instrument continues to actively engage with the membrane at this location, thereby increasing the risk of tears. , Smokers Several clinical studies and meta‐analyses have demonstrated a strong association between the prevalence of membrane perforations during sinus floor elevation with lateral approach and smoking, , , although it was not possible yet to correlate risk with the number of cigarettes smoked per day. Although it seems reasonable to assume that membrane changes induced by smoking similarly affect the risk of perforation in both the lateral and transcrestal approaches, insufficient scientific evidence is currently available to confirm this hypothesis. 2.1.4 How to diagnose a perforation? Timely identification of perforations is critical for effective management. Although the Valsalva maneuver is commonly used, it has limitations and may not always be the most reliable method, especially showing a significant number of false negatives, particularly in transcrestal approaches. A simpler alternative involves injecting sterile saline solution into the sub‐antral space and observing if the patient feels fluid flowing into their nose, indicating a breach. The use of endoscopy is undoubtedly the most accurate and reliable method for detecting perforations in both lateral and transcrestal approaches, but this equipment is typically not readily available in routine clinical practice. 
Given the challenge of accurately assessing sinus membrane resistance capacity, the use of a more comprehensive tool to evaluate the level of difficulty of the specific case, as proposed by Testori and colleagues, can be beneficial. Based on the score obtained from considering anatomical and patient‐related parameters, we can determine whether to adopt the lateral approach, which enables precise visualization and treatment of perforations. The impact of perforations on new bone formation and implant success is still a subject of debate, , although recent meta‐analyses have indicated a negative effect of intraoperative membrane tearing on these outcomes. , , However, proper management of the complication and appropriate repair of the membrane tear are fundamental elements to ensure the success of the regenerative procedure. , , Risk factors and prevention Antrostomy technique Lateral approach Even if some clinical studies reported no difference in membrane perforation risk between rotary and piezoelectric instruments, , , systematic reviews and meta‐analyses concluded that membrane perforations during lateral sinus augmentation may be significantly reduced applying piezoelectric devices. , However, in studies comparing the use of piezoelectric bone surgery for creating a lateral access to the sinus, outcomes varied depending on the surgical approach. When bone window outlining and reflection into the sinus were performed, perforation prevalence was 17.6%, similar to rotary instruments. However, the prevalence decreased significantly to 4.7% when ultrasonic instruments were used to thin the lateral wall before opening the window. This approach allows, especially when the lateral wall of the sinus is thick, to have better surgical control, to perceive the proximity to the membrane thanks to the change in color (the thinned area becomes darker because the sinus cavity is visible through it in transparency), and to clearly identify the position of Underwood septa, if present. Overall, within the limitations of the available studies, it appeared that thinning the lateral wall with ultrasonic instruments or bone scrapers reduces the incidence of accidental Schneiderian membrane perforations during antrostomy. Transcrestal approach There are no studies in the literature directly comparing the various techniques used to create crestal access to the maxillary sinus (manual or electric‐driven osteotomes, piezosurgery, various burs designed for this specific application) with regard to perforation prevention. In the absence of sufficient evidence, it is suggested to apply the technique in which the operator is experienced and has achieved a good learning curve to limit the risk of perforation at this stage of the surgical protocol. Lateral approach Even if some clinical studies reported no difference in membrane perforation risk between rotary and piezoelectric instruments, , , systematic reviews and meta‐analyses concluded that membrane perforations during lateral sinus augmentation may be significantly reduced applying piezoelectric devices. , However, in studies comparing the use of piezoelectric bone surgery for creating a lateral access to the sinus, outcomes varied depending on the surgical approach. When bone window outlining and reflection into the sinus were performed, perforation prevalence was 17.6%, similar to rotary instruments. However, the prevalence decreased significantly to 4.7% when ultrasonic instruments were used to thin the lateral wall before opening the window. 
Management of membrane perforations

Perforations often occur during the creation of the lateral window, especially when the osteotomy line grazes the edge, making the breach partially hidden (Figure ).
Adjusting the shape of the window to fully expose the perforation (Figure ) is an important initial step, as achieving optimal intraoperative visibility of the area is essential for correct management of this complication. Starting the detachment of the membrane from the side opposite the perforation (Figure ) can exploit membrane elasticity to reduce and partially close the tear (Figure ). The goal is to continue detaching the membrane without enlarging the perforation, and to achieve a secure seal that prevents graft material migration into the sinus cavity.

Repair using a collagen membrane is the most commonly utilized method for fixing perforations and maintaining airtightness, with diverse techniques proposed across various studies. , However, even with collagen membrane repair, the integrity of the Schneiderian membrane and the potential for new bone formation could be compromised. , , This challenge arises from the difficulty in assessing membrane resistance to the pressure applied during graft condensation. To address this issue, it is important to compact the graft material against solid bone surfaces, avoiding direct pressure on the membrane to prevent re‐opening of the treated perforation or the creation of new tears. Previous reports showed a higher incidence of sinusitis (31.4%) in cases of membrane perforation, despite attempts to close the perforation with resorbable membranes, indicating that membrane stability during graft placement may not always be guaranteed. To improve this stability, techniques such as the Loma Linda pouch technique involve the use of a large resorbable membrane that is folded into the sinus to fully enclose the graft material positioned at its center. However, during the membrane repair procedure, it is important not only to contain the graft material but also to preserve the blood and cellular supply to the grafted area, facilitating new bone formation. From a biological point of view, it is important to note that the collagen membrane used in the Loma Linda pouch technique completely isolates the graft material from the vascular and cellular supply originating from the sinus walls during the early phases of healing.

An alternative that allows the membrane to be stabilized without hindering the biological processes necessary for new bone formation is the Tattone technique. This approach involves shaping the membrane appropriately and then fixing it with titanium pins on the medial wall and, if necessary, on the buccal wall (Figure ). This technique is particularly useful when the tear is located near the medial wall, a frequent occurrence in anatomies where an acute‐angled palato‐nasal recess is present (Figure ). , , Another possibility proposed in the literature is to suture the perforation with 6‐0 or 7‐0 resorbable sutures. This option may be valid only when dealing with thick membranes; attempting to suture a perforation in a thin membrane could be detrimental and could potentially widen it. Another promising strategy to mitigate complications related to granule loss through the perforation could involve using PRF membranes , to cover the perforation, used either as the grafting material itself or without any grafting material, with reported implant survival rates of 100% and 98.7%, respectively.
In cases of large perforations, a preferable course of action may involve aborting the procedure and closing the flap, allowing for a waiting period of 4–8 weeks before resuming the procedure using a split‐thickness approach.

In the transcrestal approach, Tavelli and colleagues (2020) suggested a classification guiding the clinical management of intraoperative perforations. If the perforation occurs during the creation of the crestal antrostomy (Type 1), the clinician must assess whether it is possible to place a short implant, based on the height of the residual bone crest. If this is feasible, the implant should be inserted after protecting the sinus membrane with a collagen sponge, without placing any graft material. If the height of the residual alveolar crest does not allow for the placement of a short implant, it is suggested to proceed with creating a lateral window to manage the perforation. The same treatment approach should be adopted in cases of perforations occurring during membrane elevation or graft insertion (Type 2). If the perforation occurs during implant placement (Type 3), characterized by a diffuse radiopaque appearance on clinical examination, the patient should be closely monitored with frequent follow‐ups. If symptoms of sinusitis develop (such as chronic nasal drainage, pain, or the presence of a mucosal fistula), the patient should be treated with medical therapy, and, in consultation with the otolaryngologist, partial or complete removal of the graft should be considered.

2.2.1 Delamination

Partial damage to the Schneiderian membrane may occur either during the lateral antrostomy or while detaching and elevating the membrane, leading to a tear in its periosteal component (Figure ). Such damage is likely to transform into a full perforation during the continuation of the surgical procedure or postoperatively, due to the fragility of the pseudostratified respiratory epithelium. These lesions require a treatment approach similar to that used for an actual perforation. Unfortunately, due to the limited intraoperative visibility of the transcrestal approach, detecting membrane delamination with this technique is virtually impossible.

VASCULAR DAMAGE

Vascular injury, a risk associated only with the lateral approach, can lead to significant bleeding, primarily when the buccal osteotomy damages the intraosseous passage of the alveolo‐antral artery. This artery is an anastomosis between the infraorbital and posterosuperior alveolar arteries. Rosano and colleagues described three possible routes for the artery:

An internal extraosseous pathway, which is usually not detectable on cross‐sectional images because the vessel lies between the lateral wall and the sinus membrane (Figure ). In this situation, there is no risk of bleeding during buccal antrostomy.
A fully intraosseous path within the lateral wall of the maxillary sinus (Figure ), observed in 47% of cases; this should be considered when planning the osteotomy in this area, as it carries a risk of bleeding.

A buccal pathway between the lateral wall and the periosteum (Figure ), which is not identifiable on coronal sections. This may be suspected when an intraosseous pathway is observed in the distal part within a relatively thick vestibular wall; the intraosseous pathway then disappears in the presence of a thinner vestibular wall and reappears mesially within the thickness of a thicker wall. The vestibular extraosseous pathway is believed to result from the externalization of vessels due to pathological horizontal bone resorption following periodontal disease. Although this situation is uncommon, when it is suspected, great care must be taken when elevating the vestibular flap.

The frequency and severity of bleeding rise with the size of the artery. According to Testori et al., the probability of significant bleeding is 10% for a vessel diameter of 0.5–1 mm and increases to 57% for diameters larger than 2 mm. ,

3.1 Prevention and management of the vascular damage

Addressing intraoperative bleeding involves several approaches, including clamping the bleeding site, applying pressure with gauze treated with tranexamic acid, or using bone wax. Each technique carries its own risks, such as the potential for postoperative bleeding recurrence. Another effective method is diathermocoagulation with a bipolar electrosurgical unit, although this poses a risk to the integrity of the sinus membrane. , In cases of intraoperative bleeding, it may be beneficial to avoid suturing the distal incision, as this can facilitate the expulsion of any clots. The most effective approach to prevent bleeding involves careful isolation of the alveolo‐antral artery using piezoelectric surgical devices , (Figure ). Additionally, reducing the size of the bony window and adjusting its positioning, either lower or higher when feasible, can also help minimize the risks of bleeding during the procedure. In this context, the use of 3D‐printed surgical guides derived from digital workflows could represent a significant aid for the clinician, providing real‐time guidance during antrostomy and ensuring precise alignment of surgical steps with predetermined parameters established during the planning phase (Figure ). This approach also reduces the need for intraoperative adjustments, resulting in shorter surgical times and improved overall workflow, ultimately leading to enhanced patient comfort and more predictable surgical outcomes.
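Purely as a planning-aid illustration, the vessel-diameter figures quoted above can be kept alongside the CBCT measurements. The helper below only restates the probabilities reported in the text and deliberately makes no assumption about the 1–2 mm range, which is not covered by the cited figures; the function name and usage are hypothetical.

```python
# Illustrative helper only: it records the bleeding probabilities quoted above
# (Testori et al.) for the alveolo-antral artery diameter measured on CBCT.
# The 1-2 mm range is not covered by the figures cited in the text, so the
# helper reports it explicitly as unreported rather than guessing.
from typing import Optional

def reported_bleeding_probability(vessel_diameter_mm: float) -> Optional[float]:
    """Return the quoted probability of significant bleeding, or None if the
    cited data do not cover this diameter range."""
    if 0.5 <= vessel_diameter_mm <= 1.0:
        return 0.10
    if vessel_diameter_mm > 2.0:
        return 0.57
    return None  # below 0.5 mm or between 1 and 2 mm: not reported in the text

for d in (0.8, 1.5, 2.5):
    p = reported_bleeding_probability(d)
    print(f"{d} mm: {'not reported' if p is None else f'{p:.0%}'}")
```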
NUMBNESS

The occurrence of neurosensory changes following sinus floor elevation with the lateral approach, such as transient numbness on the operated side, is seldom highlighted but can impact patient comfort and recovery.

4.1 Risk factors

This kind of discomfort may be observed on the operated side after surgery as a consequence of severing terminal branches of the infraorbital nerve during the mesial vertical releasing incision. Its frequency is related to the degree of bone atrophy: in more severe cases, the emergence of the infraorbital nerve is nearer to the operative area.

4.2 Prevention

To prevent this issue, a nuanced approach involves making a shallower, partial‐thickness vertical releasing incision in the alveolar mucosa rather than a full‐thickness one. Subsequently, gently parting the incision edges with Metzenbaum scissors can stretch the nerve fibers without cutting them, thereby preserving sensory function (Figure ). An alternative strategy consists of using a triangular flap without any mesial releasing incision. In the transcrestal approach, no extensive exposure of the lateral wall is required and a minimally invasive flap is performed, usually without releasing incisions. Therefore, postoperative numbness is not an issue when using this technique.

IMPLANT DISPLACEMENT

The risk of implant displacement into the sinus cavity requires careful attention when implants are inserted alongside grafting procedures. Achieving solid implant stability is paramount to avoid the risk of the implant migrating into the sinus or moving into sensitive regions.
5.1 Risk factors and prevention

Two primary factors can contribute to implant displacement. The first is a residual bone crest of extremely low quality, which can be addressed by using the osseodensification technique to improve bone density. The second is very limited bone height below the sinus (<3 mm) during simultaneous implant placement. To address this, drilling should be undersized and tapered implants should be used, although this approach may potentially fracture the vestibular wall due to the high pressure transmitted to the bone during implant insertion. To mitigate this risk, the surgical window can be adjusted apically by 8–10 mm, providing a more secure setting for the implant. , Additionally, using cover screws larger than the implant diameter can also be considered.

POOR GRAFT ADAPTATION

To ensure optimal bone quality, it is essential to achieve maximum contact between the graft and the surrounding native bone. This necessitates avoiding gaps (Figure ) by ensuring that the grafting material directly contacts both the medial and anterior walls. An insulin syringe with a beveled tip is employed for this purpose in lateral sinus augmentation. The syringe, loaded with grafting material, is inserted into the sinus cavity in a backward and inward direction until it reaches the wall to be grafted, aligning the bevel towards the wall (Figure ). The material is then injected. It is important that the dimensions of the buccal bone window are appropriate to allow for the execution of this technique, while remaining as small as possible. Even in the transcrestal approach, it is necessary to ensure that the graft comes into contact with the sinus walls to optimize bone formation. , The clinician must take care to angle the tip of the biomaterial syringe in various directions (buccal, palatal, mesial, and distal) if using injectable material, to fill the sub‐antral space as evenly as possible. The same outcome should also be achieved when using particulate graft. Once the biomaterial is placed beneath the membrane, the granules can be gently pushed in various directions using small compactors inserted through the crestal antrostomy. In transcrestal sinus augmentation, a uniform distribution of the grafting material can be reliably achieved when the sinus cavity is narrow in the bucco‐palatal direction.

BENIGN PAROXYSMAL POSITIONAL VERTIGO

Benign paroxysmal positional vertigo (BPPV) is an uncommon complication following transcrestal sinus floor elevation performed using osteotomes. , BPPV can occur due to the displacement of inner ear crystals (otoconia) into the semicircular canals, leading to episodes of vertigo triggered by head movements.
The etiology of BPPV is likely associated with the surgical trauma induced by osteotomes and a surgical hammer during bone malleting and condensation to create the crestal antrostomy, leading to the displacement of otoconia. Prevention strategies include minimizing excessive force or rapid movements during the procedure and ensuring careful manipulation to reduce the risk of otoconia displacement. The use of a magnetic mallet instead of the manual surgical mallet could be an effective strategy for preventing BPPV, as no cases of BPPV have been reported in the literature with the use of this device. However, given the low frequency of this complication and the limited number of studies conducted on this topic, further data are needed to confirm the preventive action of this technology on BPPV. Treatment options for BPPV include canalith repositioning maneuvers (e.g., the Epley maneuver) to guide the displaced otoconia back to their original position within the inner ear, alleviating symptoms of vertigo. Additionally, patients may benefit from vestibular rehabilitation exercises to improve balance and reduce the frequency of vertigo episodes following transcrestal sinus floor elevation.

CONCLUSIONS

Intraoperative complications during sinus augmentation procedures, particularly sinus membrane perforation, pose significant challenges and necessitate careful consideration of various risk factors and preventive strategies. The prevalence of membrane perforation varies between lateral and transcrestal approaches, with distinct implications for surgical techniques and anatomical factors. Risk factors such as antrostomy technique, anatomy (including membrane thickness, gingival phenotype, sinus width, and septa), patient‐related factors (such as surgical access and smoking), and specific procedural aspects must be thoroughly evaluated with meticulous presurgical planning to minimize the occurrence of intraoperative complications. The implementation of advanced techniques and technologies, in both the planning and the operative phases, can enhance surgical precision, ultimately improving patient comfort and treatment predictability. Close collaboration with the otorhinolaryngologist is necessary in managing patients who have experienced intraoperative complications, with the aim of a rapid and effective multidisciplinary resolution of the problem.

The authors declare no conflicts of interest.
It is time for reform: Results from a questionnaire survey on the current status of next generation
135224c1-579e-4101-81d4-26a970591215
11780308
Internal Medicine[mh]
INTRODUCTION

The landscape for surgical professionals in Japan is currently in a transitional phase, triggered by a decreasing number of surgeons, especially in gastrointestinal surgery, and the implementation of legislation concerning work‐style reforms (Figure ). Among the subspecialties in gastrointestinal surgery, hepatobiliary and pancreatic (HBP) surgery demands not only advanced knowledge but also exceptional surgical skills, , , , , making the road to professionalism challenging and time‐consuming; the field of HBP surgery is not exempt from this transitional phase. Recognizing the need to align HBP surgery practices with this contemporary trend, the Japanese Society of Hepato‐Biliary‐Pancreatic Surgery (JSHBPS) took a first step by establishing a dedicated working group named the Next Generation Project (NGP). Comprising JSHBPS members aged 45 years or younger and supervised by a senior professor, the NGP working group formally initiated its activities by conducting a questionnaire survey to clarify the needs and current circumstances of the “next generation” in HBP surgery. In this report, we present the results of the questionnaire survey. Clarifying the needs and current circumstances of next generation HBP surgeons allows the fundamental policy of the NGP working group to be developed, furthering the society's commitment to fostering progressive and sustainable HBP surgery.

METHODS

2.1 Questionnaire survey

The questionnaire was sent to members of the JSHBPS aged 45 years or younger with a valid email address. To minimize the influence of nonresponses, we configured the survey platform to require responses for all key variables, allowing only surveys with no missing data to be submitted. The questionnaire was developed by consensus within the NGP working group and its supervisor, and comprised the following four categories: (i) JSHBPS board certification, (ii) research activity and overseas study, (iii) recruiting, and (iv) work‐life balance. The complete content of this questionnaire is detailed in the supplementary document—Data . The questionnaire was sent out only once to minimize duplicate responses.

2.2 Data analysis

Various questioning styles were employed in this survey. The majority of questions offered only a single answer choice, and these responses were converted into percentages for graphical representation where appropriate. For questions with multiple answer choices, the data were presented as actual numbers within the corresponding groups. In selected questions, percentages or numbers were broken down by specific groups. The survey consisted primarily of closed questions, and the results of this survey consisted of descriptive statistics. Additionally, the data highlighted in the Results section were determined by agreement among NGP members.
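As a minimal sketch of the descriptive tabulation described above (not the authors' actual analysis pipeline), the snippet below assumes a hypothetical CSV export in which each single-choice question occupies one column and each multiple-choice question is stored as a semicolon-separated string; the file and column names are illustrative only.

```python
# Minimal sketch of the descriptive tabulation described in Section 2.2,
# assuming a hypothetical CSV export of the survey platform. Column names
# ("age_group", "reasons_for_choosing_hbp") and the file name are illustrative.
from collections import Counter

import pandas as pd

def single_choice_percentages(df: pd.DataFrame, column: str) -> pd.Series:
    """Convert a single-answer question into percentages of all respondents."""
    counts = df[column].value_counts(dropna=False)
    return (counts / len(df) * 100).round(1)

def multiple_choice_counts(df: pd.DataFrame, column: str, sep: str = ";") -> Counter:
    """Count each option of a multiple-answer question as raw numbers."""
    tally: Counter = Counter()
    for cell in df[column].dropna():
        tally.update(option.strip() for option in cell.split(sep))
    return tally

if __name__ == "__main__":
    responses = pd.read_csv("ngp_survey_responses.csv")  # hypothetical file name
    print(single_choice_percentages(responses, "age_group"))
    print(multiple_choice_counts(responses, "reasons_for_choosing_hbp"))
```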
RESULTS

The questionnaire survey was sent to 1735 JSHBPS members on December 24, 2021, and responses were obtained from 303 members (17.5%) by January 31, 2022.

3.1 Study population

The study population is shown in Table . Out of the 303 respondents, 139 (45.9%) were over 41 years old, followed by 107 (35.3%) aged 36 to 40, 49 (16.2%) aged 31 to 35, and eight (2.6%) aged 24 to 30. Similarly, the distribution of postgraduate years (PGY) among respondents reflected comparable proportions. The vast majority of respondents were male HBP surgeons, accounting for 93.7% ( n = 284), and were affiliated with university or academic centers ( n = 276, 91.1%). At present, nearly half of the respondents ( n = 156, 51.5%) worked in university hospitals or university‐associated hospitals. Among them, 81.7% ( n = 247) were regularly employed as staff doctors, and 12.3% ( n = 37) were Ph.D. students (Supplementary document section 1—Data ).

3.1.1 The board certification system of JSHBPS

The section regarding the board certification of JSHBPS included six questions. The details of the board certification are described in Figure . As shown in Table , 25.1% of the respondents had already obtained JSHBPS board certification, and 72.7% of uncertified surgeons answered that this board certification was their highest priority. Regarding certification by PGY, 57.4% of doctors in PGY 18 or older were already certified, 22.0% in PGY 12 to 17, and 2.6% in PGY 9 to 11 (Figure ). A total of 80% of all respondents worked in board‐certified hospitals, including 156 surgeons in university or university‐related hospitals (Figure ). Surgeons must have worked in board‐certified hospitals for more than 3 years in the last 7 years and have performed more than 50 highly advanced HBP surgeries to be eligible to apply for this certification. In this context, about two‐thirds of the surgeons ( n = 209, 69.0%) had worked for more than 5 years in such hospitals, and the total number of cases operated so far exceeded 50 for 52.1% of the respondents (Figure ). The number of highly advanced surgeries performed in a year is shown in Figure . The most common responses were fewer than nine cases ( n = 141, 46.5%) and 10–29 cases ( n = 134, 44.2%). The last question in this section asked for the respondents' free opinions about the board certification system. Respondents mainly focused on (i) clarifying the judging system, (ii) loosening the applicant requirements, and (iii) providing financial incentives after certification (supplementary document section 2—Data ).

3.1.2 Research activity and overseas study

The section regarding research activity and overseas study included 11 questions.
Most of the respondents had already experienced clinical research, basic research, or both, and three‐quarters of the respondents answered that research activity was necessary for their career formation (Figure ). The detailed number of each respondent's published articles is shown in the supplementary document section 3—Data . Meanwhile, approximately 70% of respondents had never been to overseas or other domestic institutions for research or training, although most of them expressed a willingness to do so (Figure ).

3.1.3 Recruiting

The section regarding recruiting included 11 questions. Recruiting HBP surgeons is one of the biggest concerns for surgeons who suffer from the short‐staffed and burdensome working environment of HBP surgery in Japan. Among the respondents, 98 (32.3%) decided to specialize in HBP surgery during PGY 3–5, 89 (29.4%) during PGY 6–10, 60 (19.8%) during PGY 1–2, and 36 (11.9%) had already aspired to be HBP surgeons even before graduating from medical school (Figure ). When asked the reason for choosing HBP surgery, the most common answer was the technical complexity of the surgeries ( n = 221, 72.9%), followed by the influence of good role models ( n = 110, 36.3%) and the variety of challenging diseases ( n = 84, 27.7%) (Figure ). Correspondingly, the most common answer to the question “What is important in the recruitment of HBP surgeons?” was “Show how HBP surgeries are challenging, rewarding, and attractive.” ( n = 238, 78.5%) (supplementary document section 4.4—Data ). Regarding the number of newcomers in each institution, 106 respondents (35.0%) answered that only one surgeon joins every few years, while six respondents (2.0%) answered more than five doctors per year (Figure ). Meanwhile, regarding recruitment activities, 87 surgeons answered yes (28.7%), with ideas such as holding orientation seminars and social gatherings for young candidates, while 216 (71.3%) answered no, due to lack of time, lack of human resources, or because they recruit candidates to be general surgeons first (Figure ).

3.1.4 Work‐life balance

The final section focused on the work‐life balance and working environment of HBP surgeons. This section included 22 questions. Figure shows the distribution of HBP staff in the respondents' hospitals, with one‐third answering 1–3 staff, another one‐third answering 4–6 staff, and 12.5% answering more than 10 staff. Another chart regarding the number of female staff shows that 71.0% of respondents had no female colleagues, and only 1.7% had more than three female surgeons (supplementary document section 5.3—Data ). In the question about working style, 60.1% work with a team‐based style, while 39.9% work with a surgeon‐based style (Figure ). Figure features the working environment, highlighting the scarcity of off‐duty days or short paid leave for surgeons. Additionally, overtime work exceeding 80 h was found in one‐third of them, despite the upcoming government policy that bans overtime work longer than 80 h. When asked whether they could maintain their current work style a decade later, two‐thirds answered “No” (Figure ).
The vast majority of respondents consisted of male HBP surgeons, accounting for 93.7% ( n = 284), and were affiliated with university or academic centers ( n = 276, 91.1%). At present, nearly half of the respondents ( n = 156, 51.5%) worked in university hospitals or university‐associated hospitals. Among them, 81.7% ( n = 247) were regularly employed as staff doctors, and 12.3% ( n = 37) were Ph.D. students (Supplementary document section 1—Data ). 3.1.1 The board certification system of JSHBPS The section regarding the board certification of JSHBPS included six questions. The details of the board certification are described in Figure . As shown in Table , 25.1% of the respondents had already obtained JSHBPS board certification, and 72.7% of uncertified surgeons answered this board certification was their most prioritized one. Regarding the certification in each PGY, 57.4% of doctors in PGY 18 or older were already certified, 22.0% in PGY 12 to 17, and 2.6% in PGY 9 to 11 (Figure ). A total of 80% of all respondents worked in board‐certified hospitals, including 156 surgeons in university or university‐related hospitals (Figure ). Surgeons must have worked in board‐certified hospitals for more than 3 years in the last 7 years and have performed more than 50 highly advanced HBP surgeries to be eligible for applying for this certification. In this context, about two‐thirds of the surgeons ( n = 209, 69.0%) had worked for more than 5 years in such hospitals, and the total number of operated cases so far was over 50 in 52.1% of the respondents (Figure ). The number of highly advanced surgeries operated in a year is shown in Figure . The most common response was less than nine cases ( n = 141, 46.5%) and 10–29 cases ( n = 134, 44.2%). The last question in this section was given to the respondents' free opinions about the board certification system. Respondents mainly focused on (i) clarifying the judge system, (ii) the loosening of applicant requirements, and (iii) providing financial incentives after certification (supplementary document section 2—Data ). 3.1.2 Research activity and overseas study The section regarding the research activity and overseas study included 11 questions. Most of the respondents had already experienced the clinical, basic, or both types of research, and three‐quarters of the respondents answered that the research activity was necessary for their career formation (Figure ). The detailed number of each respondent's published articles is shown in the supplementary document section 3—Data . Meanwhile, approximately 70% of respondents had never been to overseas or domestic institutions for research or training, although most of them expressed the willingness to do so (Figure ). 3.1.3 Recruiting The section regarding recruiting included 11 questions. Recruiting HBP surgeons is one of the biggest concerns for surgeons who suffer from short‐staffed and burdening working environments of HBP surgeons in Japan. Among the respondents, 98 (32.3%) decided to specialize in HBP surgery during their PGY 3–5, 89 (29.4%) during PGY 6–10, 60 (19.8%) during PGY 1–2, and 36 (11.9%) had already aspired to be HBP surgeons even before graduating from medical school (Figure ). In the question of the reason for choosing HBP surgery, the most common reason was the technical complexity of surgeries ( n = 221, 72.9%), followed by the influence of good role models ( n = 110, 36.3%), and the variety of challenging diseases ( n = 84, 27.7%) (Figure ). 
DISCUSSION This questionnaire survey aimed to investigate the needs and current circumstances of the "next generation" in HBP surgery, comprising the following four categories: (i) the board certification of JSHBPS, (ii) research activity and overseas study, (iii) recruiting, and (iv) work-life balance.
The majority of respondents were highly motivated, university-affiliated, male HBP surgeons, and their responses highlighted the need to optimize HBP surgical training and to support research activity. In contrast, this survey showed that HBP surgeons were not actively engaged in recruiting. It also demonstrated that HBP surgeons face long working hours, and the vast majority recognized the need for work-style reform. The first section of the survey focused on the board certification of JSHBPS, which stands as the primary goal for every HBP surgeon and is highly prioritized, as presented in Figure . Before this certification system was launched by JSHBPS in 2011, there were no programs to monitor and ensure the quality of HBP surgeries. , , , Candidates for this certification are required to have performed more than 50 highly advanced HBP surgeries in the last 7 years, to submit detailed dictations and illustrations of all the surgeries, and to provide an unedited video recording of one of the operated cases. The criteria for evaluation are strict, with a pass rate of approximately 50%, making it significantly more challenging than other board certifications, which typically have pass rates of around 80%. Importantly, this system of designating board-certified expert surgeons and strengthening safety management has improved the mortality rate associated with highly advanced HBP surgeries. , , While the board certification system has reached a mature phase, several issues remain unresolved, as highlighted by various opinions. In addition, the operative caseload available to next-generation HBP surgeons appeared to be insufficient. Balancing the demands of the next generation of HBP surgeons against the significance of the certification should be an ongoing topic of discussion. The second section focused on research activity and overseas study, and overall the respondents' research activity appeared to be very high. As evidence, they have authored many papers as first authors throughout their careers (supplementary document section 3.3–3.7—Data ), and some surgeons had experienced overseas study. Nevertheless, there was a disparity between the actual status and their willingness. This might be explained by the decreasing number of surgeons, limited financial support, or the isolated environment of Japanese institutions. Expanding support for such surgeons and providing more information about overseas study may broaden the horizons of these highly motivated surgeons, leading to an increase in the academic level of Japanese HBP surgery. The third section focused on the recruitment of new HBP surgeons. This study demonstrated that current HBP surgeons typically decide on their career path early in their surgical careers. It is noteworthy that nearly one-third of respondents decided to pursue a career as an HBP surgeon either before choosing their medical specialty or during their time as medical students. Moreover, their motivation to become HBP surgeons is often driven by a desire to overcome the challenges associated with surgery. However, it remains to be seen whether this trend will hold for future HBP surgeons. Although the situation varies across institutions, recruitment activity was not very evident (Figure ). Certainly, given the severe situation created by a decreasing number of surgeons, HBP surgeons under 45 years of age should take on a greater role in recruiting.
The final section focused on work-life balance. Although the number of staff surgeons varied across institutions, a team-based approach seemed to be practiced, as reported by 60.1% of respondents. Nevertheless, 39.9% of respondents reported exceeding the working-hour limit stipulated by the new laws (i.e., 80 h). Notably, two-thirds of respondents acknowledged the unsustainability of their current work styles. Burnout is a strong predictor of career dissatisfaction among surgeons, and the findings of this survey may serve as an important warning for the future of HBP surgery. Moreover, although the percentage of female gastrointestinal surgeons is increasing in Japan, gender equality remains an issue, with significant differences in the personal lives of surgeons based on gender and parental status. The traditional belief that women should bear primary family responsibilities persists among both male and female surgeons, possibly reflecting global trends of gender disparity. , Expanding the team-based approach, offering incentives, task-shifting of non-surgical duties, and implementing structural improvements such as career sharing or on-site child care could be potential solutions to these issues. However, these proposals remain challenging to implement because of the financial and time investments required from institutions and individuals. , , Dedicated, collective, and concerted efforts by both individuals and the institutions where they work are crucial for achieving the most meaningful and impactful improvements in surgeon wellness. The results of this survey may help develop the fundamental policy of the NGP working group. How to minimize the gap between the current circumstances and the aspirations of next-generation HBP surgeons represents a critical and urgent challenge. At the same time, it should be acknowledged that these surgeons often face long working hours and insufficient training opportunities, which can lead to uncertainty about their future. Prioritizing the improvement of surgical experience and the reduction of work hours could sustain their surgical motivation, enhance work-life balance, and create room for recruitment efforts. Challenges related to surgical education, , , research activity, work-life balance, , , , , and recruitment appear to be similar across the globe and in other surgical fields. Sharing experiences and initiatives will be crucial in overcoming these challenges and ensuring the future sustainability of HBP surgery in Japan. The NGP working group can play a pivotal role in facilitating such opportunities. The main limitations of this survey were the relatively low response rate of 17.5% and the composition of the study population, which consisted predominantly of male and university-affiliated surgeons. As a result, the findings might be biased and may not fully reflect the perspectives of all young HBP surgeons. However, the ongoing centralization of HBP surgery in university hospitals and highly specialized centers in Japan suggests that this survey may still provide valuable insights. Although the proportion of female HBP surgeons in this survey was not particularly low compared with that among JSHBPS members (approximately 4%, data from JSHBPS), the absolute number of female respondents was limited. Given the increasing number of female surgeons and the need to capture their perspectives, surveys that specifically focus on female surgeons may be required.
Overall, future studies with a higher response rate and a more diverse population are needed to gain a more comprehensive understanding of the next generation of HBP surgeons in Japan. CONCLUSIONS This questionnaire survey highlighted the current status of next-generation HBP surgeons in Japan. Although they are motivated to acquire advanced surgical skills and recognize the importance of research experience, they face long working hours and insufficient training opportunities. Fundamental reforms, such as revising the training curriculum, improving work styles, and enhancing recruitment, are necessary steps forward. However, these are just the beginning; continued dialogue and collaboration are essential to develop comprehensive and sustainable reforms for the future of HBP surgery in Japan. YK-F, TY, TH and KS: Study concept and design. All authors: Acquisition of data; analysis and interpretation of data. YK-F, TY and TH: Drafting of the manuscript. All authors: Critical revision of the manuscript for important intellectual content. IE, MO, SE and KS: Study supervision. No financial support was received for this manuscript. The authors who have taken part in this study declare that they have nothing to disclose regarding funding or conflict of interest concerning this manuscript. The content of this paper was presented at JSHBPS 2023. Data S1: Supporting Information.
Clinical outcome of root canal obturation using different based sealers: a retrospective cohort study
4bfcd6a0-c25c-44ed-bd3e-3d06d40f9daa
11673337
Dentistry[mh]
The purpose of root canal treatment of teeth with apical periodontitis is to reduce the number of bacteria in the root canal space and help initiate periapical healing . Although initial root canal treatment has been shown to have a high success rate , failure may still occur in more than 15% of cases . In those cases, non-surgical retreatment and periapical surgery are both viable options. The choice between non-surgical root canal treatment and periapical surgery should be based on the balance of benefits and risks of the two treatments, considering factors related to the patient and the operator . Some studies reported no difference between the outcomes of the two treatment options. However, other studies indicated that periapical surgery has favorable initial success rates, while non-surgical root canal treatment provides long-term success . Despite these conflicting results, non-surgical endodontic retreatment remains the first treatment option in many cases . Relatively lower success rates have been reported in retreatment cases compared with primary endodontic treatments . Initiating periapical healing may be challenging, especially in retreatment cases, because of residual bacteria and improper primary endodontic treatment . Gutta-percha is the most commonly used obturation material . However, its use alone is inadequate to seal the irregularities in the root canal system . Therefore, gutta-percha is used together with root canal sealers to compensate for these shortcomings. Root canal sealers play an important role in achieving hermetic root canal obturation. They bind gutta-percha to the root canal walls, aid obturation by filling root canal irregularities, kill bacteria, and prevent bacterial nourishment . Resin-based root canal sealers have been considered the gold standard for many years because of their low solubility, adequate dimensional stability, and good bond strength. However, they do not induce bone formation because they lack bioactive properties . Calcium silicate-based sealers have been introduced to the market with ideal properties such as antimicrobial effect, hydrophilicity, biocompatibility, biomineralization, hydroxyapatite formation, adhesion, and bioactivity . Calcium silicate-based sealers are also considered biocompatible when extruded beyond the apex . Therefore, calcium silicate-based sealers are associated with an increased success rate . Because of concerns regarding high temperatures and their potential adverse effects, it is not recommended to use calcium silicate-based sealers with thermoplastic gutta-percha systems . Nevertheless, using calcium silicate-based sealers in combination with cold gutta-percha techniques such as single cone and lateral compaction obturation appears advantageous in terms of ease of use, requiring no extra material or time, and being non-irritating to periapical tissue . To date, some studies have examined the outcome of calcium silicate-based endodontic sealers in vital and nonvital cases . However, none of them have compared the success rates of epoxy resin-based sealers and calcium silicate-based sealers in retreatment cases with periapical lesions. Therefore, the aim of the present study is to evaluate the success rates of non-surgical root canal retreatment in cases with periapical bone destruction that were obturated with either calcium silicate-based or epoxy resin-based sealers, along with gutta-percha.
Case selection and treatment procedure Ethical approval was obtained from our university, Faculty of Medicine, Clinical Research Ethics Committee (Approval number: E-71522473-50.01.04-202827-354). Furthermore, written informed consent regarding the use of radiologic data for scientific research was obtained from all patients and from the legal guardians of minors. The data were obtained retrospectively from the records of teeth treated between September 2020 and February 2022 at our university, Faculty of Dentistry, Department of Endodontics. The study included retreatment cases with symptomatic or asymptomatic apical periodontitis with periapical lesions treated either with epoxy resin-based sealers or calcium silicate-based sealers. Inclusion and exclusion criteria Inclusion criteria were as follows: Teeth with preoperative and postoperative X-rays of sufficient quality. Teeth with complete root canal development. Radiologically acceptable quality of the root canal treatment (all root canals obturated sufficiently within 2 mm of the radiological apex, absence of a broken file, etc.). Acceptable coronal restoration. Patients who attended follow-up sessions. Exclusion criteria were as follows: Internal or external root resorption cases. Teeth with an open apex. Severe periodontal loss. Treatments performed in one session. Primary endodontic treatment. Teeth that underwent periapical surgery after the root canal treatment. All root canal treatments and follow-ups were performed by a single endodontic specialist with more than 5 years of experience. A standardized treatment protocol was carried out in two sessions. At the first session, the access cavity was opened and the old gutta-percha was removed with the aid of rotary retreatment files (ProTaper Universal Retreatment System, Dentsply Maillefer, Ballaigues, Switzerland) and H-type hand files (Mani, Tochigi, Japan). After removal of the old gutta-percha, root canal shaping was completed with either the ProTaper Next rotary system (Dentsply Maillefer, Ballaigues, Switzerland) or conventional hand files, according to the root canal anatomy. A calcium hydroxide dressing (Cerkamed, Stalowa Wola, Poland) was placed in the root canals between the first and second sessions. The second session was scheduled one week after the first session. At the second session, after removal of the calcium hydroxide and radiographic confirmation of the gutta-percha position, final irrigation was performed with 2.5 mL of 5% EDTA, 5 mL of 3% NaOCl, 2.5 mL of distilled water, and 2.5 mL of 2% chlorhexidine per root canal. The root canals were then obturated with either a calcium silicate-based root canal sealer (Ceraseal; Meta Biomed Co., Cheongju, Korea) or an epoxy resin-based root canal sealer (AH Plus; Dentsply DeTrey GmbH, Konstanz, Germany) and gutta-percha. All procedures were performed under an operating microscope (Zumax OMS2350, Zumax Medical Co. Ltd, Jiangsu, China). Patients were advised to attend follow-up appointments every 6 months. In cases completed with a permanent restoration, the access cavity was filled with bulk-fill resin SDR (Dentsply Sirona, Charlotte, NC, USA) and composite resin (Tokuyama Estelite Posterior, Tokyo, Japan). If a prosthetic restoration was needed, the access cavity was filled with glass ionomer cement and patients were advised to present to the department of prosthodontics as soon as possible. Clinical and radiographic evaluation Recall appointments, including radiographic and clinical examination of the treated tooth, were archived.
Radiographic data were obtained retrospectively from the patient admission system (Figs. and ). Radiographs were evaluated by two calibrated examiners. All teeth were scored according to their healing status and the periapical index (PAI) scoring system . Healed: Functional, asymptomatic teeth with no or minimal radiographic periradicular (apical) pathosis. Unhealed: Nonfunctional, symptomatic teeth with or without radiographic periradicular (apical) pathosis, or asymptomatic teeth with unchanged, new, or enlarged radiographic periradicular (apical) pathosis. Healing: Teeth that are asymptomatic and functional with a decreased size of radiographic periradicular (apical) pathosis. PAI 1: Normal periapical bone structure, PAI 2: Small changes in bone structure, no demineralization, PAI 3: Changes in bone structure with some diffuse mineral loss, PAI 4: Apical periodontitis with a well-defined radiolucent area, PAI 5: Severe apical periodontitis with exacerbating features. Outcome assessment Both healed and healing cases were considered a success, and unhealed cases were considered a failure. Patient- and tooth-related factors such as sex, age, periapical lesion size, coronal restoration type, sealer extrusion, and follow-up time were also evaluated. Patient age was divided into two categories: those under 45 and those older than 45. Periapical lesion size was classified into 3 groups: small lesions (0–2 mm), medium lesions (2–5 mm), and large lesions (more than 5 mm) . Statistical analysis The data were analyzed using IBM SPSS version 26.0 (SPSS Inc., Chicago, IL, USA). Normality of distribution was assessed using the Shapiro-Wilk test. Categorical variables were compared between groups using the Chi-square test and Fisher's exact test. An independent samples t-test was used to compare the initial and final PAI differences based on sealer type. Statistical significance was considered at p < 0.05. The level of inter-observer agreement was evaluated using Cohen's kappa statistics.
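As a concrete illustration of the analysis just described, the following minimal Python sketch reproduces the same kinds of tests: a chi-square or Fisher's exact test for categorical comparisons, an independent-samples t-test for the change in PAI between sealer groups, and Cohen's kappa for inter-observer agreement. All data values, group assignments, and variable names here are hypothetical placeholders rather than the study's dataset, and the authors' actual analysis was performed in SPSS version 26.0.

```python
# Minimal sketch of the statistical comparisons described above,
# using hypothetical example data (the study itself used SPSS 26.0).
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact, ttest_ind
from sklearn.metrics import cohen_kappa_score

# Hypothetical 2x2 table: rows = sealer group, columns = (success, failure).
table = np.array([[28, 0],    # calcium silicate-based sealer group
                  [15, 1]])   # epoxy resin-based sealer group

# Chi-square test, falling back to Fisher's exact test when any
# expected cell count is <= 5.
chi2, p_group, dof, expected = chi2_contingency(table)
if (expected <= 5).any():
    _, p_group = fisher_exact(table)

# Independent-samples t-test on delta PAI (initial PAI minus final PAI);
# the values below are illustrative only.
delta_pai_cs = np.array([3, 2, 3, 3, 2, 3])   # calcium silicate-based group
delta_pai_er = np.array([2, 1, 2, 2, 1, 2])   # epoxy resin-based group
t_stat, p_delta = ttest_ind(delta_pai_cs, delta_pai_er)

# Cohen's kappa for agreement between the two calibrated examiners
# scoring the same radiographs (hypothetical PAI scores).
examiner_1 = [1, 2, 3, 4, 2, 1, 5, 3]
examiner_2 = [1, 2, 3, 3, 2, 1, 5, 3]
kappa = cohen_kappa_score(examiner_1, examiner_2)

print(f"group comparison p = {p_group:.3f}")
print(f"delta PAI t-test p = {p_delta:.3f}")
print(f"inter-examiner kappa = {kappa:.3f}")
```

Note that with only one failure across both groups, the expected cell counts are small, which is precisely the situation in which Fisher's exact test is preferred over the chi-square approximation.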
The study included retreatment cases from 44 patients, comprising 20 males and 24 females, with ages ranging from 14 to 67 years (mean age: 32.68 ± 12.01 years). Twenty-eight cases were in the calcium silicate-based sealer group, and 16 cases were in the epoxy resin-based sealer group. The majority of cases were anterior teeth. Cohen's kappa scores for inter-observer agreement ranged from 0.687 to 0.954 for healing and for initial, final, and delta PAI, indicating good agreement between the observers. No significant differences were found between the two groups in terms of age, tooth type, gender, healing status, restoration type, or obturation technique ( p > 0.05). Sealer extrusion was observed only in the calcium silicate-based sealer group but did not affect the healing capacity. The median age of the epoxy resin-based sealer group (37.5) was slightly higher than that of the calcium silicate-based sealer group (31), but this difference was statistically insignificant ( p = 0.065). There were no significant differences in healing status based on gender, tooth type, or obturation technique ( p > 0.05). Both single-rooted and multi-rooted teeth showed similar healing responses ( p = 0.382). One case in the epoxy resin-based sealer group was considered unhealed, whereas all cases in the calcium silicate-based sealer group were healing or healed. The mean follow-up duration was significantly shorter in the calcium silicate-based sealer group (11.9 months) than in the epoxy resin-based sealer group (23.6 months) ( p < 0.001). Initial PAI status was significantly higher in the calcium silicate-based sealer group than in the epoxy resin-based sealer group ( p < 0.05), but there was no significant difference in final PAI status between the two groups ( p > 0.05) (Table ). The mean delta PAI values were significantly different between the two groups ( p = 0.022) (Table ). The calcium silicate-based sealer group had a considerably faster healing rate than the epoxy resin-based sealer group ( p < 0.05), despite having a shorter mean follow-up period. The use of the single cone technique has been a matter of concern because of apical leakage and low dentinal tubule penetration . Macedo et al. compared the dentinal tubule penetration of single cone and thermoplasticized gutta-percha obturation and found significantly less tubule penetration in the single cone obturation group. However, it has been shown that the use of matched gutta-percha increases dentinal tubule penetration of the sealer . According to another study, cold root canal obturation methods are still quite popular . Therefore, in the present study, the single cone obturation and lateral compaction techniques were preferred because they are easy to use and require no extra equipment . The lateral compaction method was employed when single cone obturation was insufficient for hermetic obturation. Moreover, there is no single obturation method suitable for every case. In contrast to the study by Chybowski et al. , a 3% NaOCl concentration was preferred in the present study because lower concentrations of NaOCl have the same effect as higher concentrations when the contact period is longer .
Similarly, lower concentrations of NaOCl were used in other outcome studies . There are some studies on the clinical outcome of non-surgical endodontic retreatment, but many of them did not separate retreatment cases from primary endodontic treatment cases . Only retreatment cases were included in the present study to standardize intergroup variation. Contemporary single-visit root canal treatment has gained popularity owing to high patient acceptance and the avoidance of risks associated with temporary fillings . However, there is no evidence to support the superiority of single-session root canal treatment over multiple-visit root canal treatment . Furthermore, some studies claim that proper bacterial elimination cannot be achieved in root canal treatment without a calcium hydroxide dressing, and that using calcium hydroxide may boost the likelihood of clinical success . Therefore, in the present study, root canal procedures were carried out over multiple visits using calcium hydroxide as an intra-canal medication; procedures carried out in a single session were excluded from the study because of potential variations in the healing process. The same approach was preferred in other studies . Previous studies have used different sample sizes. Bel Haj Salah et al. completed their study with 7 cases treated by an endodontic resident with 3 years of experience. Chybowski et al. reported a study that included 307 cases treated by 4 different endodontists, but they included both initial treatment and retreatment cases. Li et al. evaluated 185 primary treatment and retreatment cases treated by the same endodontist in their study. Coşar et al. completed their study with 88 vital cases. In the present study, a total of 44 cases were included, 28 cases in the calcium silicate-based sealer group and 16 cases in the epoxy resin-based sealer group. Unlike previous studies, all cases were treated by the same operator, and only retreatment cases with periapical lesions were included, to provide a more standardized study design. Different irrigation regimens were used in different studies. Bel Haj Salah et al. used 3.25% NaOCl during root canal enlargement and 17% EDTA for final irrigation. Chybowski et al. used 5.25% NaOCl during root canal enlargement and 17% EDTA with passive ultrasonic activation as final irrigation. In a recent study, Coşar et al. used 2.5% NaOCl during enlargement and 17% EDTA, 2.5% NaOCl, and distilled water as final irrigation. In the present study, 3% NaOCl was used during root canal enlargement, and final irrigation was performed with 5% EDTA, 3% NaOCl, distilled water, and 2% chlorhexidine. NaOCl was activated with a sonic activation device to increase its effectiveness, and 5% EDTA was preferred over 17% EDTA because there is no significant difference in terms of smear layer removal and to minimize the risk of dentinal erosion associated with higher concentrations of EDTA . Moreover, 2% chlorhexidine was used to obtain an additional antimicrobial effect and to increase bonding strength by inhibiting matrix metalloproteinases . Age groups were defined, as in general outcome studies, based on their potential influence on healing capacity. According to some studies in the literature, significantly better outcomes were observed in patients older than 45 years compared with younger patients . Therefore, patients were divided into two age groups, under 45 and older than 45, in the present study. No significant difference was found between age and healing capacity in the present study. Li et al.
established a 40-year-old threshold, and Coşar et al. established a 35-year-old threshold. In both studies, no significant correlation was found between healing capacity and age. Chybowski et al. categorized patients as over 50 or under 50 years of age and found that patients younger than 50 years tended to have a higher rate of success than older patients. This different finding may be related to the higher mean age value. In some of the previous studies, lesions were categorized as larger or smaller than 5 mm , while others categorized lesions as small (0–2 mm), medium (2–5 mm), and large (more than 5 mm), as in our study . In order to evaluate the relationship between lesion size and healing capacity more accurately, this classification was preferred in the present study. Since lesion healing can take four to five years, a minimum follow-up period of four years is recommended . On the other hand, because of the lengthy duration, patients become less motivated and reluctant to attend follow-up appointments . With the exception of one case in which an epoxy resin-based sealer was applied, all cases showed clear healing in the present study. An increase in lesion size was observed in the unhealed case. The lesion size did not remain stable in any of the cases. Therefore, the relatively short follow-up period in the present study no longer seems to be a limitation. To the best of our knowledge, no previous studies have investigated the efficacy of epoxy resin-based and calcium silicate-based root canal sealers with cold compaction techniques in retreatment cases. Some studies have examined the success rate of cases with calcium silicate-based root canal sealers , but very few of them investigated the success rate in retreatment cases . Furthermore, none of them compared the efficacy of epoxy resin-based sealers and calcium silicate-based sealers with cold compaction techniques. In this sense, this study appears to be the first. In the present study, the overall success rates for the calcium silicate-based root canal sealer and the resin-based sealer were 100% and 93.75%, respectively. The success rate of the calcium silicate-based sealer group was higher than that of a previous study , but the mean follow-up time in the previous study was longer than in ours . The lower success rate in the previous study could be related to the lack of a final irrigation regimen along with the longer follow-up period. Differently from the previous studies , retreatment in two visits, along with the irrigation regimen, may have increased the success rate because of the antibacterial and antifungal effects of calcium hydroxide, which was used as an intracanal medicament . The high success rate in our study, in addition to the variables previously mentioned, can be attributed to the fact that each case was handled by the same operator, with more than five years of postdoctoral experience. In the present study, the difference in healing rates between the groups was statistically insignificant. However, a significant decrease was detected in PAI status. While a decrease of 2.68 points in the PAI score was observed in the calcium silicate-based root canal sealer group over the 11-month follow-up period, a decrease of 1.81 points was observed in the epoxy resin-based sealer group over 24 months. This finding indicates that the calcium silicate-based sealer has better healing potential than the epoxy resin-based sealer. In a previous study, AlBakhakh et al.
divided periapical lesions into three subgroups, small, medium, and large, as in the present study. They showed that small and medium lesions had a significantly higher success rate compared with large lesions. Unlike in that study, in the present study large lesions healed as well as small and medium lesions, and the success rate was much higher. This difference could be related to the treatment protocol used and the operator's clinical experience. To the authors' knowledge, this is the first clinical trial comparing the success rate of retreatment using a calcium silicate-based sealer and an epoxy resin-based sealer with the single cone and lateral compaction techniques. The limited number of cases evaluated and the difference in sample sizes between the groups may be limitations of this study. Further studies could be performed with more cases and longer follow-up periods. This study revealed that the epoxy resin-based sealer and the calcium silicate-based sealer had similar healing rates. Furthermore, there was no difference in the healing rates between the single cone obturation and lateral compaction techniques. However, the calcium silicate-based sealer group showed a faster healing capacity than the epoxy resin-based sealer group. Further long-term clinical trials with different pulpal and periapical statuses and more cases could be beneficial.
Multidisciplinary Approach Improves Eradication Rate and Safety in Refractory
e11178f6-0723-4063-a0c6-70ecbf46a73d
11845206
Patient Education as Topic[mh]
Helicobacter pylori (HP) is a pathogenic bacterium capable of thriving in the stomach's acidic milieu . It is estimated that over 40% of the global population is infected with HP . HP infection is a leading cause of chronic gastritis, peptic ulcers, mucosa-associated lymphoid tissue lymphoma, and gastric cancer . Eradicating HP facilitates the healing of peptic ulcers, markedly diminishes ulcer complications and recurrences, and reduces the risk of gastric cancer development . Consequently, eradication therapy is recommended for infected individuals in various national guidelines. However, the widespread use of HP treatment worldwide has led to a notable increase in antibiotic resistance, posing a challenge to effectively improving HP eradication rates . In 2021, the American Gastroenterological Association (AGA) issued an expert review addressing the management of refractory HP infection, defining “refractory” as treatment failure after the initial attempt . This definition represents the broadest criterion for identifying “refractory HP infection,” underscoring the importance of clinicians closely adhering to consensus guidelines for implementing eradication therapy after the first instance of treatment failure . Refractory HP infection presents several challenges and significant clinical ramifications. First, escalating antibiotic resistance markedly diminishes the efficacy of standard treatment regimens, leading to a reduction in eradication rates . This issue not only restricts treatment options for patients but also complicates infections and escalates treatment costs. Resistance development necessitates clinicians to navigate among multiple antibiotics to identify the most effective combination, a process that consumes additional time and resources and heightens the risk of treatment failure . Furthermore, recurrent treatment failures can exacerbate patients' conditions, heightening the risks of severe complications such as peptic ulcers and gastric cancer . Frequent treatment failures may also undermine patients' confidence, reduce their adherence, and render subsequent treatments more challenging . Resistance dissemination also poses a public health hazard, potentially increasing the number of patients infected with resistant strains and perpetuating a vicious cycle . When confronted with refractory HP infection, clinicians must conduct comprehensive diagnosis and design personalized treatment plans. The AGA review underscores a thorough review of previous antibiotic exposure and advocates for tailored treatment approaches to prevent unnecessary antibiotic exposure for patients . In addition, patient education and communication are pivotal in this process. Clinicians should elucidate the fundamental principles of treatment, dosages, anticipated adverse events, and the significance of completing treatment to enhance treatment compliance and success rates . In November 2020, our center established the HP Multidisciplinary Team (MDT) Clinic, abbreviated as the HP-MDT Clinic, to promote collaboration among nursing, pharmacy, and clinical physicians. The clinic aims to address common causes of eradication therapy failure, including antibiotic and acid suppressant selection, patient adherence, HP genotyping, smoking, high bacterial load before eradication, and CYP2C19 gene polymorphism. Personalized diagnosis and treatment plans are devised for patients to optimize their outcomes. 
This study aims to analyze the effectiveness of standardized diagnosis and treatment, comprehensive guidance, and patient-centered education, coupled with multidisciplinary collaboration in the HP-MDT model, in enhancing eradication rates and mitigating the risk of potential adverse reactions in patients with refractory HP infection. Study subjects This study included patients with HP infection who attended either general outpatient clinics or HP-MDT outpatient clinics between November 2020 and November 2023. A total of 153 patients were enrolled based on voluntary attendance and consecutive inclusion. Of these, 51 patients were from general outpatient clinics (non–HP-MDT group) and 102 patients were from HP-MDT clinics. All patients were provided with the option to choose between the non-MDT and HP-MDT treatment plans, and they voluntarily selected either the general outpatient clinic (non–HP-MDT) or the HP-MDT clinic for their treatment. Inclusion criteria were as follows: (i) Patients aged 18–80 years were eligible for inclusion; (ii) patients must have had a history of standard HP treatment (≥1 time) without successful eradication; (iii) patients were required to undergo a breath test within the specified time frame (4–8 weeks post-treatment) to determine eradication success; and (iv) for those undergoing drug sensitivity testing, informed consent was obtained, and the drug sensitivity consent form was signed. Exclusion criteria included the following: (i) Pregnant or breastfeeding women; (ii) patients diagnosed with pyloric obstruction or cancer; (iii) patients presenting symptoms of bleeding or perforation; (iv) patients with severe cardiac, hepatic, or renal dysfunction; (v) patients who interrupted treatment for any reason during the medication period, including those unable to attend follow-up visits because of the COVID-19 pandemic; and (vi) patients who did not undergo a follow-up breath test within the specified time frame. Patients in both the HP-MDT and non–HP-MDT groups were treated with regimens recommended by current guidelines, including the Maastricht VI/Florence consensus and the Fifth Chinese National Consensus Report on HP infection . These regimens included either a 14-day therapy of 2 antibiotics + bismuth + proton pump inhibitors (PPI) or a 14-day dual therapy with amoxicillin + PPI. Antibiotics were selected based on patient history and local resistance patterns, using options such as amoxicillin, clarithromycin, doxycycline, furazolidone, levofloxacin, or metronidazole. Physicians avoided prescribing antibiotics that patients had previously used for HP eradication unless special circumstances justified their reuse. Treatment success was uniformly assessed by breath test results conducted 4–8 weeks after therapy. This study was approved by the independent Ethics Committee at the First People's Hospital of Kunshan (No. EC-SOP-007-A07-V4.0). Written informed consent was waived by the IRB because of the retrospective nature of this study. HP-MDT clinic In the HP-MDT clinic process, the primary attending physician began by inquiring about the patient's medical history, including HP treatment history, basic medical history, allergy history, and family history. Detailed patient history was collected, including previous antibiotic exposure (mainly macrolides and quinolones, for both HP eradication and noneradication purposes), medication history, and previous adverse drug reactions (ADRs). 
The attending physician also considered local HP resistance patterns, patient age, and other relevant factors to determine the most appropriate treatment regimens. These decisions were based on international and domestic consensus reports, including the Maastricht V/Florence Consensus Report , the Maastricht VI/Florence consensus report , and the expert consensus on HP eradication therapy in China . The attending physician then discussed with the pharmacist and the patient the reasons for previous medication failures. The attending physician and the pharmacist each offered their opinions and discussed treatment plans, which could include gastroscopy or genetic testing. After this, the patient returned with the report, and the doctor informed the patient of the plan, inquiring again about medication contraindications, usual medication habits, and history of ADRs before issuing a prescription. For patients who experienced 2 failed eradication therapies, antibiotic selection was sometimes guided by drug sensitivity testing results. The patient then completed a skin test, made payments, and collected medications. If the skin test was positive, the patient returned to the consultation room for a new treatment plan discussion with the doctor and pharmacist. Subsequently, the patient returned to the consultation room where the pharmacist provided detailed guidance on medication methods and precautions, including drug interactions and adverse reaction management. Pharmacists also ensured that patients fully understood the composition of the eradication regimen, medication instructions (timing, intervals between medications, administration processes), and the steps to take in the case of adverse reactions. Patients were required to repeat the medication instructions back to confirm comprehension and were provided with consultation phone numbers for further inquiries. Nursing staff provided education on HP-related knowledge for the home, advised on daily-life precautions, completed questionnaire surveys and data entry, and arranged follow-up. The HP-MDT team developed standardized home disinfection procedures, which were shared with patients through educational materials such as videos. These materials included detailed guidance on HP transmission routes, disinfection methods for oral contact items, disinfection frequency and timing, and precautions during disinfection. Senior nursing staff also provided on-site demonstrations of these procedures to ensure patient adherence. Finally, case follow-up involved calling patients for timely return visits to ensure continuity of care. Patients were reminded to prepare adequately for follow-ups, including reviewing disinfection practices and ensuring medication adherence before their visits. A detailed flowchart is provided in Figure . Non–HP-MDT clinic In the non–HP-MDT clinic, the primary attending physician first inquired about the patient's medical history, including previous HP treatment, basic medical history, allergies, and family history. Based on this information, the physician judged the reasons for previous medication failures using their experience.
Sample collection and transportation for HP Patients who consented to undergo HP genetic testing and drug sensitivity testing were instructed to sign informed consent forms and undergo gastroscopy. They were advised to fast for 12 hours before gastroscopy and to abstain from taking any medications that could affect gastric flora for 4 weeks. During gastroscopy, 2 gastric mucosal samples were collected from the greater curvature of the gastric body and the lesser curvature of the gastric antrum. These samples were subjected to bacterial culture, drug sensitivity testing, and drug resistance gene testing. In addition, one gastric mucosal sample was collected from the gastric antrum for bacterial identification and morphological transformation testing. The sampling location in the gastric antrum was within 5 cm of the pylorus, whereas in the gastric body, it was 8 cm from the cardia. Each sample measured (0.2 × 0.2) cm². Specialized transport tubes for HP culture were used, which were labeled with the patient's name, age, collection location, and time. The samples were placed in specialized transport media for HP and transported within 24 hours to the HP Molecular Testing Center in Shanghai in a dry ice environment. Further steps included HP isolation culture, identification, relevant drug sensitivity testing, and molecular biology testing (drug sensitivity and drug resistance gene testing for amoxicillin, clarithromycin, levofloxacin, metronidazole, and tetracycline). Follow-up Patients who received intervention at the HP-MDT clinic underwent telephone follow-up or clinic follow-up by HP-MDT medical staff on the 7th and 14th days of medication to assess medication compliance and adverse reactions. Staff also addressed any relevant patient inquiries. Clinic follow-up was scheduled 4–8 weeks after the end of the treatment course for a re-examination of the 13C or 14C breath test, in accordance with current guidelines for post-treatment evaluation of HP eradication. HP infection and eradication criteria HP infection was defined as a positive result for HP in either the 13C or 14C breath test or histopathological examination of gastric biopsy samples during gastroscopy. HP eradication was defined as the absence of HP infection within 4–8 weeks after completion of eradication therapy. During this timeframe, patients should refrain from taking any antibiotics, bismuth-containing agents, or other antimicrobial herbal medicines for a minimum of 4 weeks. In addition, PPI should be discontinued for 2 weeks before the follow-up examination. If necessary, alternative medications such as H2 receptor antagonists, gastric mucosal protective agents, or antacids may be prescribed for pain relief, ensuring that they do not adversely affect HP growth. A negative result in the 13C or 14C breath test at follow-up indicates successful eradication. Data collection All personnel involved in data collection received standardized training according to the study requirements. Two medical staff members were responsible for collecting, organizing, and verifying data from both general outpatients and HP-MDT clinic patients, which were entered into EpiData software. Statistical analysis and cross-validation were conducted by 2 researchers. Study outcome The primary end point was the eradication rate of HP in both groups of patients, whereas secondary end points included the incidence of adverse reactions in the HP-MDT group; both were analyzed according to the per-protocol approach.
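Before turning to the statistical analysis, the follow-up timing rules in the eradication criteria above (breath test 4–8 weeks after therapy, antibiotics and bismuth stopped for at least 4 weeks, PPI stopped for at least 2 weeks) can be summarized in a short check. This is only an illustrative sketch of those rules; the function name and structure are hypothetical and not part of the study's workflow.

```python
# Illustrative check of the follow-up timing rules described above
# (not part of the study's actual workflow).
from datetime import date

def breath_test_timing_valid(treatment_end: date,
                             test_date: date,
                             last_antibiotic_or_bismuth: date,
                             last_ppi: date) -> bool:
    """Return True if a 13C/14C breath test satisfies the timing rules:
    4-8 weeks after the end of eradication therapy, with antibiotics or
    bismuth stopped for >= 4 weeks and PPIs stopped for >= 2 weeks."""
    days_after_treatment = (test_date - treatment_end).days
    if not 28 <= days_after_treatment <= 56:
        return False
    if (test_date - last_antibiotic_or_bismuth).days < 28:
        return False
    if (test_date - last_ppi).days < 14:
        return False
    return True

# Example: therapy ended 1 June, breath test on 15 July, no antibiotics or
# bismuth since therapy ended, PPI stopped on 15 June -> the window is valid.
print(breath_test_timing_valid(date(2023, 6, 1), date(2023, 7, 15),
                               date(2023, 6, 1), date(2023, 6, 15)))  # True
```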
Statistical analysis Continuous variables were reported as mean ± SD and were compared using Student's independent t-test. Categorical variables were presented as numbers and percentages and were compared using the χ² test or Fisher's exact test (if an expected value ≤5 was found). Univariate and multivariate logistic regression was used to investigate the independent variables associated with eradication. A variable that was significant in both the univariate and multivariate analyses was recognized as a factor associated with eradication. All analyses were performed using IBM SPSS Version 27 (SPSS Statistics V27, IBM Corporation, Somers, NY). The statistical significance level for all tests was set at a 2-tailed P value < 0.05.
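As a minimal sketch of the univariate and multivariate logistic regression mentioned above, the snippet below shows how candidate predictors of eradication could be screened one at a time and then entered together. The data frame, column names, and values are entirely hypothetical; the study's analysis was carried out in SPSS Version 27, and this example is only meant to illustrate the approach.

```python
# Illustrative univariate and multivariate logistic regression for factors
# associated with eradication (hypothetical data; the study used SPSS 27).
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "eradicated": [1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0],
    "mdt_clinic": [1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0],  # HP-MDT vs. non-HP-MDT
    "age":        [45, 52, 60, 38, 55, 41, 47, 36, 63, 50, 58, 44],
    "male":       [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1],
})

predictors = ["mdt_clinic", "age", "male"]

# Univariate screening: fit one predictor at a time.
for var in predictors:
    X = sm.add_constant(df[[var]])
    fit = sm.Logit(df["eradicated"], X).fit(disp=0)
    print(f"{var}: univariate p = {fit.pvalues[var]:.3f}")

# Multivariate model: all candidate predictors together; a variable that is
# significant in both steps would be reported as associated with eradication.
X_all = sm.add_constant(df[predictors])
multivariate = sm.Logit(df["eradicated"], X_all).fit(disp=0)
print(multivariate.summary())
```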
In the HP-MDT clinic process, the primary attending physician began by inquiring about the patient's medical history, including HP treatment history, basic medical history, allergy history, and family history. Detailed patient history was collected, including previous antibiotic exposure (mainly macrolides and quinolones, for both HP eradication and noneradication purposes), medication history, and previous adverse drug reactions (ADRs). The attending physician also considered local HP resistance patterns, patient age, and other relevant factors to determine the most appropriate treatment regimens. These decisions were based on international and domestic consensus reports, including the Maastricht V/Florence Consensus Report , the Maastricht VI/Florence consensus report , and the expert consensus on HP eradication therapy in China . The attending physician then discussed with the pharmacist and the patient the reasons for previous medication failures. Both the attending physician and the pharmacist proposed personal opinions and discussed treatment plans, which could include gastroscopy or genetic testing. After this, the patient returned with the report, and the doctor informed the patient of the plan, inquiring again about medication contraindications, usual medication habits, and history of ADRs before issuing a prescription. For patients who experienced 2 failed eradication therapies, antibiotic selection was sometimes guided by drug sensitivity testing results. The patient then completed a skin test, made payments, and collected medications. If the skin test was positive, the patient returned to the consultation room for a new treatment plan discussion with the doctor and pharmacist. Subsequently, the patient returned to the consultation room where the pharmacist provided detailed guidance on medication methods, precautions, including drug interactions, and adverse reaction management. Pharmacists also ensured that patients fully understood the composition of the eradication regimen, medication instructions (timing, intervals between medications, administration processes), and the steps to take in the case of adverse reactions. Patients were required to repeat the medication process to confirm comprehension and were provided with consultation phone numbers for further inquiries. Nursing staff provided home HP knowledge education, daily life precautions, completed questionnaire surveys and data entry, and arranged follow-up. The HP-MDT team developed standardized home disinfection procedures, which were shared with patients through educational materials such as videos. These materials included detailed guidance on HP transmission routes, disinfection methods for oral contact items, disinfection frequency and timing, and precautions during disinfection. Senior nursing staff also provided on-site demonstrations of these procedures to ensure patient adherence. Finally, case follow-up involved calling patients for timely return visits to ensure continuity of care. Patients were reminded to prepare adequately for follow-ups, including reviewing disinfection practices and ensuring medication adherence before their visits. A detailed flowchart is provided in Figure . In the non–HP-MDT clinic, the primary attending physician first inquired about the patient's medical history, including previous HP treatment, basic medical history, allergies, and family history. Based on this information, the physician judged the reasons for previous medication failures using their experience. 
A new treatment plan was then quickly formulated, a prescription was issued, and written instructions were provided. The patient completed a skin test, made payments, and collected medications. If the skin test was positive, the doctor changed the prescription and provided written precautions and brief instructions. Finally, the patient self-revisited for follow-up as needed.
Patient's clinical characteristics A total of 153 patients were included in this study, with 80 (52.29%) male and 73 (47.71%) female patients. The average age was 47.24 ± 12.54 years, and the average number of treatment sessions was 2.42 ± 0.86. A “treatment session” is defined as a complete cycle of care, encompassing the initial therapeutic procedure, accompanying patient education, and follow-up evaluation. This process may span multiple visits, depending on the patient's clinical needs. The clinical characteristics of patients in the non–HP-MDT and HP-MDT groups are compared in Table . No significant differences were observed between the non–HP-MDT and HP-MDT groups in sex distribution ( P = 0.567), age ( P = 0.376), and the number of treatment sessions ( P = 0.739). Both groups had a majority of patients requiring exactly 2 treatment sessions (72.55% in both groups). These results confirm the comparability between the non–HP-MDT and HP-MDT groups. Comparisons between patients with and without eradication Table presents a comparison of clinical characteristics between patients who achieved eradication and those who did not. The data indicate that patients who achieved eradication were significantly younger and had a higher attendance rate at the HP-MDT outpatient clinic (both P < 0.05). Specifically, the average age of patients who achieved eradication was 45.81 ± 12.64 years, compared with 50.67 ± 11.74 years for those who did not ( P = 0.028). Among patients who achieved eradication, 75.93% attended the HP-MDT outpatient clinic. In contrast, only 44.44% of patients who did not achieve eradication visited the HP-MDT outpatient clinic ( P < 0.001). These results highlight the significant role of the HP-MDT outpatient clinic in achieving successful eradication and suggest that younger patients are more likely to achieve eradication. There were no significant differences between the eradication and noneradication groups in sex and the number of treatment sessions required (both P > 0.05). Subgroup analyses of eradication rates The overall eradication rate among all 153 patients was 70.59% (108 patients). Figure illustrates the results of the subgroup analyses of eradication rates, stratified by outpatient clinic type, sex, age, and the number of treatment sessions. The analysis reveals that patients who attended the HP-MDT outpatient clinic and younger patients had significantly higher eradication rates (both P < 0.05).
Specifically, those treated in the HP-MDT clinic showed a notably higher eradication rate compared with those treated in the non–HP-MDT clinic (80.39% vs 50.98%, P < 0.001, Figure ). Similarly, younger patients were more likely to achieve eradication compared with older patients ( P = 0.028, Figure ). However, there were no significant differences in eradication rates based on sex or the number of treatment sessions required (both P > 0.05, Figure ). This indicates that while the type of outpatient clinic and age are important factors in achieving eradication, sex and the number of treatment sessions do not significantly affect eradication success. Because the HP-MDT group had a higher eradication rate, we conducted further comparisons between the non–HP-MDT and HP-MDT groups, stratified by patient sex, age, and treatment session group. As shown in Table , patients attending the HP-MDT clinic consistently had higher eradication rates across almost all stratified analyses. Overall, the eradication rate was significantly higher in the HP-MDT group compared with the non–HP-MDT group (80.39% vs 50.98%, P < 0.001). When stratified by sex, both male (72.73% vs 48.00%, P = 0.032) and female (89.36% vs 53.85%, P < 0.001) patients in the HP-MDT group showed higher eradication rates. When stratified by age, patients aged 18–39 years (91.18% vs 62.50%, P = 0.022), 40–59 years (74.07% vs 50.00%, P = 0.043), and 60 years and above (78.57% vs 38.46%, P = 0.034) in the HP-MDT group showed higher eradication rates. When stratified by treatment sessions, patients requiring only 2 sessions (82.43% vs 59.46%, P = 0.009) and those needing more than 2 sessions (75.00% vs 28.57%, P = 0.004) in the HP-MDT group showed higher eradication rates. Further stratified analyses combining sex and treatment sessions indicated that female patients in the HP-MDT group consistently had higher eradication rates across different session groups, with significant differences. These findings underscore the effectiveness of the HP-MDT clinic in achieving higher eradication rates across various patient subgroups. Logistic regression models Logistic regression analyses were conducted to identify independent variables associated with the eradication rate. In the univariate analysis, attending the HP-MDT clinic (odds ratio [OR]: 3.94, 95% CI: 1.89 to 8.22, P < 0.001, Table ) and being older than 60 years (OR: 0.32, 95% CI: 0.11 to 0.92, P = 0.034, Table ) were significant factors. In the multivariate analysis, attending the HP-MDT clinic remained a significant independent factor for higher eradication rates (OR: 4.43, 95% CI: 2.02 to 9.71, P < 0.001, Table ). These results underscore that attending the HP-MDT clinic is a stable and significant factor in increasing the likelihood of successful eradication. Genetic testing In this study, 13 patients from the HP-MDT group who experienced failed eradication after 2 or more treatment sessions underwent gene testing. Among these patients, 8 were males and 5 were females. Cultures were unsuccessful in 2 male cases. The CYP2C19 gene genotyping revealed that 61.54% (8 of 13) were rapid metabolizers, 38.46% (5 of 13) were intermediate metabolizers, and none were slow metabolizers (Table ). Coccoid form (>5%) was found in 45.45% (5 of 11) of the cases (Table ). Regarding drug resistance, double antibacterial resistance was observed in 18.18% (2 of 11) of the cases and triple antibacterial resistance was noted in 72.73% (8 of 11) of the cases (Table ). 
Specifically, the resistance rates were 90.91% (10 of 11) for clarithromycin, 72.73% (8 of 11) for levofloxacin, and 100% (11 of 11) for metronidazole. No resistance was found for penicillin, furazolidone, or tetracycline (Table ). After treatment at the HP-MDT clinic, the eradication rate was 92.31% (12 of 13, Table ). Among these cases, one patient experienced fatigue during the later stages of medication, which resolved after discontinuing the medication for 3 days. No serious adverse reactions were reported. Adverse drug reaction In the HP-MDT group, 13 patients (12.75%) experienced ADRs. These included 7 cases of gastrointestinal reactions, 3 cases of rash, 1 case of fever, and 2 other types of ADRs. Stratified analyses by sex, age, treatment sessions, and eradication status revealed no significant differences in the occurrence of ADRs across these variables (all P > 0.05, Table ).
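The univariate and multivariate logistic regressions reported above can be sketched as follows. This is a hedged illustration only: the data frame, column names, and randomly generated values are hypothetical stand-ins for the study variables (clinic type, age group, sex, number of treatment sessions), and the published odds ratios came from the authors' SPSS analysis, not from this code.

```python
# Hedged sketch of univariate and multivariate logistic regression for
# factors associated with eradication. All data below are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 153
df = pd.DataFrame({
    "eradicated": rng.integers(0, 2, n),    # 1 = HP eradicated
    "mdt_clinic": rng.integers(0, 2, n),    # 1 = attended HP-MDT clinic
    "age_60_plus": rng.integers(0, 2, n),   # 1 = aged 60 years or older
    "male": rng.integers(0, 2, n),          # 1 = male
    "sessions_gt2": rng.integers(0, 2, n),  # 1 = more than 2 treatment sessions
})

# Univariate model for a single candidate factor
uni = smf.logit("eradicated ~ mdt_clinic", data=df).fit(disp=False)
print("univariate OR:", float(np.exp(uni.params["mdt_clinic"])))

# Multivariate model adjusting for the other factors
multi = smf.logit(
    "eradicated ~ mdt_clinic + age_60_plus + male + sessions_gt2", data=df
).fit(disp=False)
or_ci = np.exp(multi.conf_int())            # 95% CI on the odds-ratio scale
or_ci["OR"] = np.exp(multi.params)
print(or_ci)
```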
In this study, the age stratification into 18–39, 40–59, and 60+ years was based on clinical distinctions relevant to HP management, allowing for the evaluation of treatment outcomes across different life stages: young adulthood, middle age, and older adulthood. This grouping also provides insights into the impact of age-related factors, such as comorbidities and drug tolerance, on treatment efficacy. In addition, baseline characteristics were comparable across these age groups, supporting the validity of this stratification. The potential influence of age on treatment adherence and outcomes was further evaluated. While older age may influence outcomes to some extent, this was not confirmed in multivariate analysis. Although the proportion of patients aged 60 years and older was higher in the non–HP-MDT group than in the HP-MDT group, this difference was not statistically significant. Notably, the HP-MDT approach demonstrated robust efficacy even among older patients, suggesting that factors beyond age may play a more critical role in determining treatment success. The disparities between male and female patients might be attributed to several factors. Biological distinctions, such as variations in drug metabolism, hormonal influences, or immune system responses, may affect treatment efficacy differently among male and female patients . Furthermore, differences in treatment adherence, healthcare-seeking behavior, and medication compliance could contribute to variations in treatment outcomes between the sexes . Sample size and demographic characteristics, including age distribution and treatment history, may also influence the observed differences. Given the intricate interplay between sex and treatment sessions, further research is warranted to elucidate underlying mechanisms and optimize treatment strategies accordingly. Genetic testing plays a crucial role in treatment planning , especially in refractory cases. In this study, 13 patients from the HP-MDT group who experienced failed eradication after 2 or more treatment sessions underwent CYP2C19 gene typing. The results showed that most of the patients were rapid metabolizers (61.54%), and triple antibiotic resistance (72.73%) was more prevalent than double resistance (18.18%), particularly against clarithromycin, levofloxacin, and metronidazole. Under HP-MDT model treatment, these patients with multiple failed eradication attempts achieved an eradication rate of 92.31%. By conducting genetic testing on patients who have undergone multiple failed treatments in the HP-MDT group, a more precise understanding of patients' drug metabolism and antibiotic resistance can be obtained, leading to the development of more personalized treatment plans. This helps avoid ineffective treatments, enhances eradication rates, and reduces the occurrence of ADRs. Therefore, genetic testing holds significant promise in the treatment of HP infection, offering patients more effective and safer treatment options.
The 92.31% eradication rate further underscores the importance of genetic testing in tailoring personalized treatment plans and effectively tackling challenges posed by antibiotic resistance. The HP-MDT diagnostic and therapeutic model stands as a cornerstone in the management and prevention of ADRs. Zeng et al conducted a comprehensive analysis examining the impact of reinforced medication adherence on HP eradication rates in developing regions. Their study demonstrated a lower incidence of adverse reactions (27.3%) in the intervention group, which received enhanced adherence measures, compared with 34.7% in the control group. Similarly, Wang et al conducted a systematic review and meta-analysis comparing vonoprazan dual therapy with triple therapy for HP eradication, revealing a lower incidence of adverse reactions (26.1%) in the dual therapy group compared with 29.6% in the triple therapy group. This study revealed a 12.75% incidence rate of ADRs within the HP-MDT group, mainly presenting as mild gastrointestinal discomfort, contrasting with findings from previous studies. The lower occurrence of ADRs in the HP-MDT group could be attributed to thorough pretreatment communication between physicians and patients regarding antibiotic usage history and potential adverse reactions. In addition, ongoing medication monitoring and patient education throughout the treatment process within the HP-MDT framework fostered collaborative efforts among physicians, pharmacists, and nurses, enabling the provision of personalized treatment plans aimed at mitigating potential ADRs. Although the treatment of HP is largely guided by established clinical guidelines, the integration of multidisciplinary expertise offers distinct advantages. The MDT approach allows for the effective combination and application of diverse professional knowledge, which is particularly beneficial in enhancing patient education, medication adherence, and overall treatment outcomes. While specialists might have varying levels of expertise, the collaborative nature of MDT ensures that patients benefit from a holistic approach, addressing both medical and nonmedical aspects of care. This model has shown positive outcomes in managing complex and refractory conditions, making it particularly relevant for patients with difficult-to-treat HP infections. Furthermore, the HP-MDT model effectively curtails ADR occurrences through the implementation of standardized treatment protocols and comprehensive patient education. Patients attending HP-MDT clinics benefit from personalized one-on-one medication consultations, empowering them with detailed information on eradication regimens, medication instructions, common ADRs, and strategies to address them. In addition, standardized home disinfection procedures and the provision of relevant educational materials and guidance by the HP-MDT team further mitigate the risk of ADRs. The inclusion of home disinfection in the MDT protocol was based on both the AGA guidelines and Chinese guidelines and was specifically adapted to the local context. In China, where many households lack advanced sanitation facilities and where communal eating practices contribute to HP transmission, home disinfection education is crucial. It provides patients with effective methods to eliminate HP and prevent reinfection, ultimately reducing transmission within the household. 
While the addition of home disinfection represents an additional variable in the MDT approach, its role is essential for addressing the specific needs of the population and may contribute to the long-term success of HP eradication efforts. In conclusion, the HP-MDT diagnostic and therapeutic model, through interdisciplinary collaboration and comprehensive patient education, serves as an effective approach in managing and preventing ADRs. While increased communication time with knowledgeable healthcare providers is an important factor in improving patient outcomes, the benefit of the MDT approach extends beyond just additional interaction time. The MDT model integrates the expertise of various healthcare professionals, including physicians, pharmacists, and nurses, to provide comprehensive, individualized care. This team-based collaboration ensures that patients receive not only adequate treatment planning but also personalized education, emotional support, and guidance on medication management. The MDT approach also helps address the complex challenges of managing HP infections, particularly by reducing treatment failures and mitigating antibiotic resistance. Through better patient education, medication adherence strategies, and proactive management of ADRs, MDT teams contribute to more successful eradication efforts, which are crucial in the context of HP treatment. Therefore, the MDT model should be viewed as a holistic approach that goes beyond simply increasing face-to-face time with physicians, and we believe that its value in managing complex medical conditions warrants further exploration. Several limitations are evident in this study. First, its retrospective design precludes researchers from controlling all variables that may influence outcomes, potentially introducing confounding factors. Second, the relatively small sample size and single-center nature of the study may limit its generalizability. Third, the nonrandomized design of this study is a potential source of bias. However, the consecutive enrollment of patients and baseline comparability between the groups help to mitigate this limitation. Fourth, genetic testing for CYP2C19 polymorphisms was selectively performed in patients who experienced multiple treatment failures (≥2 sessions), where clinical history and known antibiotic resistance patterns were insufficient for guiding treatment adjustments. This decision was based on current clinical guidelines and local practice constraints, including the high cost of genetic testing (approximately 2,300 CNY) and procedural risks associated with endoscopic sampling, making universal testing impractical in routine clinical practice. While this selective approach aligns with recommendations to focus on refractory cases, it may limit the generalizability of findings regarding the role of genetic testing in HP management. Finally, a significant challenge during the study period was the interruption of treatment because of the COVID-19 pandemic, which prevented many patients from attending follow-up visits or completing breath testing. These patients were excluded from the analysis. Future research should address these limitations by using larger sample sizes, randomized study designs, and long-term follow-ups to validate the results and better understand the impact of patient adherence and MDT involvement on treatment outcomes, as well as evaluate the HP-MDT model's application across diverse patient populations. 
In summary, the HP-MDT clinic significantly enhanced eradication rates and safety for patients with refractory HP infection. The integration of medical, pharmaceutical, and nursing expertise, alongside personalized treatment plans and patient education, led to successful outcomes with minimal adverse events. Our findings underscore the efficacy of the HP-MDT diagnostic and therapeutic model in improving chronic disease management. Guarantor of the article: Yu-Qin Zhao, MM. Specific author contributions: We declare that all the listed authors have participated actively in the study, and all meet the requirements of the authorship. Drs. Y.Q.Z. designed the study and wrote the protocol, Drs. N.D., W.J.W., Z.L.S., and Y.H.X. acquired the manuscript, Drs. N.D., X.Y. W., G.Z.Z., L.W., and Q.H.W. analyzed the data, Drs. N.D. and Y.Q.Z. wrote the first draft of the manuscript and mainly revised the manuscript. All authors approved the final version of the manuscript. Financial support: This study was supported by the Science and Technology Plan Project of The First People's Hospital of Kunshan in 2021 (No. KRY- YN001) and 2021 Kunshan Municipal Science and Technology Special Project (Health) (No. KSZ2159). Potential competing interests: None to report. IRB approval statement: This study was approved by the independent Ethics Committee at the First People's Hospital of Kunshan (No. EC-SOP-007-A07-V4.0). Written informed consent was waived by the IRB due to the retrospective nature of this study. Study Highlights WHAT IS KNOWN ✓ Helicobacter pylori (HP) infection contributes to various gastrointestinal diseases. ✓ Eradication therapy is crucial in managing HP infection, but antibiotic resistance has led to poor outcome. WHAT IS NEW HERE ✓ HP-Multidisciplinary Team clinic integrating medical, pharmaceutical, and nursing expertise was applied. ✓ It improved the eradication rates and safety in patients with refractory HP infection. ✓ Personalized treatment plans contributed to successful outcomes.
External quality assessment of medical laboratories in Croatia: preliminary evaluation of post-analytical laboratory testing
73b8e688-e50a-4e10-8bf5-54df5302cc0f
5382856
Pathology[mh]
Comparability of laboratory test results depends on standardization of all phases of laboratory testing, including pre-analytical, analytical and post-analytical phases. Pre-analytical and analytical phases of laboratory testing aim to generate an accurate test result, while the post-analytical phase - when the clinician receives the test results, interprets them, and uses them to make diagnostic and therapeutic decisions - aims to reduce errors or bias associated with the hand-off from laboratory to clinician. The most frequent errors in the post-analytical phase are erroneous validation of analytical data, failure to report test results to appropriate parties, excessively long turnaround time (TAT), mistakes in data entry, manual transcription errors and failure or delay in reporting critical values . Despite the obvious importance of the post-analytical phase to overall laboratory performance, many providers of external quality assessment (EQA) schemes do not take into account the post-analytical phase . In 2009, the Croatian Chamber of Medical Biochemists (CCMB) and Croatian Society of Medical Biochemistry and Laboratory Medicine (CSMBLM) assessed the state of pre- and post-analytical procedures in medical laboratories across the country . The results indicated urgent, substantial need for improvement. Therefore, a nationwide EQA scheme covering the post-analytical phase was implemented in 2014, administered by the Croatian Centre for Quality Assessment in Laboratory Medicine (CROQALM) within the CSMBLM. The EQA scheme is implemented modularly three times per year. The CCMB made participation in the scheme mandatory for all medical laboratories in Croatia in 2013 . In the second EQA exercise of 2014, pilot modules on pre- and post-analytical phases were introduced; in all three EQA exercises of 2015, Module 11 dealing with the post-analytical phase was performed. The present study was undertaken to evaluate to what extent the recently introduced nationwide EQA scheme for the post-analytical phase of laboratory testing has influenced laboratory practice in Croatia. Since laboratories showed substantial variation in post-analytical practices before the scheme , we felt it necessary to evaluate the success of the EQA scheme at this early stage in order to identify the more important issues and implementation gaps and thereby help regulators and laboratory directors focus their energies more efficiently in the coming years. In a separate publication, we will assess pre-analytical procedures using an EQA module for the pre-analytical phase developed by the CSMBLM. Module 11 is an educational module about the post-analytical phase of laboratory testing, and it contains an optional questionnaire that presents medical laboratories with routine post-analytical scenarios where standardized practices exist under the Croatian EQA scheme or where clear rules are lacking (‘grey areas’). The present study retrospectively analysed laboratory responses to this questionnaire in 2014-2015 in order to (a) gain on-the-ground insights into current laboratory practices in Croatia and (b) identify the most urgent areas for improving the standardization of the post-analytical phase in Croatian laboratories. Study design This retrospective, longitudinal study involved analysis of the responses of Croatian medical laboratories to a questionnaire distributed during four national EQA exercises conducted in September 2014 and in May, September and November 2015. 
During this study period, 194 medical laboratories were registered in the Croatian health care system, comprising 125 (64%) medical laboratories from primary health care facilities (including private medical practices and private laboratories) and 69 (36%) medical laboratories from secondary and tertiary health care centres (clinical hospital centres, clinical hospitals, general hospitals, national hospitals, and special hospitals). Although all medical laboratories were obliged to participate in the national EQA scheme, their responses on the questionnaire were voluntary. Laboratories were told that their responses would not affect their overall assessment from CROQALM. No fees or compensation were involved in completing the questionnaire. Data from all responding laboratories were used in the present study; no exclusion criteria were applied. When they filled out the questionnaire, laboratories gave consent for the data to be stored and used by CROQALM for group-level analyses. Members of CROQALM signed statements that they would safeguard the confidentiality of EQA data. Questionnaire Questionnaires have been proposed as an effective method for assessing the post-analytical phase during EQA exercises . The Croatian EQA questionnaire was designed by CROQALM, approved by CSMBLM and CCMB, and distributed to all registered medical laboratories in the country. The responses were analysed by the EQA provider (CROQALM), and one of the authors (JLK) annotated the results in her capacity as EQA/CROQALM Module 11 coordinator. All these steps, from design to final analysis, were conducted via Web interface using inlab2*QALM software, specifically designed in 2011 for quality evaluation of medical laboratory performances (IN2 Group Ltd., Zagreb, Croatia). Medical laboratories receiving the questionnaire were instructed to ask their laboratory manager or laboratory professionals (or quality control manager) to fill it out. The questionnaire was part of Module 11, entitled ‘Post-analytical phase of laboratory testing’, which explained post-analytical practices under the new Croatian EQA scheme. It consisted of closed-type questions covering four indicators of post-analytical quality proposed by the Working Group ‘Laboratory Errors and Patient Safety’ of the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) . The questions described specific situations or scenarios often encountered in routine practice concerning TAT, critical values, interpretative comments and procedures (repetition or additional testing) in the event of abnormal test results. Participants could choose only one of the offered responses for each question. Responses to 12 questions administered during one or more of the four exercises during the study period were analysed. Data analysis Data were not analysed statistically. Instead, results were reported as absolute numbers and percentages.
The number of medical laboratories participating in each exercise varied, as did the number that filled out the questionnaire; presents a histogram of response rates to questions. The response rate was always more than 80% of laboratories participating in the exercise. presents the contents of the questionnaire, which varied with the exercise. Responses to the questionnaire were analysed to determine to what extent medical laboratories comply with existing rules issued by professional bodies in Croatia, and to identify how medical laboratories are likely to proceed in frequent yet ‘grey’ situations where no clear rules exist.
Frequencies of different responses to all questions are shown in where answers that are clearly non-compliant with existing rules are emphasised in question comment. While laboratory respondents showed good knowledge of the definition of TAT, only approximately half of respondents reported that they routinely monitor TAT. Most respondents (85.7%) showed good knowledge of the definition of critical values (reflected in five questions) and of recommendations on how to apply the definition. On the other hand, the laboratories showed substantial variation in how they responded to certain questions about critical values; many lacked knowledge about age-dependent critical values and how to establish an intra-laboratory list of critical values. The questionnaire presented respondents with three scenarios involving interpretative comments in order to understand to what extent medical laboratories in Croatia take an active role in interpreting test results and communicating those interpretations to clinicians orally or in writing. While laboratories varied in their responses to these scenarios, one third selected answers implying an active role in interpretation of test results, either via a comment written on the report or contact with the clinician and/or patient. In various situations, up to a third of laboratories issued results without additional activities. The last group of questions asked laboratories how they proceed in the event of abnormal test results. For example, does the laboratory repeat the test and, if so, does it use the same or a new sample? Is the sample re-analysed using the same test procedure as before or a different procedure? After re-testing, are the initial and/or follow-up test results shown on the final report? Most laboratories reported that they repeat testing to verify abnormal results. Nevertheless, one third reported that they issue results without such verification. Harmonization and standardization of pre- and post-analytical phases of laboratory work are essential for good clinical care. Since 2007, ISO standard 15189 has included assessment of pre- and post-analytical phases of testing as one of the requirements for accreditation of medical laboratories . Nevertheless, many providers of EQA schemes do not systematically assess the post-analytical phase . Since 2014, all medical laboratories in Croatia are required to participate in a national EQA scheme that includes post-analytical assessment. The present study aimed to assess the current state of laboratory compliance with the EQA scheme, as well as identify areas where clearer rules - or the first set of rules - need to be developed at the national level. This is an urgent problem, because only 11 of 198 (5.5%) registered medical laboratories in Croatia are ISO 15189 - accredited, and most are planning to enter the accreditation process soon . In the present study, we retrospectively analysed the responses of medical laboratories to the Module 11 post-analysis questionnaire incorporated in the Croatian EQA exercises since 2014. This questionnaire focused on the four main quality control indicators of the post-analytical phase of testing. Our results indicate substantial heterogeneity in how medical laboratories in Croatia proceed in situations where no clear rules or guidelines exist. 
TAT is a frequently used quality indicator: it is easily tracked through the laboratory informatics system, and ISO 15189 mandates that the TAT be established for each type of test, through consultation between laboratory and clinician (item 5.8.11) . One challenge with standardizing TATs across laboratories is that the definition of TAT can vary depending on whether the laboratory is a primary, secondary or tertiary facility and whether the test is routine, emergency or specialized . Our results indicate that although most respondents know that TAT monitoring is an accreditation requirement, they do not have a monitoring system in place. Nevertheless, most respondents do record when the laboratory sample is received and when validated test results are obtained or reported. This likely reflects the widespread use of laboratory informatics systems. ISO 15189 requires that laboratories apply standard procedures for recording and reporting critical values (items 5.8.1, 5.9.1 and 5.9.2) . Laboratories are also required to generate their own lists of critical values based on the local clinical situation, in consultation with clinicians . Most laboratories in our sample showed an understanding of critical values but not of how to define their own critical values list, or they neglected to adjust critical values based on patient age. In January 2015, CCMB published an updated and revised, ISO 15189-compliant list of critical values , which includes critical values and reference intervals for neonatal patients . The recent release of this information at the national level may help to explain the heterogeneity in laboratory responses on our questionnaire. This information may help laboratories define their own lists of critical values and report critical values appropriately. Future work is needed to track laboratory-level implementation of this knowledge around the country. Most medical laboratories reported that they confirm critical values with additional measurement before reporting, which is consistent with CCMB recommendations. Approximately one fifth indicated that they immediately report critical values without test repetition, which is consistent with practices at accredited laboratories in other countries and reflects the fact that ISO 15189-compliant laboratories may choose not to re-test in specific circumstances, such as when national guidelines in their country recommend it or when the use of advanced laboratory technology places the initial result beyond reasonable doubt. Recently published results of a survey on critical results reporting in Croatian medical laboratories found a high score for re-analysing critical results before reporting . In general, Croatian laboratories are in compliance with the current CCMB recommendation. While those results, based on a carefully designed scoring system, are difficult to compare with our preliminary, descriptive results, the two studies may point to the need for more systematic research in this area. Interpretative comments on the laboratory test report can improve treatment outcomes . They are a widely used quality indicator and, since 2007, an obligatory part of ISO 15189 accreditation . Although the CCMB has stated since 2004 that ‘remarks related to the sample (lipemia, hyperbilirubinemia, haemolysis and others) are a mandatory part of every report of laboratory test results’, the type, format and position of the comments on the report are not clearly defined , nor do CCMB guidelines indicate which comments necessitate contacting the clinician.
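As a concrete illustration of the TAT monitoring discussed above, the sketch below computes turnaround times from the sample-receipt and result-report timestamps that a laboratory information system typically records. The column names, timestamps, and the 60-minute target are hypothetical examples chosen for illustration, not values taken from the Croatian scheme or from any specific informatics product.

```python
# Hedged sketch: computing TAT from receipt and report timestamps.
# All values and field names below are illustrative.
import pandas as pd

records = pd.DataFrame({
    "received": pd.to_datetime(["2015-03-01 08:05", "2015-03-01 08:20",
                                "2015-03-01 09:00"]),
    "reported": pd.to_datetime(["2015-03-01 09:10", "2015-03-01 10:05",
                                "2015-03-01 09:40"]),
})

# TAT in minutes from sample receipt to validated, reported result
records["tat_min"] = (records["reported"] - records["received"]).dt.total_seconds() / 60

# Typical monitoring summaries: median, 90th percentile, and the share of
# samples exceeding an agreed target (here an assumed 60-minute target)
target_min = 60
print("median TAT:", records["tat_min"].median())
print("90th percentile TAT:", records["tat_min"].quantile(0.9))
print("% over target:", (records["tat_min"] > target_min).mean() * 100)
```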
Our results indicate that, depending on the situation, 3-35% of medical laboratories do not flag abnormal results to the clinician, either in writing or orally. In addition, one third of laboratories neither repeats the test nor performs additional actions in an effort to confirm the abnormal results. Abnormal results may be significant for diagnosis and treatment, and may call into question the reliability of the test results. Including interpretative comments on lab reports can help prevent the release of incorrect or less reliable test reports . Therefore, our results identify an urgent need to revise and update CCMB recommendations about interpretative comments on test reports, as well as a need for the CCMB and other groups to define when tests or sampling should be repeated or additional tests performed. A small proportion (12%) of respondent laboratories left open-ended comments to one or more of the questions; nearly all these comments were that their laboratory did not routinely encounter, or had never encountered, the scenario described in the question. This suggests that many laboratories feel they lack sufficient knowledge or experience to deal adequately with many post-analytical problems, despite the implementation of the Croatian EQA scheme, and points to the need for greater training opportunities for medical laboratories in the country. The present study presents a preliminary picture of the early stages of post-analytical EQA at the national level in Croatia. It is based on a sampling of medical laboratories from around the country and makes use of a questionnaire tailored to the logistical, clinical, and regulatory situation in Croatia. As with most questionnaire assessments, there is some risk that practices reported on the survey do not reflect actual practices in the respondent laboratory. To reduce this risk, we asked that the questionnaires be filled out by professional laboratory staff responsible for quality control. Another limitation of our study is that the response rate ranged from 81% to 90%, raising the possibility that our sample was biased. For example, perhaps laboratories that felt more confident about their knowledge and practices were more likely to respond to our survey. If this is true, then our study may underestimate the lack of alignment with post-analytical best practices, which only reinforces our conclusion that much more needs to be done to accelerate the harmonization of post-analytical procedures in Croatia. A third limitation is that the survey was not extensive enough to offer comprehensive insights into laboratory practices and attitudes. While this may have helped ensure a high response rate for the preliminary analysis here, future work may wish to look at these issues in greater detail. In conclusion, assessment of post-analytical quality indicators such as TAT, critical values and interpretative comments is well recognized by both the CCMB and ISO 15189, although clear definitions of these terms, guideline compliance and the actions to be taken by laboratories are often not clearly specified. The results of the Module 11 survey in Croatia highlight major obstacles to the harmonization and standardization of post-analytical practices at the national level. Future EQA exercises should reinforce the importance of filling out this survey.
Psychometric properties of the Arabic version of the Forgotten Joint Score usage in total hip arthroplasty
dcb34e30-a4e8-4596-90aa-204f0c8aeee4
11773835
Surgical Procedures, Operative[mh]
More than one million total hip arthroplasties are performed worldwide each year . This number is predicted to double over the next few decades . Successful hip arthroplasty typically depends on a surgeon-centered evaluation that considers factors such as implant lifespan, functional outcomes, and complications . Recently, there has been a notable movement towards utilizing patient-reported outcomes as a crucial indicator, aligning with the main objective of hip arthroplasty, which is to provide pain relief and enhance quality of life for patients . This movement encourages patient-centered evaluation by acknowledging the value of patients' viewpoints and experiences in determining the success of a procedure. Among the scoring systems developed to meet these demands are the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) and other instruments created especially for the hip . Because of their ceiling effects, both of these standard scoring systems have faced criticism. This limits the validity of their use in assessments, particularly in research, potentially hindering the recognition of further improvement in patients who receive the highest score. A systematic review in 2010 assessed the usefulness of the Harris Hip Score (HHS) by investigating its ceiling effect and revealed that the tool had an unacceptable ceiling effect of 20% . In 2012, an innovative tool was created with the aim of gauging the extent to which patients can forget about the presence of an implant during their everyday activities. It is named the “Forgotten Joint Score-12” (FJS) . This scoring system has garnered substantial attention because of its ability to assess functional outcomes in a patient-centered fashion with a remarkably low ceiling effect . This has resulted in the widespread adoption of such tools and has necessitated the translation and validation of this tool in various languages . This tool was previously translated and evaluated in Arabic by the authors, specifically for knee arthroplasty patients . Due to the significance of hip arthroplasty, the unique characteristics of the hip joint, differences in patient perspectives, the necessity of accurate clinical decision-making, and the importance of assessing patient-reported outcomes, having a validated and culturally tailored Arabic version of the FJS-12 for hip arthroplasty patients is crucial. As far as we know, there is no validated Arabic version of the FJS-12 for hip arthroplasty. Validating the Arabic version will boost surgeons' confidence in making well-informed decisions that are more closely in line with patients' viewpoints and requirements. Study design, participants, and ethical considerations We retrospectively reviewed a single-center list of patients who underwent unilateral hip arthroplasty between 2015 and 2022 due to primary osteoarthritis, or as a sequela of developmental dysplasia of the hip, avascular necrosis, or inflammatory arthritis. After determining who met the inclusion criteria, 107 native Arabic speakers were asked to take part. This number was based on the threshold of 100 proposed by Terwee et al. . The follow-up period from the operation to the questionnaire response ranged between 1 and 7 years. Any patient with a cognitive disorder that hindered the ability to answer the questionnaire independently was excluded from the study. Participants below 18 years of age, as well as those who underwent resurfacing or revision procedures, were also excluded.
All participants provided informed consent before the study began; it was emphasized to participants that they had no obligations to the research team and that they had the right to withdraw at any time. The local Institutional Review Board reviewed and approved this study (No. E-22-7019). Instrument translation, procedure, and data collection The FJS-12 consists of 12 questions that assess a patient’s capacity to disregard the existence of an artificial joint in daily activities. Each item is accompanied by a five-point Likert scale response. The raw scores are transformed into a scale ranging from 0 to 100 points, with the highest score indicating a favorable outcome in which the patient is unaware of the prosthesis’s presence [9]. The study was carried out after obtaining a license agreement from the FJS-12 copyright owners. The forward-backward method used to translate the FJS-12 into Arabic had been approved by the tool’s original developers [17]. The research team then conducted a pilot test of the questionnaire on 10 patients who underwent unilateral total hip arthroplasty to determine whether there were any problems with questionnaire comprehension. Each issue was discussed by the research team, and following a consensus, the final version was ratified. Validation process and data acquisition Construct and content validity are the two general forms of validity assessed for the Arabic version of the FJS. To evaluate construct validity, participants were asked to complete the reduced WOMAC (rWOMAC) once, and Pearson’s correlation coefficients were calculated; values larger than 0.6 indicated strong associations . The content validity of the questionnaire describes how thoroughly it accounts for all symptoms reported by patients, and was assessed using floor and ceiling effects. Ceiling and floor effects in < 15% of patients were considered acceptable . We assessed reliability by measuring whether the test was consistent over time (test-retest reliability) and across items (internal consistency). To assess test-retest reliability, patients were asked to complete the Arabic version of the FJS questionnaire at two separate times spaced two weeks apart, and the intraclass correlation coefficient (ICC) was calculated . Cronbach’s alpha was used to evaluate internal consistency, which is a measure of how closely connected the different parts of a measuring tool are to one another. A Cronbach’s alpha of 0.80 to 0.89 generally implies acceptable internal consistency, while a value of 0.90 or higher denotes excellent internal consistency . Agreement quantifies the degree of variation in measurements obtained from a tool when several measurements are conducted. Two measures, the Standard Error of Measurement (SEM) and the Minimal Detectable Change (MDC), were calculated to assess this agreement. The SEM is determined as SEM = Sd × √(1 − R), where Sd is the standard deviation of the difference between two measurements and R is the reliability of these measurements; the ICC was used as the reliability estimate in this equation. The MDC was determined using the formula MDC = SEM × 1.96 × √2, where the value of 1.96 was derived from the 95% confidence interval for no difference . Statistical data analysis The Statistical Package for the Social Sciences (SPSS 22; IBM Corp., New York, NY, USA) was used to analyze the data.
To evaluate the construct validity of each scoring system, Pearson’s correlation coefficient was computed. Test-retest reliability and internal consistency were evaluated using the intraclass correlation coefficient (ICC) and Cronbach’s alpha, respectively. Additionally, confidence intervals (CIs) at the 95% or 99% level were reported, as appropriate. The threshold for statistical significance was set at a p-value of less than 0.05. Furthermore, a threshold of ≥ 0.3 was applied for the item-to-total correlations . A Bland-Altman plot was used to visually represent the difference in scores between the two completions of the survey.
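As a companion to the formulas above, the agreement and consistency statistics are straightforward to reproduce. The sketch below is purely illustrative (Python with NumPy rather than SPSS, and hypothetical scores in place of the study data); following the description above, the reliability value plugged into the SEM formula is assumed to be the test-retest ICC.

import numpy as np

# Hypothetical paired FJS-12 total scores (0-100) from the test and retest sessions.
test = np.array([72.9, 45.8, 88.5, 31.3, 60.4, 54.2])
retest = np.array([75.0, 43.8, 91.7, 35.4, 58.3, 56.3])

# Agreement: SEM = Sd * sqrt(1 - R), with Sd the standard deviation of the
# test-retest differences and R the reliability (the ICC in this study).
icc = 0.93  # placeholder value; the study reports an ICC of 0.931
sem = np.std(test - retest, ddof=1) * np.sqrt(1 - icc)
mdc = sem * 1.96 * np.sqrt(2)  # minimal detectable change at the 95% level

# Internal consistency: Cronbach's alpha from a subjects-by-items matrix
# (rows = subjects, columns = the 12 FJS items; random placeholder data here).
items = np.random.default_rng(0).integers(0, 5, size=(6, 12)).astype(float)
k = items.shape[1]
alpha = (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum() / items.sum(axis=1).var(ddof=1))

print(f"SEM = {sem:.2f}, MDC = {mdc:.2f}, Cronbach's alpha = {alpha:.2f}")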
The initial survey was completed by a total of 107 participants. Of these, 72 individuals took part in the retest phase, in which they completed the rWOMAC questionnaire along with the translated version of the FJS-12. According to the data presented in Table , the mean age of the participants at the initial response was approximately 46.12 ± 14.19 years. A significant proportion of the participants, 65.4%, were female. Approximately equal proportions of participants underwent total hip arthroplasty on the right and left sides, with 54.2% undergoing right hip replacement. On average, 36.99 ± 21.69 months elapsed from the time of surgery to the survey response. Upon initial completion of the Ar-FJS questionnaire, all participants reported no difficulty in comprehending the content, and every survey question achieved a response rate of 100%. In the first survey, the mean FJS score was 49.22 ± 31.24; on retesting, this increased to 53.91 ± 29.10. The participants had an average rWOMAC score of 11.65 ± 9.59. Validity The Ar-FJS questionnaire demonstrated a moderate correlation with rWOMAC scores ( r = 0.595, p < 0.001), indicating a significant association between the two measures (see Table ). The Ar-FJS demonstrated an acceptable ceiling effect of 5.6% ( n = 6) and a floor effect of 3.7% ( n = 4). Similarly, during retesting, the Ar-FJS exhibited a ceiling effect of 1.9% ( n = 2) and a floor effect of 1.9% ( n = 2), which were comparable to the rWOMAC ceiling effect of 1.9% ( n = 2) and floor effect of 3.7% ( n = 4). The correlation coefficients with the rWOMAC scores supported the construct validity of the Ar-FJS, suggesting a good relationship between these measures (Fig. ). Reliability The Ar-FJS exhibited excellent internal consistency, as indicated by a high Cronbach’s alpha value of 0.957 (Table ). This finding is further supported by the results shown in Table , where removing any individual item did not meaningfully affect the internal consistency, with values consistently above 0.95. The item-total correlation analysis revealed a strong positive correlation (> 0.64) between each item and the overall FJS score, indicating that all items contributed to the measurement of the construct being assessed. Except for item 2, which had an ICC of 0.691 (95% CI, 0.508–0.806), all questions had ICCs above 0.7. The intraclass correlation coefficient (ICC) between the initial and retest total scores indicated excellent reliability, with a value of 0.931 (95% CI, 0.890–0.957), as presented in Table . The test-retest mean difference was − 1.94.
However, this difference was not statistically significant ( p = 0.275), indicating that the scores did not change significantly over time. The Bland-Altman plot, as illustrated in Fig. , offers a graphical depiction of the participants’ responses in relation to the average discrepancy between the two assessments. The plot demonstrates a high level of concordance between the test and retest scores, indicating no discernible systematic bias. Additionally, proportional bias was evaluated using linear regression analysis, and the results indicated no significant correlation between the difference and the mean ( p = 0.687), suggesting that the difference in scores did not vary systematically with the mean scores. The Standard Error of Measurement (SEM) for the total FJS score was 3.93, indicating the average amount of measurement error associated with individual scores, and the Minimal Detectable Change (MDC) was calculated to be 10.89, representing the smallest difference that can be considered a real change beyond the measurement error.
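For readers who wish to reproduce this type of agreement analysis, the sketch below shows one way a Bland-Altman plot and its limits of agreement can be generated (Python/Matplotlib, with hypothetical paired scores rather than the study data; the 1.96 multiplier assumes approximately normally distributed differences).

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical paired test/retest FJS-12 totals (0-100); not the study data.
test = np.array([72.9, 45.8, 88.5, 31.3, 60.4, 54.2, 66.7, 39.6])
retest = np.array([75.0, 43.8, 91.7, 35.4, 58.3, 56.3, 64.6, 43.8])

mean_scores = (test + retest) / 2
diff = test - retest
bias = diff.mean()                 # mean test-retest difference
loa = 1.96 * diff.std(ddof=1)      # half-width of the 95% limits of agreement

plt.scatter(mean_scores, diff)
plt.axhline(bias, linestyle="-", label="mean difference")
plt.axhline(bias + loa, linestyle="--", label="upper limit of agreement")
plt.axhline(bias - loa, linestyle="--", label="lower limit of agreement")
plt.xlabel("Mean of test and retest scores")
plt.ylabel("Difference (test - retest)")
plt.legend()
plt.show()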
Patient-reported outcomes (PROs) are essential for evaluating the efficacy and success of hip arthroplasty . These results offer valuable insights into the perspectives, measurable outcomes, and post-surgery satisfaction of patients. The absence of professional tools in languages other than English often hinders their widespread use among diverse populations. We attempted to address this gap by translating and testing the Forgotten Joint Score (FJS) for hip arthroplasty in Arabic, referred to as the Ar-FJS. The FJS is a PRO instrument designed to assess patients’ joint awareness and joint functionality, which are essential factors in forecasting the outcome of arthroplasty. The translation and validation of the FJS allow healthcare providers to comprehensively evaluate and monitor the outcomes of Arabic-speaking patients who undergo hip arthroplasty. This research primarily confirmed the validity and reliability of the Arabic version of the FJS-12 for assessing hip joint awareness and function in an Arabic-speaking population. Significantly, the Arabic translation did not require any alterations, and all participants in the study comprehended and responded accurately to the items. Existing PRO tools often overlook essential factors that determine the success of arthroplasty, such as natural joint awareness and joint feeling. In addition, many of these tools exhibit significant ceiling and floor effects, which make it difficult to differentiate between excellent and good scores. To address these concerns, the FJS-12 was developed by Behrend et al. . The current study results are similar to those of the original study, demonstrating excellent internal consistency. The original FJS reported a high internal consistency (Cronbach α = 0.95) . In the current study, the Arabic translation was consistent with the original version and had a similar internal consistency (Cronbach α = 0.96). Moreover, the Ar-FJS was evaluated in correlation with another widely used PRO tool, establishing its excellent validity and cultural suitability for Arabic-speaking populations. The mean forgotten hip score was approximately 50%, which was close to that of the original study and other translations, with a mean follow-up period of approximately 3 years. The average score in the original study was 59.8%, with a mean follow-up of 2.6 years . The Dutch version reported an average score of 56.1% with a mean follow-up of 1.3 years, the French version reported an average of 63.1%, and the Persian version reported an average of 50.8 with a mean follow-up of 1.2 years . Moreover, consistent with the original study and other translations, no substantial floor or ceiling effects were observed; our study reported ceiling and floor effects below the set threshold of 15% in both the test and the retest. The study showed excellent test-retest reliability, with an ICC of 0.931, which is consistent with other translations . This study also examined the association between the Ar-FJS and other established scales, including the rWOMAC.
The findings of our study indicate a statistically significant positive relationship between the Ar-FJS and rWOMAC scores ( r = 0.595, p < 0.001). This observation aligns with prior research that has documented comparable correlations ( r = 0.559) . One limitation of this study is its exclusive focus on the postoperative phase without considering preoperative factors. This omission restricts the assessment of the impact of preoperative conditions on the reliability and validity of the Ar-FJS in predicting patient outcomes. The responsiveness of the Ar-FJS, which tests the tool’s capacity to identify clinically relevant changes over time and would establish its longitudinal validity, was also not examined in this study. Another drawback of PRO tools is their reliance on subjective patient-reported results, which can introduce bias. Furthermore, the varying time periods post-surgery may be seen as a limitation of the study, as they could affect the Forgotten Joint Score and perhaps compromise the accuracy of the assessment. Despite these drawbacks, the Ar-FJS is simple, reliable, valid, and consistent, and is comparable to the original English version. Because it enables the evaluation of clinical outcomes in the Arabic-speaking community, its clinical importance is substantial. In both clinical practice and research, the use of the Ar-FJS to recognize and evaluate patient symptoms and limitations over time offers significant benefits, and it can improve treatment by measuring functional outcomes and demonstrating treatment efficacy. In summary, use of the Ar-FJS in hip arthroplasty shows excellent validity and reliability, and the tool can be recommended for use in clinical practice for patients undergoing hip arthroplasty in Arabic-speaking communities.
DeepIDA-GRU: a deep learning pipeline for integrative discriminant analysis of cross-sectional and longitudinal multiview data with applications to inflammatory bowel disease classification
f4cb8336-0de9-4dc7-bd6a-0706a6039537
11771283
Biochemistry[mh]
Biomedical research now commonly integrates diverse data types (e.g. genomics, metabolomics, clinical) from the same individuals to better understand complex diseases. These data types, whether measured at one time point (cross-sectional) or multiple time points (longitudinal), offer diverse snapshots of disease mechanisms. Integrating these complementary data types provides a comprehensive view, leading to meaningful biological insights into disease etiology and heterogeneity. Inflammatory bowel disease (IBD), including Crohn’s disease and ulcerative colitis, is a complex disease with multiple factors (including clinical, genetic, molecular, and microbial levels) contributing to the heterogeneity of the disease. IBD is an autoimmune disorder associated with inflammation of the gastrointestinal tract (Crohn’s disease) or the inner lining of the large intestine and rectum (ulcerative colitis), and is the result of imbalances and interactions between microbes and the immune system. To better understand the etiology of IBD, the integrated human microbiome project (iHMP) for IBD was initiated to investigate potential factors contributing to heterogeneity in IBD . In that study, individuals with and without IBD from five medical centers were recruited and followed for one year and the molecular profiles of the host (e.g. host transcriptomics, metabolomics, proteomics) and microbial activities (e.g. metagenomics, metatranscriptomics) were generated and investigated. Several statistical, temporal, dysbiosis and integrative analyses methods were performed on the multiomics data. Integrative analyses techniques used included lenient cross-measurement type temporal matching and cross-measurement type interaction testing. Our work is motivated by the IBD iHMP study and the many biological research studies that generate cross-sectional and longitudinal data with the ultimate goal of rigorously integrating these different types of data to investigate individual factors that discriminate between disease groups. Several methods, both linear and non-linear , have been proposed in the literature to integrate data from different sources but these methods expect the same types of data (e.g. cross-sectional data only, or longitudinal data only), which limits our ability to apply these methods to our motivating data that is a mix of cross-sectional and longitudinal data. For instance, methods for associating two or more views (e.g. , iDeepViewLearn , JIVE , DeepCCA , DeepGCCA , kernel methods , co-inertia ) or for joint association and prediction (e.g. SIDA , DeepIDA , JACA , MOMA , CVR , randmvlearn , BIP , sJIVE ) are applicable to cross-sectional data only. The Joint Principal Trend Analysis (JPTA) method proposed in for integrating longitudinal data is purely unsupervised, only applicable to two longitudinal data, cannot handle missing data and assumes the same number of time points in both views. However, methods for integrating cross-sectional and longitudinal data are scarce in the literature. The few existing methods do not maximize the association between views and, more importantly, when applied to our motivating data, cannot be used to identify variables discriminating between those with and without IBD. These methods use recurrent neural networks to extract features from different modalities and then simply concatenate the extracted features to perform classification. 
To bridge the gap in the existing literature, we build a pipeline that (1) integrates longitudinal and cross-sectional data from multiple sources such that there is maximal separation between different classes (e.g. disease groups) and maximal association between views; and (2) identifies and ranks relevant variables contributing most to the separation of classes and association of views. Our pipeline combines the strengths of statistical methods, such as the ability to make inference, reduce dimension and extract longitudinal trends, with the flexibility of deep learning, and consists of i) variable selection/ranking, ii) feature extraction, and iii) joint integration and classification . In particular, for variable selection/ranking, we consider the linear methods (linear mixed models [LMM] and JPTA) and the nonlinear method (deep integrative discriminant analysis [DeepIDA] ). DeepIDA is a deep learning method for joint association and classification of cross-sectional data from multiple sources. It combines resampling techniques, specifically the bootstrap, to rank variables based on their contributions to classification estimates. Since DeepIDA is applicable to cross-sectional data only, for longitudinal data we combine DeepIDA with gated recurrent units (GRUs), a class of recurrent neural networks (RNN), to rank variables. We refer to this method as DeepIDA-GRU-Bootstrapping (DGB). Of note, LMM explores linear relationships between a longitudinal variable and an outcome and focuses on identifying variables discriminating between two classes; JPTA explores linear relationships between two longitudinal views and focuses on identifying variables that maximally associate the views; and DGB models nonlinear relationships between classes and two or more longitudinal and cross-sectional datasets and focuses on simultaneously maximizing between-class separation and between-view associations. For feature extraction, we explore two methods, Euler characteristics (EC) and functional PCA (FPCA), to extract one-dimensional embeddings from each of the (two-dimensional) longitudinal views. EC and FPCA inherently focus on different characteristics of longitudinal data while extracting features, and in this work the two are compared and analysed using a simulated dataset. Finally, for integration and classification, we combine the existing DeepIDA method (without bootstrap) with GRUs, taking as input the selected variables and the extracted features from each view. Since we do not implement variable ranking at this stage, we refer to this method as DeepIDA-GRU, to distinguish it from DGB, which implements the bootstrap in DeepIDA with GRUs. DeepIDA-GRU can be used to integrate a mix of longitudinal and cross-sectional data from multiple sources and discriminate between two or more classes. We emphasize that DeepIDA-GRU combines the existing DeepIDA method (without bootstrap) with GRUs. As such, DeepIDA-GRU can directly take longitudinal data as input, making the feature extraction step (which could potentially lead to a loss of information) optional. Please refer to for a visual representation of the DeepIDA-GRU framework. In summary, we provide a pipeline that innovatively combines the strengths of existing statistical and deep learning methods to rigorously integrate cross-sectional and longitudinal data from multiple sources for deeper biological insights. Our pipeline offers four main contributions to the field of integrative analysis.
First, our framework allows users to integrate a mix of cross-sectional and longitudinal data, which is appealing and could have broad utility. Second, we allow the use of a clinical outcome in variable selection or ranking. Third, we model complex nonlinear relationships between the different views using deep learning. Fourth, our framework has the ability to accommodate missing data. Datasets and data preprocessing To evaluate the effectiveness of the proposed pipeline, we use simulations to compare the two feature extraction methods and make recommendations on when each is suitable to use. We applied the pipeline to cross-sectional (host transcriptomics data) and longitudinal (metagenomics and metabolomics) IBD data from 90 subjects who had the three measurements. Before preprocessing, the metagenomics data contained path abundances of [12pt]{minimal} $22113$ gene pathways, the metabolomics data consisted of [12pt]{minimal} $103$ hilic negative factors, and the host transcriptomics data consisted of [12pt]{minimal} $55\,765$ probes. We note that, for most of the participants, multiple samples of their host transcriptomics data were collected in a single week. Therefore, in this work, we consider the host transcriptomics as a cross-sectional view, and the data for each individual were taken as the mean of all samples collected from them. Preprocessing followed established techniques in the literature and consisted of (i) keeping variables that have less than [12pt]{minimal} $90\%$ zeros (for metagenomics) or [12pt]{minimal} $5\%$ zeros (for metabolomics) in all collected samples; (ii) adding a pseudo count of 1 to each data value (this ensures that all entries are nonzero and allows for taking logarithms in the next steps); (iii) normalizing using the ‘Trimmed Mean of M-values’ method (for metagenomics); (iv) logarithmic transformation of the data; and (v) plotting the histogram of variances and filtering out variables (pathways) with low variance across all collected samples. After the preprocessing steps, the number of variables remaining for the metagenomics, metabolomics and transcriptomics data was [12pt]{minimal} $2261$ , [12pt]{minimal} $93$ and [12pt]{minimal} $9726$ , respectively. More details about data preprocessing are provided in the . Notations and overview of proposed pipeline Let [12pt]{minimal} $_{d} ^{N p_{d} t_{d}}$ be a tensor representing the longitudinal (if [12pt]{minimal} $t_{d}> 1$ ) or cross-sectional (if [12pt]{minimal} $t_{d} = 1$ ) data corresponding to the [12pt]{minimal} $d$ th view (for [12pt]{minimal} $d [1:D]$ ), for the [12pt]{minimal} $N$ subjects. The subjects, variables and time points of view [12pt]{minimal} $d [1:D]$ are indexed from [12pt]{minimal} $[1:N], [1:p_{d}]$ and [12pt]{minimal} $[1:t_{d}]$ , respectively. Here, for each subject [12pt]{minimal} $n [1:N]$ , the data corresponding to the [12pt]{minimal} $d$ th view has [12pt]{minimal} $p_{d}$ variables and each of these [12pt]{minimal} $p_{d}$ variables was measured at [12pt]{minimal} $t_{d}$ time points. Also, let [12pt]{minimal} ${ = \{_{d}: d [1:D]\}}$ denote the collection of data from all views. [12pt]{minimal} $_{d}^{(n, , )} $ denotes the value of the variable [12pt]{minimal} $ [1:p_{d}]$ at time point [12pt]{minimal} $ [1:t_{d}]$ of the [12pt]{minimal} $n$ th subject (for [12pt]{minimal} $n [1:N]$ ) in the [12pt]{minimal} $d$ th view (for [12pt]{minimal} $d [1:D]$ ). 
Moreover, we use ‘:’ to include all the data of a particular dimension, for example, [12pt]{minimal} ${_{d}^{(n,:,:)} ^{p_{d} t_{d}}}$ denotes the multivariate time series data of the [12pt]{minimal} $n$ th subject corresponding to the [12pt]{minimal} $d$ th view. Note that there are a total of [12pt]{minimal} $K$ classes [12pt]{minimal} $\{1,2, , K\}$ and each subject [12pt]{minimal} $n [1:N]$ belongs to one of the [12pt]{minimal} $K$ classes and the class of the [12pt]{minimal} $n$ th subject is denoted by [12pt]{minimal} $ (n)$ . The proposed pipeline for integrating both cross-sectional and longitudinal views is pictorially illustrated in and consists of the following steps: (i) Variable Selection or Ranking is used to find the top variables in each view and eliminate irrelevant variables. In other words, the tensor [12pt]{minimal} $_{d} ^{N p_{d} t_{d}}$ is converted to a smaller tensor [12pt]{minimal} $}_{d} ^{N _{d} t_{d}}$ with fewer variables [12pt]{minimal} $_{d} < p_{d}$ for all [12pt]{minimal} $d [1:D]$ . In this work, we use LMM, DGB and JPTA for variable selection. We describe these briefly in subsequent sections and in more detail in the . The variable selection step is optional and one could go directly to the next step (in this case [12pt]{minimal} $}_{d} = _{d}$ ); (ii) Feature extraction is used to extract important one-dimensional feature embedding from longitudinal data. This step converts the tensor [12pt]{minimal} ${}_{d} ^{N _{d} t_{d}}}$ to [12pt]{minimal} $}_{d} ^{N _{d} 1}$ , where [12pt]{minimal} $_{d}$ is the dimension of the extracted embedding. The two methods explored in this work for feature extraction are based on Euler curves and FPCA, described briefly in subsequent sections and in more detail in the . This step is also optional and one could directly go to the next step (in this case [12pt]{minimal} $}_{d} = }_{d}$ ); (iii) Integration and classification uses DeepIDA-GRU to simultaneously integrate the multiview data [12pt]{minimal} $\{}_{d}, d [1:D]\}$ obtained after the first two steps and perform classification. We will describe each part of the pipeline in the following subsections. Step 1: variable selection or ranking Given the high-dimensionality of our data, it is reasonable to assume that some of the variables are simply noise and do not contribute to the distinction between the classes in the views or the correlation between the views. Consequently, it is essential to identify relevant or meaningful variables. We investigated three techniques for selecting variables from cross-sectional and longitudinal data: (i) LMMs, (ii) DGB and (iii) JPTA. LMM is a univariate method applied to each longitudinal or cross-sectional variable and to each view separately. LMM chooses variables that are essential in discriminating between classes in each view. JPTA is a multivariate linear dimension reduction method for integrating two longitudinal views. DGB is a multivariate nonlinear dimension reduction technique that can be used to combine two or more longitudinal and cross-sectional datasets and differentiate between classes. It is useful for choosing variables that are relevant both in discriminating between classes and in associating views. LMM is applicable to any number of longitudinal and cross-sectional data. Similarly, DGB is also applicable to any number of longitudinal and cross-sectional data. On the other hand, JPTA can be applied to only two longitudinal views. 
It is possible to omit the variable selection step and instead use the entire set of variables in the second step of the pipeline (that is, $\tilde{\mathcal{X}}_{d} = \mathcal{X}_{d}$). We briefly describe the three variable selection methods. Please refer to the for more details. Linear mixed models LMMs are generalizations of linear models that allow the use of both fixed and random effects to model dependencies in samples arising from repeated measurements. LMMs were used in for differential abundance analysis of longitudinal data from the IBD study to identify important longitudinal variables discriminating between IBD status. To determine whether a given variable is important for discriminating between disease groups, we construct two models: (i) a null model and (ii) a full model. The outcome for each model is the longitudinal variable. The null model associates the outcome with a fixed variable (i.e. time) plus a random intercept, adjusting for covariates of interest (e.g. sites). The full model includes the null model plus the disease status of the sample, treated as a fixed variable. The full and null models are then compared using ANOVA to determine statistically significant (p-value $< 0.05$) variables that discriminate between the classes considered. While LMMs use the class status in variable selection, they handle each variable separately and do not consider between-view and within-view dependencies. This could lead to suboptimal variable selection because some variables may only be significant in the presence of other variables. Joint Principal Trend Analysis was introduced in 2018 by as a method to extract shared latent trends and identify important variables from a pair of longitudinal high-dimensional datasets. Following our notation, we let $\{\mathcal{X}_{1}^{(n,:,:)}: n \in [1:N]\}$ and $\{\mathcal{X}_{2}^{(n,:,:)}: n \in [1:N]\}$ be the longitudinal datasets for view 1 and view 2, respectively, for the $N$ subjects. The numbers of variables in view 1 and view 2 are $p_{1}$ and $p_{2}$, respectively, and the number of time points for the two views is $t_{1}=t_{2}=T$. Therefore, each subject's data $\mathcal{X}_{i}^{(n,:,:)}$ (for $i \in \{1,2\}$ and $n \in [1:N]$) is a $p_{i} \times T$ tensor. In JPTA, the key idea is to represent the data of the two views with the following common principal trends: $$\mathcal{X}_{1}^{(n,:,:)} = \mathbf{u}\,\boldsymbol{\theta}\,\mathbf{B}^{\top} + \mathbf{E}_{1}^{(n)}, \qquad \mathcal{X}_{2}^{(n,:,:)} = \mathbf{v}\,\boldsymbol{\theta}\,\mathbf{B}^{\top} + \mathbf{E}_{2}^{(n)},$$ for $n \in [1:N]$, where (i) $\mathbf{u}$ and $\mathbf{v}$ are $p_{1} \times 1$ and $p_{2} \times 1$ vectors of variable loadings, respectively; (ii) $\boldsymbol{\theta}$ is a $1 \times (T+2)$ vector of cubic spline coefficients; (iii) $\mathbf{B}$ is a cubic spline basis matrix of size $T \times (T+2)$; and (iv) $\mathbf{E}_{i}^{(n)}$ for $i \in \{1,2\}$ are the respective noise terms. To obtain $(\mathbf{u}, \boldsymbol{\theta}, \mathbf{v})$, the following loss is minimized: $$\min_{\mathbf{u},\boldsymbol{\theta},\mathbf{v}} \sum_{n=1}^{N} \left( \left\| \mathcal{X}_{1}^{(n,:,:)} - \mathbf{u}\,\boldsymbol{\theta}\,\mathbf{B}^{\top} \right\|_{F}^{2} + \left\| \mathcal{X}_{2}^{(n,:,:)} - \mathbf{v}\,\boldsymbol{\theta}\,\mathbf{B}^{\top} \right\|_{F}^{2} \right) \quad \text{s.t.} \quad \boldsymbol{\theta}\,\boldsymbol{\Omega}\,\boldsymbol{\theta}^{\top} \le c, \; \|\mathbf{u}\|_{1} \le c_{1}, \; \|\mathbf{v}\|_{1} \le c_{2}, \; \|\mathbf{u}\|_{2}^{2} \le 1, \; \|\mathbf{v}\|_{2}^{2} \le 1,$$ where $\|\cdot\|_{F}$ represents the Frobenius norm, $\boldsymbol{\Omega}$ is a $(T+2)$ by $(T+2)$ matrix given by $\Omega_{i,j} = \int B_{i}''(t)\, B_{j}''(t)\, dt$ (where $[\mathbf{B}]_{t,m} = B_{m}(t)$), and the sparsity parameters $c_{1}$ and $c_{2}$ control the number of nonzero entries in the vectors $\mathbf{u}$ and $\mathbf{v}$, respectively. In particular, after solving the optimization problem, the variables corresponding to the entries of $\mathbf{u}$ and $\mathbf{v}$ with high absolute values (the top $c_{1}$ entries of $\mathbf{u}$ and the top $c_{2}$ entries of $\mathbf{v}$) are the variables that we select as important using the JPTA method. Thus, using JPTA, we select the top $c_{1}$ and $c_{2}$ variables for the two views, respectively, that maximize the association between the views. It is important to note that JPTA has several shortcomings relative to LMM and DGB: (i) it does not take into account information about the class labels while selecting the top variables (which makes it more suitable for data exploration than for regression and classification problems); (ii) it can only be used with two longitudinal views; and (iii) it assumes an equal number of time points in both views. DeepIDA-GRU-Bootstrapping DGB is a novel method we propose in this manuscript as an extension of DeepIDA to the scenario where there are longitudinal data in addition to cross-sectional data. DeepIDA is a multivariate dimension reduction method for learning non-linear projections of different views that simultaneously maximize separation between classes and association between views. To aid interpretability, the authors proposed a homogeneous ensemble approach via bootstrap to rank variables according to how much they contribute to the association of views and separation of classes. In its original form, DeepIDA is applicable only to cross-sectional data, which is limiting. Thus, for longitudinal data, we integrate gated recurrent units (GRUs) into the DeepIDA framework. GRUs are a class of recurrent neural networks (RNNs) that allow long-term learning of dependencies in sequential data and help mitigate the problem of vanishing/exploding gradients in vanilla RNNs . We refer to this modified network as DeepIDA-GRU (which is shown pictorially in ). Specifically, each cross-sectional view is fed into a dense neural network and each longitudinal view is fed into a GRU. The inclusion of GRUs in the DeepIDA framework enables us to extend the bootstrapping idea of to multiview data consisting of longitudinal and cross-sectional views. We call this approach for variable selection DGB. A detailed description of the DeepIDA bootstrap procedure can be found in but, for completeness' sake, we enumerate the main steps applied to DGB here: (1) From the set of $N$ subjects $[1:N]$, randomly sample with replacement $N$ times to generate each of the $M$ bootstrap sets $\{B_{1}, B_{2}, \ldots, B_{M}\}$. The sets $\{B_{1}^{c}, B_{2}^{c}, \ldots, B_{M}^{c}\}$ are called out-of-bag sets.
(2) Construct $M$ $D$-tuples $\{\mathcal{V}_{m} \mid m \in [1:M]\}$ of bootstrapped variables, where each $D$-tuple $\mathcal{V}_{m}$ consists of $D$ sets, denoted by $\mathcal{V}_{m} = (V_{1,m}, V_{2,m}, \ldots, V_{D,m})$. Here, the $d$th set $V_{d,m}$ consists of a randomly selected $80$ percent of the variables from the $d$th view (where $d \in [1:D]$). This gives us the set of $M$ bootstrapped variable subsets $\{\mathcal{V}_{1} = (V_{1,1}, V_{2,1}, \ldots, V_{D,1}), \mathcal{V}_{2} = (V_{1,2}, V_{2,2}, \ldots, V_{D,2}), \ldots, \mathcal{V}_{M} = (V_{1,M}, V_{2,M}, \ldots, V_{D,M})\}$. (3) Pair the bootstrapped subject sets with the bootstrapped variable sets. Let the bootstrapped pairs be given by $(B_{1}, \mathcal{V}_{1}), (B_{2}, \mathcal{V}_{2}), \ldots, (B_{M}, \mathcal{V}_{M})$ and the out-of-bag pairs be given by $(B_{1}^{c}, \mathcal{V}_{1}), (B_{2}^{c}, \mathcal{V}_{2}), \ldots, (B_{M}^{c}, \mathcal{V}_{M})$. (4) For every variable $v$ in every view, initialize its score as $S_{v}=0$. (5) For each bootstrapped pair $(B_{i}, \mathcal{V}_{i})$ and the out-of-bag pair $(B_{i}^{c}, \mathcal{V}_{i})$ (where $i \in [1:M]$): first train the DeepIDA-GRU network using the bootstrapped pair $(B_{i}, \mathcal{V}_{i})$ and then test the network on the out-of-bag pair $(B_{i}^{c}, \mathcal{V}_{i})$. This gives a baseline accuracy for the $i$th pair, and the corresponding model is the baseline model for the $i$th bootstrapped pair. For each variable $u \in \mathcal{V}_{i}$, randomly permute the values of this variable among the different subjects (while keeping the other variables intact) and test the learned baseline model on the permuted data. If there is a decrease in accuracy (compared to the baseline accuracy), the variable $u$ was likely important in achieving the baseline accuracy; in that case, increase the score of variable $u$ by $1$, that is, $S_{u}=S_{u}+1$. The overall importance of any variable $u$ is then calculated by (1) $$\text{importance}(u) = \frac{S_{u}}{\#\{i \in [1:M] : u \in \mathcal{V}_{i}\}},$$ that is, the number of times permuting $u$ reduced the out-of-bag accuracy divided by the number of bootstrapped variable sets containing $u$. Notably, the Integrative Discriminant Analysis (IDA) objective enables DGB to select variables that are important in simultaneously separating the classes and associating the views. However, compared to LMM and JPTA, DGB can be computationally expensive. Nevertheless, the bootstrapping process is parallelizable, which can significantly improve run time. There exist variants of GRU that can handle missing data, and replacing the GRU with such variants would allow DGB to handle missing data. Step 2: feature extraction Feature extraction methods extract important one-dimensional features from longitudinal data. We investigated two methods for feature extraction: (i) EC curves and (ii) Functional Principal Component Analysis (FPCA). The reason for selecting these two methods is as follows. EC curves have been shown to provide an important low-dimensional characterization for complex datasets in a wide range of domains [31], such as (i) analysis and classification of brain signals from fMRI studies; (ii) detection of faults in chemical processes; (iii) characterization of the spatio-temporal behavior of fields for diffusion systems; and (iv) image analysis to characterize simulated micrographs for liquid crystal systems.
In all these instances, it has been noted that the characteristics of the EC curves vary across the different classes due to the different interactions of variables within each class. This inspired us to investigate whether the behavior of EC curves differs between individuals with and without IBD and if it could serve as an effective method to extract key features from complex high-dimensional multiomics datasets. FPCA, on the other hand, is a widely recognized technique for reducing dimensionality and extracting features from functional data. FPCA aims to identify the eigenfunctions that capture the most variability in functional data. Given that both metabolomics and metagenomics datasets are longitudinal, our objective was to employ FPCA and EC curves to extract features from these datasets. Both EC curves and FPCA offer efficient representations of longitudinal data in a lower-dimensional space. Another important rationale for choosing this combination of feature extraction techniques is their focus on distinct and nearly complementary aspects of the data (as will be illustrated in the Synthetic Analysis of EC and FPCA section). In particular, EC curves capture the relationships among various variables (such as genes, metabolites, or pathways), making them advantageous when the interactions among these variables vary between the two classes (IBD vs non-IBD). In contrast, FPCA characterizes the longitudinal patterns of individual variables independently of other variables, and is therefore expected to perform well when the longitudinal patterns differ between the two classes. In the following, we go into more detail on these feature extraction methods. Note that this step is optional because DeepIDA-GRU can accept longitudinal data directly. Euler curves The Euler characteristic (EC) was first proposed by Euler in 1758 in the context of polyhedra. Recently, Zavala et al. explored the potential of EC as a topological descriptor for complex objects such as graphs and images. EC curves, which are low-dimensional descriptors, were created to capture the essential geometrical features of these objects. The construction of EC curves is as follows. An edge-weighted undirected graph $(V, E, W)$ with $|V|$ vertices, $|E|$ edges, and set of weights $W = \{w(e) \mid e \in E\}$ can be represented using a symmetric $|V|$ by $|V|$ matrix $M$, where $M_{i,j} = w(e_{i,j})$ is the weight associated with the edge $e_{i,j}$ between the vertices $v_{i}$ and $v_{j}$. For example, the leftmost graph in can be represented by the $5$ by $5$ matrix $M$, given by $$M = \begin{pmatrix} 1 & 0.6 & 0.8 & 0.7 & 0.1\\ 0.6 & 1 & 0.5 & 0.65 & 0.2 \\ 0.8 & 0.5 & 1 & 0.55 & 0.23 \\ 0.7 & 0.65 & 0.55 & 1 & 0.3\\ 0.1 & 0.2 & 0.23 & 0.3 & 1 \end{pmatrix}.$$ The EC, denoted by $\chi$, of a graph is defined by the difference in the number of vertices and the number of edges: $$\chi = |V| - |E|.$$ For instance, the EC of the leftmost graph in is $\chi = 5 - 10 = -5$. In order to obtain a low-dimensional descriptor for complex objects (such as graphs, images, matrices, fields, etc.), the EC is often combined with a process known as filtration to generate an EC curve, which can be used to quantify the topology of the complex object.
Given an edge-weighted graph [12pt]{minimal} $(,, )$ and a threshold [12pt]{minimal} $ $ , the filtered graph for this threshold [12pt]{minimal} $ $ , which we denote by [12pt]{minimal} $(,, )_{ }$ , is obtained by removing all the edges [12pt]{minimal} $e $ so that [12pt]{minimal} $w(e)> $ . This filtration step is illustrated in for [12pt]{minimal} $ = 0.4$ . For a threshold [12pt]{minimal} $ $ , we denote the EC of the corresponding filtered graph [12pt]{minimal} $(,, )_{ }$ by [12pt]{minimal} $ _{ }$ . Note that for the filtered graph of , the EC is given by [12pt]{minimal} $ _{0.4} = 5-4 = 1$ . The EC curve is a plot between [12pt]{minimal} $ _{ }$ and [12pt]{minimal} $ $ for a series of increasing thresholds [12pt]{minimal} $ $ . The filtration process can be stopped once the threshold is equal to the largest weight of the original graph, at which point the filtered graph is the same as the original graph. The EC curve of the leftmost graph in is the rightmost graph in that figure. It has been demonstrated in that EC curves retain important characteristics of the graph and are therefore useful representations of 2D graphs using 1D vectors. To represent a multivariate time series [12pt]{minimal} $}_{d}^{(n,:,:)} ^{_{d} t_{d}}$ of subject [12pt]{minimal} $n [1:N]$ using an EC curve, we first find the [12pt]{minimal} $_{d}$ by [12pt]{minimal} $_{d}$ precision matrix, or the [12pt]{minimal} $_{d}$ by [12pt]{minimal} $_{d}$ correlation matrix or the [12pt]{minimal} $_{d}$ by [12pt]{minimal} $_{d}$ covariance matrix from [12pt]{minimal} $}_{d}^{(n,:,:)}$ (by treating the multiple time points in the time series as different samples of a given variable), and denote this matrix by [12pt]{minimal} $M ^{_{d} _{d}}$ . Since [12pt]{minimal} $M$ is a symmetric matrix, it represents an edge-weighted graph. The matrix [12pt]{minimal} $M$ is then subjected to a sequence of increasing thresholds to obtain an EC curve using the filtration process described above. The resulting EC curve is a [12pt]{minimal} $1$ D representation of the time series [12pt]{minimal} $}_{d}^{(n,:,:)}$ that can then be used as input to the integration and classification step. If the number of thresholds used during the filtration process is [12pt]{minimal} $x$ , then the EC method converts [12pt]{minimal} $}_{d} ^{N _{d} t_{d}}$ to [12pt]{minimal} $}_{d} ^{N x 1}$ , which is low-dimensional. Functional principal component analysis FPCA is a dimension reduction method similar to PCA, which can be used for functional or time series data. Here, we use FPCA to convert longitudinal data [12pt]{minimal} $}_{d} ^{N _{d} t_{d}}$ into a one-dimensional form [12pt]{minimal} $}_{d} ^{N (xp_{d}) 1}$ , by calculating [12pt]{minimal} $x$ -dimensional scores for each of the [12pt]{minimal} $p_{d}$ variables, where [12pt]{minimal} $x$ is the number of functional principal components considered for each variable. Specifically, for any given variable [12pt]{minimal} $j$ of view [12pt]{minimal} $d$ , [12pt]{minimal} $}_{d}^{:,j,:} ^{N t_{d}}$ is the collection of univariate time series of all [12pt]{minimal} $N$ subjects for that variable [12pt]{minimal} $j$ (where [12pt]{minimal} $d [1:D]$ and [12pt]{minimal} $j [1:_{d}]$ ). FPCA first finds the top [12pt]{minimal} $x$ functional principle components (FPCs): [12pt]{minimal} $f_{1}(t), f_{2}(t), , f_{x}(t) ^{1 t_{d}}$ of the [12pt]{minimal} $N$ time series in [12pt]{minimal} $}_{d}^{:,j,:}$ . 
These [12pt]{minimal} $x$ FPCs represent the top [12pt]{minimal} $x$ principal modes in the [12pt]{minimal} $N$ univariate time series and are obtained using basis functions such as B-splines and wavelets. Each of the [12pt]{minimal} $N$ univariate time series in [12pt]{minimal} $}_{d}^{:,j,:}$ is then projected on each of the [12pt]{minimal} $x$ FPCs to get an [12pt]{minimal} $x$ -dimensional score for each subject [12pt]{minimal} $n [1:N]$ , corresponding to this variable. The scores of all [12pt]{minimal} $_{d}$ variables are stacked together to obtain a [12pt]{minimal} $_{d} x$ -dimensional vector for that subject. Thus, with the FPCA method, we convert longitudinal data [12pt]{minimal} $}_{d} ^{N _{d} t_{d}}$ to cross-sectional data [12pt]{minimal} $}_{d} ^{N (x p_{d}) 1}$ . In the Synthetic Analysis of EC and FPCA Section, we compare EC and FPCA using simulations. We demonstrate that when the covariance structure between the classes differs, the EC curves are particularly better at feature extraction than the functional principal components. However, EC curves are not as effective as FPCA in distinguishing between longitudinal data of different classes when the classes have a similar covariance structure and only differ in their temporal trends. Step 3: integration and classification In this step, we describe our approach to integrate the output data from any of the first two steps or the original input data, if the first two steps are skipped. Denote by [12pt]{minimal} $\{}_{d} ^{N _{d} _{d}}, d [1:D]\}$ the data obtained after the first two steps: selection of variables and extraction of features (where both these steps are optional). Data from [12pt]{minimal} $D$ views are integrated using DeepIDA combined with GRUs (that is, DeepIDA-GRU) as described in the variable selection section. As noted, in the DeepIDA-GRU network, each cross-sectional view is fed into a dense neural network, and each longitudinal view is fed into a GRU. The role of neural networks and GRUs is to nonlinearly transform each view. The output of these networks is then entered into the IDA optimization problem . By minimizing the IDA objective, we learn discriminant vectors such that the projection of the non-linearly transformed data onto these vectors results in maximum association between the views and maximum separation between classes. If the feature extraction step is skipped, then each longitudinal view, [12pt]{minimal} $}_{d} ^{N _{d} _{d}}$ with [12pt]{minimal} $t_{d}>1$ , is fed into its respective GRU in the DeepIDA-GRU network. If the feature extraction step is not skipped, then each cross-sectional ( [12pt]{minimal} $}_{d} ^{N _{d}}$ ) and longitudinal ( [12pt]{minimal} $}_{d} ^{N _{d} _{d}}$ with [12pt]{minimal} $_{d}=1$ ) view after the first step is fed into its respective dense neural network in DeepIDA-GRU. DeepIDA-GRU performs integration and classification so that the between-class separation and between-view associations are simultaneously maximized. Similarly to the DeepIDA network , DeepIDA-GRU also uses the nearest centroid classifier for classification. Classification performance is compared using average accuracy, precision, recall and F1 scores.
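To make the EC-based feature extraction concrete, the following is a minimal sketch of how an EC curve could be computed for one subject's multivariate time series from its correlation matrix, following the filtration procedure described above. It is illustrative only (NumPy, hypothetical data); the choice of weight matrix (correlation, covariance or precision), the use of absolute weights, and the threshold grid are analysis decisions rather than fixed parts of the method.

import numpy as np

def ec_curve(ts, thresholds):
    # ts: (variables x time points) series for one subject.
    corr = np.corrcoef(ts)                                   # p x p symmetric weight matrix
    weights = np.abs(corr[np.triu_indices_from(corr, k=1)])  # off-diagonal edge weights (absolute value assumed)
    n_vertices = corr.shape[0]
    # For each threshold, keep edges with weight <= threshold and compute EC = |V| - |E|.
    return np.array([n_vertices - np.sum(weights <= thr) for thr in thresholds])

# Hypothetical subject: 20 variables measured at 15 time points.
rng = np.random.default_rng(0)
ts = rng.normal(size=(20, 15))
curve = ec_curve(ts, np.linspace(0, 1, 50))
print(curve.shape, curve[:5])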
We applied the pipeline to cross-sectional (host transcriptomics) and longitudinal (metagenomics and metabolomics) IBD data from 90 subjects who had all three measurements. Before preprocessing, the metagenomics data contained path abundances of $22\,113$ gene pathways, the metabolomics data consisted of $103$ HILIC-negative metabolite features, and the host transcriptomics data consisted of $55\,765$ probes. We note that, for most of the participants, multiple samples of their host transcriptomics data were collected in a single week. Therefore, in this work, we consider the host transcriptomics as a cross-sectional view, and the data for each individual were taken as the mean of all samples collected from them. Preprocessing followed established techniques in the literature and consisted of (i) keeping variables that have less than $90\%$ zeros (for metagenomics) or $5\%$ zeros (for metabolomics) in all collected samples; (ii) adding a pseudo-count of 1 to each data value (this ensures that all entries are nonzero and allows for taking logarithms in the next steps); (iii) normalizing using the 'Trimmed Mean of M-values' method (for metagenomics); (iv) logarithmic transformation of the data; and (v) plotting the histogram of variances and filtering out variables (pathways) with low variance across all collected samples. After the preprocessing steps, the number of variables remaining for the metagenomics, metabolomics and transcriptomics data was $2261$, $93$ and $9726$, respectively. More details about data preprocessing are provided in the .

Let $\mathcal{X}_{d} \in \mathbb{R}^{N \times p_{d} \times t_{d}}$ be a tensor representing the longitudinal (if $t_{d} > 1$) or cross-sectional (if $t_{d} = 1$) data corresponding to the $d$th view (for $d \in [1:D]$), for the $N$ subjects. The subjects, variables and time points of view $d \in [1:D]$ are indexed by $[1:N]$, $[1:p_{d}]$ and $[1:t_{d}]$, respectively. Here, for each subject $n \in [1:N]$, the data corresponding to the $d$th view has $p_{d}$ variables, and each of these $p_{d}$ variables was measured at $t_{d}$ time points. Also, let $\mathcal{X} = \{\mathcal{X}_{d} : d \in [1:D]\}$ denote the collection of data from all views. $\mathcal{X}_{d}^{(n,i,t)}$ denotes the value of variable $i \in [1:p_{d}]$ at time point $t \in [1:t_{d}]$ for the $n$th subject (for $n \in [1:N]$) in the $d$th view (for $d \in [1:D]$). Moreover, we use ':' to include all the data of a particular dimension; for example, $\mathcal{X}_{d}^{(n,:,:)} \in \mathbb{R}^{p_{d} \times t_{d}}$ denotes the multivariate time series data of the $n$th subject corresponding to the $d$th view. Note that there are a total of $K$ classes $\{1, 2, \ldots, K\}$, each subject $n \in [1:N]$ belongs to one of the $K$ classes, and the class of the $n$th subject is denoted by $y(n)$.
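As a rough illustration of the filtering, pseudo-count, log-transform and variance-filtering steps listed above, the following Python sketch processes one view; the TMM normalization step is only indicated in a comment (in practice it can be performed with edgeR or an equivalent tool), and the function and argument names are illustrative assumptions rather than the code used for the analysis.

```python
import numpy as np
import pandas as pd

def preprocess_counts(df: pd.DataFrame, max_zero_frac: float, var_quantile: float = 0.1) -> pd.DataFrame:
    """Sketch of the preprocessing steps: zero filtering, pseudo-count,
    log transform and low-variance filtering. TMM normalization is omitted
    here and would be applied between steps (ii) and (iv)."""
    # (i) keep variables (columns) with fewer than `max_zero_frac` zeros
    keep = (df == 0).mean(axis=0) < max_zero_frac
    df = df.loc[:, keep]
    # (ii) pseudo-count of 1 so that logarithms are defined
    df = df + 1
    # (iv) log transform
    df = np.log(df)
    # (v) drop variables whose variance falls in the lowest quantile
    variances = df.var(axis=0)
    return df.loc[:, variances > variances.quantile(var_quantile)]

# Hypothetical usage: rows are samples, columns are pathways/metabolites.
# metagenomics = preprocess_counts(metagenomics_raw, max_zero_frac=0.90)
# metabolomics = preprocess_counts(metabolomics_raw, max_zero_frac=0.05)
```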
The proposed pipeline for integrating both cross-sectional and longitudinal views is pictorially illustrated in and consists of the following steps: (i) Variable selection or ranking is used to find the top variables in each view and eliminate irrelevant variables. In other words, the tensor $\mathcal{X}_{d} \in \mathbb{R}^{N \times p_{d} \times t_{d}}$ is converted to a smaller tensor $\widetilde{\mathcal{X}}_{d} \in \mathbb{R}^{N \times \widetilde{p}_{d} \times t_{d}}$ with fewer variables $\widetilde{p}_{d} < p_{d}$ for all $d \in [1:D]$. In this work, we use LMM, DGB and JPTA for variable selection. We describe these briefly in subsequent sections and in more detail in the . The variable selection step is optional and one could go directly to the next step (in this case $\widetilde{\mathcal{X}}_{d} = \mathcal{X}_{d}$); (ii) Feature extraction is used to extract an important one-dimensional feature embedding from longitudinal data. This step converts the tensor $\widetilde{\mathcal{X}}_{d} \in \mathbb{R}^{N \times \widetilde{p}_{d} \times t_{d}}$ to $\widehat{\mathcal{X}}_{d} \in \mathbb{R}^{N \times \widehat{p}_{d} \times 1}$, where $\widehat{p}_{d}$ is the dimension of the extracted embedding. The two methods explored in this work for feature extraction are based on Euler characteristic (EC) curves and FPCA, described briefly in subsequent sections and in more detail in the . This step is also optional and one could go directly to the next step (in this case $\widehat{\mathcal{X}}_{d} = \widetilde{\mathcal{X}}_{d}$); (iii) Integration and classification uses DeepIDA-GRU to simultaneously integrate the multiview data $\{\widehat{\mathcal{X}}_{d}, d \in [1:D]\}$ obtained after the first two steps and perform classification. We will describe each part of the pipeline in the following subsections.
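To make the three-step structure concrete, here is a schematic Python driver for the pipeline; the callables and the dictionary layout are hypothetical placeholders for the methods described in the following subsections, not the original implementation.

```python
import numpy as np

def run_pipeline(views, labels, select=None, extract=None, integrate=None):
    """Schematic three-step driver. `views` maps a view name to an array of
    shape (N, p_d, t_d); `select` and `extract` are optional callables for
    variable selection and feature extraction; `integrate` trains the final
    integration/classification model (e.g. DeepIDA-GRU)."""
    # Step 1 (optional): variable selection/ranking per view
    if select is not None:
        views = {name: select(x, labels) for name, x in views.items()}
    # Step 2 (optional): feature extraction, applied to longitudinal views only
    if extract is not None:
        views = {name: (extract(x) if x.shape[2] > 1 else x)
                 for name, x in views.items()}
    # Step 3: joint integration and classification
    return integrate(views, labels)
```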
Given the high dimensionality of our data, it is reasonable to assume that some of the variables are simply noise and do not contribute to the distinction between the classes in the views or the correlation between the views. Consequently, it is essential to identify relevant or meaningful variables. We investigated three techniques for selecting variables from cross-sectional and longitudinal data: (i) LMMs, (ii) DGB and (iii) JPTA. LMM is a univariate method applied to each longitudinal or cross-sectional variable and to each view separately; it chooses variables that are essential in discriminating between classes in each view. JPTA is a multivariate linear dimension reduction method for integrating two longitudinal views. DGB is a multivariate nonlinear dimension reduction technique that can be used to combine two or more longitudinal and cross-sectional datasets and differentiate between classes; it is useful for choosing variables that are relevant both in discriminating between classes and in associating views. LMM and DGB are each applicable to any number of longitudinal and cross-sectional datasets, whereas JPTA can be applied to only two longitudinal views. It is possible to omit the variable selection step and instead use the entire set of variables in the second step of the pipeline (that is, $\widetilde{\mathcal{X}}_{d} = \mathcal{X}_{d}$). We briefly describe the three variable selection methods below. Please refer to the for more details.

Linear mixed models

LMMs are generalizations of linear models that allow the use of both fixed and random effects to model dependencies in samples arising from repeated measurements. LMMs were used in for differential abundance analysis of longitudinal data from the IBD study to identify important longitudinal variables discriminating between IBD status. To determine whether a given variable is important for discriminating between disease groups, we construct two models: (i) a null model and (ii) a full model. The outcome for each model is the longitudinal variable. The null model associates the outcome with a fixed variable (i.e. time) plus a random intercept, adjusting for covariates of interest (e.g. sites). The full model includes the null model plus the disease status of the sample, treated as a fixed variable. The full and null models are then compared using ANOVA to determine statistically significant (p-value $< 0.05$) variables that discriminate between the classes considered. While LMM uses the class status in variable selection, it handles each variable separately and does not consider between-view and within-view dependencies. This could lead to suboptimal variable selection because some variables may only be significant in the presence of other variables.
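A minimal sketch of this null-versus-full comparison for a single longitudinal variable is shown below using statsmodels in Python (the analysis itself may well have been done with R's lme4); the data-frame column names are hypothetical, both models are fit by maximum likelihood so that a likelihood-ratio test is valid, and disease status is assumed here to enter as a single binary fixed effect.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

def lmm_pvalue(df: pd.DataFrame) -> float:
    """Likelihood-ratio test comparing a null LMM (time + site, random
    intercept per subject) with a full LMM that also includes diagnosis."""
    null = smf.mixedlm("abundance ~ time + site", df, groups=df["subject"]).fit(reml=False)
    full = smf.mixedlm("abundance ~ time + site + diagnosis", df, groups=df["subject"]).fit(reml=False)
    lr_stat = 2.0 * (full.llf - null.llf)
    # one extra fixed-effect parameter when diagnosis is a binary indicator
    return chi2.sf(lr_stat, df=1)

# A variable is retained if lmm_pvalue(one_variable_in_long_format) < 0.05.
```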
Joint Principal Trend Analysis was introduced in 2018 by as a method to extract shared latent trends and identify important variables from a pair of longitudinal high-dimensional datasets. Following our notation, we let $\{\mathcal{X}_{1}^{(n,:,:)} : n \in [1:N]\}$ and $\{\mathcal{X}_{2}^{(n,:,:)} : n \in [1:N]\}$ be the longitudinal datasets for view $1$ and view $2$, respectively, for the $N$ subjects. The numbers of variables in view $1$ and view $2$ are $p_{1}$ and $p_{2}$, respectively, and the number of time points for the two views is $t_{1} = t_{2} = T$. Therefore, each subject's data $\mathcal{X}_{i}^{(n,:,:)}$ (for $i \in \{1,2\}$ and $n \in [1:N]$) is a $p_{i} \times T$ matrix. In JPTA, the key idea is to represent the data of the two views with the following common principal trends:
$$\mathcal{X}_{1}^{(n,:,:)} = \alpha \theta \Theta^{\top} + \epsilon_{1}^{(n)}, \qquad \mathcal{X}_{2}^{(n,:,:)} = \beta \theta \Theta^{\top} + \epsilon_{2}^{(n)},$$
for $n \in [1:N]$, where (i) $\alpha$ and $\beta$ are $p_{1} \times 1$ and $p_{2} \times 1$ vectors of variable loadings, respectively; (ii) $\theta$ is a $1 \times (T+2)$ vector of cubic spline coefficients; (iii) $\Theta$ is a cubic spline basis matrix of size $T \times (T+2)$; and (iv) $\epsilon_{i}^{(n)}$ for $i \in \{1,2\}$ are the respective noise terms. To obtain $(\alpha, \beta, \theta)$, the following loss is minimized:
$$\min_{\alpha, \beta, \theta} \sum_{n=1}^{N} \left( \left\| \mathcal{X}_{1}^{(n,:,:)} - \alpha \theta \Theta^{\top} \right\|_{F}^{2} + \left\| \mathcal{X}_{2}^{(n,:,:)} - \beta \theta \Theta^{\top} \right\|_{F}^{2} \right) \; \text{subject to} \; \theta \Omega \theta^{\top} \le c,\; \|\alpha\|_{1} \le c_{1},\; \|\beta\|_{1} \le c_{2},\; \|\alpha\|_{2}^{2} \le 1,\; \|\beta\|_{2}^{2} \le 1,$$
where $\|\cdot\|_{F}$ denotes the Frobenius norm, $\Omega$ is a $(T+2) \times (T+2)$ matrix given by $\Omega_{i,j} = \int B_{i}''(t)\, B_{j}''(t)\, dt$ (where $[\Theta]_{t,m} = B_{m}(t)$), and the sparsity parameters $c_{1}$ and $c_{2}$ control the number of nonzero entries in the vectors $\alpha$ and $\beta$, respectively. In particular, after solving the optimization problem, the variables corresponding to the entries of $\alpha$ and $\beta$ with high absolute values (the top $c_{1}$ entries from $\alpha$ and the top $c_{2}$ entries from $\beta$) are the variables that we select as important using the JPTA method. Thus, using JPTA, we select the top $c_{1}$ and $c_{2}$ variables for the two views, respectively, that maximize the association between the views. It is important to note that JPTA has several shortcomings relative to LMM and DGB: (i) it does not take into account information about the class labels while selecting the top variables (which makes it more suitable for data exploration than for regression and classification problems); (ii) it can only be used with two longitudinal datasets; and (iii) it assumes an equal number of time points for both views.

DeepIDA-GRU-Bootstrapping

DGB is a novel method we propose in this manuscript as an extension of DeepIDA to the scenario where there are longitudinal data in addition to cross-sectional data. DeepIDA is a multivariate dimension reduction method for learning nonlinear projections of different views that simultaneously maximize separation between classes and association between views. To aid interpretability, the authors proposed a homogeneous ensemble approach via bootstrap to rank variables according to how much they contribute to the association of views and separation of classes. In its original form, DeepIDA is applicable only to cross-sectional data, which is limiting. Thus, for longitudinal data, we integrate gated recurrent units (GRUs) into the DeepIDA framework. GRUs are a class of recurrent neural networks (RNNs) that allow long-term learning of dependencies in sequential data and help mitigate the problem of vanishing/exploding gradients in vanilla RNNs . We refer to this modified network as DeepIDA-GRU (shown pictorially in ). Specifically, each cross-sectional view is fed into a dense neural network and each longitudinal view is fed into a GRU. The inclusion of GRUs in the DeepIDA framework enables us to extend the bootstrapping idea of to multiview data consisting of longitudinal and cross-sectional views. We call this approach for variable selection DGB. A detailed description of the DeepIDA bootstrap procedure can be found in , but for completeness' sake, we enumerate the main steps applied to DGB here. (1) From the set of $N$ subjects $[1:N]$, randomly sample with replacement $N$ times to generate each of the $M$ bootstrap sets $\{B_{1}, B_{2}, \ldots, B_{M}\}$; the sets $\{B_{1}^{c}, B_{2}^{c}, \ldots, B_{M}^{c}\}$ are called out-of-bag sets. (2) From each view, construct $M$ $D$-tuples $\{\mathcal{V}_{m} \mid m \in [1:M]\}$ of bootstrapped variables, where each $D$-tuple $\mathcal{V}_{m}$ consists of $D$ sets, denoted by $\mathcal{V}_{m} = (V_{1,m}, V_{2,m}, \ldots, V_{D,m})$; here, the $d$th set $V_{d,m}$ consists of a randomly selected $80$ percent of the variables from the $d$th view (where $d \in [1:D]$).
This gives us the set of $M$ bootstrapped variable subsets $\{\mathcal{V}_{1} = (V_{1,1}, V_{2,1}, \ldots, V_{D,1}),\; \mathcal{V}_{2} = (V_{1,2}, V_{2,2}, \ldots, V_{D,2}),\; \ldots,\; \mathcal{V}_{M} = (V_{1,M}, V_{2,M}, \ldots, V_{D,M})\}$. (3) Pair the bootstrapped subject sets with the bootstrapped variable sets; let the bootstrapped pairs be $(B_{1}, \mathcal{V}_{1}), (B_{2}, \mathcal{V}_{2}), \ldots, (B_{M}, \mathcal{V}_{M})$ and the out-of-bag pairs be $(B_{1}^{c}, \mathcal{V}_{1}), (B_{2}^{c}, \mathcal{V}_{2}), \ldots, (B_{M}^{c}, \mathcal{V}_{M})$. (4) For every variable $v$ in every view, initialize its score as $S_{v} = 0$. (5) For each bootstrapped pair $(B_{i}, \mathcal{V}_{i})$ and out-of-bag pair $(B_{i}^{c}, \mathcal{V}_{i})$ (where $i \in [1:M]$): first train the DeepIDA-GRU network using the bootstrapped pair $(B_{i}, \mathcal{V}_{i})$ and then test the network on the out-of-bag pair $(B_{i}^{c}, \mathcal{V}_{i})$; this gives a baseline accuracy for the $i$th pair, and the corresponding model is the baseline model for the $i$th bootstrapped pair. Then, for each variable $u \in \mathcal{V}_{i}$, randomly permute the values of this variable among the different subjects (while keeping the other variables intact) and test the learned baseline model on the permuted data. If there is a decrease in accuracy (compared with the baseline accuracy), the variable $u$ was likely important in achieving the baseline accuracy; in such a scenario, increase the score of variable $u$ by $1$, that is, $S_{u} = S_{u} + 1$. The overall importance of any variable $u$ is then calculated by
$$\mathrm{eff\_prop}(u) = \frac{S_{u}}{\text{number of bootstrapped variable sets } \mathcal{V}_{i} \text{ containing } u}. \qquad (1)$$
Notably, the Integrative Discriminant Analysis (IDA) objective enables DGB to select variables that are important in simultaneously separating the classes and associating the views. Compared with LMM and JPTA, DGB can be computationally expensive; however, the bootstrapping process is parallelizable, which can significantly improve run time. There exist variants of GRU that can handle missing data, and replacing GRU with such variants would allow DGB to handle missing data.
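The scoring loop can be sketched in Python as follows. This is a schematic illustration rather than the authors' implementation: a nearest-centroid classifier on a single flattened view stands in for the full multiview DeepIDA-GRU network, and the number of bootstrap sets, the 80-percent variable fraction and the variable names are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestCentroid

def dgb_scores(X, y, n_boot=20, subset_frac=0.8, seed=0):
    """Permutation-based variable scoring in the spirit of DGB, shown for a
    single (flattened) view with a nearest-centroid stand-in classifier."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    hits = np.zeros(p)         # times that permuting a variable hurt accuracy
    appearances = np.zeros(p)  # bootstrap variable sets containing the variable
    for _ in range(n_boot):
        boot = rng.integers(0, n, size=n)                # bootstrap subjects
        oob = np.setdiff1d(np.arange(n), boot)           # out-of-bag subjects
        vars_ = rng.choice(p, size=int(subset_frac * p), replace=False)
        if oob.size == 0 or np.unique(y[boot]).size < 2:
            continue
        model = NearestCentroid().fit(X[np.ix_(boot, vars_)], y[boot])
        base = model.score(X[np.ix_(oob, vars_)], y[oob])
        appearances[vars_] += 1
        for k, v in enumerate(vars_):
            Xp = X[np.ix_(oob, vars_)].copy()
            Xp[:, k] = rng.permutation(Xp[:, k])         # permute one variable
            if model.score(Xp, y[oob]) < base:
                hits[v] += 1
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(appearances > 0, hits / appearances, 0.0)  # eff_prop-style
```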
Feature extraction methods extract important one-dimensional features from longitudinal data. We investigated two methods for feature extraction: (i) EC curves and (ii) functional principal component analysis (FPCA). The reason for selecting these two methods is as follows. EC curves have been shown to provide an important low-dimensional characterization for complex datasets in a wide range of domains [31], such as (i) analysis and classification of brain signals from fMRI studies; (ii) detection of faults in chemical processes; (iii) characterization of the spatio-temporal behavior of fields in diffusion systems; and (iv) image analysis to characterize simulated micrographs for liquid crystal systems. In all these instances, it has been noted that the characteristics of the EC curves vary across the different classes due to the different interactions of variables within each class. This inspired us to investigate whether the behavior of EC curves differs between individuals with and without IBD and whether it could serve as an effective method to extract key features from complex high-dimensional multiomics datasets. FPCA, on the other hand, is a widely recognized technique for reducing dimensionality and extracting features from functional data. FPCA aims to identify the eigenfunctions that capture the most variability in functional data. Given that both the metabolomics and metagenomics datasets are longitudinal, our objective was to employ FPCA and EC curves to extract features from these datasets. Both EC curves and FPCA offer efficient representations of longitudinal data in a lower-dimensional space. Another important rationale for choosing this combination of feature extraction techniques is their focus on distinct and nearly complementary aspects of the data (as will be illustrated in the Synthetic Analysis of EC and FPCA section). In particular, EC curves capture the relationships among various variables (such as genes, metabolites or pathways), making them advantageous when the interactions among these variables vary between the two classes (IBD vs non-IBD). In contrast, FPCA characterizes the longitudinal pattern of each individual variable independently of the other variables, and is therefore expected to perform well when the longitudinal patterns differ between the two classes. In the following, we go into more detail on these feature extraction methods. Note that this step is optional because DeepIDA-GRU can accept longitudinal data directly.

Euler curves

The Euler characteristic (EC) was first proposed by Euler in 1758 in the context of polyhedra.
Recently, Zavala et al. explored the potential of the EC as a topological descriptor for complex objects such as graphs and images. EC curves, which are low-dimensional descriptors, were created to capture the essential geometrical features of these objects. The construction of EC curves is as follows. An edge-weighted undirected graph $(\mathcal{V}, \mathcal{E}, \mathcal{W})$ with $|\mathcal{V}|$ vertices, $|\mathcal{E}|$ edges and set of weights $\mathcal{W} = \{w(e) \mid e \in \mathcal{E}\}$ can be represented using a symmetric $|\mathcal{V}| \times |\mathcal{V}|$ matrix $M$, where $M_{i,j} = w(e_{i,j})$ is the weight associated with the edge $e_{i,j}$ between the vertices $v_{i}$ and $v_{j}$. For example, the leftmost graph in can be represented by the $5 \times 5$ matrix $M$ given by
$$M = \begin{bmatrix} 1 & 0.6 & 0.8 & 0.7 & 0.1\\ 0.6 & 1 & 0.5 & 0.65 & 0.2 \\ 0.8 & 0.5 & 1 & 0.55 & 0.23 \\ 0.7 & 0.65 & 0.55 & 1 & 0.3\\ 0.1 & 0.2 & 0.23 & 0.3 & 1 \end{bmatrix}.$$
The EC of a graph, denoted by $\chi$, is defined as the difference between the number of vertices and the number of edges:
$$\chi = |\mathcal{V}| - |\mathcal{E}|.$$
For instance, the EC of the leftmost graph in is $\chi = 5 - 10 = -5$. In order to obtain a low-dimensional descriptor for complex objects (such as graphs, images, matrices, fields, etc.), the EC is often combined with a process known as filtration to generate an EC curve, which can be used to quantify the topology of the complex object. Given an edge-weighted graph $(\mathcal{V}, \mathcal{E}, \mathcal{W})$ and a threshold $\tau$, the filtered graph for this threshold, which we denote by $(\mathcal{V}, \mathcal{E}, \mathcal{W})_{\tau}$, is obtained by removing all edges $e \in \mathcal{E}$ such that $w(e) > \tau$. This filtration step is illustrated in for $\tau = 0.4$. For a threshold $\tau$, we denote the EC of the corresponding filtered graph $(\mathcal{V}, \mathcal{E}, \mathcal{W})_{\tau}$ by $\chi_{\tau}$. Note that for the filtered graph of , the EC is given by $\chi_{0.4} = 5 - 4 = 1$. The EC curve is a plot of $\chi_{\tau}$ against $\tau$ for a series of increasing thresholds $\tau$. The filtration process can be stopped once the threshold is equal to the largest weight of the original graph, at which point the filtered graph is the same as the original graph. The EC curve of the leftmost graph in is the rightmost graph in that figure. It has been demonstrated in that EC curves retain important characteristics of the graph and are therefore useful representations of 2D graphs using 1D vectors.
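A small, self-contained sketch of this filtration in Python is given below; it computes the EC curve of the example weight matrix above, with the threshold grid chosen purely for illustration.

```python
import numpy as np

def ec_curve(M: np.ndarray, thresholds: np.ndarray) -> np.ndarray:
    """EC curve of the edge-weighted graph encoded by the symmetric matrix M:
    for each threshold, keep the edges with weight <= threshold and return
    chi = |V| - |E| of the filtered graph."""
    n = M.shape[0]
    iu = np.triu_indices(n, k=1)          # each undirected edge counted once
    weights = M[iu]
    return np.array([n - np.sum(weights <= t) for t in thresholds])

# Example weight matrix from the text (the diagonal plays no role here).
M = np.array([[1.00, 0.60, 0.80, 0.70, 0.10],
              [0.60, 1.00, 0.50, 0.65, 0.20],
              [0.80, 0.50, 1.00, 0.55, 0.23],
              [0.70, 0.65, 0.55, 1.00, 0.30],
              [0.10, 0.20, 0.23, 0.30, 1.00]])
print(ec_curve(M, thresholds=np.linspace(0.0, 1.0, 11)))
# At threshold 0.4 the value is 5 - 4 = 1, matching the worked example above.
```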
To represent a multivariate time series $\widetilde{\mathcal{X}}_{d}^{(n,:,:)} \in \mathbb{R}^{\widetilde{p}_{d} \times t_{d}}$ of subject $n \in [1:N]$ using an EC curve, we first find the $\widetilde{p}_{d} \times \widetilde{p}_{d}$ precision matrix, correlation matrix or covariance matrix from $\widetilde{\mathcal{X}}_{d}^{(n,:,:)}$ (by treating the multiple time points in the time series as different samples of a given variable), and denote this matrix by $M \in \mathbb{R}^{\widetilde{p}_{d} \times \widetilde{p}_{d}}$. Since $M$ is a symmetric matrix, it represents an edge-weighted graph. The matrix $M$ is then subjected to a sequence of increasing thresholds to obtain an EC curve using the filtration process described above. The resulting EC curve is a 1D representation of the time series $\widetilde{\mathcal{X}}_{d}^{(n,:,:)}$ that can then be used as input to the integration and classification step. If the number of thresholds used during the filtration process is $x$, then the EC method converts $\widetilde{\mathcal{X}}_{d} \in \mathbb{R}^{N \times \widetilde{p}_{d} \times t_{d}}$ to $\widehat{\mathcal{X}}_{d} \in \mathbb{R}^{N \times x \times 1}$, which is low-dimensional.

Functional principal component analysis

FPCA is a dimension reduction method similar to PCA that can be used for functional or time series data. Here, we use FPCA to convert longitudinal data $\widetilde{\mathcal{X}}_{d} \in \mathbb{R}^{N \times \widetilde{p}_{d} \times t_{d}}$ into the one-dimensional form $\widehat{\mathcal{X}}_{d} \in \mathbb{R}^{N \times (x \widetilde{p}_{d}) \times 1}$ by calculating $x$-dimensional scores for each of the $\widetilde{p}_{d}$ variables, where $x$ is the number of functional principal components considered for each variable. Specifically, for any given variable $j$ of view $d$, $\widetilde{\mathcal{X}}_{d}^{(:,j,:)} \in \mathbb{R}^{N \times t_{d}}$ is the collection of univariate time series of all $N$ subjects for that variable $j$ (where $d \in [1:D]$ and $j \in [1:\widetilde{p}_{d}]$). FPCA first finds the top $x$ functional principal components (FPCs) $f_{1}(t), f_{2}(t), \ldots, f_{x}(t) \in \mathbb{R}^{1 \times t_{d}}$ of the $N$ time series in $\widetilde{\mathcal{X}}_{d}^{(:,j,:)}$. These $x$ FPCs represent the top $x$ principal modes in the $N$ univariate time series and are obtained using basis functions such as B-splines and wavelets. Each of the $N$ univariate time series in $\widetilde{\mathcal{X}}_{d}^{(:,j,:)}$ is then projected onto each of the $x$ FPCs to obtain an $x$-dimensional score for each subject $n \in [1:N]$ corresponding to this variable. The scores of all $\widetilde{p}_{d}$ variables are stacked together to obtain a $\widetilde{p}_{d}\, x$-dimensional vector for that subject. Thus, with the FPCA method, we convert the longitudinal data $\widetilde{\mathcal{X}}_{d} \in \mathbb{R}^{N \times \widetilde{p}_{d} \times t_{d}}$ to cross-sectional data $\widehat{\mathcal{X}}_{d} \in \mathbb{R}^{N \times (x \widetilde{p}_{d}) \times 1}$. In the Synthetic Analysis of EC and FPCA section, we compare EC and FPCA using simulations. We demonstrate that when the covariance structure differs between the classes, EC curves are particularly better at feature extraction than the functional principal components. However, EC curves are not as effective as FPCA at distinguishing between longitudinal data of different classes when the classes have a similar covariance structure and only differ in their temporal trends.
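As a simplified, discretized stand-in for the FPCA scoring just described, the following Python sketch projects each variable's curves onto its top principal modes over time; genuine FPCA with basis smoothing (e.g. via the scikit-fda package or R's fdapace) would refine this, and the shapes follow the notation above.

```python
import numpy as np
from sklearn.decomposition import PCA

def fpca_like_scores(X: np.ndarray, n_components: int = 3) -> np.ndarray:
    """Convert longitudinal data X of shape (N, p, t) into per-subject score
    vectors of length p * n_components by projecting each variable's N curves
    onto its top principal modes over time."""
    N, p, t = X.shape
    scores = np.zeros((N, p * n_components))
    for j in range(p):
        curves = X[:, j, :]                    # N curves of length t for variable j
        pca = PCA(n_components=n_components).fit(curves)
        scores[:, j * n_components:(j + 1) * n_components] = pca.transform(curves)
    return scores

# Example: 90 subjects, 93 variables, 10 time points -> a (90, 279) score matrix.
# scores = fpca_like_scores(metabolomics, n_components=3)
```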
Step 3: integration and classification

In this step, we describe our approach to integrate the output data from the first two steps, or the original input data if those steps are skipped. Denote by $\{\widehat{\mathcal{X}}_{d} \in \mathbb{R}^{N \times \widehat{p}_{d} \times \widehat{t}_{d}},\, d \in [1:D]\}$ the data obtained after the first two steps, selection of variables and extraction of features (where both of these steps are optional). Data from the $D$ views are integrated using DeepIDA combined with GRUs (that is, DeepIDA-GRU), as described in the variable selection section. As noted, in the DeepIDA-GRU network, each cross-sectional view is fed into a dense neural network and each longitudinal view is fed into a GRU. The role of the neural networks and GRUs is to nonlinearly transform each view. The output of these networks is then entered into the IDA optimization problem . By minimizing the IDA objective, we learn discriminant vectors such that the projection of the nonlinearly transformed data onto these vectors results in maximum association between the views and maximum separation between the classes. If the feature extraction step is skipped, then each longitudinal view, $\widehat{\mathcal{X}}_{d} \in \mathbb{R}^{N \times \widehat{p}_{d} \times \widehat{t}_{d}}$ with $\widehat{t}_{d} > 1$, is fed into its respective GRU in the DeepIDA-GRU network. If the feature extraction step is not skipped, then each cross-sectional ($\widehat{\mathcal{X}}_{d} \in \mathbb{R}^{N \times \widehat{p}_{d}}$) and longitudinal ($\widehat{\mathcal{X}}_{d} \in \mathbb{R}^{N \times \widehat{p}_{d} \times \widehat{t}_{d}}$ with $\widehat{t}_{d} = 1$) view after the first step is fed into its respective dense neural network in DeepIDA-GRU. DeepIDA-GRU performs integration and classification so that the between-class separation and between-view associations are simultaneously maximized. Similarly to the DeepIDA network , DeepIDA-GRU also uses the nearest centroid classifier for classification. Classification performance is compared using average accuracy, precision, recall and F1 scores. To evaluate the effectiveness of the proposed pipeline, we use simulations to compare the two feature extraction methods and make recommendations on when each is suitable to use.
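A schematic PyTorch sketch of the network topology described here is shown below: dense branches for cross-sectional views and GRU branches for longitudinal views. It is not the authors' implementation; the IDA objective is not implemented (the branch outputs would be passed to it), and the layer sizes simply mirror those reported for Method 1 below.

```python
import torch
import torch.nn as nn

class DenseBranch(nn.Module):
    """Dense branch for a cross-sectional view of shape (batch, p)."""
    def __init__(self, p, hidden=(200, 100, 20)):
        super().__init__()
        layers, prev = [], p
        for h in hidden:
            layers += [nn.Linear(prev, h), nn.ReLU()]
            prev = h
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

class GRUBranch(nn.Module):
    """GRU branch for a longitudinal view of shape (batch, t, p)."""
    def __init__(self, p, hidden=50, layers=2):
        super().__init__()
        self.gru = nn.GRU(input_size=p, hidden_size=hidden,
                          num_layers=layers, batch_first=True)

    def forward(self, x):
        _, h = self.gru(x)      # h: (layers, batch, hidden)
        return h[-1]            # final hidden state of the last layer

# One branch per view; their outputs would feed the IDA optimization,
# which learns the discriminant directions described in the text.
branches = nn.ModuleDict({
    "transcriptomics": DenseBranch(p=9726),
    "metagenomics": GRUBranch(p=2261),
    "metabolomics": GRUBranch(p=93),
})
x_long = torch.randn(4, 10, 93)                    # (batch, time, variables)
print(branches["metabolomics"](x_long).shape)      # torch.Size([4, 50])
```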
Overview of the pipeline

We investigate the effectiveness of the proposed pipeline on the longitudinal (metagenomics and metabolomics) and cross-sectional (host transcriptomics) multiview data pertaining to IBD. The preprocessed host transcriptomics, metagenomics and metabolomics datasets are represented using 3-dimensional real-valued tensors of sizes $\mathbb{R}^{90 \times 9726 \times 1}$, $\mathbb{R}^{90 \times 2261 \times 10}$ and $\mathbb{R}^{90 \times 93 \times 10}$, respectively, and passed as inputs to the pipeline. In the first step, the variable selection/ranking methods LMM, DGB and JPTA are used to identify key genes, microbial pathways and metabolites that are relevant in discriminating IBD status and/or associating the views. The top $200$ and $50$ variables of the metagenomics and metabolomics data, respectively, are retained by each method. For the host transcriptomics data, LMM and DGB are used to select the top $1000$ statistically significant genes. Since JPTA is only applicable to longitudinal data, no variable selection is performed on the host transcriptomics data with JPTA. The resulting datasets are then passed through the feature extraction and integration/classification steps. We investigate the performance of our feature extraction and integration and classification steps by considering the following three options.

Method 1—DeepIDA-GRU with no feature extraction: In this case, there is no feature extraction. For integration and classification with DeepIDA-GRU, the cross-sectional host transcriptomics dataset is fed into a fully connected neural network (with $3$ layers of $200$, $100$ and $20$ neurons), while the metagenomics and metabolomics data are fed into their respective GRUs (both consisting of $2$ layers and a $50$-dimensional hidden unit).

Method 2—DeepIDA-GRU with EC for Metagenomics and Mean for Metabolomics: In this case, the two longitudinal views are each converted into cross-sectional form. In particular, EC (with $100$ threshold values) is used to reduce the metagenomics data to size $\mathbb{R}^{90 \times 100 \times 1}$. The metabolomics data are reduced to size $\mathbb{R}^{90 \times 50 \times 1}$ by computing the mean across the time dimension.
In particular, when we visualized the EC curves of the metabolomics data, we did not find any differences between the EC curves for those with and without IBD, so we simply used the mean over time. The host transcriptomics data remain unchanged. The host transcriptomics, metabolomics and metagenomics data were then each fed into 3-layered dense neural networks with structures $[200, 100, 20]$, $[20, 100, 20]$ and $[50, 100, 20]$, respectively, for integration and classification with DeepIDA-GRU (which in this case is equivalent to the traditional DeepIDA network).

Method 3—DeepIDA-GRU with FPCA for both Metabolomics and Metagenomics views: In this case, FPCA (with $x = 3$ FPCs for each variable) is used to reduce the longitudinal metabolomics and metagenomics data to cross-sectional data of sizes $\mathbb{R}^{90 \times 150 \times 1}$ and $\mathbb{R}^{90 \times 600 \times 1}$, respectively. The host transcriptomics data remain unchanged. The host transcriptomics, metabolomics and metagenomics data were each fed into 3-layered dense neural networks with structures $[200, 100, 20]$, $[20, 100, 20]$ and $[50, 100, 20]$, respectively, for integration and classification using DeepIDA.

We train and test the $9$ possible combinations of the $3$ variable selection methods (LMM, JPTA and DGB) and the $3$ feature extraction plus integration/classification approaches (Method 1, Method 2 and Method 3). Owing to the limited sample size, $N$-fold cross-validation is used to evaluate and compare the performance of these $9$ combinations. In particular, the model is trained on $N-1$ subjects (where $N = 90$) and tested on the remaining subject. This procedure is repeated $N$ times (hence $N$ folds). Average accuracy, macro precision, macro recall and macro F1 scores are the metrics used for comparison. These performance metrics are summarized in . The entire procedure of $N$-fold cross-validation is repeated for three arbitrarily selected seeds: $0$, $10\,000$ and $50\,000$. Each of the $9$ blocks in reports the performance of the best model among the three seeds. Note that since LMM and DGB leverage information about the output labels while selecting variables, both of these methods only use data from the $N-1$ subjects in the training split of each fold. With $N = 90$ folds, since there are $90$ different train-test splits, the LMM and DGB methods are repeated $90$ times (once for each fold). Unlike LMM and DGB, JPTA does not use the output labels during variable selection, and hence it is run once on the entire dataset.
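The leave-one-subject-out evaluation and the macro-averaged metrics can be sketched as follows with scikit-learn; the nearest-centroid stand-in classifier and the function name are illustrative assumptions, since in the actual pipeline each fold trains a full DeepIDA-GRU model.

```python
import numpy as np
from sklearn.base import clone
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from sklearn.neighbors import NearestCentroid

def loocv_report(X, y, model=None):
    """Leave-one-subject-out cross-validation returning accuracy and
    macro precision/recall/F1 for a generic fit/predict model."""
    y = np.asarray(y)
    model = model if model is not None else NearestCentroid()
    y_pred = np.empty_like(y)
    for i in range(len(y)):
        train = np.delete(np.arange(len(y)), i)
        fitted = clone(model).fit(X[train], y[train])
        y_pred[i] = fitted.predict(X[i:i + 1])[0]
    prec, rec, f1, _ = precision_recall_fscore_support(y, y_pred, average="macro")
    return {"accuracy": accuracy_score(y, y_pred),
            "precision": prec, "recall": rec, "f1": f1}
```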
Classification performance of the proposed pipeline on the IBD longitudinal and cross-sectional data

Examining the rows in , we observe that the classification results based on the variables selected by JPTA are slightly worse than those for LMM and DGB. The lower performance of JPTA could be because (i) JPTA is a purely unsupervised method that does not account for class membership in variable selection, and is therefore not as effective for classification tasks as LMM or DGB; and (ii) no variable selection was performed on the host transcriptomics data, since JPTA is applicable only to longitudinal data. The classification results based on the variables selected by LMM and DGB are comparable, with small variations that depend on the feature extraction method used before integration and classification. The classification results of the feature extraction method FPCA (Method 3) applied to the variables selected by LMM are slightly better than those of the feature extraction method EC (Method 2) and the direct DeepIDA-GRU application without feature extraction (Method 1). Meanwhile, the EC and FPCA feature extraction methods and DeepIDA-GRU without feature extraction yield comparable classification results when applied to the variables selected by DGB. Examining the columns in , it is evident that the three methods (Method 1, Method 2 and Method 3) have comparable results in the IBD application. The direct DeepIDA-GRU-based approach (Method 1) performs best with DGB; the EC (Method 2) and FPCA (Method 3) approaches work best with LMM and DGB. Moreover, to compare our proposed pipeline with existing methods, we have also included the performance of the deep learning method in . propose a network consisting of one-layer GRUs (one for each view), followed by a logistic regression classifier. In particular, data from each view are fed into a GRU module, and the outputs of all the GRUs are then concatenated. This concatenated output is then fed into a logistic regression classifier for classification. Our proposed pipeline differs from the aforementioned method in three important ways. First, the outputs of the three views are integrated using IDA in DeepIDA-GRU, while in the method of [21], [22], the outputs are simply concatenated. Second, the integrated output is classified using the nearest centroid classifier in DeepIDA-GRU, while in the method of [21], [22], the concatenated outputs are fed into a logistic regression classifier. Third, in the DeepIDA-GRU network, we use three-layer GRUs, while in the methods of , a single-layer GRU is proposed. Due to the lack of an integration mechanism and a more simplistic one-layer architecture, the method proposed in does not perform as well as the DeepIDA-GRU network proposed in our paper. Since do not include any variable selection methods, we have used their classification framework in conjunction with the three variable selection methods (LMM, JPTA, DGB) proposed in our paper for a fair comparison. Note that we have used the code of for the implementation of the MildInt network.

Variables identified by LMM, JPTA and DGB

We compare and analyse the top variables selected by LMM, JPTA and DGB. As discussed earlier, both LMM and DGB are performed $90$ times (once for each fold). Each method generates $90$ distinct sets of selected variables. For LMM, the variables in each set are ranked according to their corresponding p-values, whereas for DGB, the variables in each set are ranked according to their eff_prop scores (equation (1)). For LMM, an overall rank/score is associated with every variable using Fisher's approach for combining p-values .
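The Fisher combination step just described can be reproduced with SciPy; the per-fold p-values shown here are hypothetical.

```python
import numpy as np
from scipy.stats import combine_pvalues

# Hypothetical per-fold LMM p-values for one variable across several folds.
fold_pvalues = np.array([0.004, 0.03, 0.12, 0.0008, 0.07])

stat, combined_p = combine_pvalues(fold_pvalues, method="fisher")
print(combined_p)   # variables are then ranked by their combined p-value
```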
Note that if the Fisher combined p-value is equal to zero for multiple variables, these variables will be assigned the same score. For the DGB method, the average eff_prop value is computed to combine the $90$ scores of each variable. Lastly, JPTA is performed once, and we choose the variables with nonzero coefficients. shows the intersection between the sets of variables selected by LMM, DGB and JPTA for the three views. shows the top $10$ variables selected by DGB from each view. In , we use violin plots to show the distribution of the top $5$ host transcriptomic genes selected by DGB. The median expressions of these genes differ between the two classes. Furthermore, in and , we show the mean time series curves for the metagenomics and metabolomics views. In these figures, the average of the univariate time series of all the participants in the IBD and non-IBD classes is used to calculate two mean curves for the top five variables. is exclusive to the DGB approach. Similar analyses for the LMM and JPTA methods are provided in the , with corresponding figures.

Literature analysis of top variables

There is evidence in the literature to support an association between many of the highly ranked variables and IBD status. We first consider a few host transcriptomics genes selected by LMM or DGB. The IFITM genes have been associated with the pathogenesis of the gastro-intestinal tract . LIPG has been observed to have altered levels in Ulcerative Colitis (UC) tissue . AQP9 has been shown to have predictive value in Crohn's disease . CXCL5 has been observed to have significantly increased levels in IBD patients . FCGR3B is associated with UC susceptibility . MMPs (matrix metalloproteinases) such as MMP3 and MMP10 have been shown to be upregulated in IBD . DUOXA2 has been substantiated as an IBD risk gene . The genes S100A8 and S100A9 have been linked with colitis-associated carcinogenesis . LILRA3 has been observed to be increased in IBD patients . We next consider some of the top metabolites selected by LMM, DGB or JPTA. Uridine has been identified as a therapeutic modulator of inflammation and has been studied in the context of providing protective effects against induced colitis in mice . Suberate is one of the metabolites significantly affected by neoagarotetraose supplementation (a hydrolytic product of agar used to alleviate intestinal inflammation) . The authors of suggest saccharin to be a potential key causative factor for IBD. Docosapentaenoic acid (DPA) has been shown to alleviate UC . A decrease of pantothenic acid in the gut has been noted as a potential symptom of IBD-related dysbiosis . Valerate has been observed to be altered in UC patients . It has been suggested that uracil production in bacteria could cause inflammation in the gut . Thymine is a pyrimidine that binds to adenine, and adenine has been suggested as a nutraceutical for the prevention of intestinal inflammation . Ethyl glucuronide is used as a biomarker to diagnose alcohol abuse , and alcohol consumption is common in IBD patients . We next consider the metagenomics pathways selected by LMM, DGB or JPTA. We find that the genera Alistipes, Roseburia, Blautia and Akkermansia have often been linked to IBD and gut health. Several unintegrated pathways involving these genera have been identified by the DGB method as significant.
Butanol has been identified as statistically significant (using univariate analysis) between IBD and non-IBD groups , and the pathway PWY-7003 selected by LMM is associated with glycerol degradation to butanol. Thiazole has been associated with anti-inflammatory properties against induced colitis in mice , and the pathway PWY-6892 (selected by LMM) is associated with thiazole biosynthesis. Tryptophan has been shown to have a role in intestinal inflammation and IBD , and the pathway TRPSYN-PWY, associated with L-tryptophan biosynthesis, is one of the key pathways selected by LMM. Increased levels of L-arginine are correlated with disease severity in UC , and the pathway PWY-7400, associated with L-arginine biosynthesis, has been selected by JPTA. Thiamine is associated with symptoms of fatigue in IBD , and the pathway PWY-7357, associated with thiamine formation, is selected by JPTA. As evidenced by these examples from the literature, many genes, metabolites and pathways selected by the three methods have been linked to IBD. However, some selected variables may not have been directly examined in the context of IBD. For example, we could not find a direct link between the metagenomic pathway PWY-7388 (selected in the top $25$ by JPTA) and IBD. However, this pathway has been associated with psoriasis , and it has been observed that patients with psoriasis have increased susceptibility to IBD . Thus, the unstudied genes/metabolites/pathways that the three variable selection methods have discovered may potentially be novel variables linked to IBD.
Method 2—DeepIDA-GRU with EC for Metagenomics and Mean for Metabolomics: In this case, the two longitudinal views are each converted into cross-sectional form. In particular, EC (with 100 threshold values) is used to reduce the metagenomics data to size $\mathbb{R}^{90 \times 100 \times 1}$. The metabolomics data are reduced to size $\mathbb{R}^{90 \times 50 \times 1}$ by computing the mean across the time dimension. In particular, when we visualized the EC curves of the metabolomics data, we did not find any differences between the EC curves for those with and without IBD, so we simply used the mean over time. The host transcriptomics data remain unchanged. The host transcriptomics, metabolomics and metagenomics data were then each fed into 3-layer dense neural networks with structures [200, 100, 20], [20, 100, 20] and [50, 100, 20], respectively, for integration and classification with DeepIDA-GRU (which in this case is equivalent to the traditional DeepIDA network). Method 3—DeepIDA-GRU with FPCA for both Metabolomics and Metagenomics views: In this case, FPCA (with x = 3 FPCs for each variable) is used to reduce the longitudinal metabolomics and metagenomics data to cross-sectional data of sizes $\mathbb{R}^{90 \times 150 \times 1}$ and $\mathbb{R}^{90 \times 600 \times 1}$, respectively. The host transcriptomics data remain unchanged. The host transcriptomics, metabolomics and metagenomics data were each fed into 3-layer dense neural networks with structures [200, 100, 20], [20, 100, 20] and [50, 100, 20], respectively, for integration and classification using DeepIDA. We train and test the 9 possible combinations of the 3 variable selection methods (LMM, JPTA and DGB) and the 3 feature extraction plus integration/classification approaches (Method 1, Method 2 and Method 3). Owing to the limited sample size, N-fold cross-validation is used to evaluate and compare the performance of these 9 combinations. In particular, the model is trained on N-1 subjects (where N = 90) and tested on the remaining 1 subject. This procedure is repeated N times (hence N folds). Average accuracy, macro precision, macro recall, and macro F1 scores are the metrics used for comparison. These performance metrics are summarized in . The entire procedure of N-fold cross-validation is repeated for three arbitrarily selected seeds: 0, 10,000 and 50,000. Each of the 9 blocks in reports the performance of the best model among the three seeds. Note that since LMM and DGB leverage information about the output labels while selecting variables, both these methods only use data from the N-1 subjects in the training split of each fold. Since there are 90 different train-test splits in N = 90 folds, the LMM and DGB methods are repeated 90 times (once for each fold). Unlike LMM and DGB, JPTA does not use the output labels during variable selection, and hence it is run once on the entire dataset. Examining the rows in , we observe that the classification results based on the variables selected by JPTA are slightly worse than LMM and DGB.
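To make the three configurations concrete, the following is a minimal PyTorch sketch of the per-view branch networks described above. The class names and the choice to use the last GRU hidden state as the view embedding are illustrative assumptions rather than the authors' released implementation; only the layer widths quoted in the text are taken from the paper.

```python
# Minimal sketch (PyTorch) of the per-view branch networks described above.
# Layer sizes follow the text; class names and the post-GRU handling are assumptions.
import torch
import torch.nn as nn

class DenseBranch(nn.Module):
    """Feed-forward branch for a cross-sectional (or feature-extracted) view."""
    def __init__(self, n_features, widths=(200, 100, 20)):
        super().__init__()
        layers, prev = [], n_features
        for w in widths:
            layers += [nn.Linear(prev, w), nn.ReLU()]
            prev = w
        self.net = nn.Sequential(*layers)

    def forward(self, x):          # x: (batch, n_features)
        return self.net(x)

class GRUBranch(nn.Module):
    """Recurrent branch for a longitudinal view (Method 1); returns the last hidden state."""
    def __init__(self, n_features, hidden=50, num_layers=2):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, num_layers=num_layers, batch_first=True)

    def forward(self, x):          # x: (batch, time, n_features)
        _, h = self.gru(x)         # h: (num_layers, batch, hidden)
        return h[-1]               # embedding from the last layer, (batch, hidden)

# Method 1: dense branch for transcriptomics, GRU branches for the longitudinal views.
method1 = {
    "transcriptomics": DenseBranch(1000),   # 1000 pre-selected genes
    "metagenomics":    GRUBranch(200),      # 200 pre-selected pathways over time
    "metabolomics":    GRUBranch(50),       # 50 pre-selected metabolites over time
}
# Methods 2 and 3 replace the GRU branches with dense branches, because EC, the
# time-mean or FPCA first collapse each longitudinal view to cross-sectional form.
method2_or_3 = {
    "transcriptomics": DenseBranch(1000, widths=(200, 100, 20)),
    "metagenomics":    DenseBranch(100, widths=(50, 100, 20)),
    "metabolomics":    DenseBranch(50, widths=(20, 100, 20)),
}
# The per-view embeddings are then passed to the IDA objective of DeepIDA, which
# jointly encourages between-class separation and between-view association.
```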
The lower performance of JPTA could be due to the facts that (i) JPTA is a purely unsupervised method that does not account for class membership in variable selection, and is therefore not as effective for classification tasks as LMM or DGB; and (ii) no variable selection was performed on the host transcriptomics data, since JPTA is applicable only to longitudinal data. The classification results based on the variables selected by LMM and DGB are comparable, with small variations depending on the feature extraction method used before integration and classification. The classification results of the feature extraction method FPCA (Method 3) applied to the variables selected by LMM are slightly better than those of the feature extraction method EC (Method 2) and of the direct DeepIDA-GRU application without feature extraction (Method 1). Meanwhile, the EC and FPCA feature extraction methods and DeepIDA-GRU (no feature extraction) yield comparable classification results when applied to the variables selected by DGB. Examining the columns in , it is evident that the three methods (Method 1, Method 2 and Method 3) have comparable results in the IBD application. The direct DeepIDA-GRU-based approach (Method 1) performs best with DGB; the EC (Method 2) and the FPCA (Method 3) approaches work best with LMM and DGB. Moreover, to compare our proposed pipeline with existing methods, we have also included the performance of the MildInt deep learning method. The authors of MildInt propose a network consisting of one-layer GRUs (one for each view), followed by a logistic regression classifier. In particular, data from each view are fed into a GRU module, and the outputs of all the GRUs are then concatenated. This concatenated output is then fed into a logistic regression classifier for classification. Our proposed pipeline differs from the aforementioned method in three important ways. First, the outputs of the three views are integrated using IDA in DeepIDA-GRU, while in MildInt the outputs are simply concatenated. Second, the integrated output is classified using the nearest centroid classifier in DeepIDA-GRU, while in MildInt the concatenated outputs are fed into a logistic regression classifier. Third, in the DeepIDA-GRU network we use three-layer GRUs, while in MildInt a single-layer GRU is proposed. Due to the lack of an integration mechanism and a more simplistic one-layer architecture, MildInt does not perform as well as the DeepIDA-GRU network proposed in our paper. Since the authors of MildInt do not include any variable selection methods, we have used their classification framework in conjunction with the three variable selection methods (LMM, JPTA, DGB) proposed in our paper for a fair comparison. Note that we have used the authors' code for the implementation of the MildInt network. We compare and analyse the top variables selected by LMM, JPTA and DGB. As discussed earlier, both LMM and DGB are performed 90 times (once for each fold). Each method generates 90 distinct sets of selected variables. For LMM, the variables in each set are ranked according to their corresponding p-values, whereas, for DGB, the variables in each set are ranked according to their eff_prop scores. For LMM, an overall rank/score is associated with every variable using Fisher's approach for combining p-values.
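As an illustration of this aggregation step, here is a minimal sketch (assuming SciPy is available; the array names, shapes and random stand-in values are hypothetical) of how per-fold p-values could be combined with Fisher's method and how per-fold eff_prop scores could be averaged:

```python
# Minimal sketch of the per-fold score aggregation described above.
# `pvals_per_fold` and `effprop_per_fold` are hypothetical arrays of shape
# (n_folds, n_variables); random values stand in for the real fold-wise outputs.
import numpy as np
from scipy.stats import combine_pvalues

rng = np.random.default_rng(0)
n_folds, n_vars = 90, 1000
pvals_per_fold = rng.uniform(size=(n_folds, n_vars))      # stand-in for LMM p-values
effprop_per_fold = rng.uniform(size=(n_folds, n_vars))     # stand-in for DGB eff_prop

# LMM: combine the 90 fold-wise p-values of each variable with Fisher's method,
# then rank variables by the combined p-value (tied combined values share a score).
fisher_p = np.array([combine_pvalues(pvals_per_fold[:, j], method="fisher")[1]
                     for j in range(n_vars)])
lmm_ranking = np.argsort(fisher_p)

# DGB: average the 90 fold-wise eff_prop scores and rank in decreasing order.
dgb_score = effprop_per_fold.mean(axis=0)
dgb_ranking = np.argsort(-dgb_score)
```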
FPCA and Euler curves (EC) provide important one-dimensional representations of longitudinal data. These methods have particular significance because the extracted features can be used with a broad spectrum of existing integration methods that only allow cross-sectional views. Using synthetic simulations, we unravel key properties of EC and FPCA. These simulations demonstrate that Euler curves are better at distinguishing between classes when the covariance structure of the variables differs from one class to another. FPCA performs better when the time-trend of the variables differs between the classes. In addition to comparing EC and FPCA, we also illustrate the performance of the direct DeepIDA-GRU approach, where the feature extraction step is skipped and the longitudinal views are directly fed into DeepIDA-GRU. GRUs have the ability to distinguish certain complicated time-trend differences that pose challenges for the FPCA and EC methods. However, FPCA and EC are computationally faster than GRUs. Moreover, as demonstrated by these simulations, training a GRU can be challenging: one needs to closely monitor problems such as overfitting, vanishing gradients and hyperparameter tuning, and its out-of-the-box performance may even be worse than that of simpler methods like FPCA and EC.
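As a concrete illustration of the two feature extractors, the sketch below computes, for a single variable observed at T time points, (i) a simple sublevel-set Euler characteristic curve over a grid of thresholds and (ii) discretized FPC scores obtained from a PCA of the centered curves. This is one common construction and is not necessarily identical to the implementation used in the paper; all names and the random example data are illustrative.

```python
# Illustrative feature extraction for one longitudinal variable, shape (n_subjects, T).
# EC: for each threshold t, the Euler characteristic of the sublevel set {time points
# with value <= t}, viewed as a 1-D complex (vertices minus edges = number of runs).
# FPCA (discretized): project centered curves onto the leading eigenvectors of the
# empirical covariance, which coincides with PCA on a regular time grid.
import numpy as np

def euler_curve(x, thresholds):
    """x: (T,) series; returns the EC value at each threshold."""
    ec = np.empty(len(thresholds))
    for i, t in enumerate(thresholds):
        mask = x <= t                                    # vertices in the sublevel set
        vertices = mask.sum()
        edges = np.count_nonzero(mask[:-1] & mask[1:])   # consecutive included points
        ec[i] = vertices - edges                         # = number of connected runs
    return ec

def fpc_scores(X, n_components=3):
    """X: (n_subjects, T); returns (n_subjects, n_components) discretized FPC scores."""
    Xc = X - X.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T

# Example with random data standing in for one metabolite's trajectories.
rng = np.random.default_rng(0)
X = rng.normal(size=(90, 10))                            # 90 subjects, 10 time points
thresholds = np.linspace(X.min(), X.max(), 100)
ec_features = np.vstack([euler_curve(x, thresholds) for x in X])   # (90, 100)
fpca_features = fpc_scores(X, n_components=3)                      # (90, 3)
```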
To compare EC, FPCA and direct DeepIDA-GRU, we use the following three approaches on synthetically generated multiview longitudinal data: (i) DeepIDA-EC: EC is used to extract one-dimensional features from the longitudinal views and the extracted features are fed into the DeepIDA network for integration and classification; (ii) DeepIDA-FPC: FPCA is used to extract features from the longitudinal views and the extracted features are fed into the DeepIDA network for integration and classification; and (iii) DeepIDA-GRU: no one-dimensional features are extracted from the longitudinal views and the data from the longitudinal views are directly fed into the DeepIDA-GRU network for integration and classification. The synthetic datasets we generate are balanced between classes and we use the classification accuracy as a metric for this comparison. Here, we consider $K=2$ classes, $D=2$ views and $N = 500$ subjects. Each view $d$ (for $d \in [1:2]$) consists of $p_{d} = 250$ variables and $T=20$ time points. We denote by $C_{1}$ and $C_{2}$ the noise covariance matrices corresponding to classes 1 and 2, respectively. These covariance matrices are constructed as follows:

(2) $C_{1} = C_{\mathrm{unif}}^{\top} C_{\mathrm{unif}}, \qquad C_{2} = (1-\gamma)\, C_{1} + \gamma\, C_{\mathrm{pow}}^{\top} C_{\mathrm{pow}},$

where both $C_{\mathrm{unif}}$ and $C_{\mathrm{pow}}$ are $(p_{1}+p_{2}) \times (p_{1}+p_{2})$ matrices whose entries are independently and identically generated from the Uniform$(0,1)$ and Power$(10)$ distributions, respectively. Here, Power$(10)$ is the power distribution (inverse of the Pareto distribution) with parameter $a=10$, whose probability density function is given by $f(x;a) = a x^{a-1}$, $x \in [0,1]$, $a \in (0, \infty)$. Moreover, $\gamma$ is a parameter that we manipulate to vary the amount of structural difference between $C_{1}$ and $C_{2}$. In particular, when $\gamma = 0$, $C_{1} = C_{2}$ and the two classes have the same covariance structure. When $\gamma = 1$, the entries of $C_{1}$ and $C_{2}$ have completely different and uncorrelated distributions. We let $\phi_{d,k}$ and $\theta_{d,k}$ be the auto-regressive (AR) and moving-average (MA) parameters, respectively, for the ARMA(1,1) process corresponding to the $d$th view ($d \in \{1,2\}$) and the $k$th class ($k \in \{1,2\}$). For these simulations, the ARMA parameters for the two classes are chosen to be

(3) $\phi_{1,1} = 0.5, \quad \phi_{2,1} = 0.7, \quad \phi_{1,2} = 0.5-\delta, \quad \phi_{2,2} = 0.7-\delta,$

(4) $\theta_{1,1} = 0.4, \quad \theta_{2,1} = 0.6, \quad \theta_{1,2} = 0.4-\delta, \quad \theta_{2,2} = 0.6-\delta,$

where $\delta$ is another parameter that is varied to control the amount of difference between the ARMA parameters of the two classes. The synthetic longitudinal data of subject $n \in [1:N]$ for view $d \in [1:2]$ is given by a collection of $T$ vectors: $x_{d}^{(n,:,:)} = \{x_{d}^{(n,:,1)}, x_{d}^{(n,:,2)}, \ldots, x_{d}^{(n,:,T)}\}$, where $x_{d}^{(n,:,t)} \in \mathbb{R}^{p_{d}}$.
Let $y^{(n,:,t)} = [x_{1}^{(n,:,t)}, x_{2}^{(n,:,t)}]^{\top}$, $\epsilon^{(n,t)} = [\epsilon_{1}^{(n,t)}, \epsilon_{2}^{(n,t)}]^{\top}$, $\phi_{:,c(n)} = [\phi_{1,c(n)}, \phi_{2,c(n)}]^{\top}$ and $\theta_{:,c(n)} = [\theta_{1,c(n)}, \theta_{2,c(n)}]^{\top}$, where $c(n)$ is the class to which the $n$th subject belongs and the vectors $\epsilon_{1}^{(n,t)}$ and $\epsilon_{2}^{(n,t)}$ are jointly distributed as $\epsilon^{(n,t)} \sim \mathcal{N}(0, C_{c(n)})$ for all $t \in [1:T]$. Then the multiview data $y^{(n,:,t)}$ of subject $n$ at time $t$ are generated according to an ARMA(1,1) process with AR and MA parameters given by $\phi_{:,c(n)}$ and $\theta_{:,c(n)}$, respectively, and noise covariance matrix $C_{c(n)}$, as follows:

(5) $y^{(n,:,t)} = \phi_{:,c(n)} \odot y^{(n,:,t-1)} + \epsilon^{(n,:,t)} + \theta_{:,c(n)} \odot \epsilon^{(n,:,t-1)},$

where $\odot$ represents the element-wise product. For any given value of $(\gamma, \delta)$, a total of 100 longitudinal multiview datasets are generated randomly according to equation (5), where in each dataset approximately 50 per cent of the subjects are in class 1 and class 2, respectively. The three approaches, DeepIDA-GRU, DeepIDA-EC and DeepIDA-FPC, are used for the classification task. All the feed-forward networks consist of 3 layers with [200, 100, 20] neurons, respectively. All GRUs contain 3 layers with a 256-dimensional hidden vector. The synthetic analysis is divided into two cases: Case 1: different covariance matrices, same ARMA parameters: In this case, $\delta = 0$ and $\gamma$ assumes the following set of values: $\gamma \in \{0.25, 0.5, 0.75, 1\}$. Note that a larger $\gamma$ means more difference in the covariance structure of the variables between the two classes and therefore an easier classification task. provides a visual representation of the superiority of EC compared to FPCA for this case. In this figure, the synthetic multivariate time series data (of view 1), generated with $\gamma = 0.75$, for a subject of class 1 and a subject of class 2 are shown in . The EC curves of all the $N$ subjects are shown in (which shows that the EC curves can clearly distinguish between the two classes). The 3-dimensional FPC scores are plotted in , and we notice that FPCA cannot distinguish between the two classes in this case. In , we compare the performance of the three methods on 100 randomly generated datasets for each $\gamma \in \{0.25, 0.5, 0.75, 1\}$. In particular, we use box plots to summarize the classification accuracy achieved by the three methods on these 100 datasets. Case 2: same covariance matrices, different ARMA parameters, no reverse operation: In this case, $\gamma = 0$ and $\delta$ takes values in $\{0.25, 0.5, 0.75, 1\}$. Note that a larger $\delta$ corresponds to a larger difference in the ARMA parameters between the two classes and therefore an easier classification task.
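Both cases can be generated from the same routine by toggling $(\gamma, \delta)$. The following is a minimal NumPy sketch of the data-generating process in equations (2)-(5); the per-view broadcasting of the AR/MA coefficients and the zero initial state are implementation choices made here for concreteness, not details taken from the paper.

```python
# Minimal NumPy sketch of the synthetic generator in equations (2)-(5).
# gamma controls the covariance difference (Case 1), delta the ARMA difference (Case 2).
import numpy as np

def make_dataset(gamma, delta, n=500, p=(250, 250), T=20, seed=0):
    rng = np.random.default_rng(seed)
    P = sum(p)
    C_unif = rng.uniform(size=(P, P))
    C_pow = rng.power(10, size=(P, P))
    C1 = C_unif.T @ C_unif                                   # class-1 noise covariance
    C2 = (1 - gamma) * C1 + gamma * (C_pow.T @ C_pow)        # class-2 noise covariance

    phi = np.array([[0.5, 0.7], [0.5 - delta, 0.7 - delta]])    # AR coeffs, (class, view)
    theta = np.array([[0.4, 0.6], [0.4 - delta, 0.6 - delta]])  # MA coeffs, (class, view)
    expand = lambda v: np.repeat(v, p)                       # one coefficient per variable

    labels = rng.integers(0, 2, size=n)                      # roughly balanced classes
    X = np.zeros((n, P, T))
    for i, k in enumerate(labels):
        cov = C1 if k == 0 else C2
        eps = rng.multivariate_normal(np.zeros(P), cov, size=T)   # (T, P) noise draws
        y_prev, e_prev = np.zeros(P), np.zeros(P)
        for t in range(T):
            y = expand(phi[k]) * y_prev + eps[t] + expand(theta[k]) * e_prev  # eq. (5)
            X[i, :, t] = y
            y_prev, e_prev = y, eps[t]
    return X, labels

# Case 1 (covariance differs): make_dataset(gamma=0.75, delta=0.0)
# Case 2 (ARMA differs):       make_dataset(gamma=0.0,  delta=0.75)
```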
shows visually that FPCA can clearly distinguish between the two classes, whereas EC cannot. In this figure, the synthetic multivariate time series data (of view 1), generated with $\delta = 0.75$, for one subject of class 1 and one subject of class 2 are shown in . The EC curves of all the $N$ subjects are shown in (which shows that the EC curves are unable to distinguish between the two classes). The 3-dimensional FPC scores are plotted in , which shows that FPCA performs better at distinguishing between the two classes in this case. In , we compare the performance of the three methods on 100 randomly generated datasets for each $\delta \in \{0.25, 0.5, 0.75, 1\}$. In this figure, we summarize the classification accuracy achieved by the three methods on these 100 datasets using box plots. Remark 1. The box plots in Figure and show that DeepIDA-EC performs better at classifying subjects when the covariance structure of the two classes is different, whereas DeepIDA-FPC performs better when the ARMA parameters of the two classes differ. DeepIDA-GRU does not outperform either of these methods even though it has the potential to handle much more complex tasks. This could be mainly due to the fact that the DeepIDA-GRU network was not fine-tuned for each of the randomly generated datasets, which could have led to overfitting/underfitting on many of these datasets. Data collected from multiple sources are increasingly being generated in biomedical research. These data types could be cross-sectional, longitudinal, or both. However, the literature on integrating both cross-sectional and longitudinal data is scarce. This work began to fill in the gaps in the existing literature. Motivated by, and applied to, the analysis of data from the IBD study of the iHMP, we have proposed a deep learning pipeline for (i) integrating both cross-sectional and longitudinal data from multiple sources while simultaneously discriminating between disease status; and (ii) identifying key molecular signatures contributing to the association among views and the separation between classes within a view. Our pipeline combines the strengths of statistical methods, such as the ability to make inference, reduce dimension, and extract longitudinal trends, with the flexibility of deep learning, and consists of variable selection/ranking, feature extraction, and joint integration and classification. For variable selection/ranking, methods applicable to one view at a time (i.e. LMMs), two longitudinal views (i.e. JPTA), and multiple cross-sectional and longitudinal views (i.e. DGB) were considered. For feature extraction, we considered FPCA and EC curves. For integration and classification, we implemented Deep IDA with gated recurrent units (DeepIDA-GRU). When we applied the pipeline to the motivating data, we observed that for variable selection, LMM and DGB achieved slightly better performance metrics than JPTA, probably because they are supervised methods (information on class labels is used in variable selection) and they are applicable to more than two views. For feature extraction, the performance of both EC and FPCA was comparable, and both methods performed similarly to the direct DeepIDA-GRU approach without feature extraction.
Our research revealed multi-omics signatures and microbial pathways that differentiate between individuals with and without IBD, some of which have been documented in previous studies, while others are associated with diseases linked to IBD, offering potential candidates for exploration in IBD pathobiology. We also compared the performance of EC and FPCA using synthetic datasets and found that these methods outperformed each other in different scenarios. EC performed better when the covariance structure of the variables was different between the two classes, while FPCA outperformed EC when there was a difference in the time trends between the two classes. Deep learning is typically used with a large sample size to ensure generalizability. The main limitation of this work is the small sample size (n = 90 subjects) of the IBD data that motivated our work, but we attempted to mitigate potential overfitting/underfitting issues through the use of variable selection, feature extraction and leave-one-out cross-validation rather than cross-validation with fewer folds (which would have significantly reduced the sample size for training). Variable selection is widely regarded as an effective technique for high-dimensional, low-sample-size data and helps avoid overfitting and high-variance gradients. In this work, we explored both linear methods for variable selection (LMM and JPTA) and non-linear deep learning-based methods (DGB). More analysis is needed to determine whether the DGB bootstrapping procedure scales well with increasing data sizes. Future work could consider validating the proposed methodology on multiview data with larger sample sizes. Additionally, for larger and more complicated data, it may be worthwhile to investigate whether integrating other deep learning networks, such as transformers and 1D convolutional networks, in the DeepIDA pipeline would yield better results for handling longitudinal data than the DeepIDA with GRUs implemented in this work. Despite the above limitations, we believe that our pipeline for integrating longitudinal and cross-sectional data from multiple sources, which combines statistical and machine learning methods, fills an important gap in the literature for data integration and will enable biologically meaningful findings. Our extensive investigation of the scenarios under which FPCA outperforms EC curves and vice versa sheds light on the specific scenarios for using these methods. Furthermore, our application to real data has resulted in the identification of molecules and microbial pathways, some of which are implicated in the literature, thus providing evidence to corroborate previous findings, while others are potentially novel and could be explored for their role in the pathobiology of IBD.
Key Points
We propose a deep learning pipeline capable of seamlessly integrating heterogeneous data from multiple sources and effectively discriminating between multiple classes. Our pipeline merges statistical methods with deep learning, incorporating variable selection and feature extraction techniques such as functional principal component analysis (FPCA) and Euler characteristic curves (EC). It also includes integrative discriminant analysis using dense feed-forward and recurrent neural networks for joint integration and classification. We examined when FPCA is more effective than EC curves and vice versa.
FPCA yields better classification when longitudinal trends differ between classes; EC yields better classification under unequal class covariance. Adding variable selection to deep learning led to a strong classification performance, even with a small sample size. The IBD data application shows the potential application of our pipeline to identify molecular signatures that discriminate disease status over time.
JainSafo_DeepIDAGRU_Supp_bbae339
An assessment of excess mortality during the COVID-19 pandemic, a retrospective post-mortem surveillance in 12 districts – Zambia, 2020–2022
49bb75bf-0af0-4448-bb80-c678251fabc4
11437817
Forensic Medicine[mh]
Since the first cases of SARS-CoV-2 were reported in Zambia in March 2020 through to October 12, 2023, 349,287 cases and 4,069 COVID-19-related deaths were reported . During this period, several phylogenetically distinct strains of SARS-CoV-2 were identified in Zambia, each associated with varying case fatality rates across different wave periods . The number of cases and deaths reported in Zambia likely underestimates the true impact of the pandemic. A nationwide SARS-CoV-2 prevalence survey, using nasal specimens for PCR testing, revealed that for every reported SARS-CoV-2 infection in Zambia, approximately 92 infections went unreported . A systematic postmortem prevalence study among deaths that occurred at a tertiary hospital in Lusaka showed a high SARS-CoV-2 PCR prevalence among deceased persons from January 2021 to June 2021, with greater per cent positivity during peak COVID-19 transmission periods . However, it should be noted that while SARS-CoV-2 was detected among the deceased individuals in this study, it was not determined whether the virus directly caused their deaths or if its presence was merely incidental. Among those with a positive SARS-CoV-2 diagnosis postmortem, only a minority had a COVID-19 diagnosis antemortem. As such, officially reported COVID-19 deaths in Zambia may underestimate the total number of COVID-19 deaths that occurred during the pandemic. During a public health emergency such as the COVID-19 pandemic, monitoring all-cause mortality trends may help assess the severity and impact of the emergency on the population affected . Deaths among confirmed COVID-19 patients do not capture the full extent of the COVID-19 burden. In contrast, deaths from all causes can be used to estimate excess mortality and provide a more complete picture of the impact of the COVID-19 pandemic . For many countries, reporting statistics on COVID-19 mortality has been a challenge due to variations in access to testing, differential diagnostic capacity, the inconsistent and sub-optimal certification of COVID-19 as the immediate cause of death, and the nonspecific clinical presentation of COVID-19 (including in fatal cases) . Additionally, the pandemic led to an increase in deaths due to other causes, because of disruptions to routine health services and loss of livelihoods due to the stringent non-pharmacological interventions put in place to mitigate the pandemic . Therefore, the World Health Organization (WHO) recommends the surveillance of confirmed COVID-19 mortality and all-cause mortality during the COVID-19 pandemic . A study of mortality trends among community deaths at the University Teaching Hospital (UTH) in Lusaka from April 2020 to December 2020 showed 1,139 excess deaths from all causes during the study period compared to the pre-pandemic baseline (2017–2019) . Whilst excess mortality during the COVID-19 pandemic has been established in Lusaka , the country’s predominantly urban capital, it remains unclear whether other regions within the country with different socio-demographic factors had comparable or worse outcomes. We, therefore, conducted this study to assess excess mortality across 12 districts of Zambia during the COVID-19 pandemic. We conducted a mixed methods retrospective analysis of mortality records in Zambia by triangulating data from two different sources (health facility and community death mortuary records) between January 2017 and December 2022. 
These records were obtained from the mortuaries of the main district referral health facilities in the selected districts. Typically, the most life-threatening medical cases are referred to these referral health facilities from other health facilities within the district, and deaths occurring within the communities outside health facilities are brought to these district hospital morgues. We purposively selected 12 districts of Zambia (Chingola, Kabwe, Kitwe, Livingstone, Luanshya, Lusaka, Kasama, Mansa, Chipata, Mongu, Solwezi and Ndola) primarily based on the known availability of mortuary records over the study period, to ensure an adequate mix of urban and rural districts and to ensure adequate geographic representation from across the country. Combined, the selected districts are home to an estimated 6,882,437 Zambians (37.4% of the total projected 2021 population). Monthly counts of all health facility deaths and all reported community deaths between 2017 and 2019 were collected to estimate baseline all-cause mortality before the COVID-19 pandemic, and monthly counts from 2020 to 2022 were collected to estimate excess deaths during the COVID-19 pandemic. A COVID-19 wave period was defined as a sustained increase in the SARS-CoV-2 national test positivity of greater than five per cent, and between January 2020 and December 2022 days were dichotomized as wave or non-wave days. Additionally, wave days were further classified as the first wave (1 Jun 2020 to 1 Oct 2020), second wave (3 Jan 2021 to 17 Mar 2021), third wave (29 May 2021 to 20 Aug 2021), fourth wave (12 Dec 2021 to 9 May 2022) and fifth wave (24 Dec 2022 to 31 Dec 2022); all other dates during the study period were non-wave days. Additionally, from January 2020 through to December 2022, daily individual records of all inpatient deaths and all reported community deaths, disaggregated by age and sex, were collected to analyse changes in the sex and age distribution of deaths between COVID-19 wave periods and non-COVID-19 wave periods. A categorical variable, age group, was recoded from the continuous variable age. For the all-cause excess mortality analysis, the district-specific monthly counts of all deaths from 2017 to 2019 were used as the pre-pandemic baseline and compared with the COVID-19 pandemic period (2020–2022), using a Microsoft Excel-based tool developed by Resolve to Save Lives/Prevent Epidemics to calculate monthly medians and 95% confidence intervals for historic deaths. This tool is recommended by an expert panel for Rapid Mortality Surveillance during COVID-19. We quantified the availability of death records by calculating the percentage of months with complete monthly count information over the total number of months within the study period (January 2017 to December 2022) for each district. To address missing monthly death counts at the district level, mean imputation was applied to the districts with missing data (Ndola, Solwezi, and Livingstone), as the monthly death counts at the district level were normally distributed. Excess mortality was defined as the difference between the observed all-cause monthly death counts during the pandemic (2020–2022) and the median pre-pandemic all-cause monthly death counts (2017–2019) in the 12 districts; the pre-pandemic medians and their 95% confidence intervals define baseline mortality and a "normal" range of variability around that baseline.
The 95% confidence interval for the median historical monthly count was calculated separately for each month using the sample standard deviations. Excess mortality was present if the observed count was greater than the upper 95% confidence limit of the historic number of deaths. District-specific excess mortality was calculated separately among community deaths and health facility deaths and then summed to get monthly and yearly excess mortality counts for each district. To estimate overall excess mortality in Zambia, we assumed that there was sufficient heterogeneity in factors contributing to excess deaths in the 12 selected districts for them to be representative of the entire population of Zambia, and that the excess mortality rate was only a function of the population. Based on these assumptions, we extrapolated the median excess mortality rate from the 12 districts to the entire country to estimate the total excess deaths in Zambia. We then compared this national estimate of excess deaths to the number of officially reported COVID-19 deaths in Zambia to calculate the ratio of reported COVID-19 deaths to excess deaths during the pandemic. The individual-level death information from 2020 to 2022 was used to calculate daily death counts among community deaths and health facility deaths. Decedents with missing age or sex variables were dropped from subsequent analyses involving age or sex but included in the time series graphs of daily deaths and the daily death count analysis. The Shapiro-Wilk test was used to test the normality assumption for the distribution of the daily death counts and age variables. All reported daily deaths were plotted as a 14-day rolling average time series disaggregated by place of death (community versus facility), district and age group. To detect any difference in the median number of reported daily deaths between wave-period and non-wave-period days, we used the Mann-Whitney U test. We then used the Kruskal-Wallis rank sum test to determine whether there was a statistically significant difference between the median daily number of deaths across the six identified wave periods (non-wave period, first wave, second wave, third wave, fourth wave, fifth wave). To identify which median daily death counts varied significantly by wave period, pairwise comparisons of median daily death count by wave period were conducted using the Wilcoxon rank sum test with continuity correction. The Benjamini & Hochberg p-value adjustment method was employed for multiple comparisons. Furthermore, we compared disparities in the distribution of the median age at death, the proportion of individuals above 65 at the time of death, as well as sex, place of death (facility or community), and district of death between wave periods and non-wave periods. These comparisons were conducted using the Wilcoxon rank sum test for median age and Pearson's Chi-squared test for categorical variables. All statistical analyses were performed using R version 4.2.1 statistical software. Between 2017 and 2022, there were 217,140 deaths reported in the 12 districts (range 32,786 [2018] to 41,613 [2021]) (Fig. ). In the baseline period (2017–2019), October had the highest average death count (3071.3, 95% CI: 2984.8-3157.8) while May had one of the lowest (2710.0, 95% CI: 2528.1-2891.9) (Table ). Districts contributed from 2.4% (Mongu) to 41.0% (Lusaka) of the reported deaths. A total of 112,768 deaths were reported in the 12 districts between 2020 and 2022; of these, 17,111 (15.2%) were excess (Fig. ).
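To make the baseline-and-threshold calculation described in the Methods concrete, here is a minimal Python sketch (the paper used an Excel tool and R, so this is only an illustration). The normal-approximation confidence interval around the monthly median is an assumption standing in for the tool's exact formula, and the data frame and column names are hypothetical:

```python
# Hypothetical monthly counts: columns = year, month, deaths (per district, the same
# steps would be applied after grouping by district).
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu

def monthly_excess(df):
    base = df[df.year.between(2017, 2019)].groupby("month")["deaths"]
    baseline = pd.DataFrame({
        "median": base.median(),
        # Assumed normal-approximation CI around the median using the sample SD.
        "upper": base.median() + 1.96 * base.std() / np.sqrt(base.count()),
    }).reset_index()
    obs = df[df.year.between(2020, 2022)].merge(baseline, on="month")
    # Excess is counted only in months where the observed count exceeds the upper limit.
    obs["excess"] = np.where(obs.deaths > obs.upper, obs.deaths - obs["median"], 0)
    return obs

def compare_waves(daily):
    # Wave versus non-wave comparison of daily death counts (Mann-Whitney U, as in the text).
    # `daily` has columns: date, deaths, wave (boolean flag for wave-period days).
    return mannwhitneyu(daily.loc[daily.wave, "deaths"],
                        daily.loc[~daily.wave, "deaths"])
```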
The median district excess mortality rate was 237.5 (interquartile range (IQR): 170.5-282.5) per 100,000 population (Table ). There was district-by-district variation in the number of excess deaths, with the highest excess mortality rate reported in Chingola (449.5 per 100,000 population) and the lowest in Mongu (101.1 per 100,000 population) (Fig. ; Table ). Most excess deaths were observed in 2021 (n = 8992, 52.6%). By extrapolation from these 12 districts, we estimate the median excess deaths across all three years of the COVID-19 pandemic in Zambia to be 43,701 (IQR, 31,372-51,982). Compared to officially reported COVID-19 deaths, for every reported COVID-19 death, there was a median of 11 (IQR, 8–13) excess deaths during the COVID-19 pandemic in Zambia. We analysed 112,768 individual death records across the 12 districts between January 2020 and December 2022. Overall, only 2.4% of all deaths had a variable missing (age or sex). Over the entire study period, on average, there were more deaths per day during wave periods than during non-wave periods (median [IQR]: 107 [95–126] versus 96 [85–107], p < 0.001) (Table ). The second (median [IQR]: 124 [113–138]), third (median [IQR]: 140 [116–168]) and fourth (median [IQR]: 102 [92–109]) waves had more deaths per day than non-wave periods (median [IQR]: 96 [85–107], p < 0.001), but no difference was seen between the first and fifth waves (Fig. ; Table ). During wave periods, there was an increase in the daily number of deaths across all districts, with a return to baseline after the wave periods (Fig. ). When disaggregated by place of death, more deaths during the third wave were reported as having occurred within the community than in health facilities (Fig. ). During non-wave periods approximately half of all reported deaths across all 12 districts occurred in the community (50.6%), and this proportion increased during wave periods (53.1%, p < 0.001) (Table ). There was district-by-district variation in the baseline number of deaths per day, with Lusaka accounting for the highest proportion of all deaths (33.49%) and Mongu the least (2.96%). During wave periods, there was an increase in the proportion of all deaths from Lusaka (45.39%, p < 0.001) (Table ). In both wave and non-wave periods, more males died (58.7% in non-wave periods versus 58.6% in wave periods); however, the difference in the proportions of men who died between the two periods was not statistically significant (p = 0.860). There was an increase in the median age at death during wave periods, with a return to the baseline median age at death after the wave period (41.0 years, IQR [22.0–63.0] in non-wave periods versus 44.0 years, IQR [25.0–67.0] in wave periods, p < 0.001) (Table , supplementary Figs. and ). There was a corresponding increase in the proportion of individuals aged 65 and over who died during wave periods compared with non-wave periods (12,886 [23.4%] in non-wave periods versus 11,721 [27.7%] in wave periods, p < 0.001) (Table , supplementary Fig. ). When the time series of daily deaths was plotted by age group, the most noticeable change from baseline rates was in the 60 and over age group, which showed an increased number of deaths across the first, second, third and fourth waves (Fig. ). There was excess mortality in all 12 districts included in the study between January 2020 and December 2022, with the most excess deaths (52.6%) in 2021.
These results suggest that the impact of the COVID-19 pandemic in Zambia was more widespread than official statistics indicate, and not only restricted to urban centres such as Lusaka . Similar findings of excess mortality in other sub-Saharan countries during the COVID-19 pandemic have been observed . Globally, the World Health Organisation estimates excess mortality is 2.74 times higher than reported COVID-19 deaths . Modelled estimates of excess mortality during the COVID-19 pandemic in Zambia vary widely (74.3 credible interval (CI) [2.5-147.6] vs. 228.2 [165.9-322.8] per 100,000 population ). Due to the unavailability of publicly available population-level mortality data, these current estimates of excess mortality in Zambia used statistical models to directly predict excess mortality for Zambia or used mathematical models to generate historical and current mortality data and then calculated the excess mortality rate . Our national estimate of the excess mortality rate (median 237.5 [(IQR)170.5-282.5] per 100,000 population), which lies within the range of modelled estimates for Zambia, is most likely an underestimate as not all deaths that occur within communities are reported at health facilities, and therefore were not included in this analysis. Our study demonstrates the value of applying mortality surveillance to understand the impact of a major public health event. If done in real-time, it could have also helped inform public health messaging and policymaking in response to the COVID-19 pandemic in Zambia. Further studies need to be conducted to understand the characteristics and explore surveillance strategies to detect and report these otherwise undocumented community deaths. The observed variation in excess mortality rates across districts may be attributed to district-specific differences in geospatial factors influencing COVID-19 transmission dynamics, variations in healthcare quality and its utilization by the community, and the proportion of deaths occurring within the community that are reported to health facilities. Consequently, there is a degree of underreporting of community deaths at health facilities, the extent of which is unknown but tends to be higher in rural areas compared to urban areas. Evidence suggests that rural-urban residence significantly impacts the location of deaths in Zambia . Additionally, differences in geographical factors between different regions have been shown to affect the spread of COVID-19 . Further studies are required to identify these factors and their impact on the spread of COVID-19 in Zambia. Most of the observed excess deaths were likely due to COVID-19 or due to the socioeconomic disruption due to the pandemic. We observed an increase in the daily number of deaths during COVID-19 wave periods compared to non-COVID-19 wave periods. To our knowledge, there were no other reported widespread public health emergencies during the study period that could explain the abrupt increase in the number of deaths during the specific wave periods and across all 12 districts almost simultaneously . Additionally, when we analysed the different wave periods, we noted that the relative increase in daily deaths during wave periods was associated with the relative case fatality of the predominant strain of SARS-CoV-2 associated with that wave. The predominant strain during Zambia’s third wave, which had the highest median daily death count, was the delta variant, a finding consistent with numerous other countries . 
Further, after an increase in the daily number of deaths during a COVID-19 wave period, we observed a return to baseline pre-pandemic mortality rates between waves and this was consistent across all 12 districts visited and across the different age groups. COVID-19 mortality has been shown to disproportionately affect the elderly . We observed an increase in the overall median age at death (44 years vs. 41 years, p < 0.001) and in the proportion of deceased persons aged 65 years and older (27.7 per cent vs. 23.4 per cent, p < 0.001) during COVID-19 wave periods with a return to baseline between wave periods respectively. However, the exact proportion of these excess deaths that were directly attributable to COVID-19 remains unknown because of limitations in antemortem and postmortem SARS-CoV-2 testing and limited death registration and certification across the country. There were more community deaths than facility deaths in both wave periods and non-wave periods. Community deaths increased during wave periods, suggesting potential gaps in health services brought on by the COVID-19 pandemic. As not all deaths that occur within the community are brought to health facilities before burial, the actual proportion of total deaths that occur within the community may even be higher. An analysis of places of death in Zambia among adults 15–59 years between 2010 and 2012 showed that slightly less than half of the adult deaths occurred in the home, factors associated with dying in a health facility included higher educational attainment, urban versus rural residence, and being of female gender . The observed increase in the proportion of deaths in the community during COVID-19 waves could be due to barriers to access to health facilities as some facilities were repurposed to serve as specialised COVID-19 treatment centres whilst other facilities scaled down services offered by only offering essential health services or attending to only emergencies . This could have compromised the quality of outpatient care that chronically ill patients received. Additionally, the myths and misconceptions around COVID-19 could have prevented those in most need of care from seeking health care . We recommend risk communication and engagement strategies tailored to increasing demand within the community for seeking health services during public health emergencies. Additionally, surge capacity plans should be developed by the Ministry of Health and implemented during public health emergencies. These could help ensure the continued provision of essential health services as well as provide additional capacity to respond to the public health emergency. Our study had several limitations. As we investigated excess deaths, we were unable to quantify the proportion of these excess deaths that were due to COVID-19, however, due to the timing and demographic composition of these deaths and the absence of other explanatory events, we believe that most of these deaths could have been due to COVID-19. Only community deaths that were reported at health facilities were analysed during this investigation, as such our findings underestimate the total number of deaths that occurred within these districts, as not all community deaths are reported at these facilities. 
While our study employed purposive sampling for site selection, which may have introduced selection bias and impacted the generalizability of our findings, the consistency observed across all selected districts—despite varying sociodemographic factors—suggests that our results are broadly reflective of the entire country. This consistent pattern across diverse districts supports the robustness of our findings and their applicability at a national level. Sampling additional districts was impossible because of resource limitations (data abstraction was time-consuming). Our method of determining excess mortality did not consider potential seasonal variations. However, due to the large size of the data set, the consistency of findings across the different districts, the consistency of our findings with current known epidemiological characteristics of COVID-19 and the consistency of our findings with other similar studies, we believe our findings are credible. There was excess mortality in all 12 districts visited during the COVID-19 pandemic in Zambia with most of these deaths occurring within the community and among the elderly. These findings suggest the impact of the COVID-19 pandemic in Zambia was far greater than implied by reported COVID-19 deaths alone. Strengthening routine and continuous mortality surveillance systems with cause of death ascertainment especially among community deaths could help guide public health decision-making and strengthen risk communication and community engagement during public health emergencies. Below is the link to the electronic supplementary material. Supplementary Figure 1: Population pyramid of deaths by wave period, 12 districts of Zambia, 2020–2022 Supplementary Figure 2: Trendline of median age at death and median daily count of deaths in 12 districts, Zambia 2020–2022
Surgical management of umbilical endometrioma within an umbilical hernia
abd0159e-749b-4291-8109-bafc78250188
11751607
Surgical Procedures, Operative[mh]
Endometriosis affects about 5% of women of reproductive age; the term was first coined by Sampson to describe the presence of endometrial glands and stroma outside the endometrium and myometrium. It usually affects the pelvis, including the ovaries, uterosacral ligaments and the pouch of Douglas. Extrapelvic endometriosis is less prevalent, and its symptomatology varies based on site, making the exact prevalence challenging to estimate. Extrapelvic lesions comprise about 12% of lesions, and abdominal wall endometriosis (AWE) is the most common extrapelvic site. Umbilical endometriosis accounts for about 30–40% of AWE and 0.5–1.0% of all endometriosis and can appear either as a primary lesion in the absence of surgery, also known as Villar's node after Villar first described it in 1886, or as a secondary lesion arising from subsequent scar tissue after abdominal procedures. Patients typically present with a red, purple or black nodule at the umbilicus, with swelling or a change in colour/consistency (83.5%), pain (83%) or bleeding (50.9%) from the umbilicus. Diagnosis is primarily made clinically, though MRI can aid in evaluation. Endometriomas appear homogeneously hyperintense on T1-weighted imaging. Histological findings generally include the presence of endometrial glands and stroma. Management can include hormonal or surgical therapy. Surgical therapy is typically recommended and involves wide local excision of umbilical endometriosis. Resection of umbilical endometriosis has many advantages; it has been well studied, clearly establishes a diagnosis of endometriosis and rules out malignancy. Surgical management boasts low recurrence rates, decreases the risk of malignant transformation and is the most common treatment modality for umbilical endometriosis in the literature. Conversely, medical management has had varying degrees of success. Hormonal options reported in the literature include oral contraceptives, progestins and gonadotropin-releasing hormone analogues. Small studies have demonstrated symptomatic control through oral contraceptive pills, while others have seen incomplete response, which suggests it may not be curative. Furthermore, there are no sound studies that compare medical to surgical management. Timely diagnosis and effective treatment can be crucial to improving quality of life and outcomes. In this case, we report a secondary umbilical endometrioma arising within an umbilical hernia in a patient with prior abdominal myomectomy that failed medical management and was successfully treated surgically with excision and simultaneous hernia repair. A nulliparous woman in her 40s presented with umbilical pain and bleeding with menses. Her medical history was significant for hypertension, a body mass index (BMI) of 37 kg/m2, uterine fibroids, abnormal uterine bleeding and dyschezia. Surgical history was significant for two abdominal myomectomy procedures through a Pfannenstiel incision, with breach of the endometrium in both surgeries. Pathology revealed fragments of benign leiomyomata with areas of increased cellularity. Examination revealed a protuberant abdomen with a 3 cm blood-filled cyst at the umbilicus consistent with umbilical endometriosis (see ). Bimanual examination showed a small uterus and no adnexal masses or nodularity on the uterosacral ligaments, rectovaginal septum, parametrium or cul-de-sac. MRI showed an umbilical endometrioma (see ).
Before referral for surgical management, the patient attempted medical therapy with medroxyprogesterone acetate without relief, and she was not a candidate for combined oral contraceptives because of hypertension and elevated BMI. She therefore underwent umbilectomy, excision of umbilical endometriosis, lysis of adhesions and umbilical hernia repair under general anaesthesia. A circumferential incision was made around the umbilicus with a scalpel and taken down to the subcutaneous tissue using electrocautery. As expected, the endometrioma was densely adherent to the umbilical skin, as well as to the hernia sac, the omental fat contents of the hernia sac and the anterior rectus sheath. The lesion was therefore excised en bloc with a wide margin, resulting in a large defect in the anterior abdominal wall. The defect was closed with synthetic mesh, the skin edges were approximated with staples, and a subcutaneous drain was left in place. The specimen was sent to pathology for analysis, and the patient was discharged home on the day of surgery. Surgical pathology revealed cystic endometriosis within subcutaneous tissue, consistent with umbilical endometrioma.

Her postoperative course was complicated by a surgical site infection. On postoperative day 33, the patient began a course of oral antibiotic therapy, which initially failed to control the infection. She was then treated successfully as an inpatient with intravenous vancomycin and piperacillin-tazobactam and ultrasound-guided percutaneous drainage. Deep wound culture grew coagulase-negative Staphylococcus and rare Corynebacterium jeikeium sensitive to vancomycin. At the time of writing, the patient has had a 2-year recurrence-free interval.

We present a case of full-thickness umbilical endometriosis presenting in an umbilical hernia sac in a patient who had undergone two prior myomectomy procedures through a Pfannenstiel incision with breach of the endometrium. After medical management failed, the lesion was successfully removed surgically, with a 2-year recurrence-free interval to date. The surgery, however, was complicated by a surgical site infection necessitating readmission.

Umbilical endometriosis accounts for 0.5–1.0% of all endometriosis and can appear either as a primary lesion in the absence of surgery, also known as Villar’s node, or as a secondary lesion arising from scar tissue after abdominal procedures. Reported rates of concurrent pelvic endometriosis in abdominal wall endometriosis range from 13% to 34%, but the exact rate is unknown because the peritoneum is not routinely evaluated during the workup or management of these cases. Several theories explaining the pathogenesis of endometriosis exist, including local implantation by refluxed endometrial cells entering the abdominal cavity through the fallopian tubes and dissemination through lymphovascular channels. Although the condition is rare overall, patients with an obstetric surgical history, such as caesarean section, are at risk for umbilical endometriomas. It is therefore conceivable that breach of the endometrium during her two myomectomy procedures may have seeded the umbilical hernia in our patient. Umbilical endometrioma must be differentiated from ventral hernia, suture granuloma, abscess, cyst and lipoma. Initial management is therefore often surgical, especially if there is no known history of pelvic endometriosis.
When the diagnosis is suspected, on the other hand, some investigators have attempted medical management of these tumours. However, medical management has been infrequently reported, and the studies that investigated it had small sample sizes and no long-term follow-up. Additionally, there are few data directly comparing medical and surgical outcomes, which raises the question of whether hormonal therapy should be reserved for postoperative long-term symptom control. Surgical management with wide local excision is recommended, although it too lacks long-term efficacy and safety outcome data. Although our patient has had no recurrence over 2 years of follow-up, the recurrence rate after local resection is reported to be about 10%.

The typical excisional technique is the circumferential or ‘stump’ technique, similar to our approach. Most resections do not involve entry into the peritoneum, and reconstruction typically depends on the size of the defect and the preference for umbilicoplasty. A superior approach for umbilicoplasty has not been identified, and long-term complication rates have yet to be determined and compared. Umbilicoplasty methods reported include a four-flap umbilicoplasty, bilateral square teeth flaps, a purse-string suture technique and a V-Y flap technique. The simplest techniques are the purse-string method, which does not require defatting of the subcutaneous tissue, and the four-flap technique. Techniques that involve defatting of the subcutaneous tissue include the V-Y flap and square teeth techniques, which had low complication rates with adequate depth after healing in all but one reported case. We did not perform an umbilicoplasty, which resulted in an inferior cosmetic result.

After en bloc resection of the entire tumour, the anterior abdominal wall defect measured 8–10 cm, so a mesh repair was indicated. The hernia was repaired with synthetic mesh in accordance with HerniaSurge guidance. The postoperative course was complicated by a surgical site infection. Fortunately, the infection resolved with medical management and did not necessitate more invasive treatment such as surgical debridement or mesh removal, and the patient is doing very well without any long-term complications. She was at risk for surgical site infection given her elevated BMI.

In summary, umbilical endometriosis should be included in the differential diagnosis of umbilical tumours, and the workup should include a search for pelvic endometriosis. Surgical excision in this case achieved a 2-year recurrence-free interval to date but was complicated by a surgical site infection. Several umbilicoplasty techniques exist to optimise cosmesis. We agree with the European Society of Human Reproduction and Embryology guidelines, which state that surgical management is preferred for extrapelvic endometriosis because it removes the symptomatic mass and provides a histopathological diagnosis. Medical management may have a role in the treatment of umbilical endometriosis, although further studies of endometriosis within a hernia are warranted, as attempting medical therapy in these cases may be futile and merely prolong the time to surgery. The role of medical management after surgery to prevent recurrence also deserves further study, especially in the presence of pelvic endometriosis.

Learning points:
• recognise the prevalence of extrapelvic endometriosis in women of reproductive age;
• umbilical endometriosis usually presents with catamenial symptoms and can exist within an umbilical hernia;
• after complete surgical excision, individualised approaches to umbilical reconstruction based on patient and disease characteristics may be warranted.