Targeting Underlying Inflammation in Carcinoma Is Essential for the Resolution of Depressiveness

Treating patients with cancer is challenging. In modern clinical practice and research on behavioral changes in patients with oncologic problems, there are several one-sided approaches to this problem. Oncologists are concerned in great detail with localization of the primary oncologic process, pre- and post-operative care, protocols for chemotherapy and radiation therapy, and monitoring for recurrence. Psychiatrists, however, are typically involved in only some phases of integrative treatment. Mental predisposition could be discussed in the etiology of various carcinomas. Mental disturbances could be a consequence of the patient's awareness of the illness onset and its possible impact on the patient's overall quality of life, or they may follow somatic perturbations and result from the various therapies applied. Mental disorders could also induce cancer recurrence. In developing new therapeutic strategies for neuropathic pain as an oncological complication, it is instructive to draw a parallel between the changes in the acute and chronic phases of pain and the management of mental disorders. Inflammatory processes, both acute and chronic, are a hallmark of both oncological and mental disorders. The exacerbation of somatic disorders and mental illnesses could reflect acute inflammation, whereas prolonged processes are related to chronic inflammation. The question is whether these hallmarks could induce depressive symptomatology, and if so, to what extent, and whether these phenomena should be treated as simple non-comorbid depression. The current literature offers many useful pointers on the relationship between carcinoma and inflammation and between depression and inflammation. This review article aimed to compare and integrate these complex interactions in the shared context of carcinoma and depression comorbidity. Further, we try to use this information to potentially improve the clinical approach and discuss the importance of the resolution of inflammation as a new treatment strategy in the co-occurrence of carcinoma and depression.
Inflammation represents the systemic host response to tissue damage. It is usually caused by injury, ischemia, infection, or chemical exposure. Additionally, inflammation plays an important role in tissue repair, regeneration, and remodeling. The inflammatory response involves the recruitment and action of the immune response. Inflammation occurs in two stages: acute and chronic. Acute inflammation is a part of innate immunity initiated by immune cells and lasts for a short time. It serves as a defense against infection, tissue damage, and allergens. Receptors of innate immunity recognize the structures of microorganisms (pathogen-associated molecular patterns, PAMPs), but also molecules that are released from damaged host cells. These molecules are called danger-associated molecular patterns (DAMPs) and represent proteins or nucleic acids that are not normally found outside the cell. The most important DAMPs include the chromatin-associated protein high-mobility group box 1 (HMGB1), adenosine triphosphate (ATP), uric acid (UA), deoxyribonucleic acid (DNA), and degraded extracellular matrix (ECM) components such as heparan sulfate and hyaluronan. PAMPs and DAMPs are recognized via pattern recognition receptors (PRRs). The term "alarmin" is today used as a synonym for DAMP. In acute inflammation, pro-inflammatory mediators such as acute-phase proteins, prostaglandins, leukotrienes, oxygen- and nitrogen-derived free radicals, chemokines, growth factors, and cytokines, released by immune defense cells locally at the site of inflammation, cause neutrophil infiltration. C-reactive protein (CRP), fibrinogen, and procalcitonin (PCT) are part of an innate immune response and are detectable in serum within a few hours of the initiation of inflammation. They facilitate the inflammatory process and represent hallmarks of acute inflammation. Subsequently, other cells of innate and adaptive immunity (e.g., macrophages and lymphocytes) are recruited to the inflammatory environment. In response to DAMPs, innate immune cells secrete cytokines that mediate normal cellular processes and communication between leukocytes and other cells, but also regulate the host's response to damage. Cytokines can exert pro-inflammatory and anti-inflammatory effects both locally and systemically. Activated cells of innate immunity produce the most important pro-inflammatory cytokines: interleukin (IL)-1, tumor necrosis factor-alpha (TNF-α), IL-6, IL-12, and IL-23. Conversely, the cells of adaptive immunity, activated T lymphocytes, produce interferon-gamma (IFN-γ) and IL-17. Some cytokines, such as IL-1α and IL-33, act as alarmins. They are released from host cells as a result of injury or death and subsequently mobilize and activate immune cells. The resolution of acute inflammation begins when PAMPs and DAMPs are no longer present. However, if the pathogen cannot be completely eradicated, or there is a constant source of self-antigens, or a growing tumor continuously disrupts tissue structure and induces the production of inflammatory cytokines, the second stage of inflammation, chronic inflammation, occurs. Long-lasting chronic inflammation can lead to many chronic diseases, including cardiovascular, respiratory, and neurodegenerative diseases and cancer, via dysregulation of various signaling pathways. A possible link between inflammation and cancer was established in the 19th century, when Rudolf Virchow described leukocytes within primary tumor tissue.
Today, it is clear that inflammation plays an important role in tumor biology and may act in an anti- or pro-tumorigenic manner. Acute inflammation in neoplastic tissues is indicative of an anti-tumor immune response. In chronic inflammation, the inflammatory microenvironment facilitates cell mutations and proliferation, leading to tumor development. Alteration of several signaling pathways may contribute to the development of genetic and epigenetic changes in local tissue cells. Additionally, chronic inflammation attenuates anti-tumor immunity and affects cell proliferation, death, senescence, DNA mutation, and angiogenesis. The question remains whether the inflammation is a consequence of the anti-tumor immune response or whether the tumor arose in the setting of chronic inflammation.
Elevation of pro-inflammatory peripheral biomarkers, a higher risk of depression in inflammatory and autoimmune diseases, the ability of immune mediators to induce depressive symptoms, and the fact that activated microglial cells reduce levels of serotonin and generate oxidative stress (OS) molecules all point to immune system involvement in the pathogenesis of depression. Blood-brain barrier (BBB) permeability, the brain-gut axis, and the brain-fat axis bring systemic, particularly inflammatory, changes into the spotlight in depression, not just central nervous system (CNS) disturbances. Specific depressive symptomatology has been explored in correlation with inflammatory changes in the periphery. Majd et al. (2020) conducted a narrative review and indicated an association between inflammation and neurovegetative symptoms of depression, such as sleep problems, fatigue or loss of energy, and appetite changes. Increased inflammatory markers were measured in patients with major depressive disorder: IL-1β, IL-6, TNF-α, and CRP. Peripheral inflammation could signal the brain through leaky regions in the BBB, the cytokine transport system, and the vagus nerve. They based their conclusions on Capuron et al. (2002), who demonstrated that IFN administration causes neurovegetative symptoms in the first two weeks, which are less responsive to antidepressant therapy, and depressed mood and cognitive symptoms later, which are responsive to antidepressants. Among other prominent theories of depression, the cytokine theory has played an important role in clinical practice. Cytokines and peripheral immune cell counts could serve as biomarkers for distinct subgroups of inflamed depression and direct further treatment. As recently noted in coronavirus disease (COVID-19), acute inflammation could be followed by behavioral changes termed "sickness behavior", the resolution of which follows eradication of the infection, although in some cases psychotropic medications are required to resolve mental symptoms, particularly agitation. It seems that some individuals have a predisposition to an exaggerated immune response to an infectious agent that could be harmful rather than protective and could also lead to a later onset of depression. The peripheral immune response is particularly exacerbated in depressive patients who are resistant to antidepressants. Resilient animals do not display exaggerated immune responses following acute and chronic stress, suggesting that positive affectivity could buffer the negative impact of stress on immunity. This hypersensitivity could be linked to the role of IL-6 as an important marker. The first meta-analysis with a robust sample, published recently, reported an adjusted association between IL-6 and future depression. In addition, a small prospective association between depression and IL-6 was observed in both directions. If inflammation is prolonged and chronic, it is important to consider whether symptoms meet the threshold for a diagnosis of a depressive episode and require treatment. However, the elevation of IL-6 may be associated not only with chronic inflammation but also with other pathological processes that may also be observed in depression.
The estimated high prevalence of depression in cancer patients and the insufficient data on the mechanisms by which tumors per se may alter brain functions, including mood and cognition, have engaged the preclinical research community in the search for novel cancer-induced models. The main advantages of using animal models in research are the control of confounding variables that are difficult to control in the clinical setting and the ability to unravel mechanistic interactions between neural, immune, and inflammatory processes through which tumors alter brain function. Animal models provide a better explanation for the impact of tumor-associated biological processes on affective and cognitive symptoms, independent of cancer-associated stress and treatments. Significant behavioral changes were found in mice with implanted tumors, characterized primarily by an increase in avoidance behavior and a decrease in immobility, defensive-submissive behavior, and non-social exploration. Changes in brain plasticity as a result of disturbed neural redox homeostasis were detected in the brains of tumor-bearing mice with depressive-like behavior. Structural evidence for a depressive-like state induced in a model of mammary cell carcinoma was also observed through decreased dendritic branching of pyramidal neurons in the medial prefrontal cortex. Lipopolysaccharide (LPS), a component of gram-negative bacteria, is commonly used to induce a potent inflammatory response and behavioral changes that rapidly resolve within 24 h, followed by hyperalgesia. Cytokine production in the tumor microenvironment can be detectable in the general circulation of experimental models of various tumor types, as well as in brain areas responsible for mood regulation. These studies reported increased plasma levels of IL-6, IL-12, TNF-α, IL-10, and IL-1β, but also increased expression of IL-1β and IL-10 messenger ribonucleic acid in the cortex and hippocampus, and increased levels of IL-6 and TNF-α in the hippocampus. Hippocampal inflammation was related to depressive-like behavior in breast-cancer-bearing mice, as well as in gastric-cancer-bearing mice, with a significant increase in IL-6, IL-1β, reactive oxygen species (ROS), and cyclooxygenase-2 (COX2). Models of chronic stress and smoke exposure induced depression-like behavior and lung cancer, respectively, in mice, with a synergistic effect in the combined model manifested through a more prominent inflammatory response. However, the impact of the antidepressant fluoxetine was significantly attenuated under conditions of chronic stress and LPS-induced inflammation, suggesting a role for chronic inflammation in the development of treatment-resistant depression. Several important underlying cascades can be identified in the development of depressiveness induced by inflammation. Activation of inflammasomes, particularly nod-like receptor family pyrin domain containing 3 (NLRP3), may occur through DAMPs or PAMPs mediated by toll-like receptors (TLRs) and subsequently activate important intracellular pathways such as type I IFN and the nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB). At the cellular level, repercussions of these processes could be the production of IL-1α, IL-1β, TNF-α, and IL-6, as well as the activation of microglia and the impairment of astrocytes in depression.
Somatic illnesses could be followed by mental disturbances, or mental disorders could be a typical response to illness that vanishes in reconvalescence with the illness resolution or persists after somatic illness recovery. Hart was the first to propose the concept that "sickness behavior" occurs as a short-term reaction in an acute inflammatory state and is crucial for the survival of the individual. However, when inflammation becomes chronic, as in autoimmune diseases, neurodegenerative diseases, cardiovascular diseases, diabetes and obesity, and cancer, mood symptoms predominate and can worsen the disease. Nearly 30% of cancer patients meet the criteria for a psychiatric diagnosis of depression, neurotic and stress-related disorders, adjustment disorders, sleep disorders, or delirium. The problem of insomnia is very pronounced in patients in the active and stable phases of cancer, especially when associated with a pain syndrome and distress. With regard to the onset and persistence of depressive symptomatology, it is interesting to consider the overlap with pain and fatigue as a symptom cluster, presented as two or three concurrent and interrelated symptoms that may or may not have a common etiology and pathophysiological pathways. Dodd et al. (2001) defined pain, fatigue, and insomnia in cancer patients as a cluster. Recently, Charalambous et al. (2019) provided preliminary evidence that targeting fatigue, anxiety, and depression in patients with breast and prostate cancer may have a meaningful effect on pain as a related symptom. A proposed underlying mechanism in the pathogenesis of these symptoms includes systemic inflammation with high pro-inflammatory cytokine levels, oxidative stress, and neuroendocrine-immune alterations. Inflammation-mediated tryptophan catabolism along the kynurenine pathway might contribute significantly to the development of fatigue and depression in cancer patients. Consideration of the common neuroimmune mechanisms of chronic pain and depression and the possible corrective anti-inflammatory effect of antidepressants seems to be of greater importance in this case. Therefore, researchers have developed a model of inflammatory cytokine activity in cancer to explain the co-occurrence of pain, fatigue, and sleep disturbances (summarized in ). Sometimes it is necessary to remember that the primary goal is to eliminate pain sensations to prevent the onset of depressive symptoms. Functioning could be especially compromised by pain sensations, which are also correlated with ongoing inflammation. Acute pain is associated with acute inflammation, whereas chronic pain reflects chronic inflammation. Chronic pain and depression in humans are associated with persistent low-grade inflammation rather than severe systemic inflammation, with only a partially common underlying mechanism. Neuropathic pain has been shown to be associated with increases in the tryptophan-metabolizing enzyme indoleamine 2,3-dioxygenase (IDO1) in the liver but not in the brain, and antagonism of the N-methyl-D-aspartate (NMDA) receptor by kynurenic acid. On the contrary, co-morbid depression was mediated downstream of spinal cord IL-1β signaling and the formation of kynurenine and its metabolites in the brain. Along with anxiety and depression, cancer-related fatigue is one of the most common symptoms in cancer patients. Fatigue and depression have similar clinical presentations.
Fatigue can occur independently, be a prodromal symptom of depressive disorders, or be part of a developed depression. Fatigue is defined as a loss of energy that can affect physical, mental, or cognitive functioning and is manifested by loss of motivation, apathy, and reduced concentration and attention. The above symptoms are important characteristics of depressive mood disorder. For these reasons, it is sometimes very difficult in clinical practice to distinguish whether it is just fatigue or depression. Recently, our research group pointed out that acute and chronic inflammation have a significant impact on fatigue and depression in patients with the inflammatory and neurodegenerative disease multiple sclerosis. We observed that peripheral inflammation was related to fatigue and postulated that brain inflammation in acute episodes could further lead to neurodegeneration and mood and cognitive changes. The new important clinical entity of paraneoplastic disorder should be considered in the context of the clinical field of autoimmune-mediated depression. Paraneoplastic neurologic syndromes (PNSs) are rare cancer-related diseases that can affect any level of the central and peripheral nervous systems. These disorders do not result from tissue invasion by the tumor, metastases, or metabolic or toxic effects of cancer therapy. PNSs are caused by an immune response directed toward neural self-antigens aberrantly expressed by neoplastic cells and marked by specific autoantibodies. Although PNSs can occur in any type of tumor, the most frequently associated malignancies include ovarian and breast cancer, small-cell lung cancer, thymoma, Hodgkin's lymphoma, and neuroendocrine tumors. The exact immunopathogenic mechanisms for most paraneoplastic syndromes are still unclear. The autoimmune theory postulates an immune cross-reaction between antigens expressed by tumor cells ("onconeural" antigens) and neurons. The autoimmune response, initially directed against tumor cells, results in further damage to neurons that physiologically express the same antigen. The target of the immune attack can be intracellular antigens (anti-Hu, anti-Yo, anti-Ma2, anti-Ri, GAD), antigens on synaptic receptors (NMDA, α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor, γ-aminobutyric acid receptor) or ion channels, and other cell-surface proteins (LGI1, GQ1b). The main effector of the immune response in PNSs associated with antibodies directed against intracellular antigens is the CD8+ cytotoxic T cell, whose action results in rapid and extensive neuronal death by cytotoxic activity. Mild signs of inflammation are commonly detected in the cerebrospinal fluid in the early phases of these disorders. Antibodies against plasma membrane antigens, such as ion channels and surface receptors, may play a pathogenic role as direct effectors in neural tissue injury. Mechanisms by which these antibodies affect the targeted cells include antigen internalization and degradation, activation of complement cascades, antibody-dependent cell-mediated cytotoxicity, and blockade of receptor function. Paraneoplastic syndromes of the CNS can present with neuropsychiatric and cognitive symptoms, abnormal movements, new-onset epilepsy, and sleep disorders. Over the past decade, evidence has accumulated of an intriguing relationship between cancer and neurodegenerative diseases.
Progression of both conditions is primarily defined by a set of molecular determinants that are complementarily dysregulated or share important underlying biological mechanisms that promote cell proliferation and apoptosis, including alarmins (discussed in detail in ). DNA and cell cycle aberrations, redox imbalance, inflammation, and immunity are closely associated with shared characteristics of cancer and neurodegenerative diseases. The question arises whether each depressive episode, with this kind of repeated excessive immune and autonomic dysregulation, could also contribute to neurodegeneration.
The basic mechanism of action of conventional therapy for malignant diseases, such as radiotherapy and chemotherapy, is to induce the death of tumor cells. However, in addition to the desired apoptosis, the process of tumor cell necrosis is often triggered as an accompanying phenomenon. Necrosis is followed by the release of cellular contents outside the cell. Thus, endogenous alarmins reach the intercellular space and become inducers and facilitators of inflammation. In this way, therapeutically induced tumor necrosis may be beneficial to the host. Therefore, another no less important mechanism of action of the therapy is the induction of inflammation and the strengthening of the antitumor immune response. Therapeutically induced tumor necrosis may benefit the innate antitumor immune response, as necrotic cells facilitate the maturation of antigen-presenting cells. Mature antigen-presenting cells, especially dendritic cells, induce a potent acquired antitumor response. Thus, the increase in systemic values of proinflammatory cytokines of innate immunity is accompanied by an increase in values of cytokines of acquired immunity. Chronic inflammation is present in and around most tumors, including those not causally related to an inflammatory process. The percentage of patients with inflammatory components in the tumor microenvironment varies from 28% to 63% depending on tumor type. Anti-tumor therapy is usually followed by a wave of acute inflammation that changes the intensity and course of the antitumor immune response. Although radiotherapy and chemotherapy remain options for the treatment of cancer, other treatments, such as immunotherapy, are increasingly being explored today. The use of monoclonal antibodies, immunomodulatory agents, modulated immunocompetent cells, or blocking antibodies for checkpoint molecules has shown significant results in cancer therapy and has fundamentally changed the approach to cancer therapy. The discovery of checkpoint molecule inhibitors was awarded the Nobel Prize. The blockade of cytotoxic T-lymphocyte-associated protein 4 (CTLA4) and programmed cell death protein 1 (PD1) molecules with antibodies is now very topical and has found its application in clinical practice. Research on blocking other checkpoint molecules, such as T cell immunoglobulin and immunoreceptor tyrosine-based inhibitory motif domain (TIGIT), cluster of differentiation 96 (CD96), and the natural killer receptor NKG2A, is in full swing. A strong effect of this type of therapy is the enhancement of both innate and acquired antitumor immune responses. This phenomenon is almost always accompanied by increased production of pro-inflammatory cytokines and a momentum of inflammation in the host. These effects could be unwanted in the propagation of inflammation and consequently trigger depressive symptomatology. Since alterations of various cytokines have been established in both depression and cancer, cytokine inhibitors deserve more detailed discussion. Infliximab, a TNF antagonist, improves depressive symptomatology by decreasing CRP levels and has also shown beneficial effects in treating cancer-related fatigue. Adalimumab, another TNF-α-specific neutralizing monoclonal antibody similar to infliximab, has been shown to significantly improve depressive symptomatology in patients with various chronic diseases, although studies in psychiatric patients are lacking.
Etanercept, another TNF-α antagonist, reduced depressive-like behavior in preclinical models, and also in clinical studies in patients with psoriasis and rheumatoid arthritis. Pentoxifylline, a methylxanthine drug that acts as a strong non-selective TNF-α inhibitor, has improved depressive behavior in animal models and has also shown positive results as an add-on treatment for depression. Ustekinumab, an inhibitor of IL-12 and IL-23, dupilumab, an antagonist of the IL-4 receptor, ixekizumab, an IL-17A inhibitor, and guselkumab, an IL-23 inhibitor, have all been investigated for their antidepressant action. Although cytokine inhibitors have a more targeted effect on depression-related inflammation, these results were limited to specific patient groups. Because cytokine inhibitors are large molecules, they cannot cross the BBB, suggesting that their anti-inflammatory action is limited to peripheral TNF-α. This does not preclude their efficacy, but further studies are needed to determine their potential for treating depression in the presence of concomitant carcinoma. Conversely, the re-establishment of balance in the peripheral secretion of cytokines is observed after antidepressant use and the resolution of depression. The most recent pharmacological protocols for the treatment of depression in carcinoma target monoamine neurotransmitters, brain-derived neurotrophic and inflammatory factors, and glutamate and its receptors, using monoamine oxidase inhibitors, tricyclic drugs, selective serotonin reuptake inhibitors (SSRIs) and selective serotonin noradrenaline reuptake inhibitors (SNRIs), glutamatergic drugs, opioids, and benzodiazepines. In vitro, SSRIs have been shown to inhibit the release of TNF-α and NO from activated microglia, impede calcium ion influx, decrease the activation of the Janus kinase-signal transducer and activator of transcription (JAK-STAT) pathway, and reduce inflammatory changes. SSRIs and SNRIs decrease blood and tissue cytokines and regulate complex inflammatory pathways involving NF-κB, inflammasomes, TLR4, and peroxisome proliferator-activated receptor gamma (PPAR-γ). Liu et al. (2020) showed in their systematic review and meta-analysis that patients with depression who responded to treatment had lower baseline levels of IL-8, the chemotactic factor for neutrophils, than non-responders. In addition, treatment with antidepressants decreases TNF-α and IL-5 levels. However, long-term treatment with SSRIs has been postulated to increase Th1- and decrease Th2-derived cytokines. Celecoxib, nonsteroidal anti-inflammatory drugs, and minocycline, but also statins, polyunsaturated fatty acids, pioglitazone, modafinil, corticosteroids, the vitamin D2 analog paricalcitol, etc., have already been reported as classical anti-inflammatory drugs with consequent antidepressant effects. Celecoxib, a selective COX-2 inhibitor, exerts anti-depressive action by decreasing IL-6 expression and/or levels. Minocycline, a second-generation tetracycline antibiotic, can cross the BBB more efficiently than other tetracycline antibiotics. It has anti-inflammatory, antioxidant, and neuroprotective effects within the CNS by preventing the release of inflammatory cytokines such as IL-6 and TNF-α. It also inhibits neutrophil migration, degranulation, oxygen-free radical production, and NO release.
Statins, known as lipid-lowering agents, have shown anti-inflammatory potency by decreasing levels of CRP and low-density lipoprotein (LDL) cholesterol and the production of TNF-α and IFN-γ in stimulated T cells, but also by reducing immune activation of T-helper cells. Pioglitazone, primarily used as an antidiabetic drug, acts as a PPAR-γ agonist and decreases the expression of IL-1β, IL-6, TNF-α, inducible nitric oxide synthase (iNOS), and monocyte chemoattractant protein-1 (MCP-1/CCL2). It ameliorates depression-like behaviors by inducing the neuroprotective phenotype of microglia. The psychostimulant modafinil reduces brain inflammation by impacting monocyte recruitment and activation, T cell recruitment and differentiation, cytokine production, and glial activation. Corticosteroids, known for their anti-inflammatory properties, have also been studied for their antidepressant properties. Because of their various side effects, which depend on dosage and duration of treatment, they should be used with caution. Paricalcitol, a vitamin D2 analog, regulates microglia-mediated neuroinflammation via decreased production of IL-1β, inhibition of NF-κB and NLRP3 signaling, and caspase-1 overexpression. In examining the link between depression and cancer, numerous experimental studies have revealed that activation of the kynurenine pathway of tryptophan degradation due to inflammation plays an important role in the evolution and persistence of both diseases. The Hamilton group study showed that a history of depression, anxiety, and fear of tumor recurrence was associated with greater use of complementary treatment approaches. The supplements most commonly used by patients are selenium (Se), folic acid, and omega-3 fatty acids. Cancer patients often turn to antioxidants; among them, Se is particularly interesting, either from an inorganic source (sodium selenate) or as the amino acid selenomethionine. However, it is questionable whether its action can be considered exclusively antioxidant, because it can also act as an oxidant and exhibit an anticarcinogenic effect. Due to its antioxidant effect, Se is suitable as a supplement in depressive states and is an essential trace element for thyroxine metabolism. Se deficiency lowers the antioxidant protection of the brain and may lead to brain damage: the turnover of dopamine and serotonin increases, while that of noradrenaline and 5-hydroxyindoleacetic acid decreases compared to controls. The role of folic acid is reflected in the synthesis of serotonin, and its supplementation is advised for patients with depression. However, at high doses of folic acid, an adverse action may be observed because of its role in metabolism supporting the potential proliferation of cancer cells. Other widely used supplements are omega-3 fatty acids, predominantly eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA). They influence optimal cell structure and function and affect synaptic neurotransmission. Therefore, they are recommended as complementary therapy in depression. Improvements are also expected from fatty acid supplementation during chemotherapy and radiotherapy, as they affect inflammation, apoptosis, eicosanoid synthesis, etc. New therapeutic approaches may include drug-supporting/delivery systems as well as assorted supplements. Among the various supplements, zeolites are in the spotlight.
A number of zeolite-associated positive effects are reported in the literature: antioxidant and anticancer performance, ion exchange, and adsorption/encapsulation features, to name a few. These aluminosilicates can be of synthetic or natural origin, such as clinoptilolite, and are recognized for human application. Interestingly, synthetic zeolites can be designed to meet the specific demands of drug carrier systems and seem to be a far better choice, but they are still awaiting general pharmaceutical recognition. Some therapeutic approaches may benefit from zeolite use, i.e., sustained drug-delivery systems, which are considered improved therapeutic pathways compared to regular ones. Over the past two decades, researchers have competed to find ideal carriers, exploring a possible synergistic effect between the selected support and the drug itself. There are several reasons for this: firstly, a specially designed carrier of nanometric dimensions must be considered to sustain BBB passage. To meet this requirement, animal testing has moved forward with some interesting applications. For example, infrared-activated BBB permeability may be accomplished by utilizing a zeolitic imidazolate-based nanocomposite for intracerebral quercetin delivery, providing neuroprotective effects. Furthermore, the zeolite platform must encompass enough functional centers to efficiently adsorb/encapsulate drugs. Thus, zeolitic composites were proposed for synergetic tumor thermo-chemotherapy using doxorubicin drug delivery that sustains tumor reduction. In the field of mental disorders, zeolite testing is under-explored, with the majority of studies employing only animal models. One way to treat induced bipolar disorder in rats with probiotic cultures, alone and in zeolite-supported formulations, was suggested by Alchujyan et al. Interestingly, probiotics exerted a positive effect on arginase/nitric oxide synthase activities without significant benefits from the zeolite carrier, as both formulations led to beneficial histopathological brain alterations and subsequent behavioral progress in rats. Several reports suggest that the recovery of cancer patients can be promoted by zeolite supplementation. This hypothesis is based on zeolites' excellent adsorption capacity for histamine, which may be regarded as beneficial for pain relief. The safety of zeolite frameworks has been extensively studied in vitro and in vivo, while others have investigated, in double-blinded trials, oral clinoptilolite intake in cancer patients to treat peripheral neuropathy induced by chemotherapy. As reported by Vitale et al., the extent of neuropathy was quite similar, occurring in 70.6% and 64.3% of patients in the placebo and zeolite supplementation groups, respectively. Bearing in mind the good adsorption properties of zeolites, their role in the removal of heavy metals is often mentioned in the context of the prevention of mental disorders. A prospective use of zeolite/ethylenediaminetetraacetic acid as a lead scavenger has been reported, confirming the role of clinoptilolite in reducing neurotoxicity in mice. Lead removal has also been addressed in the context of autism spectrum disorder. Injection of zeolite particles has been proposed, with the possibility of stool excretion after metal adsorption, but without analysis of the detrimental effects zeolite nanoparticles could have on the hematological and gastrointestinal systems.
As a multifunctional material, Y zeolite has been applied as an electrode support for the ruthenium ammine complex in the electrochemical detection of dopamine/serotonin. Extending this system toward zeolite's possible interaction with L-dopa, as a dopamine precursor, may be sound due to the several hydrogen bonds that can be formed. However, this has emerged as a premise for raising dopamine levels, which is challenging to test and confirm. Expectedly, these propositions remain hypothetical.
Immune system alteration is the common denominator of depression and cancer. Additionally, alterations in the immune response seem to overlap in both pathological conditions. The psychiatric correlates are accompanied by immune disturbances, and we still wonder to what extent the resolution of inflammation in carcinoma might simultaneously contribute to the resolution of the associated depressive symptomatology. Recognition of acute mediators of inflammation is very important, and it is even more important to prevent the transition from acute to chronic inflammation through early anti-inflammatory interventions. Alarmins induce local (central) inflammation through TLR signaling, facilitating NF-κB transcriptional activity and NLRP3 inflammasome activation in neuronal and non-neuronal cells. Thus, pro-inflammatory cytokines produced in the periphery could activate inflammation in the brain and subsequently modulate the release and function of neurotransmitters, leading to the onset of depression. Previous clinical investigations have shown that the cytokines IL-1, IL-6, IFN-γ, and TNF-α play key roles in these processes. These same cytokines are among the major mediators of the anti-tumor immune response and the chronic inflammation that usually accompanies it. Hypersensitivity and chronification of inflammation suggest an exhausted and insufficient immune response. In conclusion, peripheral inflammation could trigger central immune-inflammatory pathways that lead to pain, fatigue, and depressive symptomatology in patients with cancer. Cancer treatment strategies, as well as conventional psychotropic drugs, could help balance the inflammatory milieu. A new equilibrium in both conditions may be achieved by variously targeted anti-inflammatory strategies. Anti-inflammatory drugs are well known, but new possible pathways and challenging add-on therapies have yet to be found.
Cytological Samples: An Asset for the Diagnosis and Therapeutic Management of Patients with Lung Cancer

Lung cancer has become the leading cause of cancer death for men and women. The management of this cancer has evolved over the last decade with the emergence of new therapies, such as tyrosine kinase inhibitors and immunotherapy. Therefore, patient samples must allow for both the diagnosis and molecular testing, as well as PD-L1 quantification. As patients are often diagnosed at an advanced stage, pathologists must use samples carefully and appropriately, as the diagnosis is no longer the only result needed for patient management. Cytological samples have a place in the management of patients with a pulmonary mass, especially those with advanced disease for whom surgery is not a therapeutic option. Patients with advanced disease account for 45% of patients diagnosed with lung cancer. The latest version of the World Health Organization (WHO) classification devotes an entire section to cytology in lung cancer, showing the importance of these samples in this context. In this article, we report the ability of cytological samples to support the diagnosis of lung cancer and to provide results critical for therapeutic management, such as the molecular profile and PD-L1 expression.
2.1. Samples Collection

This study included cytological samples in which suspicious cells were observed and in which immunocytochemistry was performed to characterize these cells. The samples were collected between January 2021 and September 2022 in the Cell Biology Laboratory (Timone Hospital, Assistance Publique des Hôpitaux de Marseille, Marseille, France). The types of cytological samples were pleural, pericardial, and peritoneal effusions; bronchoalveolar lavage fluids; endobronchial ultrasound guided transbronchial needle aspiration (EBUS-TBNA) lymph nodes; EBUS-TBNA mediastinal or pulmonary masses; cerebrospinal fluid; and bone marrow aspiration. Samples were not initially fixed and were kept at 4 °C until slide preparation (smears or cytospins). Slides were stained with Papanicolaou and May-Grünwald–Giemsa stains. The conventional cytological diagnosis was performed by the Cell Biology Laboratory. PD-L1 testing was performed by the Anatomopathology Laboratory (Assistance Publique des Hôpitaux de Marseille, Marseille, France). Next generation sequencing (NGS) was performed by the Oncobiology Laboratory (Assistance Publique des Hôpitaux de Marseille, France). Samples included in this study were obtained from patients attending the Assistance Publique des Hôpitaux de Marseille for diagnosis and treatment. Results of the molecular testing and clinical data were retrospectively analyzed. This project was approved by the local ethics committee (PADS22-389).

2.2. Immunocytochemistry on Cytospins to Phenotype Tumor Cells

Samples were prepared on cytospins as previously described, following the manufacturer's instructions. At a minimum, one wash was performed between each step. Slides were fixed with paraformaldehyde (PAF) 4% for 10 min and then incubated with the peroxidase-blocking solution for 30 min. After being washed, slides were incubated with the SensiTEK HRP kit (ScyTek Laboratories, Logan, UT, USA) for 10 min. Primary antibodies were incubated for 30 min (see for the list of primary antibodies). Then, the biotinylated secondary antibody was incubated for 15 min, followed by streptavidin/HRP for 20 min and DAB Quanto chromogen (Diagomics, Blagnac, France) for 5 min. Nuclei were counterstained with Mayer's hemalum solution. Slides were mounted with Aquatex® (Merck Millipore, Darmstadt, Germany) and observed under an optical microscope (Leica, Wetzlar, Germany). Mouse isotype IgG and rabbit polyclonal antibodies were used as negative controls as part of best practice.

2.3. Immunohistochemistry on Cytoblocks for PD-L1 Expression

Cytoblocks were prepared to perform PD-L1 testing. Cytological samples were fixed with formalin 4% for 6 h and then centrifuged for 5 min at 670× g. The supernatant was discarded. The Cytoblock™ kit (Epredia, Kalamazoo, MI, USA) was used to prepare cytoblocks following the manufacturer's instructions. A slide stained with H&E was systematically prepared before PD-L1 immunostaining to confirm the cytoblock quality and evaluate the adequacy of the number of tumor cells. PD-L1 immunostaining (QR001, Quartett, Germany) was performed with the OptiView DAB detection kit on a BenchMark Ultra (Ventana, Roche, Basel, Switzerland). A positive control was systematically included as part of best practice.

2.4. Next Generation Sequencing

NGS was performed from frozen cell pellets as previously described. In short, total nucleic acids were extracted with the Maxwell RSC Cell DNA Kit (Promega, Madison, WI, USA) and RNAs were extracted with the Maxwell RSC Simply RNA Blood Kit (Promega). The detection of mutations and fusions was performed by NGS on the Ion Torrent S5XL (ThermoFisher, Waltham, MA, USA) with a custom panel, Oncomine Solid Tumor and Oncomine Solid Tumor+ (OST/OST+), and the Oncomine Focus RNA assay kit (ThermoFisher, Waltham, MA, USA) (see for the fusion transcript panel and the mutation transcript panel). Ion Torrent Suite, Ion Reporter software (ThermoFisher, Waltham, MA, USA), and a pipeline developed in our laboratory were used for the interpretation of the results.
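As a purely illustrative aside, the kind of post-caller filtering step that an in-house interpretation pipeline might apply downstream of Ion Reporter exports can be sketched as follows. The gene list, cutoff values, and record format below are assumptions for illustration only and do not describe this laboratory's actual pipeline.

```python
# Illustrative sketch only: a generic post-caller filtering step of the
# kind an in-house NGS interpretation pipeline might apply. The gene
# list, thresholds, and record layout are hypothetical assumptions.

PANEL_GENES = {"EGFR", "ALK", "ROS1", "KRAS", "TP53", "HER2",
               "PIK3CA", "STK11", "RET", "BRAF", "MET"}

def filter_variants(variants, min_vaf=0.05, min_depth=500):
    """Keep panel-gene variants passing basic quality thresholds.

    Each variant is a dict with 'gene', 'vaf' (variant allele
    fraction, 0-1), and 'depth' (read depth at the position).
    """
    kept = []
    for v in variants:
        if v["gene"] not in PANEL_GENES:
            continue  # off-panel call, not reported
        if v["vaf"] < min_vaf or v["depth"] < min_depth:
            continue  # likely sequencing noise or insufficient coverage
        kept.append(v)
    return kept

# Mock example: the EGFR call passes; the TP53 call fails the VAF cutoff.
calls = [
    {"gene": "EGFR", "vaf": 0.18, "depth": 1450},
    {"gene": "TP53", "vaf": 0.02, "depth": 900},
]
print(filter_variants(calls))
```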
3.1. General Results

Between January 2021 and September 2022, 259 cytological samples containing cells suspected of malignancy were analyzed by immunocytochemistry to characterize the type and origin of the cancer. Immunocytochemistry allowed for the characterization of the type of cancer in 248 cases (95.7%). In 59 samples (mostly pleural and peritoneal effusions), the immunocytochemistry confirmed the malignancy but with an origin other than lung (for example, ovarian, breast, colorectal, or pancreatic carcinoma, mesothelioma, neuroblastoma, lymphoma, or melanoma). Concerning the 189 samples with lung cancer, lung adenocarcinoma was diagnosed in 106 cases, followed by non-small cell lung cancer not-otherwise specified (NSCLC NOS) (44 cases), squamous cell carcinoma (20 cases), and neuroendocrine tumors (19 cases), including large cell neuroendocrine carcinoma (3 cases), small cell lung cancer (15 cases), and 1 case of carcinoid tumor (see and ). In 11 cases, samples contained cells that were suspected to be malignant, but the immunocytochemistry did not confirm the malignancy, either because the sample was too necrotic or because the sample contained a low number of tumor cells (<1% of total cells). Among the 189 samples diagnosed with lung cancer, 72 (38.1%) were pleural effusions, 71 (37.5%) were lymph nodes collected by EBUS-TBNA, 29 (15.3%) were mediastinal or pulmonary masses collected by EBUS-TBNA, 6 (3.2%) were bronchoalveolar lavage fluids (BAL), 6 (3.2%) were pericardial effusions, 2 (1.1%) were cerebrospinal fluids (CSF), 2 (1.1%) were peritoneal effusions, and one (0.5%) was bone marrow. Patients were mostly diagnosed at stage IV (75.1%) and were current or former smokers (73%).

3.2. Molecular and PD-L1 Results

PD-L1 testing was performed on cytoblocks from 115 cytological samples. No result could be reported in 29 cases (25.2%) because there were fewer than 50 tumor cells. Of the 86 samples that could be analyzed, PD-L1 expression was negative (<1%) in 22 cases (25.6%), between 1 and 49% in 34 cases (39.5%), and ≥50% in 30 cases (34.9%). Examples are shown in . For 35 patients, the cytoblock was not prepared because no sample remained after immunocytochemistry and NGS testing, or because the sample was too necrotic. For 20 patients, the cytoblock was available but PD-L1 was not performed on it, as the test had already been done on a biopsy or on the resected tumor. Next generation sequencing (NGS) was performed on 140 of 150 cases (93%) diagnosed with lung adenocarcinoma or NSCLC NOS. An EGFR mutation was found in 20 cases (14.2%), of which 12 received a tyrosine kinase inhibitor. The other 8 patients were not at stage IV, were treated with best supportive care, or their treatment was not known. ALK fusions were found in 4 (2.8%) cases and ROS1 fusions in 2 (1.4%) cases. All patients with ALK or ROS1 fusions were treated with the appropriate tyrosine kinase inhibitors. A TP53 mutation was found in 67 cases (47.8%), 46 of which had another associated mutation. A KRAS mutation was found in 48 cases (34.3%), of which 29 had another associated mutation. Other mutations found included HER2, PIK3CA, STK11, RET, BRAF, DDR2, CTNNB1, SMAD4, PTEN, and POLE. No mutation or fusion was found in 19 cases (13.6%).

3.3. Improvement of Diagnosis and Therapeutic Management with Cytological Samples

We analyzed the impact of cytology results on the management of 189 patients with lung cancer. For 124 (65.6%) cases, the result was conducive to the diagnosis and allowed for therapeutic management. Among them, 37 cases had a biopsy and a cytological sample taken during the same procedure: the same diagnosis was obtained on biopsy and cytology for 24 patients, while for 13 patients the biopsy was free of tumor cells and the diagnosis was made on cytology alone. In 3 cases, a biopsy was recommended after cytology because the samples were necrotic or because there was not enough material to perform the NGS. For 40 (21.2%) patients already known to have lung cancer, a cytological sample was taken in the context of suspected lung cancer progression (demonstrated by imaging). The presence of tumor cells in the cytological sample confirmed the progression and led to a change of therapeutic line. For 19 (10.0%) cases, cytology results had no impact on the therapeutic management. In most of these cases, the metastatic site was already known (pleural, peritoneal, or pericardial) and the cytological analysis was performed because the effusion had to be drained. In other cases, the clinical status deteriorated rapidly and the patient died within a few days. In 6 (3.2%) cases, no information was available. According to our data, the results obtained from cytological samples allowed therapeutic management for 87% of patients with lung cancer.
In this study, we used cytospin specimens for staining and immunocytochemistry and cytoblocks for PD-L1 testing. The methods of cytological preparation each have their advantages and disadvantages. For example, cytoblocks can be compared to biopsies, while cytospin preparation provides good morphology that can easily be used for immunocytochemistry. Regardless of the method selected, rigorous quality controls are essential. Studies report that small biopsies and cytological samples can account for up to 70% of specimens for the diagnosis of lung cancer. In our study, immunocytochemistry on cytospins made it possible to determine the type of cancer in 248 of the 259 cytological samples in which suspected tumor cells were observed. For the remaining 11 samples, the immunocytochemistry did not allow characterization of the cells, either because of the quality (necrotic or severely altered cells), the lack of volume (e.g., cerebrospinal fluid), or the low number of suspected tumor cells (<1% of total cells). Despite this, the classification of tumor cells was successful in over 95% of cases. This rate shows the value of cytological samples for identifying tumor cells. Our results are consistent with other studies evaluating the ability of cytological samples to diagnose lung cancer. For example, Rekhtman et al. compared 192 pre-operative cytology specimens with histology and found a concordance of 96%. Proietti et al. assessed the efficacy of lung cancer subtyping in cytology and biopsy samples from 941 patients and found concordance in 92.8% of cases. Arnold et al. conducted a prospective study to investigate the role of cytology in pleural effusions. They included 921 pleural effusions, with 166 lung cancers and 100 lung adenocarcinomas. The sensitivity for the diagnosis of lung cancer was 56%, and 82% for adenocarcinoma. Other studies found similar results for the detection and characterization of tumor cells in pleural effusion. In the last decade, the emergence of immune checkpoint inhibitors and targeted therapies has modified the way in which cytological samples are managed. In addition to the diagnosis, the sample must allow for the assessment of PD-L1 expression and molecular testing. PD-L1 expression is a biomarker that predicts which patients are more likely to respond to immunotherapy. Immunotherapy can be prescribed as first-line monotherapy for patients with advanced NSCLC and ≥50% PD-L1 expression, and as second-line therapy for metastatic NSCLC patients with ≥1% PD-L1 expression. In this context, evaluation of PD-L1 expression on cytological samples is essential. This test can be challenging; it needs an adequate protocol and quality controls. For example, macrophages and mesothelial cells must be properly recognized to avoid counting them in the percentage of PD-L1-expressing cells. Cytoblocks are the most commonly used material for analysis and provide concordant results compared to biopsies, even if smears can also be used. A recent multicenter study including 264 patients concluded that PD-L1 expression on cytological samples correctly predicts the efficacy of immunotherapy. In our study, PD-L1 expression was tested on only 115 cytoblocks. When PD-L1 status had already been determined for a patient on a biopsy or surgical specimen, the analysis was not repeated on the cytological sample. Molecular testing must be performed for patients with advanced NSCLC, as several oncogenic drivers are targetable.
The International Association for the Study of Lung Cancer (IASLC) recommends testing for EGFR mutations and ALK and ROS1 fusions. HER2 , RET , MET , BRAF , and KRAS are not indicated as routine stand-alone assays but may be included in a large molecular testing panel . Accordingly, the European Society for Medical Oncology (ESMO) recommends the use of NGS that includes at least EGFR common mutations, ALK fusions, MET mutations, BRAF mutations, and ROS1 fusions . The absence of formalin in cytological samples facilitates molecular testing by NGS or polymerase chain reaction (PCR) . Molecular testing on cytological samples has the advantage of providing results even if the sample volume or the number of tumor cells is low . In our study, 93% of patients with lung adenocarcinoma or NSCLC had NGS results. Among them, only 13.5% did not show a molecular alteration in the genes included in the tested panel. We have previously demonstrated the feasibility of detecting ALK and ROS1 fusions from cytological samples, either by immunocytochemistry completed by fluorescence in situ hybridization (FISH) if positive, or by NGS, with high concordance between the two techniques . ALK and ROS1 fusions are now routinely tested by immunocytochemistry on cytological samples as part of the diagnostic workup . Rekhtman et al. showed the feasibility of testing for EGFR and KRAS mutations in thoracic cytology . In our study, NGS was performed on the frozen cell pellets. The supernatant from post-centrifuge liquid-based cytology can also be used for NGS . The two main factors that prevent all techniques (i.e., immunocytochemistry, molecular testing, and PD-L1) from being performed are low sample volume and low representativeness of tumor cells. Regarding sample volume, pathologists should inform clinicians that the larger the volume sent to the laboratory, the more adequate the sample will be for diagnosis and additional testing. Particularly for effusions (pericardial, pleural, and peritoneal), where the puncture can evacuate up to several liters, the pathologist may receive only one or two milliliters. The adequacy of a standardized volume varies depending on the cellularity and the percentage of tumor cells. Currently, no standardized volume requirement exists, but several studies recommend 50 mL of fluid . Dalvi et al. recommend at least 20 mL but demonstrated that the tumor cell proportion is critical for assessing diagnosis and molecular analysis . A low number of tumor cells is a reason to perform a biopsy and obtain adequate material for a new test. With a low number of tumor cells detected (1–5%), immunocytochemistry and NGS can potentially be interpreted . However, PD-L1 interpretation requires at least 100 tumor cells; otherwise, pathologists are unable to obtain a result.
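The decision logic described above, combining the 100-tumor-cell evaluability requirement with the treatment-line thresholds, can be summarized in a short sketch. This is a simplified illustration only: the function name and the counts in the usage example are hypothetical, and real-world prescription depends on many additional clinical criteria and on validated scoring by a pathologist.

```python
def pdl1_eligibility(tumor_cells_counted: int, pdl1_positive_tumor_cells: int) -> str:
    """Classify immunotherapy eligibility from a PD-L1 tumor proportion score (TPS).

    Simplified rule based only on the thresholds cited in the text:
    at least 100 tumor cells for evaluability, >=50% for first-line
    monotherapy, >=1% for second-line therapy.
    """
    if tumor_cells_counted < 100:
        return "non-evaluable (fewer than 100 tumor cells)"
    tps = 100.0 * pdl1_positive_tumor_cells / tumor_cells_counted
    if tps >= 50:
        return f"TPS {tps:.0f}%: eligible for first-line monotherapy"
    if tps >= 1:
        return f"TPS {tps:.0f}%: eligible for second-line therapy"
    return f"TPS {tps:.0f}%: below the 1% threshold"

# Hypothetical usage:
print(pdl1_eligibility(250, 140))  # TPS 56%: eligible for first-line monotherapy
print(pdl1_eligibility(80, 60))   # non-evaluable (fewer than 100 tumor cells)
```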
Over the last decade, major therapeutic advances in the treatment of lung cancer, with the introduction of targeted therapies and immune checkpoint inhibitors, have forced pathologists to change their practice and use cytological samples differently. Diagnosis alone is no longer enough, and the pathologist must retain a portion of the sample to perform PD-L1 analysis and molecular testing. Cytological samples are obtained by minimally invasive procedures and can provide enough material for the diagnosis and therapeutic management of patients with lung cancer.
A Novel Technique of Amniotic Membrane Preparation Mimicking Limbal Epithelial Crypts Enhances the Number of Progenitor Cells upon Expansion | c0b49748-feb6-4135-9fa1-15fdb55b159c | 10001367 | Anatomy[mh] | The homeostasis of the dynamic cellular organization in the cornea mainly depends on the regenerative efficiency of the stem cells in the surrounding limbus . Tissue-specific human limbal epithelial stem cells (hLESCs) residing in the limbal epithelial crypts of the palisades of Vogt continuously compensate for the loss of superficial human corneal epithelial cells (hCECs) . The insufficient compensation of diminished hCECs in the corneal epithelium due to the lack or malfunction of hLESCs leads to severe ocular surface disease or so-called limbal epithelial stem cell deficiency (LSCD) . The hLESCs play an essential role in epithelial differentiation, angiogenesis, and extracellular matrix (ECM) organization . Diverse therapeutic approaches have been used to treat both monocular and binocular LSCD. However, cultivated limbal epithelial stem cell transplantation (CLET) of expanded autologous limbal tissue seems to be the most common method for monocular LSCD . The CLET procedure is based on isolating the limbal biopsy from the contralateral eye and treatment with a proteolytic enzyme to digest the surrounding ECM, which helps the hLESCs get released and migrate from the niche. Furthermore, the digested limbal tissue or single isolated hLESCs are harvested ex vivo in a medium containing stem-cell-supporting growth factors and supplements, achieving cell expansion and graft tissue synthesis . Upon transplantation, hLESCs reside on the damaged corneal-limbal tissue, re-creating the limbal stem cell niche that allows epithelial regeneration. Following the existing standard protocols, the reported success rate of CLET varies. A favorable morphological outcome implying stable, intact, completely epithelized and avascular corneal surface is reported as 46.7% to 80.9%, whereas success as a functional outcome such as visual acuity varies from 60.5% to 78.7% . Successful transplantation is directly dependent on the graft tissue quality and the percentage of hLESCs/early progenitor cells in the graft. For successful transplantation, at least 3% of the cells in the expanded cell culture must express the p63 marker . Therefore, establishing a protocol that provides a high percentage of the hLESCs/early progenitor cells in the transplantation graft is of high importance. Human amniotic membrane (HAM) has proven to be a very efficient therapeutic tool in many ocular surface diseases, supporting wound healing and regeneration while suppressing inflammation , angiogenesis , and fibrosis , and it possesses anti-microbial features . It is used for corneal epithelial regeneration, conjunctival reconstruction, glaucoma interventions, and the treatment of corneal melting and perforations. Importantly, it is one of the most used carriers for the ex vivo expansion of hLESCs . HAM contains stem cell niche factors that support maintenance . Generally, such maintenance depends on the inhabitance of the stem cells in a specific niche that allows their anchoring and communication with supporting cells, the release of specific growth factors and cell cycle molecules, and the involvement of evolutionary conserved molecular pathways. Within the niche, the stem cells undergo symmetric or asymmetric division to transient amplifying cells (TACs) that leave the environment and become functionally mature corneal cells . 
It is known that the physical cues of the cellular environment guide stem cell fate . It is also suggested that biomechanical changes in the limbal stromal niche affect hLESC fate . No less importantly, mechanical and environmental changes in the corneal tissue have implications for some corneal diseases and pathologies . We hereby present a novel suturing preparation technique that causes the three-dimensional (3D) radial folding of the HAM, mimicking crypt-like formations. The novel approach may allow the hLESCs, upon limbal biopsy expansion and cultivation ex vivo, to reside in the undulated crypts of the HAM. This may potentially maintain the putative characteristics of the expanded hLESCs and thus ensure a higher quality of the expanded graft tissue compared to the conventional state-of-the-art method. Therefore, we aimed to compare the progenitor/differentiation state of the cells cultivated in the crypt-like HAMs vs. the cells cultivated on the flat HAMs.
The Regional Committee for Medical and Health Research Ethics in South-Eastern Norway (No 2017/418) approved tissue harvesting and laboratory procedures, and all tissue collections complied with the Guidelines of the Helsinki Declaration. Unless stated otherwise, all reagents were purchased from Merck (Darmstadt, Germany).
2.1. Human Amniotic Membrane (HAM)
A placenta was collected after a scheduled cesarean section from a full-term pregnancy. Informed consent and institutional board review approval had previously been obtained from the patient. According to the standard protocol, the placenta was immediately transported in a sterile container and further processed under sterile conditions . Thorough washing with 0.9% NaCl (Fresenius Kabi AB, Uppsala, Sweden) or 0.9% NaCl containing 100 U/mL Penicillin, 100 μg/mL Streptomycin (P4333), and 2.5 μg/mL Amphotericin B (A2942) was repeatedly performed. The HAM was then separated from the chorion by blunt dissection, washed free of residual blood, and transferred onto a nitrocellulose filter carrier, pore size 0.45 μm (111306-47-CAN, Sartorius, Göttingen, Germany), with the epithelial side up, then divided into 3 × 3 cm and 5 × 5 cm pieces. HAM pieces were cryopreserved in 50% glycerol, 48.5% DMEM/F12 (31331028, Invitrogen, Carlsbad, CA, USA), 100 U/mL Penicillin, 100 μg/mL Streptomycin, and 2.5 μg/mL Amphotericin B and stored at −80 °C.
2.2. HAM Preparation for hLESC Expansion and Cultivation
Before use, HAMs were thawed, warmed to room temperature, and washed three times with a medium containing DMEM/F12, 100 U/mL Penicillin, and 100 μg/mL Streptomycin. Thereafter, the HAMs were placed on polyester membrane Netwell™ inserts (3479, Corning Inc., New York, NY, USA), 24 mm in diameter, with the epithelial side up, using two different techniques : (1) HAMs 3 × 3 cm in size ( .1A) were peeled from the nitrocellulose filter paper. The HAMs were then placed and stretched on top of the polyester membrane and fixed with eight individual sutures ( .1B) near the edge of the polyester membrane. The HAMs were tightly stretched on top of the membrane, creating a flat surface ( .1C). Excess HAM tissue remaining at the edge of the polyester membrane was carefully removed with a disposable sterile scalpel. (2) The other approach used HAMs 5 × 5 cm in size ( .2A) placed on top of the membrane and sutured so that the HAMs were loosely attached. These HAMs were fixed with individual sutures ( .2B). In addition, an individual suture was placed in the center of the HAM/polyester membrane to obtain the folding of the HAM and to keep it in close contact with the membrane ( .2C). The excess HAM tissue at the edges was again removed accordingly. The sutured HAMs on the Netwell™ inserts were immersed in DMEM/F12 medium containing 100 U/mL Penicillin and 100 μg/mL Streptomycin and kept at 37 °C, 5% CO₂, and 95% air overnight to remove any remaining glycerol from the HAM.
2.3. Limbal Biopsies and Human LESC Harvesting
Following corneal transplantation, the remaining human corneal-scleral rings from three donors (n = 3) were divided into twelve limbal biopsies of equal size and thoroughly washed with DMEM/F12 medium containing 100 U/mL Penicillin and 100 μg/mL Streptomycin. The biopsies were treated with the neutral protease Dispase II (2.4 U/mL, 4942078001, Roche Diagnostics, Mannheim, Germany) for 10 min at 37 °C. The dissociation process was blocked using Fetal Bovine Serum (FBS, F2442).
The limbal biopsies were then placed centrally on top of the HAMs, with the epithelial side down, and submerged in a standardly used complex medium (COM). COM consisted of DMEM/F12, Penicillin (100 U/mL), Streptomycin (100 μg/mL), Amphotericin B (2.5 μg/mL), human epidermal growth factor (2 ng/mL, E9644), insulin (5 μg/mL), sodium selenite (5 ng/mL) and transferrin (5 μg/mL, l1884), cholera toxin A subunit from Vibrio cholerae (30 ng/mL, C8180), hydrocortisone (0.03 μg/mL, H0888), 5% FBS, and 0.5% dimethyl sulfoxide (DMSO, D2650). After 2 h of incubation, the attached limbal biopsies were completely covered with COM. The limbal biopsies and outgrowing LESCs were then cultured at 37 °C with 5% CO₂ and 95% air for the following three weeks. The culture medium was changed every three days.
2.4. Immunohistochemistry (IHC) and Immunofluorescence Microscopy
Limbal biopsies with hLESCs cultured on HAMs were cut from the Netwell™ inserts. Samples were fixed in 4% formalin overnight at 4 °C, dehydrated in a graded alcohol series of 70% (10–15 min), 80% (10–15 min), 96% (2 × 10 min), and 100% ethanol (2 × 10 min), cleared in xylene (3 × 10 min), infiltrated with melted paraffin (3 × 10 min), and then embedded in paraffin for immunohistochemistry (IHC). Paraffinized tissue was cut into 3–4 μm thick sections using an automated microtome (HM 355S, Thermo Fisher Scientific, Waltham, MA, USA) and mounted on histological slides. Deparaffinization was performed in xylene (2 × 10 min), followed by rehydration through immersion in 100%, 96%, and 70% ethanol, and then distilled water. Hematoxylin and Eosin (H&E) staining was performed first. Slides were immersed in Mayer's hematoxylin plus solution (01825, Histolab, Askim, Sweden) for 10 min, rinsed with distilled water (10 min), stained with eosin (10 min), and then dehydrated through an ascending alcohol series of 70%, 96%, and 100% ethanol, followed by xylene. Slides were then mounted using Pertex (00840, Histolab) mounting medium. For IHC, heat-induced antigen retrieval was performed in a microwave for 5 min at 900 W and 15 min at 600 W in citrate buffer (pH 6, C9999) or with a PT module (LabVision, Fremont, CA, USA). Blocking of non-specific binding sites with 5% Bovine Serum Albumin (BSA, A9418) dissolved in Dulbecco's Phosphate Buffered Saline (DPBS, 14190-144, Thermo Fisher Scientific) was conducted for 20 min. Slides were then stained for 1 h with primary antibodies diluted in 1% BSA. Slides were stained using antibodies for the following progenitor markers: tumor protein p63 alpha (p63α, rabbit polyclonal, 1:200 dilution, 4892S, Cell Signaling, Beverly, MA, USA) and SRY-Box Transcription Factor 9 (SOX9, rabbit monoclonal, 1:200 dilution, 82630, Cell Signaling); the quiescence marker CCAAT/enhancer-binding protein delta (CEBPD, rabbit polyclonal, 1:200 dilution, ab198320, Abcam, Cambridge, UK); the proliferation marker Ki-67 (rabbit monoclonal, 1:200 dilution, RM-9106-S, Thermo Scientific); and the following differentiation markers: cytokeratin 3/12 (KRT3/12, mouse monoclonal, 1:100 dilution, 08691431, MP Biomedicals, Santa Ana, CA, USA) and connexin-43 (CX43, rabbit polyclonal, 1:300 dilution, C6219). The slides were then thoroughly washed three times for 5 min with PBS-Tween buffer (28352, Thermo Fisher Scientific).
Incubation was continued with the appropriate species-matched secondary antibody: Cy3® goat anti-rabbit IgG (1:500 dilution, A10520, Abcam) for samples stained with p63α, CEBPD, SOX9, and Ki-67; Alexa Fluor® 488 donkey anti-mouse IgG (1:500 dilution, 21202, Abcam) for samples stained with KRT3/12; and Alexa Fluor® 488 donkey anti-rabbit IgG (1:500 dilution, A21206, Abcam) for the antibody staining CX43. The secondary antibody was incubated for 45 min, followed by three 5 min washes. Nuclear staining was performed using a 4′,6-diamidino-2-phenylindole (DAPI) mounting solution (P36931, Life Technologies Corporation, Carlsbad, CA, USA). In addition, a LabVision Autostainer 360 (Lab Vision Corporation, VT, USA) was used for staining with antibodies against adherens junction molecules, namely E-cadherin (CDH1, mouse monoclonal, 1:50 dilution, n1620, DakoCytomation, Santa Clara, CA, USA) and N-cadherin (CDH2, mouse monoclonal, 1:100 dilution, m3613, DakoCytomation). Visualization was performed using the standard peroxidase technique (UltraVision ONE HRP system, Thermo Fisher Scientific). Primary antibody binding to an expressed antigen was detected by a secondary antibody conjugated with a peroxidase-labeled polymer and visualized with diaminobenzidine (DAB). Each staining was performed at least three times, and each sample was tested in triplicate. Negative and positive controls were run simultaneously for all antibodies. All antibodies used for IHC in this study are summarized in . Bright-field images of H&E- and DAB-stained samples were taken with a ZEISS Axio Observer Z1 microscope (ZEISS, Oberkochen, Germany). Fluorescence was recorded with a ZEISS Axio Imager M1 fluorescence microscope (ZEISS). Three independent observers counted nuclear antibody positivity (p63α, CEBPD, SOX9, and Ki-67) using ImageJ software.
2.5. Statistical Analysis
Technical replicates from the same donor, and across the group of three donors, of hLESCs harvested under the two different conditions were averaged and expressed as the percentage mean ± standard error of the mean (SEM). Prism 8.3.0 (GraphPad, San Diego, CA, USA) was used for statistical analysis. The data were counted and analyzed in two ways: as percentages, representing the ratio of the number of cells positive for a specific marker to the total number of cells (DAPI positivity), or as the number of cells positive for a specific marker per mm². The data were tested for normal distribution (Shapiro–Wilk test), and differences were tested using an unpaired two-sample t-test. A significance level of p ≤ 0.05 was considered significant.
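To make the quantification and testing pipeline concrete, the following minimal Python sketch (using NumPy and SciPy rather than the Prism software used in the study) computes marker positivity as a percentage of DAPI-positive cells per donor, checks normality with the Shapiro–Wilk test, and compares the two culture conditions with an unpaired two-sample t-test. The cell counts are hypothetical placeholders, not data from this study.

```python
import numpy as np
from scipy import stats

# Hypothetical per-donor averages of technical replicates:
# marker-positive nuclei and total DAPI-positive nuclei per section.
flat_positive = np.array([310, 295, 330])   # flat HAM, three donors (illustrative)
flat_total    = np.array([820, 790, 880])
loop_positive = np.array([540, 575, 510])   # looped (crypt-like) HAM (illustrative)
loop_total    = np.array([860, 910, 830])

# Express positivity as a percentage of DAPI-positive cells per donor.
flat_pct = 100.0 * flat_positive / flat_total
loop_pct = 100.0 * loop_positive / loop_total

# Summarize each condition as mean +/- standard error of the mean (SEM).
for name, pct in (("flat", flat_pct), ("loop", loop_pct)):
    print(f"{name}: {pct.mean():.2f} +/- {stats.sem(pct):.2f} %")

# Shapiro-Wilk test for normality within each condition.
print("Shapiro-Wilk p (flat):", stats.shapiro(flat_pct).pvalue)
print("Shapiro-Wilk p (loop):", stats.shapiro(loop_pct).pvalue)

# Unpaired two-sample t-test; p <= 0.05 counted as significant.
t_stat, p_value = stats.ttest_ind(flat_pct, loop_pct)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {p_value <= 0.05}")
```

The same structure applies to the per-mm² analysis: replace the percentage arrays with counts divided by the measured section area.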
3.1. Epithelial and Basement Membrane (BM) Morphology in Corneal-Limbal Tissue and Consequent Localization of hLESCs
The distribution of hLESCs was examined in the different BM compartments of the human corneal-limbal tissue in situ and compared to hLESCs cultured ex vivo on conventionally sutured, flat HAM and, alternatively, on HAM sutured in a radial pattern mimicking limbal crypts . The human corneal epithelium had 5–7 layers on the flat BM and an avascular Bowman's layer ( A). The anterior limbus contained 7–10 epithelial layers on an irregular BM with vascularized stroma underneath ( B). The posterior limbal epithelium was attached to the undulated BM and limbal epithelial crypts, which were placed deeper and were mainly surrounded by the limbal stroma ( C). The hLESCs were smaller in size, with a high nucleo-cytoplasmic (N:C) ratio, and could be detected scattered in the basal epithelial layer of the anterior limbus ( B, black arrows). However, the hLESCs appeared more abundant and densely packed in the basal layer of the posterior limbus and limbal epithelial crypts ( C, black arrow).
3.2. Morphology of the hLESC Cultures Expanded on Conventional, Flat-Sutured HAMs vs. hLESC Cultures Expanded on the Novel, Radially-Sutured HAMs
HAMs sutured by the novel radial suture technique comprised flat and crypt-like areas. The crypts of these HAMs consisted of (1) undulated HAM areas with an open surface ( E) and (2) looped HAM areas that appeared almost closed ( F, black asterisk). Multi-layering of the epithelial cells was noted in ex vivo expanded hLESC cultures lying on the undulated and looped HAMs compared to cultures lying on the flat HAM ( E,F, black arrows vs. D). A higher presence of columnar-like epithelial cells was noted in the cultures harvested on the undulated ( E) and looped HAMs ( F) compared to cultures harvested on the flat HAM ( D). Polygonal and squamous cells were found in the middle and superficial layers of the hLESC cultures on the flat HAMs ( D). These polygonal and squamous cells appeared less present in the hLESC cultures in the crypt-like HAM compartments. To better understand the structural differences in cultivated tissue on flat and crypt-like HAMs, we aimed to compare the marker fingerprint of cultures growing on flat and looped HAMs, as these are two morphologically distinct settings.
3.3. Distribution of the Progenitor Markers In Situ Versus In Vitro Study Conditions
The progenitor marker p63α was found in some of the cells of the basal and suprabasal layers of the cultures expanded on the flat and undulated HAMs ( .1A,B). However, the HAM loops contained a statistically higher number of p63α-positive hLESCs ( .1C) than cultures on the flat HAM, quantified as percentages (p63α vs. DAPI positivity ratio; flat vs. loop, 37.56 ± 3.34% vs. 62.53 ± 3.32%, p = 0.01, Figure 5A) or as a total number per mm² (377.8 ± 34.17 vs. 962.9 ± 167.2, p = 0.03, Figure 5B). Regarding the epithelium in the corneal-limbal tissue, p63α was not found in any of the cells of the corneal epithelium in situ ( .2A) but was identified in some cells of the basal and suprabasal layers of the anterior limbus ( .2B, arrow). The posterior limbal epithelium with undulated BM was enriched with p63α-positive cells in the basal and suprabasal layers ( .2C). Basal and suprabasal cells in the cultures expanded on flat ( .1D), undulated ( .1E), and looped ( .1F) HAMs expressed the SOX9 progenitor marker.
However, SOX9 progenitor marker positivity was significantly higher in the cultures expanded on crypt-like HAMs forming loops than in the cultures growing on flat HAMs, quantified as percentages (35.53 ± 0.96% vs. 43.23 ± 2.32%, p = 0.04, Figure 5A) or as a total number per mm² (442.3 ± 62.31 vs. 728.1 ± 65.97, p = 0.03, Figure 5B). In situ, the progenitor marker SOX9 was exclusive to the limbal basal epithelium ( .2E). In particular, the limbal epithelial crypts appeared to be enriched for this marker ( .2F).
3.4. Expression Profile of the Proliferation and Quiescence Markers in the Corneal-Limbal Epithelial Tissue Versus In Vitro Study Conditions
The expression distribution of the CEBPD marker was similar to that of p63α and was present in the cultures expanded on flat ( .1A), undulated ( .1B), and looped HAMs ( .1C), mainly in the basal and suprabasal layers. In addition, CEBPD was found in the basal epithelial cells of both tissues in situ, the cornea ( .2D) and the anterior and posterior limbus ( .2B,C). However, no statistically significant difference was noted in the number of CEBPD-positive cells expanded on HAM loops compared to cells expanded on flat HAMs, quantified as percentages (22.99 ± 2.96% vs. 30.49 ± 3.33%, p = 0.17, A) or as a total number per mm² (243.00 ± 35.19 vs. 474.1 ± 138.5, p = 0.18, B). Many of the hLESCs expanded on flat ( .1D, white arrow), undulated ( .1E), and looped HAMs ( .1F) were found in a proliferative state. However, some sectors of the epithelial tissue on the flat HAM contained no Ki-67-positive cells ( .1D), while sections of the epithelial tissue on the undulated ( .1E) and/or looped HAM ( .1F) persistently maintained Ki-67-positive cells. Proliferation was significantly higher in cultures expanded on looped HAMs compared to cultures on flat HAMs, quantified as percentages (8.43 ± 0.38% vs. 22.38 ± 1.95%, p = 0.002, A) or as a total number per mm² (100.7 ± 10.69 vs. 276.2 ± 33.34, p = 0.01, B). For comparison with the in situ state, the proliferation marker Ki-67 was sporadically found in the suprabasal cells of the anterior ( .2E) and posterior ( .2F) limbal epithelium, whereas it was absent in the central corneal epithelium and only sparsely present in some basal cells of the posterior cornea ( .2D).
3.5. Differentiation Marker Profile in the Epithelium of the Corneal-Limbal Tissue and hLESC Cultures on the Flat and Crypt-like HAMs
CX43 was uniformly distributed in all ex vivo expanded cells ( .1A–C), a finding similar to the CX43 pattern in the corneal epithelium in situ ( .2A). However, some basal cells in the anterior ( .2B) and posterior limbal epithelium and limbal epithelial crypts ( .2C) appeared to lack the CX43 marker. In the expanded hLESC cultures growing on flat HAMs, the differentiation marker KRT3/12 was present in the polygonal and squamous cells, mainly in the middle and top layers ( .1D). Less KRT3/12 presence was noted in the cultures expanded on undulated ( .1E) and looped HAMs ( .1F), and this marker was almost absent in cells growing in small HAM loops. Regarding the corneal-limbal tissue, KRT3/12 was present in all corneal epithelial cells ( .2D). In the limbal epithelium, the majority of the cells stained positive for KRT3/12, whereas the cells in the lowest layers attached to the BM were devoid of this marker ( .2E,F).
3.6. Presentation of Cell Adhesion Molecules in the Corneal-Limbal Epithelium and Expanded hLESC Cultures on Flat and Crypt-like HAMs
The transmembrane protein E-cadherin was present in most of the ex vivo expanded epithelial cells ( .1A–C). The same applied to the corneal-limbal tissue in situ ( .2A–C). In hLESC cultures, N-cadherin was present in a few cells of the basal layer on the flat HAMs ( .1D). In contrast, more cells in the basal layer of the crypt-like HAMs seemed to express N-cadherin, as the surface of the basal layer of the cultivated tissue appeared enlarged in those crypts compared to cultures on the flat HAMs ( .1E,F). N-cadherin was exclusive to the limbal basal epithelium in situ. Only a few basal cells in the epithelium of the anterior limbus expressed N-cadherin ( .2E), whereas almost all cells of the limbal basal epithelium in the posterior limbus expressed N-cadherin ( .2F).
Different techniques of suturing HAM to the corneal surface have been used thus far. A HAM can be sutured as a graft (inlay) or as a patch (overlay). When used as a graft, the HAM is placed on the defect with the stromal face down and acts as a BM, allowing the epithelium to proliferate and regenerate over it. It can be used as a single or multilayered graft with a lamellar sac, filling, or roll-filling technique, mostly depending on the depth of the corneal defect. The patch technique is mostly used for epithelial defects without perforations. The epithelial side of the HAM is placed down towards the defect, and the HAM serves as a biological compressive bandage . The graft alone, or as part of a sandwich technique combining both graft and patch techniques, has been the standard for HAMs carrying cultivated hLESCs and limbal explants . However, this is the first study to propose the manipulation of a HAM prior to the expansion of the hLESCs to ensure a better quality of the transplanted tissue, as an adjuvant to the previously used HAM suturing techniques. Stem cell niches vary in size and functional organization in mammals . Stem cells can be found as individual structures under the BM of the skeletal muscle , or grouped, as epithelial stem cells in the hair follicle bulges and neural stem cells in the forebrain subventricular zone . In this study, we provided a more favorable microenvironment for the expansion of hLESCs ex vivo by mimicking the BM folding in niches residing in the posterior limbus and limbal epithelial crypts. Generally, the maintenance of stem cells, including hLESCs, depends on functional niche characteristics. These niches provide cell anchoring, mechanical protection, communication with the underlying stroma and vasculature, the release of specific growth factors and cell cycle molecules, and the involvement of evolutionarily conserved molecular pathways. Such 3D microenvironments allow the stem cells to maintain quiescence and stemness, and to undergo asymmetric or symmetric proliferation when needed . Our study supports earlier findings that most hLESCs/early progenitor cells reside at the bottom of the limbal epithelial crypts, which are deep epithelial protrusions directly surrounded by a loose stromal matrix . When an epithelial stem cell niche is established along a stiff BM, it maintains its regular morphology. However, when the epithelial stem cell niche forms along a flexible and extensible BM, it may arrange in the form of finger-like protrusions, providing a larger surface for stem cells to allocate and thus affording protection and preservation of the putative stem cell characteristics . Stem cell progenies acquire differentiation properties by leaving the stem cell niche towards more rigid and flat ground, such as the Bowman membrane, to eventually terminally differentiate and undergo apoptosis (26). A HAM is a desirably elastic and adaptable scaffold for creating 3D protrusions that can physically mimic limbal crypts ex vivo. It is a widely available, natural, semi-transparent, and permeable membrane. Its mechanical and functional characteristics are desirable for the migration, adhesion, and growth of epithelial cells on the ocular surface. It possesses high elasticity, low stiffness, and high tensile strength , and it also resembles the cornea and conjunctiva with regard to collagen arrangement . The stiffness should be similar between the flat and crypt-like HAMs, as we used pieces from the same donor.
Even though there might be some local differences in stiffness within the same HAM, we used nine pieces of both flat and crypt-like HAM, and all the pieces showed significant changes related to the suturing method. There was only one extra suture on the crypt-like vs. flat HAMs to enable the folding, so this should not have affected the overall stiffness of the crypt-like vs. flat HAMs. Functionally, HAM is immunotolerant and has low antigenicity, even though some immunomodulatory effects have been reported; it has an anti-fibrotic impact, mainly due to TGF-β inhibition. It secretes a wide range of growth factors, such as EGF, bFGF, HGF, KGF and KGF receptors, TGF-α, and the TGF-β 1, 2, and 3 isoforms, sharing some common features with the stem cell niche composition . However, not all HAM properties seem beneficial for stem cell maintenance. An intact HAM promotes the epithelial differentiation of explanted limbal cultures. Therefore, removing the epithelium from the HAM during preparation has been used by some authors to maintain progenitor properties, postpone differentiation, and thus improve the quality of the explanted tissue . Moreover, not all of the cells expanded on a HAM have the features of hLESCs or early progenies. As previously shown, most of the hLESCs/progenies are positioned in the basal epithelial cell layers, i.e., the ones attached to the HAM, whereas the cells in the upper/superficial layers exhibit more differentiation properties . With our novel suturing technique, we aimed to enlarge the surface area of the HAM, and hence increase the number of cells in the basal layer attached to the HAM that maintain a more undifferentiated state. In addition, the expanded epithelial tissue appeared multilayered in the crypt-like HAMs and contained a higher number of columnar-like cells, likely indicating a higher proliferation rate. The novel suturing technique would also increase the supply of the stem-cell-supporting molecules secreted by the HAM. HAM is widely implemented in tissue engineering and regenerative medicine . However, its limited chemical and physical features, and the high cost of preserving it in a fresh condition, have created an urgent need for new solutions . HAM has, so far, undergone additional adjustments to improve its handling, durability, and utility, increase its resistance to microbes, and broaden its applications, among others in ocular surface reconstruction . For instance, AM can be used as a constituent of various composite scaffolds, in the form of an extract, and as a hydrogel . Regarding attempts at HAM modification for successful LESC transplantation, decellularized AM (dAM) conjugated with an electrospun polymer nanofiber mesh promoted LESC proliferation and adhesion in a rabbit model . Amniotic membrane extract (AME) and eye drops have proved beneficial for the treatment of ocular surface disorders and injuries and for the in vivo cultivation of hLESCs . In addition, AME, as an animal-free product, has been suggested as a suitable replacement for FBS in LSC transplantation to avoid the risk of disease transmission and the accumulation of bovine antigens . However, all of the above HAM modification methods require very complex processing or serve only as adjuvant therapy. Therefore, we present an easy-to-handle, widely available, and inexpensive method of HAM manipulation prior to hLESC expansion.
As previously mentioned, the successful long-term restoration of the corneal epithelium after CLET requires more than 3% of the cells in the transplanted graft to be p63 positive . In our study, cells growing on either the flat or crypt-like HAMs were enriched with the p63α marker. However, we found a significantly larger number of cells positive for p63α in the looped regions of the crypt-like HAMs compared to the cells growing on flat HAMs. Initially, the whole tumor protein p63 was perceived as a specific marker for hLESCs . Later studies discovered ΔNp63α to be more distinct for hLESCs and early progenies residing in the limbal basal epithelium, while other p63 isoforms were detected in the suprabasal layers of the limbus and cornea, playing a role in corneal differentiation . Indeed, in our samples, p63α stained particular cells in the basal limbal epithelium, while staining in the non-limbal cornea was absent. Significantly higher cell turnover was present in the cultures on the crypt-like HAMs compared to the cultures growing on flat HAMs, indicating the presence of cells with intense proliferation, such as early progenies/TACs. Compared to the in situ state, proliferation appeared to be much lower in corneal-limbal tissue, being noted in a few suprabasal cells of the anterior and posterior limbus and in some cells of the posterior cornea, as previously described . The CEBPD marker was not significantly more abundant in the cultures growing on crypt-like HAMs compared to those on flat HAMs. CEBPD is a quiescence marker that controls the cell cycle and inhibits the proliferation of hLESCs in ex vivo cultures . Since proliferation was significantly higher in ex vivo cultures growing on crypt-like HAMs compared to those growing on flat HAMs, we expected these cells not to be positive for CEBPD; hLESCs do not co-express CEBPD with the Ki-67 marker in the limbus. In addition, the CEBPD-positive hLESCs that co-express the ΔNp63α marker in the basal limbal epithelium in situ are the ones considered quiescent . However, it seems that the CEBPD marker is not specific for hLESCs in situ, since it was also found in the cells of the basal corneal epithelium in our samples. The transcriptional factor SOX9 plays diverse roles in the embryonic and adult development of mammals, as well as in stem cell maintenance. This marker was upregulated in the cultures on crypt-like HAMs compared to those on flat HAMs. Its nuclear localization in TACs is essential for proliferation upon wound healing. However, SOX9 is particularly involved in the proliferation and differentiation steps of early progenies derived from hLESCs, but not in terminal differentiation, which explains why SOX9 is absent in the cornea . This finding supports our conclusion that crypt-like HAMs contain numerous SOX9-positive cells that are thus in a more undifferentiated state. We found no difference in the presence of the CX43 transmembrane protein between the cultures on crypt-like and flat HAMs. CX43 is a protein involved in the communication between mammalian cells through diverse mechanisms . Constitutively, CX43 is present in the corneal epithelium and all suprabasal epithelial layers in the limbus, whereas it is absent in some cells of the basal limbal layer . A similar pattern of CX43 expression applies to expanded cultures on the flat HAMs . The absence of cell interaction may be one of the mechanisms for maintaining stemness and the quiescent state .
Thus, according to some authors, the absence of CX43 distinguishes the hLESCs from the TACs/early progenies in vivo. Since most of the cells in the basal and suprabasal layers of the cultivated epithelial tissue were CX43 positive, it seems that only a very small cell fraction remains in a quiescent state ex vivo. Cytokeratin 3 (KRT3) and cytokeratin 12 (KRT12) are cornea-specific intermediate filaments and hallmarks of differentiated hCECs in the cornea and differentiated epithelial cells in the limbus . In cultured epithelial tissue, KRT3/12 has been found in the suprabasal and superficial layers, but not in the basal layer, of cells cultured on flat HAM, a finding corresponding to our results . However, the KRT3/12 marker was reduced in the looped regions of the crypt-like HAMs, and in some small looped regions it was absent. The KRT3/12 marker is known to be absent from the limbal basal layers, a finding that reflects the more mature nature of the corneal basal cells compared to the limbal basal epithelial cells, due to the different characteristics of the corresponding basement membranes . KRT3/12 is also absent from the limbal epithelial crypts . Our study shows that distinct cell fractions in the basal layers attached to the HAM are positive for the N-cadherin marker. It seems that the isolation of the cell cultures surrounded by a double HAM membrane increases cell positivity for this marker. N-cadherin is essential for maintaining progenitor characteristics in cultured hLESCs . Differentiated corneal and limbal epithelial cells express E-cadherin, while N-cadherin is present in the hLESCs/progenitor cells in the limbal basal epithelium, a finding consistent with ours. In particular, the basal epithelial layer in the posterior limbus in our samples appeared enriched for this marker. It has been suggested that communication with the melanocytes is achieved via N-cadherin forming homotypic adhesions . A disadvantage of our technique may be that a larger area of HAM tissue is needed for transplantation and that the transparency of the cultivated transplantation graft is decreased, in addition to the usual disadvantages of using HAM .
In conclusion, this novel HAM suturing technique increased the number of progenitor cells upon expansion and may thus increase the quality of the transplanted graft. We believe this technique can be a valuable, simple, and inexpensive tool to increase the success rate of corneal epithelial regeneration. However, the suturing technique needs to be tested in vivo to confirm its efficacy. Future clinical studies comparing conventional, flat suturing and the current suturing method are also required.
Digital Health Literacy and Person-Centred Care: Co-Creation of a Massive Open Online Course for Women with Breast Cancer | b916f2db-4cef-467a-9228-d54798087089 | 10001393 | Patient-Centered Care[mh] | Breast cancer (BC) represents one of the most frequently diagnosed cancers in women worldwide . According to the most recent data from the European Cancer Information System (ECIS), there were approximately 355,460 new cases of BC diagnosed across Europe in 2020, with 34,088 of those cases occurring in Spain . However, thanks to early diagnosis and therapeutic advances, BC survival has increased in recent years , with a survival rate of around 85% . Increasing prevention and treatment for BC have lowered mortality, but the diagnosis and treatment continue to have a significant impact in many areas of patients’ lives (physical, emotional, cognitive and social) . The diagnosis of BC, which in most cases necessitates an effort to adjust and adapt to the new situation , is typically perceived as a traumatic event with a significant impact on the health-related quality of life of the women who suffer from it, making them more vulnerable to the potential consequences of using biased or low-quality health information. Person-centred care (PCC) is defined as the provision of care that considers a patient’s clinical needs, life circumstances, and personal values and preferences . A central component of PCC is to ensure quality communication between patients and healthcare professionals, with the aim of fostering the process of shared decision making (SDM) . SDM-based interventions, such as patient decision aids (PtDAs), have been shown to improve patients’ knowledge about available treatments and their benefits/risks, decisional conflict and other decisional process variables . There is a need to develop interventions to increase knowledge about PCC and digital health literacy (DHL) , particularly in chronic pathologies such as BC, where the impact of their diagnosis or treatment may increase the number of queries on the Internet and directly influence the understanding of health information . Health literacy (HL) integrates the skills and motivation to find, understand, evaluate and use health information. As a result, HL facilitates informed decision making and improves the ability to manage and address health disparities, giving patients more autonomy and empowerment to take responsibility for their own health, as well as the health of their families and communities. In turn, low HL impacts health outcomes and health-related costs, leading to inefficient healthcare utilization and delivery . DHL is an extension of HL that employs the same operational definition but in the context of information and communication technology resources. It involves both the provision of information and the degree to which information is understood. When these skills are lacking, technology solutions have the potential to either promote or hinder HL . Due to the complexity of health information, it is recommended that DHL interventions be based on a design of co-creation of resources, websites and health tools through collaborative work with patients, allowing them to improve the medical care they receive [ , , ]. 
Massive open online courses (MOOCs) are designed to engage a large number of participants learning remotely, offering the general population, clinical subpopulations or health professionals good-quality knowledge on health issues through interactive and flexible technological resources, with little or no prior learning required . To date, most MOOCs have been developed for the education of medical students and health professionals , but they have also been directed at the general population or clinical subpopulations, showing positive effects in several areas such as healthy nutrition habits , self-management of diabetes or learning risk factors for dementia . As has been observed in projects with other populations, the development of educational interventions in MOOC format based on a co-creation design, combining several resources in different formats and adapting to users' different educational and cultural levels and needs, could be a strategy to address the HL, self-care and empowerment challenges of women with BC. One example is the IC-Health European project ( https://cordis.europa.eu/project/id/727474/es accessed on 20 December 2022), whose results have shown good acceptance of co-created teaching resources aimed at improving the DHL of people with chronic diseases and the general population [ , , ]. In recent years, the framework of participatory action research has been used for the development of eHealth. It is a collaborative approach that builds knowledge and social change within a community, following a cyclical process and involving stakeholders as co-investigators . As occurs in other participatory processes, the co-design of health interventions contributes to improving the services offered, to the extent that they are adjusted to the needs and priorities of their participants while incorporating their own skills [ , , ]. In general, digital interventions such as MOOCs have the potential to improve the quality of life and outcomes of women with BC by providing access to information from anywhere at any time, thereby increasing accessibility and flexibility, as well as offering support to complement traditional medical treatments. Therefore, the aim of this study is to co-create a MOOC on PCC and DHL for and with women with BC.
2.1. Design

The MOOC was co-created using a modified experience-based design approach . The co-creation process was divided into three sequential phases: (a) exploratory phase, (b) development phase and (c) evaluation phase (see ).

2.2. Participants and Recruitment

Adult women (≥18 years) at any cancer stage and BC survivors (regardless of DHL level and knowledge about PCC), their families/carers and any healthcare professionals involved in the management of BC (oncologists, gynaecologists, nurses, psycho-oncologists, etc.) were invited to participate voluntarily in the MOOC co-creation process. A theoretical sampling approach was used to maximize the variability of the sociodemographic and clinical profiles (age, educational level, time since diagnosis and active treatment) of women with BC. Recruitment was carried out via snowball sampling through healthcare professionals and expert patients (BC survivors) between May and June 2020. Participants signed an informed consent declaration.

2.3. Procedure

The co-creation process was carried out in three online sessions of 120 min each (via the Zoom platform due to the COVID-19 pandemic, delivered by members of the research team) between June 2020 and March 2021 and was supported by a Moodle platform. The first session (exploratory phase) was held in June 2020 and consisted of (i) a brief presentation of the participants; (ii) identifying the different diagnosis, treatment and long-term follow-up paths for BC, represented through a patient journey map (PJM)—a scheme that aims to reflect the care pathway followed by a person —based on their experiences, emotions, feelings and thoughts; (iii) exploring their empowerment and information needs in each phase of the disease; and (iv) exploring patients' information needs and experiences regarding patient empowerment and SDM. Health professionals did not participate in the development of the PJM; they offered advice and their experiences of the most frequent concerns found in clinical practice with these patients, according to the phase of the disease. In the second session (development phase), held in July 2020, the participants reviewed the PJM, designed the structure and proposed the contents of the MOOC (self-care, myths related to BC, strategies to improve DHL, etc.) based on the empowerment and information needs identified in the first session and their previous experiences managing BC information online. At the end of this session, participants were encouraged to continue the co-creation process online between July and December 2020 through a Moodle platform, where the participants were registered and which they accessed with an individual username and password (assessment phase). The research team developed and shared weekly content proposals for the different units of the MOOC, and participants were asked to provide feedback and/or new content proposals (see ). Initially, the content of the units was presented in infographic format (see ) and was mainly related to PCC, self-care and DHL applied to BC. Once all the suggestions for improvement provided by the participants had been compiled, a graphic designer developed videos and edited the infographics to make them interactive and visually improve their appearance. Updated contents were shared again with the participants in March 2021. Through questionnaires on the Moodle platform (see ), they could give feedback on the definitive contents of the MOOC (see ).
A third session (evaluation phase) was held in March 2021 to offer final feedback about the content and interface of the MOOC (acceptability pilot) and to evaluate the experience of co-creating the MOOC by means of specific questionnaires (see ). Four gift cards were raffled off as a token of appreciation during this last meeting.

2.4. Measures

2.4.1. Experience in the Co-Creation Process

A 13-item questionnaire was specifically developed to explore patients' and healthcare professionals' experience of the co-creation process. The first 6 items were measured using a 5-point Likert scale (from “strongly disagree” to “strongly agree”), addressing satisfaction with communication, adequacy of the objectives, usefulness of patient involvement in the co-creation process, importance of co-creation for designing content relevant to patients, self-perception of increased knowledge and the feeling of being part of the project team. The following 4 items were also assessed on a 5-point Likert scale (from “insufficient” to “excellent”) and were related to participants' opinions on the quality and clarity of the co-creation sessions, the methodology employed, the interactions between participants and the researchers' involvement. The last 3 items were open-ended questions about what participants liked most and least about the MOOC co-creation process, which aspects they found most useful and which aspects of the co-creation process could be improved (see ).

2.4.2. Acceptability Pilot of the MOOC

The MOOC's acceptability was evaluated using a specific scale created in the context of the project following the technology acceptance model (TAM) methodology and based on previous related studies . This scale assessed factors such as ease of navigation, clarity of objectives and language, appropriateness of learning activities and quizzes, and other characteristics of the MOOC. The acceptability questionnaire, answered by both patients and healthcare professionals, included 18 items: the first 15 were rated on a 5-point Likert scale, and the last 3 were open-ended questions about strengths and weaknesses, improvement suggestions and the main points learned throughout the MOOC (see ).

2.5. Analysis

The PJM and MOOC content were progressively developed in conjunction with participants. A draft was created with the information obtained from the online co-creation sessions. The different sections of the PJM summarize the experiences of participants with BC or survivors. The research group reviewed the contributions of the participants and proposed a draft version based on a PCC framework. Subsequently, this version of the PJM and MOOC content was reviewed by all participants through an iterative process until consensus was reached. For the measures of experience in the co-creation process and the acceptability pilot of the MOOC, means and standard deviations (SD) were calculated for all items assessed, and the response distribution for each item was also analysed.
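As a concrete illustration of the item-level summaries described in 2.5 (means, standard deviations and response distributions for 5-point Likert items), a minimal sketch in Python follows; the responses are hypothetical, not the study's data:

```python
import statistics
from collections import Counter

# Hypothetical responses to one 5-point Likert item
# (1 = "strongly disagree" ... 5 = "strongly agree"); not the study's data.
item_responses = [5, 4, 5, 4, 4, 5, 3, 5, 4, 5]

mean = statistics.mean(item_responses)
sd = statistics.stdev(item_responses)   # sample standard deviation
counts = Counter(item_responses)        # response distribution per category

print(f"mean = {mean:.2f}, SD = {sd:.2f}")
for category in range(1, 6):
    n = counts.get(category, 0)
    print(f"  category {category}: {n} ({100 * n / len(item_responses):.1f}%)")
```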
Twenty-eight participants from Tenerife and Gran Canaria (Canary Islands, Spain) were contacted between May and June 2020, of whom 19 participated in the co-creation process: 17 patients ( ) and two healthcare professionals (nurses from gynaecology and breast pathology units; mean age 40 (1.41) years and with more than 10 years of professional experience).

3.1. Patient Journey Map

Points of contact, experiences with the healthcare received, emotions, feelings and thoughts, diagnostic and therapeutic treatments, and perceptions of their own participation in shared decision making for the three stages of the BC care trajectory (early detection and diagnosis, treatment and long-term follow-up) were collected on the co-designed PJM ( ).

3.1.1. Early Detection and Diagnosis Stage

Most of the participants received their diagnosis during routine check-ups (specialized care) or as a result of the presence of symptoms (primary care), and the main emotions that emerged during this time were shock, anxiety, uncertainty and worry about the future. The main diagnostic techniques that the participants underwent were physical examination (palpation), imaging tests (mammography and ultrasound) and biopsy. The experiences collected about the healthcare received at this stage were related to the perception of professionalism, friendliness, a predisposition to resolve doubts and the transmission of calm and encouragement from the healthcare professionals who attended to them. However, the participants expressed that there were drawbacks in the medical care received at this time related to the challenges of early detection and the complexity of some administrative processes (e.g., medical appointments). Some participants expressed that they would have liked more advice from medical staff. Other participants expressed that they felt involved in the decision-making process in this phase, and this helped them accept the disease and trust the therapeutic approach to be used.

3.1.2. Treatment Stage

Participants identified the involvement of other healthcare professionals (e.g., oncology, gynaecology, surgery and rehabilitation, among others). While uncertainty remained the predominant emotion at this stage, other emotions started to emerge as well, including concern about appearance and shock at the physical changes that were occurring as a result of the therapeutic techniques used at this stage (e.g., chemotherapy, radiotherapy, surgery, etc.). In general, the participants experienced empathetic care and a certain psychological accompaniment from the healthcare professionals who assisted them. The participants felt more involved in the decision-making process in the gynaecology units than in the oncology units. They all concurred that the experience of informed participation in their treatment process was positive.

3.1.3. Long-Term Follow-Up Stage

The main experience was less follow-up by healthcare professionals, giving rise to feelings of helplessness or loneliness and uncertainty about self-care. Other concerns, such as going back to work or looking for a new job better adapted to their health needs, were shared among the participants. The treatments at this stage focused on breast reconstruction surgery and medication. All the participants said they had received limited information from healthcare professionals on self-care, the medical care to follow from this stage onwards and possible new treatments required.
However, they commented that at this stage they felt empowered to choose the aspects of their health in which they wanted to be involved, leading them to request personalized attention and to ask questions in order to be more involved in the decision-making process.

3.1.4. Recommendations of the Participants for Other Women with BC or Survivors

Additionally, and on their own initiative, the participants in the co-creation sessions provided a series of recommendations or tips for other women diagnosed with BC and suggested their inclusion in the MOOC as another resource. These tips concerned the family, social, work and empowerment areas, specifically for each of the stages addressed ( ).

3.2. Empowerment and Information Needs

( ) shows the empowerment and information needs identified in each phase. The main empowerment needs identified were related to strategies for emotional management and guidelines for self-care throughout the process, from diagnosis to long-term follow-up. The main information needs were related to a lack of understanding of the meanings of biomarkers, parameters and acronyms found in reports, as well as medical jargon, treatment options and the likelihood of cancer recurrence. The need for guidelines on accessing information and support resources available online, including association websites and the online experiences of other women with BC, was highlighted.

3.3. MOOC Content Development

Between July and October 2020, a weekly activity was published on the Moodle platform to carry out the co-design of the MOOC content. ( ) shows the themes of these activities. Finally, the MOOC was composed of five units: (i) BC (definition, types and stages, diagnostic process, treatments, myths, etc.); (ii) PCC (definition, implementation strategies, tips for preparing consultations with healthcare professionals, etc.); (iii) DHL (definition, guidelines to improve each skill, etc.); (iv) self-care (management of physical side effects, emotional management, etc.); and (v) experiences and advice from patients in different areas (healthcare, family, social and work) and at different moments of the disease (diagnosis, treatment and long-term follow-up).

3.4. Experience in the Co-Creation Process

Data were available for seventeen participants (89.47%) ( ). All of them strongly agreed or agreed that the general objectives of the project were adequate (item 2) and that the participation of women who have or have had BC is useful for the development of a MOOC on this content (item 3). More than 88% of the participants strongly agreed or agreed that being part of the MOOC co-creation process made the content more relevant to them (item 4) and rated the quality of the activities carried out in the co-creation process (item 7) and the methodology applied (item 8) as very good or excellent. Regarding the open questions, participants appreciated the way their experiences were incorporated into the MOOC and how they felt part of something meaningful, sharing experiences with other women in similar situations (item 11). To engage fully in the co-creation process, participants expressed that they would have liked to attend a face-to-face session. Additionally, some participants found it challenging to devote more time to the MOOC due to personal issues (item 12). See ( ) for illustrative quotes from participants' responses to the open questions.

3.5. Acceptability Pilot of the MOOC

Data were available for seven participants (36.84%) ( ).
Combining the “totally agree” and “agree” categories, most of the participants positively evaluated the acceptability of the MOOC in terms of language, content, relevance, proposed activities and the suitability of the MOOC objectives. Regarding the open questions, most participants emphasized the usefulness of the MOOC's content (especially that related to SDM) and the way it is presented (through infographics and other audio-visual materials) as strengths. Nevertheless, one participant pointed out some navigation difficulties, while another emphasized the lengthy process (item 16). Where possible, improvements suggested by participants were implemented, such as adding an initial summary of the MOOC's content (item 17). All the contents were mentioned as important topics learned after completing the MOOC (item 18). See ( ) for illustrative quotes from participants' responses to the open questions.
This study presents the development of a MOOC aimed at improving the DHL of women with BC. We used a co-creation approach involving 17 patients and survivors and two nurses. To inform the content of the MOOC, we explored participants' perceptions of the extent to which they were involved in the decision-making process, as well as their feelings, emotions and information needs throughout the therapeutic process. Most participants indicated that the MOOC co-creation experience was positive and made them feel involved in the project, and they valued the final product positively. Similar results were obtained by our team with other MOOCs developed for pregnant and lactating women and people with type-2 diabetes , including larger samples than the one used in this study. In those two studies, participants' self-perceived DHL significantly improved after completing the MOOC development compared to baseline. Future work is warranted to evaluate the effectiveness of this MOOC in improving BC patients' actual DHL (not only self-perceived DHL), their objective knowledge of the disease and treatments, and their involvement in treatment decisions. Women in this study pointed out information needs concerning the different stages of the cancer, from diagnosis to long-term follow-up, as shown in previous studies . Increasingly, these patients want to be involved in the decisions related to their health, and some studies have focused on drawing on the patient experience to improve the healthcare they receive . As a result of an exchange of information and values between patients and healthcare providers, SDM engages patients as partners in their own care and optimizes the decision-making process . To support SDM and the use of PtDAs in practice, it is important that patients also have a certain level of HL to increase patient empowerment and allow them to adopt a more participatory role in their healthcare . Online interventions that provide information and support to women with BC appear to cushion the uncertainty they experience at different stages of the disease, and MOOCs can be an effective educational resource for meeting these unmet needs and promoting both DHL and SDM processes . The PJM considered the evolving requirements for empowerment during the stages of diagnosis, treatment and long-term follow-up. Knowledge of patients' experiences, through a PJM, facilitates the identification of key moments at which to provide more precise information . As seen in the results of this study, depending on the individual experiences of each woman, the care received during the various BC periods could be perceived as more or less satisfactory. Based on our results, women with BC positively valued the experience of participating in the co-creation process of the MOOC, which made the content more relevant to them. This result aligns with previous evidence suggesting that a user-centred design process involves the participation of groups of users throughout the entire development cycle, during which they describe the context in which the generated resources will be used and their needs as users, and take part in user tests . These are all contributions to designing and building health information technology through iterations . This intervention represents an opportunity to reach a larger population that, due to health, availability and/or travel circumstances, may find it impossible to attend another type of face-to-face training on this subject.
Technology provides great options for enhancing patient care; however, disparities in access and DHL continue to negatively impact vulnerable populations because of potential barriers in the digital sphere for those with low HL . This problem can be especially aggravated as more information is provided online. Healthcare professionals must be involved in developing these skills in their patients with BC, but they also require support and a strategy at the institutional level. Therefore, healthcare organizations must prioritize achieving accessibility for all patients when designing eHealth services . In this regard, the integration of educational materials designed by a representative sample of the target population to which they are addressed makes this proposal an opportunity to contribute to obtaining relevant health results for affected patients, their healthcare professionals and, ultimately, decision-makers with financial capacity. From a managerial perspective, healthcare organizations should reframe their strategies, procedures and approaches, embracing a patient-centred perspective to become health literate . From a policy perspective, this suggests that individual HL and organizational HL should be handled as two complementary tools to empower people and to engage them in self-care and health policy making . The main strength of this project is having involved the intended audience in the creation of the MOOC, which enhances the relevance of the material covered and how it is delivered. This is important because the intended audience has valuable insights and perspectives on the subject matter and can provide feedback on the relevance and effectiveness of the content and its delivery. This can lead to the creation of more engaging and effective MOOCs that better meet the needs and expectations of the intended audience. Nonetheless, there are several limitations to the study. Initially, it had been proposed that the co-creation process be based on face-to-face sessions with the participants, followed by some online sessions through the Moodle platform. However, due to the COVID-19 pandemic, the face-to-face sessions were replaced by online sessions carried out through the Zoom platform. This made the co-creation process last a few weeks longer than expected, as the pace of work had to be adapted to the availability and web resources of the participants. However, the online sessions had several advantages: participants did not have to travel, the meetings were easier to organize and fewer financial resources were needed to support the sessions. Another limitation is that, although all professionals related to BC were invited to participate, only two nurses did so. Perhaps the participation of other professionals involved in the process (e.g., gynaecologists and oncologists), as well as family members and/or caregivers, could have been beneficial for the generation of more useful resources. Even though women of all educational levels participated, the majority had higher education, so there was not much variability in this regard, and lower educational levels may have been under-represented. In addition, there is a need for an independent evaluation of acceptability to confirm the results obtained.
Likewise, it is necessary to evaluate the effectiveness of the MOOC with an independent sample, to determine whether there is a real improvement in DHL levels and a change in knowledge in all the areas included in the different modules of the MOOC (BC, PCC, DHL, etc.).
The work carried out in this project is an example of how the development of educational interventions in MOOC format, directed and designed by women with BC, with resources in different formats adapted to users' different educational/cultural levels and needs, appears to be a viable strategy for generating higher-quality, more useful resources for this population. The co-creation methodology and this type of resource aim to address the literacy and empowerment challenges of women with BC.
|
Comparison of Frailty Assessment Tools for Older Thai Individuals at the Out-Patient Clinic of the Family Medicine Department | f54bceae-cb01-428e-bb96-f1a3598691db | 10001464 | Family Medicine[mh] | Frailty is defined as a reduction in the ability to cope with everyday or acute stressors, particularly among older adults . Frailty results in an increased vulnerability brought about by age-associated declines in physiological reserves and functioning across multiple organ systems . The consequences of this condition heighten an individual’s susceptibility to increased dependency and vulnerability, as well as to an increased risk of death . The health care system is affected by increases in health care needs, admissions to hospital, and admissions to long-term care. However, frailty is a dynamic process which can emerge from pre-frail or robust statuses . Validated assessment tools and appropriate interventions are important to reduce morbidity and mortality. A systematic review and meta-analysis of a survey of the models used to evaluate frailty among ≥ 50-year-olds in 62 countries found that 12% of prevalence used physical frailty models and 24% used deficit accumulation models. The prevalences of the consideration of pre-frailty were 46% and 49% for the physical frailty models and the deficit accumulation models, respectively . In terms of geographical location, using physical frailty models, the highest prevalence of physical frailty was found in Africa (22%) and the lowest prevalence was in Europe (8%), while the pre-frailty prevalence was highest in the Americas (50%) and lowest in Europe (42%). However, using deficit accumulation models, the prevalence of frailty was found to be highest in Oceania (31%) and lowest in Europe (22%), while pre-frailty prevalence was highest in Oceania (51%) and lowest in Europe and Asia (49%). The population-level frailty prevalence among community-dwelling adults varied by age, gender, and frailty classification . Several studies have reported that frailty is related to a variety of negative health outcomes and diseases. In 2013, cognitive frailty was described as a group of heterogeneous clinical symptoms based on the presence of both physical frailty and cognitive impairment, excluding consistent Alzheimer’s disease or other dementias. The prevalence of cognitive frailty among community-dwelling older adults was reported to be 9% in a systematic review and meta-analysis . Similarly, the prevalences of frailty and pre-frailty were found to be 20.1% and 49.1%, respectively, in a systematic review and meta-analysis study of community-dwelling older adults with diabetes. Older adults with diabetes were more susceptible to being frail than those without diabetes . Additional factors were found to have an influence on frailty; for example, fruit and vegetable consumption was associated with a lower risk of frailty . There are many measurement tools available which can provide frailty scores when used to screen for or assess the degree of frailty; however, no single score metric is considered the gold standard . It has been recommended that geriatricians in the Asia-Pacific region use a validated measurement tool to identify frailty . There are three major approaches used, i.e., the physical frailty phenotype model of Fried et al. 
and its rapid screening tool, FRAIL; the deficit accumulation model of Rockwood and Mitnitski, which captures multimorbidity; and mixed physical and psychosocial models, such as the Tilburg Frailty Indicator and the Edmonton Frailty Scale . Another approach, by Aguayo GA et al., consists of the use of four models: a frailty phenotype model, a multidimensional model, an accumulation of deficits model, and a disability model. The most commonly used method in the literature is the physical frailty phenotype . The phenotype diagnosis is based on at least three of the following five criteria: weight loss, exhaustion, physical inactivity, slow walking speed, and weak grip strength . These five phenotypic criteria have been measured in different ways across various studies, which could potentially affect estimates of the prevalence of frailty and the predictive ability of the phenotype, potentially leading to different classifications and results . Kutner and Zhang commented on the replacement of the performance-based measures (i.e., grip strength and walking speed) in the original frailty phenotype definition with self-reported items. In Thailand, a study by Boribun N. et al. found that the prevalence of frailty among Thai community-dwelling older adults was 24.6%, based on the Frail Non-Disabled (FiND) questionnaire. A 2020 study by Sukkriang and Punsawad , which used various frailty assessment tools, found that the prevalence of frailty among older individuals in Thai communities was 11.7% using Fried's Frailty Phenotype (Cardiovascular Health Study) criteria, and examined the validity of those tools. The Clinical Frailty Scale (CFS) used in that study had a sensitivity of 56% and a specificity of 98.41%; the simple FRAIL questionnaire had a sensitivity of 88% and a specificity of 85.71%; the PRISMA-7 questionnaire had a sensitivity of 76% and a specificity of 86.24%; the Timed Up and Go (TUG) test had a sensitivity of 72% and a specificity of 82.54%; and the Gerontopole frailty screening tool (GFST) had a sensitivity of 88% and a specificity of 83.56%. The study by Sriwong et al. (2022) developed a Thai version of the Simple Frailty Questionnaire (T-FRAIL) and modified it to improve its diagnostic properties in the preoperative setting. Their study found that the incidence of frailty diagnosed using the Thai Frailty Index was 40.0%. The identification of frailty using a score of two points or more provided the best Youden index, at 63.1, with a sensitivity of 77.5% (95% CI 69.0–84.6) and a specificity of 85.6% (95% CI 79.6–90.3). There is currently a need for simple, valid, accurate, and reliable methods and tools for detecting frailty which are appropriate for the Thai population. Our team works in an academic hospital and has been building an evidence base at its out-patient clinic; the present study was therefore conducted in this clinic. This study compared selected frailty assessment tools: Fried's Frailty Phenotype (FFP), the most commonly used assessment tool, which served as the reference; the Frailty Assessment Tool of the Thai Ministry of Public Health (FATMPH), which is recommended in the Thai check-up manual but lacks published validation; and the FiND questionnaire, which is used in communities but for which there is as yet no evidence of use at the Out-Patient Department (OPD) of Maharaj Nakorn Chiang Mai Hospital (a university-level hospital).
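As context for how such cut-offs are compared, the Youden index quoted above is simply sensitivity + specificity − 1 (in percentage points: 77.5 + 85.6 − 100 = 63.1). A minimal Python sketch of selecting a cut-off by this criterion follows; only the 2-point row reflects the values reported by Sriwong et al., and the other rows are hypothetical:

```python
# Youden index J = sensitivity + specificity - 1, here in percentage points.
# Only the >= 2-point row uses the reported values; the others are illustrative.
candidate_cutoffs = {
    1: (90.0, 55.0),   # hypothetical (sensitivity %, specificity %)
    2: (77.5, 85.6),   # reported for a T-FRAIL score of >= 2 points
    3: (60.0, 93.0),   # hypothetical
}

for cutoff, (se, sp) in sorted(candidate_cutoffs.items()):
    print(f"cutoff >= {cutoff}: J = {se + sp - 100:.1f}")

# Maximizing se + sp is equivalent to maximizing J.
best_cutoff = max(candidate_cutoffs, key=lambda c: sum(candidate_cutoffs[c]))
print(f"best cutoff by Youden index: >= {best_cutoff} points")
```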
2.1. Samples

This cross-sectional study included 251 older patients (aged 60 years or older) who came to the OPD of the Family Medicine Department, Maharaj Nakorn Chiang Mai Hospital, Faculty of Medicine, Chiang Mai University, during the period of December 2016–March 2017. The patients signed a consent form declaring their agreement to participate in this research. This study was approved by the Research Ethics Committee of the Faculty of Medicine of Chiang Mai University (no. 380/2016). The inclusion criteria for participants were: (1) Thais 60 years or older who had been seen at the OPD for more than 1 year, (2) the ability to communicate orally in Thai and read the Thai language, and (3) the ability to walk by themselves or with walking aids. The exclusion criteria were: (1) being bedridden, (2) being handicapped in both hands, (3) currently having a serious illness, and (4) having impaired cognition. The sample size was calculated to be 230 using the following formula: n = Z²α/2 × Se(1 − Se)/(d² × Prev), where n = sample size, Se = sensitivity (0.9), Prev = prevalence (0.15) , d = precision of the estimate (1.0), and alpha = 0.1.

2.2. Frailty Assessment Tools

2.2.1. Fried's Frailty Phenotype

The five criteria of Fried's Frailty Phenotype (FFP) assessment were used as the reference assessment tool in this study, following Fried et al. , with slight modification. These criteria were: (1) Weight loss: my weight has decreased by at least 4.5 kg in the past year, or I have had an unintentional weight loss of at least 5% of my previous year's body weight (no = 0, yes = 1). (2) Exhaustion: self-reported results on the Center for Epidemiologic Studies Depression scale (CES–D). Two statements were provided: (2.1) I felt that everything I did was an effort, and (2.2) I could not get going. The question is then asked, “How often in the last week did you feel this way?” The alternative answers are: 0 = rarely or none of the time (<1 day), 1 = some or a little of the time (1–2 days), 2 = a moderate amount of the time (3–4 days), or 3 = most of the time. Answers of “2” or “3” to either of these questions were categorized as frail by the exhaustion criterion (no = 0, yes = 1). (3) Slowness: my walking speed is 20% below baseline (adjusted for gender and height) (no = 0, yes = 1). (4) Weakness: grip strength is 20% below baseline (adjusted for gender and body mass index) (no = 0, yes = 1). (5) Low activity was evaluated with the following question: How often do you engage in activities that require a low or moderate amount of energy, such as gardening, cleaning the car, or walking? (more than once a week = 1, once a week = 2, one to three times a month = 3, and hardly ever or never = 4) . A combined FFP score of 0 was considered a “non-frail” phenotype; a score of 1 or 2 was considered a “pre-frail” phenotype; and a score of 3 or more was considered a “frail” phenotype.

2.2.2. Frailty Assessment Tool of the Thai Ministry of Public Health

The Frailty Assessment Tool of the Thai Ministry of Public Health (FATMPH) is a modification of Fried's Frailty Phenotype and is included in the Elderly Screening/Assessment Manual (2015) . The assessment tool has 5 criteria: four questions are self-reports and one is based on measurement by medical staff: (1) In the past year, has your weight decreased by more than 4.5 kg? (no = 0, yes = 1) (2) Do you feel tired all the time? (no = 0, yes = 1) (3) Are you unable to walk alone and need someone for support? (no = 0, yes = 1) (4) The participants walked in a straight line for a distance of 4.5 m, and the time from when they started walking was measured (time < 7 s = 0, time ≥ 7 s or could not walk = 1). (5) The participant had an obvious weakness in their hands, arms, and legs (no = 0, yes = 1). A FATMPH score of 0 was considered a “non-frail” phenotype; a score of 1 or 2 was considered a “pre-frail” phenotype; and a score of 3 or more was considered a “frail” phenotype.

2.2.3. Frail Non-Disabled (FiND) Questionnaire

The Frail Non-Disabled (FiND) questionnaire is designed to differentiate between frailty and disability. FiND was used for community-dwelling older Thai adults by Boribun N. et al. . The content validity index (CVI) was 0.8 and Cronbach's alpha was 0.89 . The FiND questionnaire consists of 5 questions: (A) Do you have any difficulty walking 400 m? (no or some difficulty = 0, much difficulty or unable = 1) (B) Do you have any difficulty climbing up a flight of stairs? (no or some difficulty = 0, much difficulty or unable = 1) (C) During the last year, have you involuntarily lost more than 4.5 kg? (no = 0, yes = 1) (D) How often in the last week did you feel that everything you did was an effort or that you could not get going? (2 times or less = 0, 3 or more times = 1) (E) What is your level of physical activity? (at least 2–4 h per week = 0, mainly sedentary = 1). A combined score of A + B + C + D + E = 0 was considered “non-frail”; A + B = 0 and C + D + E ≥ 1 was considered “frail”; and A + B ≥ 1 was considered “disabled”.

2.3. Data Collection

Data were collected using questionnaires and assessed using the various tools. The general characteristics recorded included age, sex, religion, education, income, source of payment of medical expenses, history of family disease, present weight, weight one year ago, height, and body mass index. All participants were assessed using the Thai-language versions of the FATMPH, FFP, and FiND. The inter-rater reliability between researchers and assistants was 1.0.

2.4. Statistical Analysis

The data were analyzed using Stata 12.0 and are presented as frequency, percentage, mean, and standard deviation (SD). The frailty assessment tools were analyzed for their sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV); Cohen's kappa was used to measure the agreement of these assessment tools.

2.5. Evaluation Consequence

All participants identified as frail by any of the assessment tools in the study were advised to undergo comprehensive geriatric assessment. Appropriate interventions were then provided to these individuals.
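The scoring rules above are simple enough to express directly in code. The following Python sketch is ours, not part of the study protocol; it assumes each FFP or FATMPH criterion has already been dichotomized to 0/1 exactly as described (including the low-activity item), with the FiND items A–E coded likewise:

```python
def classify_ffp_or_fatmph(item_scores: list[int]) -> str:
    """Classify five FFP or FATMPH items (each coded 0/1):
    0 criteria met -> non-frail; 1-2 -> pre-frail; >= 3 -> frail."""
    total = sum(item_scores)
    if total == 0:
        return "non-frail"
    return "pre-frail" if total <= 2 else "frail"


def classify_find(a: int, b: int, c: int, d: int, e: int) -> str:
    """Classify FiND items A-E (each coded 0/1) per the rule above."""
    if a + b >= 1:
        return "disabled"
    if c + d + e >= 1:       # here A + B == 0
        return "frail"
    return "non-frail"       # all five items are 0


# Hypothetical responses:
print(classify_ffp_or_fatmph([1, 0, 1, 0, 0]))  # -> pre-frail
print(classify_find(0, 0, 1, 0, 0))             # -> frail
```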
This cross-sectional study included 251 older patients (age 60 years or older) who came to the OPD of the Family Medicine Department, Maharaj Nakorn Chiang Mai Hospital, Faculty of Medicine, Chiang Mai University, during the period of December 2016–March 2017. The patients signed a consent form declaring their agreement to participate in this research. This study was approved by the Research Ethics Committee of the Faculty of Medicine of Chiang Mai University (no. 380/2016). The inclusion criteria for participants were: (1) Thais 60 years or older and who had been seen at the OPD for more than 1 year, (2) the ability to communicate orally in Thai and read the Thai language, (3) the ability to walk by themselves or with walking aids. The exclusion criteria were: (1) being bed ridden, (2) being handicapped in both hands, (3) currently having a serious illness, and (4) having impaired cognition. The sample size was calculated to be 230 using the following formula: n = Z2α/2 × Se(1 − Se)/d2 × Prev where n = sample size, Se = sensitivity (0.9), Prev = prevalence (0.15) , d = precision of the estimate (1.0), and alpha = 0.1.
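The sample-size formula in 2.1 can likewise be checked numerically. In the sketch below, Se and Prev are as stated in the text, while α = 0.05 and d = 0.1 are our assumptions, chosen because they reproduce the reported n of approximately 230:

```python
from statistics import NormalDist

def sensitivity_sample_size(se: float, prev: float, d: float, alpha: float) -> float:
    """Sample size for estimating sensitivity with precision d:
    n = Z^2_(alpha/2) * Se * (1 - Se) / (d^2 * Prev)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided normal quantile
    return z ** 2 * se * (1 - se) / (d ** 2 * prev)

# Se = 0.9 and Prev = 0.15 as stated in 2.1; alpha and d are our assumptions.
print(round(sensitivity_sample_size(se=0.9, prev=0.15, d=0.1, alpha=0.05)))  # ~230
```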
2.2.1. Fried’s Frailty Phenotype
The five criteria of Fried’s Frailty Phenotype (FFP) assessment were used as the reference assessment tool in this study, following Fried et al. , with slight modification. These criteria were: (1) Weight loss. My weight has decreased by at least 4.5 kg in the past year, or I have had an unintentional weight loss of at least 5% of my previous year’s body weight (no = 0, yes = 1). (2) Exhaustion. Self-reported results of the Center for Epidemiologic Studies Depression scale (CES–D). Two statements were provided: (2.1) I felt that everything I did was an effort and (2.2) I could not get going. The question is then asked, “How often in the last week did you feel this way?” The alternative answers are: 0 = rarely or none of the time (<1 day), 1 = some or a little of the time (1–2 days), 2 = a moderate amount of the time (3–4 days), or 3 = most of the time. Answers of “2” or “3” to either of these questions were categorized as frail by the exhaustion criterion (no = 0, yes = 1). (3) Slowness. My walking speed is 20% below baseline (adjusted for gender and height) (no = 0, yes = 1). (4) Weakness. Grip strength is 20% below baseline (adjusted for gender and body mass index) (no = 0, yes = 1). (5) Low activity was evaluated with the following question: How often do you engage in activities that require a low or moderate amount of energy, such as gardening, cleaning the car, or walking? (more than once a week = 1, once a week = 2, one to three times a month = 3, and hardly ever or never = 4) . A combined FFP score of 0 was considered a “non-frail” phenotype; a score of 1 or 2 was considered a “pre-frail” phenotype; and a score of 3 or more was considered a “frail” phenotype.
2.2.2. Frailty Assessment Tool of the Thai Ministry of Public Health
The Frailty Assessment Tool of the Thai Ministry of Public Health (FATMPH) is a modification of Fried’s Frailty Phenotype and is included in the Elderly Screening/Assessment Manual (2015) . The assessment tool has 5 criteria: four questions are self-reports and one is based on measurement by medical staff: (1) In the past year, has your weight decreased by more than 4.5 kg? (no = 0, yes = 1) (2) Do you feel tired all the time? (no = 0, yes = 1) (3) Are you unable to walk alone and need someone for support? (no = 0, yes = 1) (4) The participants walked in a straight line for a distance of 4.5 m. Time was measured from when they started walking (time < 7 s = 0, time ≥ 7 s or could not walk = 1). (5) The participant had an obvious weakness in their hands, arms, and legs (no = 0, yes = 1). A FATMPH score of 0 was considered a “non-frail” phenotype; a score of 1 or 2 was considered a “pre-frail” phenotype; and a score of 3 or more was considered a “frail” phenotype.
2.2.3. Frail Non-Disabled (FiND) Questionnaire
The Frail Non-Disabled (FiND) questionnaire is designed to differentiate between frailty and disability. FiND was used for community-dwelling older Thai adults by Boribun et al. . The content validity index (CVI) was 0.8 and Cronbach’s alpha was 0.89 . The FiND questionnaire consists of 5 questions: (A) Do you have any difficulty walking 400 m? (no or some difficulty = 0, much difficulty or unable = 1) (B) Do you have any difficulty climbing up a flight of stairs? (no or some difficulty = 0, much difficulty or unable = 1) (C) During the last year, have you involuntarily lost more than 4.5 kg? (no = 0, yes = 1) (D) How often in the last week did you feel that everything you did was an effort or that you could not get going? (2 times or less = 0, 3 or more times = 1) (E) What is your level of physical activity? (at least 2–4 h per week = 0, mainly sedentary = 1). A combined score of A + B + C + D + E = 0 was considered “non-frail”; A + B = 0 and C + D + E ≥ 1 was considered “frail”; and A + B ≥ 1 was considered “disabled”.
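The FiND decision rule above is simple enough to express directly; the following base-R sketch (ours, not the original study's code) encodes it, with A–E holding the five item scores.

find_class <- function(A, B, C, D, E) {
  # A + B >= 1 -> disabled; otherwise C + D + E >= 1 -> frail
  if (A + B >= 1) "disabled"
  else if (C + D + E >= 1) "frail"
  else "non-frail"
}
find_class(0, 0, 1, 0, 0)  # "frail"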
2.3. Data Collection
Data were collected using questionnaires and assessed using various tools. The general characteristics recorded included age, sex, religion, education, income, source of payment of medical expenses, history of family disease, present weight, weight one year ago, height, and body mass index. All participants were assessed using the Thai-language versions of the FATMPH, FFP, and FiND. The inter-rater reliability between researchers and assistants was 1.0.
2.4. Statistical Analysis
The data were analyzed using Stata 12.0 and are presented as frequency, percentage, mean, and standard deviation (SD). The frailty assessment tools were analyzed for their sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV); Cohen’s kappa was used to measure the reliability of these assessment tools.
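To make these metrics concrete, here is a small base-R sketch (our illustration; the helper names are ours). The 2 × 2 counts are back-calculated from the prevalences and predictive values reported in the Results and reproduce the published FATMPH figures, so they should be read as a reconstruction rather than the raw study data.

screen_metrics <- function(tp, fp, fn, tn) {
  c(sensitivity = tp / (tp + fn),
    specificity = tn / (tn + fp),
    ppv         = tp / (tp + fp),
    npv         = tn / (tn + fn))
}
cohens_kappa <- function(tab) {
  n  <- sum(tab)
  po <- sum(diag(tab)) / n                      # observed agreement
  pe <- sum(rowSums(tab) * colSums(tab)) / n^2  # chance agreement
  (po - pe) / (1 - pe)
}
# Reconstructed counts (21 frail by FFP, 44 frail by FATMPH, n = 251):
screen_metrics(tp = 12, fp = 32, fn = 9, tn = 198)  # 0.571, 0.861, 0.273, 0.957
cohens_kappa(matrix(c(12, 32, 9, 198), nrow = 2, byrow = TRUE))  # ~0.289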
2.5. Evaluation Consequence
All participants identified as frail by any of the assessment tools in the study were advised to undergo a comprehensive geriatric assessment. Appropriate interventions were then provided to these individuals.
3. Results
The demographic characteristics of the 251 older participants from the OPD are shown in . Most were female and ranged in age from 60 to 69. The majority of the participants were married or living with a partner, had less than a high school education, and were Buddhist. Half of the participants were government officials. Most participants had an income of more than 10,000 baht per month. Their major source of income was pensions, which provided an adequate income.
The health status of the participants is shown in . Several medical conditions were identified among the participants. The most prevalent was hypertension, followed (in declining order of incidence) by dyslipidemia, diabetes mellitus, hyperuricemia, glaucoma or cataracts, chronic kidney disease, benign prostatic hypertrophy, coronary artery disease, cerebrovascular disease, and malignancy, followed by others.
In this study, frailty status was evaluated using the frailty assessment tools FFP, FATMPH, and FiND. The frail and non-frail phenotypes were defined based on the combined results of all the assessment tools. The overall prevalence of frailty was 8.37% based on FFP, and most of the frail participants were female (90.47%). The frailty phenotype prevalence determined using FATMPH was 17.53% (female = 65.91%); using FiND, it was 3.98% (female = 80.00%) ( and ).
The sensitivity, specificity, positive predictive value, and negative predictive value of the FATMPH and FiND tools were analyzed and compared with the standard FFP tool. As shown in , FATMPH had a sensitivity of 57.14%, a specificity of 86.09%, a positive predictive value (PPV) of 27.27%, and a negative predictive value (NPV) of 95.65%. FiND had a sensitivity of 19.05%, a specificity of 97.39%, a PPV of 40.00%, and an NPV of 92.94%. The comparison of FATMPH and FiND with FFP found Cohen’s kappa statistics of 0.289 for FATMPH and 0.147 for FiND.
4. Discussion
Fried’s Frailty Phenotype (FFP) is a well-known and regularly utilized tool for identifying frailty in older individuals . In Thailand, FATMPH was developed as a frailty assessment tool based on FFP. Even though the Fried criteria were not initially intended to be used as a self-reported questionnaire, researchers now usually employ modified questionnaires based on this frailty phenotype . The Frail Non-Disabled (FiND) questionnaire, a self-administered instrument designed to differentiate frailty from disability, was developed as a screening tool . We focused on the comparison of both FATMPH and FiND with FFP, which is currently used to assess older patients at the OPD of the Family Medicine Department of the Maharaj Nakorn Chiang Mai Hospital, Faculty of Medicine.
Most of the participants had a chronic disease (92.43%), most frequently hypertension (65.75%). The prevalence of frailty in this study was 8.37% using FFP, which is lower than the prevalence of frailty among community-based elderly people (9.9%) . Differences in frailty prevalence are due at least in part to differences in the assessment tools used, as well as to the different geographical locations covered in this study. Frailty prevalence increases with age and is higher for females than for males . The relatively low prevalence of frailty in this study may be because most of the participants were in the younger elderly group (60–69 years, 65.34%).
A screening test is defined as a medical test or procedure performed on members (subjects) of a defined asymptomatic population or population subgroup to assess the likelihood of their having a particular disease or condition . A screening test has only two possible outcomes: positive, suggesting that the subject has the disease or condition; or negative, suggesting that the subject does not . In prior research, a Korean version of the FRAIL scale (K-FRAIL) was found to be consistent with the multidimensional frailty index and to be a concise tool for screening for frailty in a clinical setting in Korea . In Thailand, many frailty assessment tools have been established for use both for community-dwelling individuals and in hospitals . There have, however, been few studies in Thailand comparing and validating the frailty assessment tools used for older Thai adults in order to evaluate their diagnostic efficacy. A previous comparative study of the Thai version of the Simple Frailty Questionnaire (T-FRAIL) and the Thai Frailty Index (TFI) found that T-FRAIL was valid and reliable for frailty detection in elderly patients at a surgery out-patient clinic . Another study of community-dwelling elderly compared several screening tests, including the CFS, the simple FRAIL questionnaire, the PRISMA-7 questionnaire, the TUG, and the GFST, with Fried’s Frailty Phenotype method; it found the simple FRAIL questionnaire and the GFST to be the most appropriate tests for screening frailty due to their high sensitivity .
The present study is the first to compare the use of FATMPH and FiND with FFP in an OPD for older Thai adults. The comparison found that the sensitivity of FATMPH (57.14%) was higher than that of FiND (19.05%), but that the specificity of FATMPH (86.09%) was lower than that of FiND (97.39%).
FATMPH and FiND both had a lower sensitivity than the CFS (56%), the simple FRAIL questionnaire (88%), the PRISMA-7 questionnaire (76%), the TUG (72%), and the GFST (88%), as reported in the study by Sukkriang and Punsawad , as well as the modified T-FRAILs, including T-FRAIL M1 (83.3%) and T-FRAIL M2 (85.8%), as reported in the study by Sriwong . However, the categorization used by FiND (non-frail, frail, and disabled) differs from that of both FATMPH and FFP (non-frail, pre-frail, and frail), which could affect the sensitivity of the tests and might be a reason that FiND had the lowest sensitivity in the present study. FATMPH had a higher sensitivity than FiND because it was modified from FFP, but its sensitivity as a screening tool remains poor. In addition, FATMPH and FiND both had high specificity, similar to other tools used in previous studies . Most of the screening tools had a specificity higher than 85%: the CFS, at 98.41%, as found in a previous study ; and FiND, at 97.39%, as found in the present study. The sensitivity of both FATMPH and FiND was lower than 85%, suggesting that neither is an adequate screening tool , while the high specificities of both the CFS and FiND suggest they are appropriate for confirming the absence of the condition. FiND is a self-assessment questionnaire suitable for individuals in communities, as well as in primary care, whereas FFP is appropriate in primary care and acute care for both individuals in communities and in clinical settings, although the assessment time of FFP is longer than that of FiND . The final judgement of whether or not these methods are appropriate will depend on the context: if the score is used as part of a sequence of screening steps, sensitivity is likely to be more important than specificity, while if the score is used to guide treatment initiation, specificity is equally important .
The reliability of FATMPH and FiND was compared with FFP and evaluated using Cohen’s kappa statistic. The kappa values of FATMPH and FiND were 0.289 (95% CI = 0.132–0.445) and 0.147 (95% CI = 0.004–0.241), respectively. The corresponding levels of agreement were fair (0.21 ≤ κ ≤ 0.40) and slight (0.00 ≤ κ ≤ 0.20) , respectively. Additionally, in a research context, this measure depends on the prevalence of the condition (with a very low prevalence, κ will be very low, even with high agreement between the raters) . FATMPH’s kappa agreement level was higher than that of FiND because FATMPH was modified from FFP. Aguayo et al. , in a study of the agreement between 35 published frailty scores in the general population, found a very wide range of agreement (Cohen’s kappa = 0.10–0.83). The frailty phenotype properties were also impacted by modifications to the frailty phenotype criteria : the prevalence of frailty was 31.2% for modified self-reported walking, 33.6% for modified self-reported strength, and 31.4% for modified self-reported walking and strength , and the agreement with the primary phenotype was 0.651 for modified self-reported walking, 0.913 for modified self-reported strength, and 0.441 for modified self-reported walking and strength . FATMPH had a lower agreement (0.289) than that of the Modified Frailty Phenotype. We think that the physical inactivity criterion of FATMPH, i.e., “Can you walk by yourself or do you need someone to help you? (no = 0, yes = 1)”, should be re-evaluated, as it appears to be very similar to the walking speed criterion (4.5 m walk time; <7 s = 0, ≥7 s or could not walk = 1).
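The prevalence dependence of kappa noted above is easy to demonstrate with the cohens_kappa() helper sketched in Section 2.4; the two illustrative tables below have identical 90% raw agreement, yet very different kappa values once prevalence drops.

# Same observed agreement (90%), different prevalence:
cohens_kappa(matrix(c(45, 5, 5, 45), nrow = 2, byrow = TRUE))  # ~0.80 (prevalence 50%)
cohens_kappa(matrix(c(3, 8, 2, 87), nrow = 2, byrow = TRUE))   # ~0.33 (prevalence 5%)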
FFP includes two physical measurements (grip strength and walking speed), whereas FATMPH uses only walking speed and includes fewer detailed questions. Frailty scores show marked heterogeneity because they are based on different concepts of frailty, and research results based on different frailty scores cannot be compared or pooled . A limitation of our study is that it was not representative of all community-dwelling older Thais, because the participants were all older patients at the OPD of an academic hospital (Maharaj Nakorn Chiang Mai Hospital) and most were urban residents receiving regular government welfare payments. Further study of validated frailty assessment tools, such as multicenter studies that include other assessment tools, is necessary to ensure their suitability for the Thai population context.
5. Conclusions
Our academic hospital-based study using the Thai-language versions of the Frailty Assessment Tool of the Thai Ministry of Public Health (FATMPH) and the FiND questionnaire found that both show only fair to slight agreement with Fried’s Frailty Phenotype (FFP). Additionally, their predictive power is low and, thus, insufficient for frailty detection in a clinical setting. Further multicenter study of these and other assessment tools is needed to improve frailty screening in older Thai populations.
Soil Fungal Community Structure and Its Effect on CO2 Emissions
In recent years, studies on soil microorganisms in the YRD have mainly concerned the structure and diversity of the bacterial community in saline or oil-contaminated soils , the effects of environmental factors on the bacterial community , and the relationship between the bacterial community and halophytic vegetation succession ; however, research on changes in the structure of soil fungal communities under different salinity gradients in this area is almost non-existent. Microorganisms in the natural environment do not exist as independent individuals ; interactions among microbial species have a strong influence on community stability , and the importance of network interactions for ecosystem processes and functions may exceed that of species diversity. In this work, we used high-throughput sequencing technology to explore the structure of soil fungal communities along salinity gradients in the Yellow River Delta and then applied a Partial Least Squares Path Model (PLS-PM) to reveal whether the fungal communities influence CO2 emissions. We sought to provide a theoretical microbial perspective for future restoration efforts in wetland environments. The purposes of this study were (1) to investigate the effects of salinity on the fungal community structure and fungal community interactions and (2) to investigate whether fungal community diversity significantly affects CO2 emissions.
2.1. Study Site
The YRD, located on the south bank of Bohai Bay and the west bank of Laizhou Bay (118°44′14.1″ E–118°55′10.3″ E, 37°26′16.7″ N–37°32′41.4″ N), has a warm temperate, semi-humid continental monsoon climate with four distinct seasons. The annual average temperature is 12.9 °C, the annual average rainfall is 596 mm, and the annual evaporation is 1900–2400 mm. The main soil types in the YRD are fluvo-aquic soil and saline soil, and the soil parent material is Yellow River alluvium. The vegetation types are few, and the structure is simple. There are halophytes with different salt tolerance levels, such as Suaeda salsa (L.) Pall., Aeluropus sinensis (D.) Tzvel., Tamarix chinensis Lour., Imperata cylindrica (L.) Beauv., and Artemisia capillaris Thunb.
2.2. Soil Sampling
The changes in soil salinity and soil type in the Yellow River Delta were fully considered through several surveys. In October 2018, soil samples were collected in the Yellow River Delta at a distance of no less than 1 km between every two sampling points, as shown in . The 30 soil samples were divided into three levels according to the salinity gradient (10 sampling points per level): high-salinity, medium-salinity, and low-salinity. Considering the distance between sampling points within the same level, each level was divided into two groups: high-salinity (H1 and H2), medium-salinity (M1 and M2), and low-salinity (L1 and L2). In other words, a total of 30 soil samples were collected, with the sampling points within each group (5 sampling points) relatively close to each other and the sampling points between groups relatively far apart. The surface vegetation and cover were removed, and 0–20 cm of the soil was collected using a soil auger according to the diagonal five-point sampling method. After removing the gravel and roots, we placed the soil into two sealed bags (approximately 200 g each). One part was placed in an icebox, brought back to the laboratory within 24 h, and frozen at −80 °C until it was used for molecular biology research; the other part of the fresh sample was processed according to experimental requirements to determine the soil physicochemical properties and to perform laboratory incubation experiments.
2.3. Physicochemical Properties of the Soil Samples
Soil electrical conductivity (EC) was measured using an electrical conductivity meter (DDS-307A, China) with a soil–water suspension (soil/water ratio = 1:5). Because EC has long been regarded as a representative index of total soluble salt in soil, EC was used to characterize the soil salinity . The sampled soils were classified into three categories according to their salinity status: high-salinity soils (EC1:5 > 3 dS·m−1), medium-salinity soils (1.5 dS·m−1 < EC1:5 < 3 dS·m−1), and low-salinity soils (EC1:5 < 1.5 dS·m−1). Soil texture was measured using a laser particle size analyzer (Mastersizer 3000, Britain). The soil total nitrogen (TN) and organic matter (SOM) were determined using a macro element analyzer (Vario MACRO Cube, Germany). The alkali solution diffusion method was used to determine the soil available nitrogen (AN). The Olsen method was used to determine the soil available phosphorus (AP). Soil moisture content (MC) was determined by a drying-weighing method at 105 °C. The rate of soil CO2 emissions was evaluated using a laboratory incubation method. Fresh soil equivalent to 60 g of dry soil was weighed into a 250 mL culture bottle and pre-incubated at 25 °C for 7 days to activate the soil microorganisms.
Taking into account the frequent inundation of the coastal areas of the Yellow River Delta by seawater and the evaporation of soil moisture in the incubator, the water–soil ratio was adjusted to 1:1 with sterile water during the formal incubation to approach the original water content; this was followed by an unsealed incubation at 25 °C in the dark for a total of 14 days. During the culture period, the water in the bottle was replenished by a weighing method, and the CO2 concentration was measured on days 1, 2, 3, 4, 7, 10, 13, and 14. The bottle mouth was sealed with a flip stopper when collecting gas; the gas was pushed into the bottle three times with a 10 mL syringe to ensure that it was evenly mixed, a 5 mL gas sample was drawn, and its concentration was measured with an Agilent gas chromatograph. We then added 5 mL of air to the culture bottle to keep the pressure in the bottle consistent and, after 40 min, measured the concentration of a gas sample again; the concentration difference between the two measurements was used to calculate the CO2 emission rate on that day. The CO2 emission rate for each site was calculated by averaging the CO2 emission rates over the 8 measurement days. The CO2 emission rate was calculated as F = (ΔC × V × 44 × 273) / ((273 + T) × t × M × 22.4), where F is in μg·kg−1·d−1, ΔC is the difference between the two CO2 concentrations, V is the volume of the incubation flask, T is the incubation temperature, t is the incubation time, and M is the sample weight .
2.4. DNA Extraction, PCR Amplification, and High-Throughput Sequencing
The total DNA was extracted from 0.5 g of soil per sample using a Fast DNA Kit (MP Biomedicals, Santa Ana, CA, USA). The detailed steps were carried out according to the instructions of the kit. The primers 528F (GCGGTAATTCCAGCTCCAA) and 706R (AATCCRAGAATTTCACCTCT) were used to amplify the V4 region of the 18S rRNA genes. The PCR reaction system included 10 ng of genomic DNA, 5.0 μL of 10× PCR Buffer, 1 μL of each 50 μM primer, 0.5 μL of 10 μM dNTPs, and 0.5 μL of 5 U/μL Platinum Taq DNA polymerase; this was replenished to 50 μL with sterilized, double-distilled water. The PCR reaction conditions were as follows: pre-denaturation for 30 s (94 °C); 30 cycles of denaturation for 20 s (94 °C), annealing for 20 s (60 °C), and extension for 20 s (72 °C); a final extension for 5 min (72 °C); and preservation at 10 °C. The PCR products were purified and recovered with a GeneJET Kit (Thermo Scientific, Waltham, MA, USA) and subjected to high-throughput sequencing on a Thermo Fisher Ion S5 XL platform. Cutadapt software (v1.9.1) was used to remove the low-quality reads and to trim the barcode and primer sequences . The operational taxonomic units (OTUs) were clustered at a 97% identity threshold using the UPARSE algorithm (v7.0.1001) (Edgar, 2013). The UCHIME algorithm (v4.2.40) was used to detect and remove chimeras . The RDP classifier (v2.0) was used to compare the OTU sequences with the SILVA132 database, with 80% similarity, to classify the sequences at the phylum, class, order, family, genus, and species levels .
2.5. Statistical Analysis
In order to compare soil physicochemical properties and fungal community alpha diversity between different salinity gradient classes, we performed a one-way analysis of variance (ANOVA) using SPSS (v20.0) software .
In order to investigate Pearson’s correlation between soil salinity and the alpha diversity of fungal communities, we used SPSS software. In order to determine the influence of soil physical and chemical factors on the fungal community structure, we performed a redundancy analysis (RDA) using CANOCO 5.0 software. In order to compare the differences in the structure of soil fungal communities between the salinity gradient classes, we performed a UPGMA clustering analysis using R software . In order to analyze the contribution of the major fungal genera to the community differences, we performed a SIMPER analysis and an ANOSIM test using PAST (v1.0) software . Molecular ecological networks of soil fungi for the high, medium, and low salinity gradients were constructed using the online tools available on the MENA website ( http://ieg2.ou.Edu/mena/ (accessed on 13 January 2022)) . Gephi software (v0.9.2) was used to visualize the networks . Models were constructed using the “plspm” package in the R language, and goodness-of-fit statistics were used as a measure of the predictive power of the path models .
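For reference, the flux calculation described in Section 2.3 can be expressed as a one-line function; this is an illustrative sketch, not the authors' code, with argument names following the symbol definitions given with the formula.

co2_rate <- function(dC, V, T, t, M) {
  # dC: CO2 concentration difference between the two measurements
  # V: flask volume; T: incubation temperature (deg C)
  # t: incubation time; M: soil sample weight
  dC * V * 44 * 273 / ((273 + T) * t * M * 22.4)
}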
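Similarly, the alpha-diversity indices reported in Section 3.2 (Shannon and Chao1) have simple closed forms. The base-R sketch below is illustrative (we use the bias-corrected Chao1 variant) and takes a vector of per-OTU read counts.

shannon <- function(counts) {
  p <- counts[counts > 0] / sum(counts)
  -sum(p * log(p))
}
chao1 <- function(counts) {
  s  <- sum(counts > 0)   # observed OTUs
  f1 <- sum(counts == 1)  # singletons
  f2 <- sum(counts == 2)  # doubletons
  s + f1 * (f1 - 1) / (2 * (f2 + 1))  # bias-corrected estimator
}
otu_counts <- c(120, 40, 7, 3, 1, 1, 1, 2)  # toy example
shannon(otu_counts); chao1(otu_counts)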
3. Results
3.1. Physical and Chemical Properties of the Soil Samples
The physical and chemical property data of the soil samples are represented as the means ± SE, as shown in . According to the texture classification standard established by the United States Department of Agriculture (USDA), the soil at these six sites is silt loam. The soil salinity ranged from 0.28 to 4.65 dS·m−1, and according to the level of soil salinity at the sampling points, the soils were classified into a high- (H1 or H2), medium- (M1 or M2), or low-salinity (L1 or L2) gradient. TN, SOM, and AN were the highest at the M2 site and the lowest at the H1 site, while AP was the highest at the M2 site and the lowest at the L2 site. However, there were no significant differences in the soil physicochemical properties between the two groups of the same salinity class, suggesting that distance has little effect on the soil environment within the YRD. These results show that there were significant differences in soil physical and chemical factors along the salinity gradient, forming a certain ecological gradient. The highest soil SOM content was found in the medium-salinity soil, the second highest in the low-salinity soil, and the lowest in the high-salinity soil.
3.2. Fungal Community Alpha Diversity
The fungal community coverage at the six sites was greater than 96%, indicating that the sequencing depth can reasonably represent the samples ( ). The number of fungal OTUs increased as the salinity gradient decreased. The Shannon index of the fungal community at the H2 site was the largest, so the fungal diversity at the H2 site was the highest, followed by the H1, L1, L2, M2, and M1 sites. The abundances of the fungal communities under different salinity gradients, ordered from highest to lowest, were as follows: L2, L1, M2, M1, H2, and H1. Taken together, the number of OTUs and the abundance of the soil fungal communities increased as the soil salinity gradient decreased, and distance had no significant effect on alpha diversity within the same salinity class. In addition, a Pearson’s correlation test between soil salinity and fungal alpha diversity showed that the number of OTUs, Chao1 index, and ACE index had the highest correlation coefficients with soil salinity, i.e., −0.66, −0.61, and −0.60, respectively; this indicates that soil salinity was the dominant factor affecting the number of OTUs, Chao1 index, and ACE index of the fungal communities, whereas soil salinity had an insignificant effect on the Shannon index of the fungal communities.
3.3. Fungal Community Structure
A total of 192 fungal genera belonging to eight phyla were identified by high-throughput sequencing in the YRD. At the phylum level, the soil fungal communities mainly included Ascomycota, Basidiomycota, Mucoromycota, and Chytridiomycota ( a). Ascomycota was widely distributed across the sites, with a relative abundance of 54.81–74.65%, making it the dominant fungal phylum in the YRD. The relative abundance of Mucoromycota was 4.90–25.74%, making it the subdominant fungal phylum in the YRD. The relative abundance of the top 30 fungal genera is shown in b. Chaetomium was the dominant fungal genus at L1 and L2, with relative abundances of 19.95% and 25.56%, respectively. Alternaria (13.68%), Cephaliophora (16.97%), Fusarium (6.56%), and Alternaria (9.46%) were dominant at H1, H2, M1, and M2, respectively.
Taken together, the fungal community composition was similar within the same salinity class and varied more between different salinity classes, indicating that soil salinity has a greater effect on the fungal community structure than distance in the YRD.
3.4. Differences in the Structure of the Fungal Community
The ANOSIM test and UPGMA clustering based on Bray–Curtis distances showed that the soil fungal communities had significantly different distribution patterns under different salinity gradients (r = 0.504, p < 0.01, ). The UPGMA clustering analysis showed that the fungal communities under the same salinity gradient were grouped into one cluster, while the fungal communities under different salinity gradients were scattered. This indicates that soil salinity has a greater influence on soil fungal communities than geographic distance, and the fungal community structure differed considerably between salinity gradients in the YRD. A SIMPER analysis was used to identify the species that lead to the different distribution patterns of the fungal communities. The contribution rates of the major fungal genera to spatial dissimilarity are listed in . The difference in the fungal community structure between the H and M groups mainly came from Fusarium, Chaetomium, and Alternaria, with a total contribution rate of 31.39%; Chaetomium, Mortierella, and Malassezia contributed greatly to the difference in the fungal community structure between the H and L groups, with a total contribution rate of 63.85%; and the difference in the fungal community structure between the M and L groups was mainly due to Chaetomium, Mortierella, and Fusarium.
3.5. Effects of Environmental Factors on the Fungal Community
For further insight into the relationship between fungal communities and soil environmental factors under different salinity gradients, a redundancy analysis (RDA) was conducted. As shown in , RDA axes 1 and 2 explained 86.74% of the total variation. The correlation between fungal genera and environmental factors is expressed by the cosine of the angle between them, and the longer the arrow of an environmental factor, the greater its influence on the fungal community structure. The results of the RDA forward selection showed that EC, T, AP, AN, TN, and clay caused significant differences in the fungal community distribution patterns under different salinity gradients (F = 6.6, p < 0.01), and Monte Carlo permutation tests indicated that all six factors were significantly correlated with the soil fungal community structure (p < 0.05). The long arrows of the EC, AN, and AP factors show that they had a great influence on the fungal community distribution, and the explanation rate of EC was the highest, making it the dominant factor affecting the fungal community distribution patterns. TN, AN, and clay were positively correlated with most fungal genera, while EC, T, and AP were negatively correlated with most fungal genera; this indicates that TN, AN, and clay promoted the growth of most fungi, while EC, T, and AP inhibited the growth of most fungi.
3.6. Molecular Ecological Network of the Fungal Community
To investigate the interactions between the soil fungal OTUs under different salinity gradients and to determine the key species in the soil fungal communities, fungal OTUs that appeared in more than 50% of the samples were selected to construct molecular ecological networks ( ; ).
In these networks, each node signifies a fungal OTU, the node size represents the node degree, different node colors represent different modules, and the lines between nodes are colored based on the modules. In addition, 100 random networks with the same numbers of nodes and connections as the original networks were constructed by a random method, and their topological characteristics were calculated; the cohesion of the empirical networks was generally higher than that of the random networks, which indicates that the interactions between fungal OTUs in the molecular ecological networks were significant.
The decrease in salinity changed the network structure of the fungal communities and increased the complexity of the networks. According to and , the node quantity, edge quantity, and modularity coefficients of the networks increased as the salinity gradient decreased. In the high-salinity soil, there were 12 network modules with 2–14 nodes, and the modularity coefficient was 0.63 ( , a); in the medium-salinity soil, there were 10 network modules with 2–28 nodes, and the modularity coefficient was 0.73 ( , b); in the low-salinity soil, there were 19 network modules with 2–25 nodes, and the modularity coefficient was 0.76 ( , c). The average degree (Avg K) and average clustering coefficient (Avg CC) values of the medium-salinity network were the highest, while the average path distance (GD) was the shortest. Therefore, in the medium-salinity soil, the transmission of information, energy, and material among fungal OTUs was the most efficient and the connections were the closest. At the same time, the response speed of the microorganisms was the fastest, and the microbial community was more prone to change in the medium-salinity soil. In this study, OTU_107 (Ascomycota; Preussia), OTU_52 (Ascomycota; Talaromyces), and OTU_515 and OTU_778 (Ascomycota; Aspergillus and Scopulariopsis) had the highest numbers of connections and were the core nodes of the molecular ecological networks of the high-, medium-, and low-salinity soil fungi, respectively. The relative abundance of Ascomycota was the highest in the soil fungal community in the YRD, indicating that Ascomycota occupies an important position in the saline soil environment and plays a key role in maintaining the stability of the fungal community.
3.7. Impact of CO2 Emissions and Soil Fungal Community
To investigate the effect of fungi on CO2 emissions, partial least squares path modeling (PLS-PM) was performed relating the CO2 emission rate of the soils in the study area to the soil fungal and soil environmental factors ( a). CO2 emissions increased significantly with decreasing soil salinity, and the effect of geographical distance on the soil CO2 emission rate was not significant. The PLS-PM goodness-of-fit was 0.6223, which indicates that the PLS-PM was reasonably constructed and statistically significant ( b). The results showed that fungal diversity had an effect on the CO2 flux (estimate: 1.08, p < 0.05). Salinity was significantly negatively correlated with fungal diversity (estimate: −0.58, p < 0.05), while AP, AN, TN, and SOM were significantly positively correlated with fungal diversity (estimates: 0.41, 1.55, 0.58, and 0.40, respectively; all p < 0.05). This confirms that soil environmental factors indirectly affect CO2 emissions by influencing fungal communities and that fungal communities significantly affect CO2 emissions.
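For readers who wish to reproduce this kind of model, the sketch below shows how such a PLS-PM could be set up with the "plspm" R package named in Section 2.5. The data frame, column names, block composition, and inner-model structure here are our illustrative assumptions, not the authors' exact specification.

library(plspm)
# Assumed inner model: salinity and nutrients -> fungal diversity -> CO2 flux
path_mat <- rbind(
  Salinity  = c(0, 0, 0, 0),
  Nutrients = c(0, 0, 0, 0),
  Fungi     = c(1, 1, 0, 0),
  CO2flux   = c(0, 0, 1, 0))
colnames(path_mat) <- rownames(path_mat)
# Hypothetical column names in a data frame 'soil_data':
blocks <- list(Salinity  = "EC",
               Nutrients = c("AP", "AN", "TN", "SOM"),
               Fungi     = c("otus", "shannon", "chao1"),
               CO2flux   = "co2_rate")
fit <- plspm(soil_data, path_mat, blocks, modes = rep("A", 4))
fit$gof         # goodness of fit (reported as 0.6223 in the text)
fit$path_coefs  # inner path coefficients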
The physical and chemical property data of soil samples are represented as the means ± SE, as shown in . According to the texture classification standard established by the United States Department of Agriculture (USDA), the soil in these six sites belonged to silt loam. The soil salinity ranged from 0.28 to 4.65 dS·m −1 , and according to the level of soil salinity at the sampling points, the soils were classified with a high- (H1 or H2), medium- (M1 or M2), or low-salinity (L1 or L2) gradient. TN, SOM, and AN were the highest at the M2 site and the lowest at the H1 site, while AP was the highest at the M2 site and the lowest at the L2 site. However, there were no significant differences in the soil physicochemical properties between the two groups of the same salinity class, suggesting that distance has little effect on the soil environment within the YRD. These results show that there were significant differences in soil physical and chemical factors along the salinity gradient, forming a certain ecological gradient. The highest soil SOM content was found in the medium-salinity soil, the second highest was found in the low-salinity soil, and the lowest was found in the high-salinity soil.
The fungal community’s coverage in the six sites was greater than 96%, indicating that the sequencing depth can reasonably represent the situation of the samples ( ). The number of fungal OTU increased as the salinity gradient decreased. The Shannon index of the fungal community at the H2 site was the largest, so the fungal diversity in the H2 site was the highest, followed by the H1, L1, L2, M2, and M1 sites. Abundances of fungal communities under different salinity gradients are ordered from highest to lowest as follows: L2, L1, M2, M1, H2, and H1. Taken together, the number of OTUs and abundance of the soil fungal communities increased as the soil salinity gradient decreased, and there was no significant difference in the distance on the alpha diversity in the same salinity class. In addition, a Pearson’s correlation test between soil salinity and fungal α-diversity showed that the number of OTUs, Chao1 index, and ACE index had the highest correlation coefficient with soil salinity, i.e., −0.66, 0.61, and −0.60, respectively; this indicates that soil salinity was the dominant factor affecting the number of OTUs, Chao1 index, and ACE index of the fungal communities, whereas the soil salinity had an insignificant effect on the Shannon index of fungal communities.
A total of 192 fungal genera belonging to eight phyla were identified by high-throughput sequencing in the YRD. The soil fungal communities mainly included Ascomycota, Basidiomycota, Mucoromycota, and Chytridiomycota at the phylum level ( a). Ascomycota was widely distributed in various sites, and their relative abundance was 54.81–74.65%, which was the dominant fungal phylum in the YRD. The Mucoromycota relative abundance was 4.90–25.74%, which was the subdominant fungal phylum in the YRD. The relative abundance of the top 30 fungal genera is shown in b. Chaetomium was the dominant fungal genus at L1 and L2, with a relative abundance of 19.95% and 25.56%, respectively. Alternaria (13.68%), Cephaliophora (16.97%), Fusarium (6.56%), and Alternaria (9.46%) were dominant at H1, H2, M1, and M2, respectively. Taken together, the fungal community composition was similar under the same salinity class and varied more under different salinity classes, indicating that soil salinity has a greater effect on the fungal community structure than distance in the YRD.
The analyses of the ANOSIM test and UPGMA clustering based on the Bray–Curtis distance value showed that the soil fungal communities had significantly different distribution patterns under different salinity gradients (r = 0.504, p < 0.01, ). The UPGMA clustering analysis showed that the fungal communities under the same salinity gradient were grouped into one cluster, while the fungal communities under different salinity gradients were scattered. This indicates that soil salinity has a greater influence on soil fungal communities than geographic distance, and fungal community structure was quite different under different salinity gradients in the YRD. A SIMPER analysis was used to further find the different species that lead to different distribution patterns of fungal communities. listed the contribution rates of the major fungal genera to spatial dissimilarity. The difference in the fungal community structure between the H and M groups mainly came from Fusarium , Chaetomium , and Alternaria , and the total contribution rate was 31.39%; Chaetomium , Mortierella , and Malassezia greatly contributed to the difference in the fungal community structure between the H and L groups, with a total contribution rate of 63.85%; the difference in the fungal community structure between the M and L groups was mainly due to Chaetomium , Mortierella , and Fusarium .
For further insight into the relationship between fungal communities and soil environmental factors under different salinity gradients, a redundancy analysis (RDA) was conducted. As shown in , RDA axes 1 and 2 explained 86.74% of the total variation. The correlation between fungal genera and environmental factors was expressed by the cosine of the angle between them, and the longer the arrow of environmental factors, the greater the influence on the fungal community structure. The results of the RDA forward selection showed that EC, T, AP, AN, TN, and clay caused significant differences in the fungal community distribution patterns under different salinity gradients (F = 6.6, p < 0.01), and Monte Carlo permutations tests indicated that all six factors were significantly correlated with the soil fungal community structure ( p < 0.05). The long arrows of EC, AN, and AP factors show that they had a great influence on the fungal community distribution, and the explanation rate of EC was the highest, which was the dominant factor affecting the fungal community distribution patterns. TN, AN, and clay were positively correlated with most fungal genera, while EC, T, and AP were negatively correlated with most fungal genera; this indicates that TN, AN, and clay played a promotive role in the growth of most fungi, while EC, T, and AP played an inhibitory role in the growth of most fungi.
To investigate the interactions between the soil fungal OTUs under different salinity gradients and to determine the key species in the soil fungal communities, fungal OTUs that appeared in more than 50% of the samples were selected to construct molecular ecological networks ( ; ). Among them, each node signified a fungal OTU, the node size represented a node degree, different colors of the nodes represented different modules, and lines between nodes were colored based on the modules. In addition, 100 random networks with the same number of nodes and connections as the original networks were constructed by the random method, and their topological characteristics were calculated; in contrast, the cohesion of empirical networks was generally higher than that of random networks, which indicated that the interactions between fungal OTUs in the molecular ecological networks were significant. The decrease in salinity changed the network structure of fungal communities and increased the complexity of the networks. According to and , the node quantity, edge quantity, and modularity coefficients of the networks increased as the salinity gradient decreased. In high-salinity soil, there were 12 network modules with 2–14 nodes, and the modularity coefficient was 0.63 ( , a); in medium-salinity soil, there were 10 network modules with 2–28 nodes, and the modularity coefficient was 0.73 ( , b); in low-salinity soil, there were 19 network modules with 2–25 nodes, and the modularity coefficient was 0.76 ( , c). The average degree (Avg K) and average clustering coefficient (Avg CC) values of the medium-salinity network were the highest, while the average path distance (GD) was the shortest. Therefore, in medium-salinity soil, the information, energy, and material among fungal OTUs had the highest transmission efficiency and closest connection. At the same time, the response speed of microorganisms was the fastest, and the microbial community was more prone to change in medium-salinity soil. In this study, OTU_107 (Ascomycota; Prussia ), OTU_52 (Ascomycota; Talaromyces ), and OTU_515,778 (Ascomycota; Aspergillus , Ascomycota; Scopulariopsis ) from Ascomycota had the highest number of connections, which were the core nodes of the molecular ecological network of high-, medium-, and low-salinity soil fungi, respectively. The relative abundance of Ascomycota was the highest in the soil fungal community in the YRD, indicating that Ascomycota occupied an important position in the saline soil environment and played a key role in maintaining the stability of the fungal community.
CO 2 Emissions and Soil Fungal Community
To investigate the effect of fungi on CO 2 emissions, a partial least squares regression analysis of the CO 2 emission rate of the soils in the study area was performed with soil fungal diversity and soil environmental factors ( a). CO 2 emissions significantly increased with decreasing soil salinity, and the effect of geographical distance on the soil CO 2 emission rate was not significant. The PLS-PM goodness-of-fit was 0.6223, indicating that the PLS-PM was reasonably constructed and statistically significant ( b). The results showed that fungal diversity had an effect on the CO 2 flux (estimate: 1.08, p < 0.05). Salinity was significantly negatively correlated with fungal diversity (estimate: −0.58, p < 0.05), while AP, AN, TN, and SOM were significantly positively correlated with fungal diversity (estimates: 0.41, 1.55, 0.58, and 0.40, respectively; all p < 0.05). This confirms that fungal communities significantly affect CO 2 emissions and that soil environmental factors affect CO 2 emissions indirectly by influencing the fungal communities.
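PLS-PM is a latent-variable path model that is usually fitted with dedicated software (for example, the R package plspm), so the sketch below is only a simplified stand-in: an ordinary partial least squares regression of a CO2 flux on fungal diversity and soil factors, with synthetic data and illustrative coefficients.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)

# Hypothetical predictors: fungal diversity plus soil factors (EC, AP, AN, TN, SOM)
X = rng.normal(size=(30, 6))
co2 = 1.1 * X[:, 0] - 0.6 * X[:, 1] + rng.normal(scale=0.5, size=30)  # CO2 flux

pls = PLSRegression(n_components=2)
pls.fit(X, co2)
print("R^2 =", pls.score(X, co2))           # crude analogue of a goodness-of-fit
print("coefficients:", pls.coef_.ravel())   # signs indicate direction of effect
```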
4.1. Differences in the Structures of Fungal Community under Different Salinity Gradients
Ascomycota was the predominant phylum with the highest abundance in the present study, which is similar to the results obtained by Wang et al. (2020), who used molecular biology methods to study fungal samples in a saline environment . The vast majority of Ascomycota fungi are saprophytes, which can decompose refractory organic substances such as lignin and keratin and play an important role in nutrient cycling . Mucoromycota had the highest abundance in low-salinity soil, whereas Cryptomycota had the highest abundance in high-salinity soil, suggesting that Cryptomycota is more salt-tolerant than Mucoromycota. With the decrease in soil salinity, plant growth and species , plant rhizosphere exudates , and other factors could cause changes in microbial composition and structure, which may explain the decreased distribution of Chaetomium in high-salinity soil. Salinity had a significant effect on microbial diversity . The diversity analysis of the fungal communities showed that the number of OTUs and the abundance of the soil fungal communities increased as the soil salinity gradient decreased. Yang et al. highlighted in their study of soil fungi in the YRD that significantly lower values of the Chao1 richness index were observed in extremely saline soil , which was also confirmed in the present study. Pearson's correlation test showed that the number of OTUs, the Chao1 index, and the ACE index were significantly negatively correlated with soil salinity. This could be attributed to the increase in the extracellular osmolarity of fungi caused by the accumulation of salt in the soil: fungi that were not adapted to osmotic stress were inhibited or even died , thus reducing the number of fungal OTUs and the Chao1 and ACE indices. Consistent with the findings of Yang et al., salinity altered the fungal community structure . The UPGMA cluster analysis showed obvious similarities in the structures of fungal communities within the same salinity gradient but great differences between salinity gradients, likely because soils under the same salinity gradient provide similar environments, and therefore exert similar effects on fungal communities, whereas soils under different salinity gradients do not. The RDA analysis indicated that EC, T, AP, AN, TN, and clay had a significant effect on the fungal community structure. EC had the greatest influence and was the main factor leading to the differences in the distribution patterns of fungal communities under different salinity gradients. Chowdhury et al. (2011) found that soil salinity affected the composition of soil microbial communities through osmotic potential and that fungi were more sensitive to salinity than bacteria . Rajaniemi and Allison (2009) demonstrated that the effect of soil salinity on the composition of soil microbial communities is greater than that of soil C and N, which is consistent with the conclusion of this paper . The SIMPER analysis showed that Chaetomium, Mortierella, and Fusarium were the fungal groups with the highest contribution to the differences in community structure, with average relative abundances of 5.32%, 2.31%, and 0.89%, respectively. Among these, Mortierella, which was significantly correlated with EC, T, and AN (p < 0.05), can degrade toxic organic compounds in soil, prevent soil degradation, and improve soil health . Fusarium was significantly correlated with EC, TN, AN, and clay (p < 0.01). Chaetomium, part of the Ascomycota phylum, was significantly correlated with EC, T, AN, TN, and AP (p < 0.05); it can produce large quantities of cellulolytic enzymes and plays an important role in the carbon cycle of natural ecosystems and in soil improvement.
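For concreteness, the sketch below shows how the Chao1 richness index discussed above can be computed from per-OTU read counts and correlated with salinity using Pearson's test. The counts and EC values are simulated so that richness falls as salinity rises, mimicking the reported pattern rather than reproducing the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

def chao1(counts):
    """Bias-corrected Chao1 richness from a vector of per-OTU read counts."""
    counts = np.asarray(counts)
    s_obs = np.sum(counts > 0)     # observed OTUs
    f1 = np.sum(counts == 1)       # singletons
    f2 = np.sum(counts == 2)       # doubletons
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

rng = np.random.default_rng(3)
salinity = np.linspace(1, 10, 15)   # hypothetical EC values for 15 samples
richness = np.array([chao1(rng.poisson(10 / s, 200)) for s in salinity])

r, p = pearsonr(salinity, richness)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # a negative r mirrors the reported trend
```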
4.2. Molecular Ecological Networks of Fungal Community under Different Salinity Gradients
In the natural environment, microorganisms often form complex network structures through various interactions rather than acting as independent individuals . Positive correlations between microorganisms may reflect positive ecological interactions, such as commensalism or mutualism , whereas negative correlations may be attributed to competition or amensalism . Zheng et al. found that all six networks had positive association percentages above 98% in their study of soil microbial responses to salt stress in Bohai Bay, China, which is similar to the results of the present study . In the three fungal networks with different salinities in this study, the percentages of positive connections among fungal OTUs were all above 90%, which could be because the high-salinity habitat forced the fungi to strengthen their cooperation in response to salt stress, or because fungi in the high-salinity environment developed mutualistic relationships during long-term co-evolution. A module is a closely connected area in the network, which is usually interpreted as a niche . The low-salinity network had the largest number of modules (19) and the highest modularity (0.76); therefore, the degree of niche differentiation of microorganisms in the low-salinity soil was the highest, and the community structure was the most complex. The nodes with the highest number of connections in the network were identified as the core nodes . The absence of core nodes may cause module and network decomposition , so they play an important role in maintaining the stability of the microbial community. Core nodes are usually interpreted as key species . OTU_107, OTU_52, OTU_515, and OTU_778 from Ascomycota had the most connections and were identified as the core nodes of the fungal molecular ecological networks. Ascomycota also had the highest relative abundance in the fungal community in the YRD, which indicates that Ascomycota occupies an important position in the saline soil environment and plays a key role in maintaining the stability of the fungal community.
4.3. Impact of the Soil Fungal Community on CO 2 Emissions
Soil properties influence C cycling by altering wetland microbial diversity, an important but previously underestimated indirect pathway . Soil CO 2 emission is an important indicator of the participation of soil microorganisms in the carbon cycle and the conversion of organic matter . Soil fungi not only release CO 2 during the metabolic decomposition of organic matter but also participate in carbon sequestration processes that reduce CO 2 emissions . It was found that increased soil salinity indirectly reduces CO 2 emissions by reducing soil fungal diversity, mainly because increased salinity has a strong negative effect on fungal community activity.
For example, elevated salinity in the soil increases the extracellular osmotic pressure on fungi, which inhibits their activity or even kills them, ultimately leading to a decrease in fungal diversity . Increases in soil organic matter and TN increase soil CO 2 emissions because soil with higher organic matter and TN content tends to have higher soil C and N content, resulting in strong soil respiration and high CO 2 emissions . Moreover, the decomposition of organic matter by saprophytic fungi releases CO 2 , and a study by Suvendu et al. found a significant positive correlation between the saprophytic fungus Mortierella and CO 2 emissions . In the present study, Mortierella was one of the genera that contributed most to the differences in soil fungal community structure at different salinities and was significantly correlated with EC. This suggests that salinity can influence CO 2 emissions by affecting fungal communities. Therefore, it can be inferred that CO 2 emissions from the Yellow River Delta are closely related to the soil fungal communities, while soil environmental factors mainly affect soil CO 2 emissions indirectly by influencing the fungal communities.
Our study found that soil fungal abundance increased as soil salinity decreased. EC had the greatest and most significant impact on the fungal community structure and was the dominant factor leading to the differences in the distribution patterns of fungal communities under different salinity gradients. Chaetomium was the dominant fungal genus in low-salinity soil, while Aspergillus was the dominant genus in high- and medium-salinity soil. The SIMPER analysis showed that Chaetomium, Fusarium, Mortierella, Alternaria, and Malassezia were the dominant fungal groups driving the differences in the structures of fungal communities under different salinity gradients. In the molecular ecological networks, the decrease in salinity changed the network structure of fungal communities and increased network complexity. Moreover, fungal community diversity affects CO 2 emissions, soil environmental factors also affect CO 2 emissions by influencing fungal communities, and increased soil salinity decreases soil CO 2 emissions.
The Impact of Health Education on the Quality of Life of Patients Hospitalized in Forensic Psychiatry Wards
The function of health educators among this specific group of patients is often taken on by nurses who, in addition to standard nursing procedures, conduct psychiatric rehabilitation activities closely related to, among other things, the health education of patients. The nurse of the forensic psychiatry ward is the person closest to the patient and is therefore an invaluable source of information about the patient's changing physical and mental condition. The knowledge about the patient obtained by nurses is a reliable foundation for both treatment and rehabilitation, because patients of forensic psychiatry wards stay there long after their mental illness has stopped being the leading problem. The health education program for mentally ill offenders, developed by the first author, was the starting point for examining the impact of educational programs on the quality of life of patients in long-term isolation in a forensic psychiatry ward. So far, no such studies have been conducted. The obtained results may become a premise for standardizing the work of nurses and developing a model of patient care in the forensic psychiatry ward, and may also be used to develop therapeutic and rehabilitation programs for patients with mental disorders in which education is an important part of the rehabilitation process.
The study was conducted at the State Hospital for Mental and Nervous Diseases in Rybnik, Poland, in five forensic psychiatry units. The study group consisted of 67 men, aged 22–73, with a diagnosis of schizophrenia. The study lasted 6 months, from December 2019 to May 2020, during which patients gained knowledge and social competences through lectures in the field of broadly understood health education. The reference group consisted of 48 patients interned in a forensic psychiatry ward for whom no health education activities were conducted. The statistical analysis indicated that the study and reference groups did not differ in a statistically significant way ( ). The health education program was structured, individualized, and adapted to the educational needs of patients hospitalized in forensic psychiatry wards. The educational process in which the patients participated was intended to increase their knowledge about mental illness, including its causes, symptoms, dynamics, and treatment options, and to develop their social skills. The effect of health education was assessed with a knowledge test performed twice, before and after the educational cycle. The knowledge test carried out before the series of educational lectures assessed the initial level of patients' knowledge of the topics covered in the lecture series. The health education program comprised 40 topics related to social life, mental health, healthy lifestyle, and the functioning of the patient in the forensic psychiatry ward. Patients participating in the study attended group educational lectures twice a week for 6 months. After the completion of the health education cycle, the same knowledge test was administered again.
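To make the statistical design concrete, the sketch below reproduces the two comparisons used throughout the Results with scipy: a Wilcoxon signed-rank test for the pre/post change within a group and a Mann-Whitney U test between groups. The scores are simulated; the group sizes match those reported, but the values themselves are not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu

rng = np.random.default_rng(4)

# Hypothetical knowledge-test scores (0-40) before and after the education cycle
study_pre = rng.integers(19, 41, size=61)
study_post = np.clip(study_pre + rng.integers(-2, 6, size=61), 0, 40)
ref_pre = rng.integers(19, 41, size=40)
ref_post = np.clip(ref_pre + rng.integers(-3, 4, size=40), 0, 40)

# Within-group change: Wilcoxon signed-rank test on paired measurements
stat, p_within = wilcoxon(study_pre, study_post)
print(f"study group, pre vs. post: p = {p_within:.3f}")

# Between-group comparison at one time point: Mann-Whitney U test
u, p_between = mannwhitneyu(study_post, ref_post)
print(f"study vs. reference, 2nd assessment: p = {p_between:.3f}")
```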
A total of 115 patients of forensic psychiatric wards, diagnosed with schizophrenia, participated in the study; data were obtained for 101 of them, 61 patients in the study group and 40 patients in the reference group. Out of the initial 67 patients in the study group, 61 completed the six-month health education cycle after the first knowledge-test assessment. Of the 48 patients in the reference group at the first measurement, 40 had a second measurement after 6 months. A total of 14 patients from both groups did not complete the study, due to discharge from the hospital, refusal to complete the research questionnaires, or transfer to a non-forensic psychiatry ward. 3.1. WHO Quality of Life (WHOQOL-BREF) Scale The distribution of patients' answers to the question on the overall assessment of their quality of life is shown in . In the first measurement, the quality of life was assessed negatively by 7.5% of the study group and 6.3% of the reference group (answer "very bad" or "bad") and positively by 58.3% of the study group and 48% of the reference group (answer "good" or "very good"). In the study group, 34.3% of patients, and in the reference group, 45.8%, did not specify their assessment, marking the answer "neither good nor bad". In the second assessment, the percentages in the study group increased for both the positive assessment, by 2.4% (to 60.7% of the group), and the negative assessment, by 4% (to 11.5% of the group), while the number of people who could not define their quality of life decreased to 27.9% of the group. In the reference group, in the second measurement, the negative assessment increased by 6.2% (to 12.5% of the group) and the unspecified assessment by 4.2% (to 50% of the group), while the positive assessment decreased by 10.5% (to 37.5% of the group). The Wilcoxon signed-rank test within each group did not show statistically significant individual changes in the assessment of the quality of life, and there was no statistically significant difference in the distribution of individual changes between the study and reference groups. Detailed results are presented in . Descriptive statistics of the individual domains of the WHOQOL-BREF scale in the study and control groups are presented in for the first measurement and in for the second measurement. For none of the analyzed groups of results was a statistically significant deviation from the theoretical normal distribution found. Statistical analysis of the mean differences of individual domains between the study and control groups, performed with Student's t-test for unrelated samples, showed statistical significance (p = 0.001101) only for the 3rd domain (social status), with higher values of this domain observed in the study group. The descriptive statistics of changes in domain values between the first and second measurements within each group, together with the results of the tests of the significance of these changes, are presented in .
In the second measurement, an increase of 3.7 points in the mean value of domain 1 and a decrease of 5.5 points in the mean value of domain 3 were observed in the study group. None of the analyses yielded a significant result at the significance level of p = 0.05. The significance levels obtained in the study group for domain 1 (p = 0.09) and domain 3 (p = 0.06) indicate a statistical trend related to the health education cycle: an increase in patients' assessment of their somatic condition (domain 1) and a decrease in their assessment of social status (domain 3). To determine whether the domain values were correlated with the qualitative factors given in the characteristics of the groups, analysis of covariance (ANCOVA) or analysis of variance (ANOVA) was performed. The analyses included the results of men who had both measurements. The qualitative factors analyzed are shown in , together with the significance levels of the individual effects contributing to the dependent variable (WHOQOL-BREF domain value), for the analyzed pairs of variables (ANCOVA) or for the categorical variable alone (ANOVA). For the remaining domains, the relationship with the quantitative variable (age) in the second measurement was not analyzed, due to the lack of correlation shown in the other analyses. In the study group, all ANCOVA analyses yielded a statistically significant result for the first measurement, indicating a relationship between the values of the analyzed domains and the patients' current age. The ANCOVA analysis in the study group showed that the form of patients' professional activity had a significant impact on the values of domain 1 (somatic condition) and domain 2 (psychological condition) in both the first and second assessments. Patients who were on a disability pension assessed their somatic condition (domain 1) and psychological condition (domain 2) the worst in comparison to the other respondents. The ANCOVA analysis also showed, in the first assessment, that the mean domain 1 values were related to the severity of the clinical condition expressed on the CGI-S scale; a significant impact of the level of mental disturbance on domain 1 values was found only before the start of the training cycle, and at the end of the cycle such statistical significance was no longer demonstrated. In the ANCOVA analysis, a significant result was also obtained for the impact of the number of correct answers in the knowledge test on the value of domain 1 (somatic condition) in the first assessment, with lower average domain 1 values for fewer correct answers before the start of the education cycle (p = 0.02). These results indicate that patients with a lower level of medical knowledge, expressed in the number of correct answers, assess the quality of life related to their somatic condition worse, and that as knowledge in the field of health education increases, self-assessment of quality of life in the somatic domain increases. For the results of the second assessment, after the end of the six-month health education cycle, the mean domain values did not differ significantly (p = 0.89).
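The ANCOVA described above can be reproduced with statsmodels, as in the minimal sketch below: an ordinary least squares model with a categorical factor (professional activity) and a quantitative covariate (age), followed by a type II ANOVA table. The data frame is simulated and the factor levels are illustrative, not the study's coding.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(5)
n = 61

# Hypothetical data: domain score, age (covariate), professional activity (factor)
df = pd.DataFrame({
    "domain1": rng.normal(55, 12, n),
    "age": rng.integers(22, 74, n),
    "activity": rng.choice(["employed", "pension", "none"], n),
})

# ANCOVA: categorical factor plus quantitative covariate
model = smf.ols("domain1 ~ C(activity) + age", data=df).fit()
print(anova_lm(model, typ=2))   # p-values for the factor and for the covariate
```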
The interdependence of changes in the number of correct answers to the knowledge-test questions with changes in the values of the individual WHOQOL-BREF domains is shown in , for all patients in the study group who underwent both measurements and for those who answered fewer than 34 knowledge-test questions correctly in the first measurement. An increase in the number of correct answers after the completion of the health education cycle was found in 39 men (63.9% of the group); thus, an expansion of knowledge in the field of health education, identified with the number of correct answers, occurred statistically significantly more often than in 50% of the group (p = 0.01), where 50% corresponds to the rate expected by chance. Of these 39 patients, 20 (32.8% of the group) also had an increase in domain 1, 19 (31.3% of the group) in domain 2, 12 (19.7% of the group) in domain 3, and 16 (26.2% of the group) in domain 4. In the group of 30 patients with fewer than 34 correct answers in the first assessment, an increase in the number of correct answers in the second assessment was found in 19 men, which is statistically significantly greater than 50% of the group (p = 0.05). Of these 19 patients, 11 (36.7% of the group) also had an increase in domains 1 and 2, 6 (20.0% of the group) in domain 3, and 8 (26.7% of the group) in domain 4. It also includes the calculated odds ratios (ORs, with 95% confidence intervals) of an increase in the domain value with an increase in the number of correct answers, compared to patients in whom the number of correct answers did not increase. For the entire study group, the estimated OR values ranged from 1.00 for domain 4 to 2.53 for domain 2; OR values above 1.00 indicate a greater chance of an increase in the domain value with an increase in the number of correct answers. None of these values was statistically significant, due to the wide confidence intervals. The greatest chance of domain growth associated with an increase in the number of correct answers was found for domain 2 (psychological state), where the chance is 2.53 times greater than with no increase in the number of correct answers; although not statistically significant, the calculated significance level of p = 0.09 may indicate a statistical tendency. For domain 3, the odds ratio was OR = 1.51, and for domain 1, OR = 1.26; these values indicate that the chance of an increase in self-assessed quality of life in these domains is greater with an improvement in health education knowledge than without it. Among patients who answered fewer than 34 questions correctly in the first assessment, a statistically significant result was obtained for domain 2 (OR = 6.19): the chance of an increase in self-assessed mental health among patients with an initially low level of medical knowledge is, after the educational cycle, over 6 times greater with an improvement in knowledge than without it. These results, based on retrospective data, allow us to conclude that conducting a series of lectures in the field of health education probably increases patients' level of medical knowledge, which may change their self-assessment of health, especially in the psychological sphere (domain 2).
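The odds ratios and their 95% confidence intervals reported above can be computed from a 2x2 table with the standard Wald method, as sketched below; the cell counts are hypothetical placeholders, not the study's table.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical 2x2 table: rows = knowledge improved (yes/no),
# columns = domain score increased (yes/no)
a, b = 19, 11   # knowledge improved: domain up / domain not up
c, d = 4, 8     # knowledge not improved: domain up / domain not up

odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR), Wald method
z = norm.ppf(0.975)
lo, hi = np.exp(np.log(odds_ratio) + np.array([-z, z]) * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```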
3.2. Health Education Knowledge Test The general knowledge of the men in the field of health education was assessed as the total number of correct answers to all questions of the test. The descriptive statistics of the number of correct answers in the two assessments, in the study and reference groups, are presented in . In the study group, the number of correct answers ranged from 19 to 40 in the first assessment and from 17 to 40 in the second assessment; the average number of correct answers was 32.9 in the first assessment and 34.3 in the second, with standard deviations of 4.0 and 4.6, respectively. In the reference group, the range of the number of correct answers in the first measurement was the same as in the study group, and in the second measurement it ranged from 22 to 39; the average results were 31.2 and 30.5, respectively, with standard deviations of 5.1 and 4.3. Percentile values in the individual groups of results are presented in . The distribution of the number of correct answers deviated from the theoretical normal distribution in three cases (Kolmogorov–Smirnov test). The Mann-Whitney U test did not show a statistically significant difference between the groups in the first assessment, while for the second assessment statistical significance was obtained. In the Wilcoxon test comparing the results of the two assessments, a statistically significant difference was shown only for the study group. These results indicate that, after the health education cycle, the medical knowledge of patients in the study group significantly improved. 3.3. Summary of Results The conducted study, the main purpose of which was to determine the impact of health education on the quality of life of patients of forensic psychiatry wards, indicates that the educational activities carried out on the basis of the health education program did not have a significant impact on the patients' overall quality of life. However, a consistent, though non-significant, upward trend in quality-of-life scores was observed in the somatic domain after the health education cycle. Patients' age had a significant impact on their assessment of their quality of life in the initial phase of the study, i.e., before the health education cycle. In the field of health education, the program implemented among patients of forensic psychiatry wards proved effective and significantly improved the patients' knowledge.
Studying the quality of life of patients in forensic psychiatry wards is a big challenge. It should be noted that these patients remain in continuous isolation for many years, and it is difficult to speak of a good quality of life for interned patients. A long stay in the ward does not have a positive effect on a patient's well-being; it is a source of internal conflicts, a sense of injustice, and frustration. Forced isolation of such duration means that the assessment of patients' quality of life is influenced by many factors, not necessarily positive ones, such as a decision to extend compulsory hospitalization. In addition, unpublished research by the authors shows that patients of forensic psychiatry wards do not have adequate support from relatives and often feel left alone, which deepens their sense of isolation. These are just some of the factors that accompany patients and may negatively affect the assessment of their quality of life. It should also be remembered that a patient in a psychiatric ward is under constant observation, and their health, behavior, and participation in treatment and therapy are monitored. For these reasons, patients of forensic psychiatry wards are unable to maintain their physical and psychological autonomy; they stay in an artificially created environment for many years, which causes numerous limitations in everyday functioning and certainly affects their quality of life. The patient of a forensic psychiatry ward is a compulsorily hospitalized patient who, above all else, wants to regain their freedom; their answers to all kinds of tests should therefore be treated with great caution, because they may want to present themselves in a favorable light. The study of the impact of health education on the quality of life of patients was preceded by asking patients for a general assessment of their quality of life, in order to determine its global level in this particular group of patients. Both before and after the health education cycle, patients assessed their quality of life very similarly, so it can be concluded that education has no significant impact on this assessment. The obtained results are similar to the results of other studies, which have indicated, for example, that education and an increase in the level of knowledge about schizophrenia do not translate into an increase in the subjective assessment of overall quality of life . The obtained results also confirm the results of another study, which showed that greater criticism of the disease obtained through educational activities, and thus increased awareness of the disease and its consequences, is associated with a lower assessment of patients' quality of life . Interestingly, a detailed analysis of the results of this study showed that most patients assess their overall quality of life at a satisfactory level, and only a small group of patients present a negative opinion. Although the overall assessment of the quality of life of the surveyed patients turned out to be satisfactory, other studies clearly indicate that people with mental disorders assess their quality of life worse than healthy people [ , , , ]. Schizophrenia is a severe mental disorder characterized by positive and negative symptoms and cognitive deficits.
Compared to healthy individuals, patients with schizophrenia are at greater risk of comorbid physical illnesses, cognitive and occupational impairments, frequent hospitalizations, high medical costs, and increased risk of suicide and mortality, all of which carry a heavy personal and family burden that undoubtedly impacts their quality of life. It is therefore possible that patients deprived of liberty, free from factors unfavorable to mental health, including the stigma of the disease, assess their quality of life adequately to the conditions in which they currently find themselves. It is also possible that, if the internment lasts a very long time and the patient is shielded from the stressors that people with schizophrenia experience in free conditions, their assessment of their quality of life is markedly better. Thanks to the standardized WHOQOL-BREF quality of life scale used in this study, it was possible to assess the quality of life of patients in forensic psychiatry wards in detail, focusing on several domains: somatic condition, psychological state, social status, and environment. Based on the conducted research, it can be concluded that the participation of patients in the health education cycle changes the values of the individual domains. For the assessment of patients' somatic condition, this change turns out to be beneficial: education improves their somatic well-being, probably also by increasing self-awareness, which, however, also causes a decrease in their social self-esteem. Similarly, other scientific studies indicate that some sociodemographic and clinical characteristics affect the quality of life of patients with schizophrenia. These results deepen the knowledge about these characteristics, which should be considered in the clinical assessment of the patient and in planning appropriate and effective strategies for psychosocial rehabilitation [ , , ]. The presented analysis of pairs of variables found that the assessment of quality of life in the somatic domain is significantly affected by factors such as professional activity, the severity of disease symptoms, and the number of correct answers in the knowledge test. It follows that patients with a lower level of medical knowledge assess the quality of life related to their somatic condition worse. The situation changed after the health education cycle, when self-assessment of quality of life in the somatic domain improved along with the increase in the level of knowledge in the field of health education. The obtained results indicate that the participation of patients in the health education cycle has a positive effect on their quality of life in the somatic sphere. The conducted analyses show that the general assessment of quality of life did not correlate significantly with the education process; however, the somatic component of the quality of life scale changed positively. Since educational interventions affect the quality of life of patients in the somatic aspect, this information is important not only from the point of view of the care of a patient staying in a forensic psychiatry ward. This conclusion can be applied to patients with schizophrenia in general and may be useful for therapists who want to introduce health education into their work with patients diagnosed with schizophrenia.
The obtained results confirm the conclusions of some studies of patients with schizophrenia, which have shown that educational activities promoting a healthy lifestyle are significantly related to the results of the WHOQOL-BREF quality of life questionnaire . The analysis of odds ratios in the conducted study also showed that a series of educational lectures on health education is likely to increase the level of medical knowledge in patients, which may change the self-assessment of their health, especially in the psychological domain. The study also showed that educational activities are effective in improving knowledge. A knowledge test based on the set of 40 issues covered by the educational lectures on broadly understood health education was used to estimate the initial level of knowledge of patients in both the study group and the reference group. The analysis of the results showed that, after the health education cycle, the medical knowledge of patients in the study group improved significantly, which was not observed in the reference group. This demonstrates the effectiveness of educational activities in this group of patients. Increasing the medical knowledge of patients may have pro-health implications for them. There are many publications on the effectiveness of this form of therapy and the methods of conducting it [ , , ]. The relationship of participation in psychoeducation with shorter hospitalization time, fewer relapses, improved health and psychosocial functioning of patients, better cooperation, and greater knowledge about the disease has been demonstrated previously [ , , , , ]. These observations have important clinical implications. The main therapeutic goal in forensic psychiatry wards is to prepare patients for life in freedom, in accordance with applicable law and social norms, and in such a way as to minimize the risk of re-committing a criminal act. Undoubtedly, all educational activities undertaken by patients are key tools for achieving this goal. The results of this study confirm the possibility of improving their condition through a health education program. The fact that these interventions improve patients' knowledge, and thus contribute to greater awareness of life with a mental illness and all its consequences, gives hope for improved social functioning and thus a chance to live in accordance with social norms. Providing educational information and involving patients in treatment has become an important and effective element of psychiatric care, which has been confirmed by numerous scientific studies [ , , , , ]. Every psychosocial intervention, as well as rehabilitation, counteracts the causes of patients' withdrawal from social life and teaches them to return to a situation in which they can function properly in their environment, which, for patients of forensic psychiatry wards, is an extremely important element of therapy and treatment . Scientific research clearly shows that pharmacological treatment combined with psychosocial interventions is an important element of therapeutic programs aimed at helping people with schizophrenia recover . Since the main purpose of the patient's stay in a forensic psychiatry ward is to prepare them for life in freedom, in accordance with applicable social norms, the inclusion of non-pharmacological forms of treatment becomes not only a method but, to some extent, an ethical obligation.
Non-pharmacological treatment is thus not merely an addition to pharmacological treatment, but an integral part of it.
The global quality of life of interned patients with schizophrenia is not significantly related to educational activities; however, sub-domain analysis indicates that health education improves their somatic well-being. Psychiatric rehabilitation through educational activities effectively increases the level of patients' knowledge.
The Effect of Spring Barley Fertilization on the Content of Polycyclic Aromatic Hydrocarbons, Microbial Counts and Enzymatic Activity in Soil
Environmental pollution caused by polycyclic aromatic hydrocarbons (PAHs) poses one of the greatest problems in the contemporary world. Polycyclic aromatic hydrocarbons are considered especially toxic to humans, as well as to plants, microorganisms and other living organisms. The toxicity of PAHs, in particular their ability to cause cancer, is well documented [ , , , , ]. Polycyclic aromatic hydrocarbons belong to the group of persistent organic pollutants. These highly toxic compounds accumulate in soil and persist in the environment for long periods of time . They are generated during the incomplete combustion of organic matter in natural and anthropogenic processes . Polycyclic aromatic hydrocarbons are classified into two main groups based on their chemical structure: low-molecular-weight (LMW) PAHs that contain two or three aromatic rings and high-molecular-weight (HMW) PAHs that contain four or more aromatic rings. Low-molecular-weight PAHs are relatively easily degraded, whereas most HMW PAHs with fused rings are carcinogenic and much more difficult to decompose . The microbial degradation of PAHs is influenced by various environmental factors, including the availability of nutrients, the abundance and type of soil-dwelling microorganisms, and the type and chemical properties of the degraded PAHs. Polycyclic aromatic hydrocarbons can potentially be degraded or transformed by a wide range of bacterial and fungal species . Microorganisms easily adapt to new environmental conditions and derive energy and nutrients from compounds that are not products of their own metabolism. This implies that microorganisms could be effectively used to reduce PAH levels in soil. Microorganisms that are potentially useful in soil remediation can be divided into two groups: autotrophs, which derive carbon from carbon dioxide, and heterotrophs, which obtain carbon from the degradation of organic matter of both natural and anthropogenic origin . Polycyclic aromatic hydrocarbons are sources of carbon and energy for microorganisms, and their content in soil can be effectively reduced through the addition of organic matter, which stimulates microbial activity . Low nutrient availability can also decrease the effectiveness of bioremediation in areas contaminated with PAHs. In addition to easily metabolized sources of carbon, microorganisms also require minerals, including nitrogen, phosphorus, potassium and iron, for metabolic and growth processes. Therefore, contaminated and nutrient-deficient soils should be supplemented to stimulate the growth of autochthonous microorganisms . According to Ravanipour , nutrient application can be regarded as the most important factor in bioremediation strategies for removing PAHs from soil. Dissolved organic matter (DOM) is a major source of organic carbon (Corg) in soil and plays a key role in carbon cycling. Strong bonds between PAHs and soil organic matter (SOM) can significantly decrease the bioavailability and mobility of PAHs. As a result, these pollutants tend to accumulate in carbon-rich organic soils rather than in the deeper strata of mineral soils . Organic matter increases soil moisture content and stimulates microbial growth.
At the same time, organic and mineral nutrients enhance the abundance of exogenous microorganisms in the soil microflora, which increases the counts and viability of bacteria and other organisms capable of degrading PAHs . Several biological remediation techniques are available for the treatment of PAH-contaminated soil: bioremediation (bacteria and fungi), phycoremediation (algae), phytoremediation (plants) and rhizoremediation (plants and microbes). Depending on the remediation approach selected, these techniques are carried out in two basic modes: (i) in situ (land farming, biostimulation, bioaugmentation, composting and phytoremediation) and (ii) ex situ (bioreactors) . The rate of PAH biodegradation is affected by pH, which influences the development of soil microorganisms and enzymes. An increase in soil acidity promotes the accumulation of PAHs in soil . The persistence of PAHs containing three and four aromatic rings increases in acidic soils. Liming can slow down PAH decomposition, depending on soil parameters, environmental factors and the properties of PAHs . Most microorganisms are sensitive to pH and have a preference for pH-neutral environments (6.5–7.5) . Neutralization of soil pH increases bacterial abundance and promotes the decomposition of PAHs . Environmental contamination with persistent organic pollutants has emerged as a serious environmental threat. Scientific knowledge of microbial interactions with individual pollutants, accumulated over the past decades, has helped to abate environmental pollution . The degradation of PAHs by microorganisms has been studied extensively for the last four decades, but most of the reported work has focused on the biodegradation of PAHs containing two to four fused rings . Limited work has been dedicated to HMW PAHs. The mechanism by which soil fertilization influences PAH biodegradation is still unclear, especially with respect to microbial counts and soil enzyme activities. In the natural environment, organic compounds are degraded by soil microorganisms and enzymes under both aerobic and anaerobic conditions . Intermediate decomposition products are often more toxic for microorganisms, animals and humans than the parent compounds. The presence and accumulation of PAHs in soil have not been extensively studied to date and further research is needed to address this problem. Therefore, the aim of this study was to evaluate the influence of long-term, varied organic-mineral and mineral fertilization, during the growing season and after the harvest of spring barley grown in the eighth cycle of crop rotation, on the microbial activity and biochemical properties of soil and on the accumulation of PAHs in soil. The research was conducted to assess the effect of long-term fertilization with manure and mineral fertilizers on the content of polycyclic aromatic hydrocarbons (PAHs) in soil. Relationships were also explored between the soil content of PAHs and the microbiological (counts of bacteria and fungi) and biological activity (enzymatic activity) of soil. The combined application of manure and mineral fertilizers has been examined in only very few experiments, hence its effect on PAH content in soil remains largely unexplored. The new insights contribute to a better understanding of PAH biodegradation processes under complex natural conditions.
It was hypothesized that optimal fertilization, with both manure and mineral fertilizers matched strictly to the nutritional requirements of field crops, does not cause the concentrations of the assessed PAHs in soil to exceed permissible levels.
2.1. Research Location and Experimental Design
Soil samples for the study were obtained in 2015 from a long-term controlled field experiment established in Bałcyny, Poland (N: 53°35′38.1″, E: 19°50′56.1″) in 1986. The experiment was conducted in three replicates (blocks) on soil developed from sandy loam (Haplic Luvisols, IUSS Working Group ), according to a previously described design . The soil nutrition regime included the application of manure and mineral fertilizers or mineral fertilizers only. The same amount of nutrients was supplied with mineral fertilizers in both systems. The following mineral fertilization treatments were applied in the production of spring barley ( Hordeum vulgare L.): (1) N 0 P 0 K 0 , (2) N 1 P 1 K 1 , (3) N 2 P 1 K 1 , (4) N 3 P 1 K 1 , (5) N 2 P 1 K 2 , (6) N 2 P 1 K 3 , (7) N 2 P 1 K 2 Mg, (8) N 2 P 1 K 2 MgCa (N 1 -30, N 2 -60, N 3 -90, P 1 -34.9, K 1 -33.2, K 2 -66.4, K 3 -99.7, Mg-18.1 kg ha −1 ) ( ). The following crops were grown in rotation: sugar beets, spring barley, maize and spring wheat. After the spring wheat harvest, the soil was limed with 2.5 t CaO ha −1 , two years before the spring barley cultivation. Before the study, soil composition (per kg) was as follows: 100.0 mg of K, 53.2 mg of Mg, 41.3 mg of P, 7.9 g of organic carbon and 0.79 g of total nitrogen. Soil pH was slightly acidic (6.2 in 1 mol dm −3 KCl). Spring barley was grown in the second year after manure application (at the rate of 40 t ha −1 ). The content of nutrients, heavy metals and PAHs (LMW and HMW) in the manure was described previously by Krzebietke et al. . All samples were analyzed for the 16 priority PAH pollutants listed by the US EPA . Soil samples for analyses of chemical, biochemical and microbiological properties and PAH levels were collected at a depth of 0–30 cm on four dates: during the growing season of spring barley (BBCH-10, BBCH-23), after harvest and after skimming. Fresh soil samples for microbiological and biochemical analyses were passed through a sieve with a 2 mm mesh size directly after they had been transported to the laboratory. In a study by Smreczak and Maliszewska-Kordybach , spring barley was the crop most susceptible to soil contamination with selected PAHs in comparison with other species (maize, white mustard, sunflower). Therefore, soil samples for the analyses of microbiological and biochemical parameters and PAH content were collected in 2015, when spring barley was grown in rotation. Agronomic practices were applied in accordance with the requirements of the tested crop ( ). Phenological observations were conducted during the growing season of spring barley and the main developmental stages are described in .
2.2. Chemical Analyses of Soil
Selected chemical properties of soil (pH, Hh, total N, Corg) were analyzed. The following parameters were determined in air-dried soil samples: pH, in 1 mol KCl∙dm −3 , by the potentiometric method; hydrolytic acidity (Hh), by Kappen’s method; total nitrogen content, by distillation after mineralization in sulfuric (VI) acid with the addition of the selenium reagent mixture; organic carbon content, by the Kurmies method. The content of 16 PAHs was determined with the Trace GC/MS Ultra ITQ900 system with a TRIPlus autosampler (Thermo Fisher Scientific, Waltham, MA, USA) and a flame ionization detector.
The total content of 16 PAHs (naphthalene, acenaphthene, acenaphthylene, fluorene, phenanthrene, anthracene, fluoranthene, pyrene, benzo(a)anthracene, chrysene, benzo(b)fluoranthene, benzo(k)fluoranthene, benzo(a)pyrene, indeno(1,2,3-cd)pyrene, dibenzo(a,h)anthracene and benzo(g,h,i)perylene) was determined by the method described by Krzebietke et al. . The content of LMW PAHs (naphthalene, acenaphthene, acenaphthylene, fluorene, anthracene, phenanthrene, fluoranthene, pyrene and chrysene) and HMW PAHs (benzo(a)anthracene, benzo(a)pyrene, benzo(b)fluoranthene, benzo(k)fluoranthene, benzo(g,h,i)perylene, indeno(1,2,3-cd)pyrene and dibenzo(a,h)anthracene) was determined in soil samples.
2.3. Microbiological and Biochemical Analyses of Soil
The counts of the following soil-dwelling microorganisms were determined in the soil samples: organotrophic bacteria, on Bunt and Rovira agar ; ammonifying and nitrogen-fixing bacteria, on the medium described by Wyszkowska ; actinobacteria, on the medium described by Küster and Williams with the addition of nystatin and actidione ; fungi, on Martin’s agar . Microbial counts were determined by plating in three replicates. Microbial cultures were incubated at a temperature of 28 °C. The number of colony-forming units (CFU) was determined with a colony counter. The activity of the following soil enzymes was determined in three replicates: dehydrogenases, by the method described by Öhlinger ; urease, acid phosphatase and alkaline phosphatase, by the method described by Alef and Nannipieri ; catalase, by the method described by Johnson and Temple . Microorganisms were isolated with the serial dilution method following the procedure described in the study by Wyszkowska et al. . The procedures for the determination of soil enzymatic activity and microbial counts were presented in the study by Borowik et al. . The culture conditions and the exact procedure for the isolation of microorganisms were described in our earlier paper by Borowik et al. .
2.4. Statistical Analysis
The data (content of LMW PAHs and HMW PAHs and total content of 16 PAHs) were processed statistically by repeated measures ANOVA, where manure application and varied mineral fertilization were the fixed grouping factors and the sampling date was the repeated measure factor:

(1) $y_{ijkl} = \mu + \tau_i + f_k + (\tau f)_{ik} + \mathrm{Date}_l + (\tau\,\mathrm{Date})_{il} + (f\,\mathrm{Date})_{kl} + (\tau f\,\mathrm{Date})_{ikl} + \beta_j + (\beta\,\mathrm{Date})_{jl} + \varepsilon_{ijkl}$

where $\mu$ is the general average; $\tau_i$ is the effect of the $i$th rate of NPK fertilization; $f_k$ is the effect of manure application $k$; $\beta_j$ is the blocking effect $j$; $\mathrm{Date}_l$ is the repeated measures effect; $(\tau f)_{ik}$ is the effect of the interaction between the $i$th rate of NPK fertilization and manure $k$; $(\tau\,\mathrm{Date})_{il}$ is the effect of the interaction between the $i$th rate of NPK fertilization and sampling date $l$; $(f\,\mathrm{Date})_{kl}$ is the effect of the interaction between manure $k$ and sampling date $l$; $(\beta\,\mathrm{Date})_{jl}$ is the effect of the interaction between blocks and sampling date $l$; $(\tau f\,\mathrm{Date})_{ikl}$ is the effect of the interaction between the $i$th rate of NPK fertilization, manure $k$ and sampling date $l$; $\varepsilon_{ijkl}$ is the random error with normal distribution, expected value 0 and variance $\sigma^2$. Before performing statistical analyses, dependent variables in each group were tested for normal distribution.
The homogeneity of variance was determined in groups, and the sphericity (equality of variances of the differences between measurements) was evaluated with Mauchly’s test. Data that did not satisfy the sphericity condition were analyzed with the use of Wilks’ lambda test and Pillai’s trace criterion. The Shapiro–Wilk test revealed that the data did not have a normal distribution; therefore, they were log-transformed. In the next step, the data were compared with Tukey’s post hoc HSD test at p < 0.05. Microbial counts and enzymatic activity were evaluated with the Kruskal–Wallis non-parametric test for independent samples; these analyses were performed on untransformed data. The relationships between soil microbial activity, biochemical properties, organic carbon and total nitrogen content vs. PAH content (17 parameters) were determined by principal component analysis (PCA). The strength of the correlations in PCA was validated with the use of Bartlett’s test of sphericity. The number of principal components was selected with the Kaiser criterion, based on eigenvalues greater than one (λi > 1). The interpretation of individual principal components (PCi) was simplified by varimax rotation. The results of all chemical, microbiological and biochemical analyses were interpreted by focusing on the main effects. All statistical analyses were performed in the Statistica 13 program .
2.5. Weather Conditions
Considerable variations in temperature and precipitation were noted in 2015 ( ). Microorganisms require favorable weather conditions, including temperature and soil moisture content, which determine the rate of microbial growth and the enzymatic activity of soil. Changes in temperature and precipitation were monitored for 7 days before each sampling date. Soil samples were collected for microbiological and biochemical analyses on four dates (22 April, 18 May, 8 August and 15 September 2015) and weather conditions varied considerably in each monitoring period. The lowest temperature (6.2 °C) was observed before the first sampling date, whereas the temperature before the second sampling date was 1.9 times higher. In contrast, precipitation during the 7 days preceding sample collection was 1.8 times lower in May than in April. The least favorable weather conditions were noted in August (7 days before sampling), which was characterized by a very high temperature (19.7 °C) and an absence of rainfall. In September, the temperature (12.6 °C) was only marginally higher than in April and precipitation levels (7.8 mm) were higher than in the remaining sampling periods.
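To make the derived variables used in the Results concrete, the sketch below shows how the sums ΣLMW, ΣHMW and the total content of 16 PAHs follow from the individual analyte concentrations. The grouping reproduces Section 2.2; the concentration values are hypothetical placeholders rather than measured data, and the snippet (Python) illustrates the bookkeeping only, not the original analysis.

```python
# Grouping of the 16 US EPA priority PAHs as defined in Section 2.2.
LMW = ["naphthalene", "acenaphthene", "acenaphthylene", "fluorene", "anthracene",
       "phenanthrene", "fluoranthene", "pyrene", "chrysene"]
HMW = ["benzo(a)anthracene", "benzo(a)pyrene", "benzo(b)fluoranthene",
       "benzo(k)fluoranthene", "benzo(g,h,i)perylene",
       "indeno(1,2,3-cd)pyrene", "dibenzo(a,h)anthracene"]

def pah_sums(conc):
    """Return (sum_LMW, sum_HMW, sum_16) in ug kg-1 DM soil for a dict of
    individual analyte concentrations covering all 16 compounds."""
    s_lmw = sum(conc[c] for c in LMW)
    s_hmw = sum(conc[c] for c in HMW)
    return s_lmw, s_hmw, s_lmw + s_hmw

# Hypothetical example: 20 ug kg-1 for each LMW and 10 ug kg-1 for each HMW analyte.
sample = {c: 20.0 for c in LMW}
sample.update({c: 10.0 for c in HMW})
print(pah_sums(sample))  # -> (180.0, 70.0, 250.0)
```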
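Because all microbial counts below are reported as CFU per kg of soil dry matter (DM), it may help to spell out the dilution arithmetic behind the plate counts of Section 2.3. The function below is a minimal sketch with hypothetical parameters; the actual dilution series, plated volumes and dry matter fractions are not restated in this paper.

```python
def cfu_per_kg_dm(colonies, dilution, plated_ml, soil_g, diluent_ml, dm_fraction):
    """Convert a plate count to CFU kg-1 of soil dry matter.

    colonies    - colonies counted on one plate
    dilution    - dilution factor of the plated suspension, e.g. 1e3 for 10^-3
    plated_ml   - volume of diluted suspension spread on the plate (ml)
    soil_g      - fresh soil mass shaken into the initial suspension (g)
    diluent_ml  - volume of the initial suspension (ml)
    dm_fraction - dry matter fraction of the fresh soil (0-1)
    """
    cfu_per_ml = colonies * dilution / plated_ml      # undo dilution and plating volume
    cfu_per_g_fresh = cfu_per_ml * diluent_ml / soil_g
    return cfu_per_g_fresh * 1000.0 / dm_fraction     # g -> kg, fresh mass -> DM

# Hypothetical example: 85 colonies on a 10^-3 plate, 0.1 ml plated,
# 10 g fresh soil in 90 ml diluent, 85% dry matter:
print(f"{cfu_per_kg_dm(85, 1e3, 0.1, 10, 90, 0.85):.2e} CFU kg-1 DM")  # ~9.00e+09
```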
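For orientation, the statistical chain of Section 2.4 can be approximated in open-source software as sketched below. The column names and input file are hypothetical, and the mixed model is only a rough stand-in for model (1) (block enters as a random intercept, a simplification of the split-plot error structure); the original computations were performed in Statistica 13.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Long-format table, one row per plot x sampling date (hypothetical columns):
# block, manure (yes/no), npk (treatments 1-8), date, pah16 (ug kg-1 DM), cfu
df = pd.read_csv("soil_pah.csv")  # placeholder file name

# 1) Normality screening (Shapiro-Wilk); log-transform when normality is rejected.
_, p = stats.shapiro(df["pah16"])
df["y"] = np.log(df["pah16"]) if p < 0.05 else df["pah16"]

# 2) Approximation of repeated measures model (1): fixed effects for manure,
#    NPK and sampling date with all interactions; block as a random intercept.
fit = smf.mixedlm("y ~ C(manure) * C(npk) * C(date)", df, groups=df["block"]).fit()
print(fit.summary())

# 3) Tukey's post hoc HSD comparison of NPK treatments at p < 0.05.
print(pairwise_tukeyhsd(df["y"], df["npk"], alpha=0.05))

# 4) Kruskal-Wallis test for microbial counts / enzyme activities (untransformed).
groups = [g["cfu"].to_numpy() for _, g in df.groupby("npk")]
print(stats.kruskal(*groups))
```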
3.1. Selected Chemical Parameters of Soil
3.1.1. Soil pH and Hydrolytic Acidity
The growth of soil-dwelling microorganisms is determined by environmental conditions, including soil pH, which influences the microbiological and biochemical parameters of soil. Soil regularly amended with manure was characterized by higher pH values (in 1 mol KCl dm −3 ) and lower hydrolytic acidity than soil supplied with mineral fertilizers only ( ). Increasing nitrogen rates clearly decreased soil pH and increased hydrolytic acidity; the greatest changes in these parameters were observed under the influence of the highest nitrogen rate. In a study by Lemanowicz , high nitrogen rates and the absence of liming also undesirably increased hydrolytic acidity in soil. As expected, regular liming considerably increased soil pH and reduced hydrolytic acidity. The effects of liming were more pronounced in soil amended with manure than in soil supplied with mineral fertilizers only.
3.1.2. Organic Carbon and Total Nitrogen Content
Carbon and nitrogen are essential for PAH degradation. Microorganisms have different nutritional requirements and various C:N ratios have been reported as optimal in the literature. Fungi dominate in soils with a high C content and a limited N supply. In turn, bacterial growth is influenced by both C and N content . According to Amezcua-Allieri et al. , the C:N ratio affects the rate at which PAHs are removed from soil. Farahani et al. reported that the rate of PAH degradation in soil is determined by the C:N ratio in the growth medium and the chemical form of nitrogen. Organic carbon and total nitrogen are important indicators of soil fertility . In the present study, the content of organic carbon and total N in soil was significantly influenced by manure and mineral fertilization ( , ). Manure (M) exerted a significant effect, whereas varied mineral fertilization (Min) and the interaction between these factors (M × Min) exerted highly significant effects on the total nitrogen content of soil. Nitrogen fertilization clearly increased the total nitrogen content of soil relative to the control treatment and enhanced the accumulation of Corg in soil in 2015 ( ). The increase in soil Corg content in response to rising N rates can be attributed to the accumulation of biomass in soil after the harvest of each crop grown in rotation. Siwik-Ziomek and Lemanowicz also reported an increase in the total nitrogen content of soil in response to increasing rates of N fertilizer. The Block (B) effect was not significant, which indicates that soil variability in the experimental field had no effect on the content of Corg and total N.
3.2. Microbiological and Biochemical Properties of Soil
3.2.1. Microbial Abundance
Organotrophic Bacteria
In soil sown with spring barley, the counts of organotrophic bacteria were 1.7 times higher in treatments that were regularly amended with manure than in treatments that were supplied with mineral fertilizers only ( a). The abundance of organotrophic bacteria increased with a rise in the N rate ( b). The highest N rate induced the greatest (1.4-fold) increase in the counts of organotrophic bacteria relative to the control treatment. The growth of organotrophic bacteria was also stimulated by higher potassium rates (66.4 and 99.7 kg∙ha −1 ). Liming decreased soil acidity and increased the availability of nutrients for organotrophic bacteria; higher N and K rates induced similar effects.
In 2015, the abundance of organotrophic bacteria in soil varied widely, from 18 × 10 8 to 283 × 10 8 CFU kg −1 DM soil ( c). Bacterial counts were highest in soil samples collected in May (144 × 10 8 CFU kg −1 DM soil) and lowest in August (2.5 times lower). In April, the average abundance of organotrophic bacteria reached 69 × 10 8 CFU kg −1 DM soil and was 19% lower than in September. May and September were characterized by the most favorable temperatures for bacterial growth (12.0 °C and 12.6 °C, respectively, during the 7-day monitoring period before sampling), which could explain the increase in the abundance of organotrophic bacteria in these months. According to Borowik et al. , organotrophic bacteria proliferate most rapidly at a temperature of around 15 °C.
Ammonifying Bacteria
Manure and mineral fertilizers significantly modified the abundance of ammonifying bacteria in soil ( a,b). The growth of these microorganisms was enhanced in treatments regularly amended with manure. Potassium exerted varied effects on the counts of ammonifying bacteria; a moderate K rate decreased their abundance, whereas the highest K rate stimulated their proliferation ( b). Magnesium supplied with N 2 P 1 K 2 had a minor influence on the counts of ammonifying bacteria. As expected, regular liming created the most favorable environment for the growth of ammonifying bacteria. The abundance of ammonifying bacteria varied during the growing season ( c) and was highest in August (139 × 10 8 CFU kg −1 DM soil), which was characterized by highly unfavorable weather conditions during the 7-day monitoring period before sampling (drought and a very high temperature of 19.7 °C). According to Dąbek-Szreniawska et al. , a decrease in soil moisture content stimulates the growth of ammonifying bacteria. In the present study, the counts of ammonifying bacteria were 8% lower in May than in August, and precipitation levels (3.9 mm) during the 7-day monitoring period before sampling were lower than in April and September ( ). The abundance of ammonifying bacteria was lowest in April (65 × 10 8 CFU kg −1 DM soil) and was 15% higher in September (precipitation during the 7-day monitoring period before sampling reached 7.2 and 7.8 mm, respectively). The analyzed parameter was highest in May and August and considerably lower in April and September. These results could be attributed to optimal temperatures for microbial growth in May and August; despite low precipitation in these months, soil water content was probably sufficient to promote the growth of ammonifying bacteria. Manure application increased the counts of ammonifying bacteria 1.4-fold relative to treatments supplied with mineral fertilizers only.
Nitrogen-Fixing Bacteria
Manure significantly increased the counts of N-fixing bacteria in soil ( a). The abundance of N-fixing bacteria was 1.4-fold higher in soil amended with manure every other year than in soil supplied with mineral fertilizers only. The decomposition of organic matter supplied with manure increased the content of mineral N.
It should also be noted that manure creates favorable conditions for the growth of soil-dwelling microorganisms, including N-fixing bacteria. An increase in the content of mineral N and higher microbial counts promoted N immobilization in soil. Increasing N rates exerted a minor effect on the abundance of N-fixing bacteria in soil ( b). Potassium was a more influential factor, and higher K rates stimulated the proliferation of N-fixing bacteria. Magnesium decreased the abundance of N-fixing bacteria, whereas liming promoted their growth. The counts of N-fixing bacteria in soil varied widely, from 18 × 10 8 to 247 × 10 8 CFU kg −1 DM soil ( c). Average microbial counts were similar in April and August. The abundance of N-fixing bacteria was nearly two-fold higher in May and 21% lower in September relative to May.
Actinobacteria
Long-term manure application as well as mineral fertilization significantly modified actinobacteria counts in soil ( a,b). Actinobacteria counts were twice as high in soil regularly amended with manure as in soil supplied with mineral fertilizers only ( a). Lower N rates (30 and 60 kg ha −1 ) did not have a highly stimulating effect on actinobacteria counts; only the highest N rate (90 kg ha −1 ) induced a 15% increase in the abundance of actinobacteria relative to the control (without mineral fertilization). According to Vetanovetz and Peterson , mineral N fertilization increases actinobacteria counts in soil. In the current study, the growth of actinobacteria was stimulated by higher K rates (66.4 and 99.7 kg ha −1 ). The highest K rate induced the greatest (2-fold) increase in actinobacteria counts relative to the lowest K rate. Magnesium did not influence the abundance of the studied bacterial group. Actinobacteria counts clearly increased in regularly limed soil. Actinobacteria counts varied considerably during the growing season of 2015 ( c); the mean abundance of actinobacteria increased steadily between April and September. Barabasz and Vořišek and Natywa et al. reported the highest actinobacteria counts in summer, which could be attributed to high temperatures.
Fungi
Fungal abundance was 32% higher in soil amended with manure every other year than in soil supplied with mineral fertilizers only ( a). Fungal counts also increased in response to higher N rates ( b). Similar observations were made by Natywa et al. , Sosnowski et al. and Sosnowski and Jankowski . According to Niewiadomska et al. , N fertilization considerably increased fungal abundance relative to control soil. In the work of Wyszkowska , increasing urea rates also led to a significant increase in fungal counts in soil. In the present study, fungal abundance was 1.7-fold higher in soil supplied with the highest K rate than in soil fertilized with N 2 P 1 K 1 . Regular liming also enhanced fungal growth in soil. In the growing season of 2015, fungal counts in soil ranged from 11 × 10 6 to 250 × 10 6 CFU kg −1 DM soil ( c). Mean fungal counts were highest in May (130 × 10 6 CFU kg −1 DM soil) and lowest in August. The analyzed parameter was similar in early spring (April) and late summer (September).
3.2.2. Enzymatic Activity
Dehydrogenases
Dehydrogenases (DHA) are regarded as reliable indicators of the biochemical activity of soil. Dehydrogenase activity reflects enzymes produced by soil-dwelling microorganisms, both aerobic and anaerobic . Dehydrogenases are indicators of soil quality and fertility .
Ciarkowska and Gambuś reported a strong correlation between DHA activity and the organic carbon content of soil. In the present study, manure and mineral fertilization modified DHA activity in soil ( a,b). Dehydrogenase activity was 1.8-fold higher in soil with manure application than in soil supplied with mineral fertilizers only ( a). Manure exerted similar effects on DHA activity in the work of Koper and Siwik-Ziomek and Saha et al. . According to Piotrowska and Koper and Natywa et al. , DHA activity in soil increased in response to organic amendments and decreased in response to mineral fertilizers (NPK+Ca). In turn, Kucharski and Wałdowska found that mineral fertilizers stimulated DHA activity, but to a smaller extent than organic amendments. A comparison of the observed changes in DHA activity revealed that the lowest N rate used in the study (30 kg ha −1 ) decreased the analyzed parameter by 8% relative to the control treatment ( c), and DHA activity decreased further in response to higher N rates (60 and 90 kg N ha −1 ). Kucharski , Lemanowicz and Koper and Niewiadomska et al. also found that higher N rates suppressed DHA activity in soil. In contrast, potassium did not inhibit DHA activity and even increased the studied parameter. In a study by Koper and Siwik-Ziomek , comprehensive mineral and organic fertilization with calcium and magnesium enhanced the biochemical activity of soil-dwelling microorganisms, increased DHA activity and promoted microbial growth. In the current experiment, regular soil liming enhanced DHA activity by increasing soil pH and reducing hydrolytic acidity. Zaborowska et al. also reported that DHA activity decreased more than three-fold when soil pH was reduced from 7.1 to 6.4. Kalembasa and Kuziemska found that soil liming stimulated DHA activity. During the growing season, dehydrogenase activity in soil ranged from 2.13 to 9.65 µmol TFF kg −1 DM h −1 ( c). This parameter peaked in August 2015 (5.75 µmol TFF kg −1 DM h −1 ) and was only somewhat lower in September (5.26 µmol TFF kg −1 DM h −1 ). In May, DHA activity was 20% higher than in April. The observed variations in the studied parameter could be attributed to changes in the moisture and oxygen content of soil ( ).
Catalase
Catalase is an antioxidant enzyme that protects plants against abiotic and biotic factors that cause oxidative stress . Manure amendment increased catalase activity in soil ( a); the value of this parameter was 17% higher in the second year after manure application than in soil supplied with mineral fertilizers only. In a study by Lemanowicz and Koper , catalase activity also increased in treatments where maize was amended with manure. In the present study, the lowest N rate (30 kg ha −1 ) had no significant effect on catalase activity in soil ( b), whereas the highest N rate (90 kg ha −1 ) increased it. Increasing N rates also stimulated catalase activity in the work of Lemanowicz and Koper . Potassium and magnesium fertilizers stimulated catalase activity in soil. Regular liming was particularly effective in enhancing catalase activity and increased the analyzed parameter 1.4-fold relative to the treatment fertilized with N 2 P 1 K 2 Mg. Catalase activity in soil varied during the growing season of 2015 ( c); the highest value was noted in September, followed by August, and the lowest in May.
Urease
A regular supply of manure increased organic matter content and stimulated urease activity in soil ( a). Kucharski et al.
also found that manure application significantly enhanced urease activity in tested soils. In our study, urease activity was not significantly modified by mineral fertilization ( b); however, soil liming exerted a positive effect on urease activity. In the growing season of 2015, urease activity ranged from 0.02 to 0.46 mmol N-NH 4 kg −1 soil h −1 ( c). The studied parameter was highest in September (0.23 mmol N-NH 4 kg −1 soil h −1 ) and lowest in May (0.04 mmol N-NH 4 kg −1 soil h −1 ). Urease activity was 1.5 times higher in August (0.18 mmol N-NH 4 kg −1 soil h −1 ) than in April.
Acid Phosphatase
Acid phosphatase activity differed significantly between treatments amended with manure and treatments supplied with mineral fertilizers only ( a,b). In soil regularly amended with manure, acid phosphatase activity was 1.7 times higher than in soil supplied with mineral fertilizers only. Lemanowicz and Koper also found that acid phosphatase activity was lower when manure was not applied. In turn, mineral fertilizers had no significant influence on the activity of the discussed enzyme. However, higher N rates can stimulate acid phosphatase activity by increasing the concentration of H+ in the soil solution as a result of nitrification and by enhancing NH4+ uptake by plants. The highest N rate applied (90 kg N ha −1 ) induced the greatest (20%) increase in acid phosphatase activity relative to the control treatment. In the work of Kucharski , very high N rates (240 kg ha −1 ) stimulated the activity of acid phosphatase. Lemanowicz and Koper , Lemanowicz and Siwik-Ziomek and Lemanowicz also reported an increase in acid phosphatase activity with a rise in mineral N rates. The cited authors observed that high N rates stimulated the activity of acid phosphomonoesterase. In contrast, liming induced a minor decrease in the studied parameter. Phosphomonoesterases are highly sensitive to changes in pH, and the optimal soil pH for acid phosphatase is 4.0–6.5 . Kuziemska et al. found that soil liming significantly decreased acid phosphatase activity regardless of year or sampling date. Acid phosphatase activity ranged from 2.94 to 14.66 mmol PN kg −1 h −1 in the growing season of 2015 ( c). The analyzed parameter was highest in August and September and much lower in April and May. According to Natywa et al. , acid phosphatase activity increases in fall due to the supply of fresh organic matter with harvest residues, which stimulates microbial growth. Lemanowicz and Krzyżaniak observed that enzymatic processes are difficult to interpret during the growing season because they are largely influenced by changes in temperature and soil moisture content.
Alkaline Phosphatase
Long-term manure amendment and mineral fertilization modified alkaline phosphatase (AlP) activity in soil ( a,b). In soil amended with manure every other year, this parameter was 2.3 times higher than in treatments supplied with mineral fertilizers only. Previous research indicates that organic phosphorus enhances alkaline phosphatase activity in soil . Sienkiewicz et al. found that prolonged manure amendment increased the content of available phosphorus in soil. Lemanowicz and Koper reported strong correlations between the content of organic and plant-available phosphorus and phosphatase activity; in their opinion, phosphatase activity is indicative of phosphorus levels in soil. Alkaline phosphatase activity was stimulated by the lowest N rate (30 kg N ha −1 ) and suppressed by higher N rates (60 and 90 kg ha −1 ).
In the work of Lemanowicz and Koper , an N rate of 90 kg N ha −1 also induced a significant (13%) decrease in AlP activity. Higher N rates also inhibited AlP activity in a study by Lemanowicz . Kucharski reported that a very high N rate (240 kg ha −1 ) decreased the value of this parameter in soil. In the current experiment, regular soil liming increased AlP activity two-fold relative to the treatment fertilized with N 2 P 1 K 2 Mg. Similar results were reported by Kalembasa and Kuziemska and Kuziemska et al. . Liming enhances soil enzymatic activity because nutrients are more available in soils with a near-neutral pH . Lemanowicz found a correlation between AlP activity and hydrolytic acidity. Similar observations were made in the present study, where AlP activity decreased with a rise in hydrolytic acidity ( ). Alkaline phosphatase activity fluctuated in the growing season of 2015 ( c). This parameter was highest in April (1.75 mmol PN kg −1 h −1 ), and the values noted in May and August were similar. Higher AlP activity in spring could be associated with rapid phosphorus uptake by plant roots and the resulting decrease in the content of available phosphorus in soil. Such conditions support the secretion of phosphatases by plant roots, which catalyze the hydrolysis of organic phosphorus compounds to mineral compounds . According to Lemanowicz and Bartkowiak , phosphatase secretion by roots and microorganisms is determined by the plants’ phosphorus requirements. In the present study, alkaline phosphatase activity was lowest in September (1.08 mmol PN kg −1 h −1 ).
3.3. Content of PAHs in Soil
Statistical analyses revealed that manure (M), mineral fertilization (Min) and M × Min interactions significantly influenced the total content of 16 PAHs as well as the content of LMW PAHs (naphthalene, acenaphthene, acenaphthylene, fluorene, anthracene, phenanthrene, fluoranthene, pyrene and chrysene) and HMW PAHs (benzo(a)anthracene, benzo(a)pyrene, benzo(b)fluoranthene, benzo(k)fluoranthene, benzo(g,h,i)perylene, indeno(1,2,3-cd)pyrene and dibenzo(a,h)anthracene) ( ). In 2015, the total content of 16 PAHs and the content of LMW PAHs were higher in soil amended with manure than in soil supplied with mineral fertilizers only (the effect of manure) ( ). The content of PAHs in soil also varied significantly across sampling dates ( ).
3.3.1. Content of LMW PAHs in Soil
The content of LMW PAHs in soil differed significantly during the growing season ( , ); it was highest in May (384.7 µg kg −1 ) and lowest in August (119.8 µg kg −1 ). This value was significantly higher in April (259.5 µg kg −1 ) than in September (210.0 µg kg −1 ). In the growing season of 2015, the content of LMW PAHs (naphthalene, acenaphthene, acenaphthylene, fluorene, anthracene, phenanthrene, fluoranthene, pyrene and chrysene) was highly similar in soil amended with manure and in soil supplied with mineral fertilizers only ( ). The analyzed parameter was higher between April and August in soil amended with manure and in September in treatments supplied with mineral fertilizers. In April and September, the content of LMW PAHs was identical in soil supplied with mineral fertilizers only.
3.3.2. Content of HMW PAHs in Soil
The content of HMW PAHs (benzo(a)anthracene, benzo(a)pyrene, benzo(b)fluoranthene, benzo(k)fluoranthene, benzo(g,h,i)perylene, indeno(1,2,3-cd)pyrene and dibenzo(a,h)anthracene) in soil differed significantly during the growing season ( , ).
The analyzed parameter was highest in September (158.3 µg kg −1 ) and lowest in August (75.0 µg kg −1 ). The content of HMW PAHs was lower in soil amended with manure in April, May and August, whereas in September it was lower in soil supplied with mineral fertilizers only ( ). In April, the greatest difference in the analyzed parameter was observed between soil with manure treatment (81.5 µg kg −1 ) and soil supplied with mineral fertilizers only (117.0 µg kg −1 ).
3.3.3. Total Content of 16 PAHs
The total content of 16 PAHs in soil varied significantly in the growing season of 2015 ( , ). The fluctuations in the analyzed parameter could have resulted from varied weather conditions. According to Eriksson et al. , low temperatures significantly decrease the rate of PAH degradation in soil. Wang et al. observed that, in periods of heavy rainfall, atmospheric PAHs are transported to soil and tend to accumulate in the soil environment. The total content of PAHs was lowest in August (194.8 µg kg −1 ) and highest in May (484.6 µg kg −1 ). The analyzed parameter was significantly lower in April (358.7 µg kg −1 ) than in September (368.3 µg kg −1 ). According to the IUNG system , the soil can be classified as non-contaminated (i.e., with ∑13PAH concentrations < 600 µg kg −1 ). Microbial abundance and soil enzymatic activity undoubtedly influenced the observed fluctuations in the total content of 16 PAHs. The examined parameter was lowest in August, when dehydrogenase activity in soil was markedly higher ( ). In a study by Maliszewska-Kordybach and Smreczak , high PAH levels inhibited the activity of dehydrogenases, which are highly sensitive to these pollutants. In the present study, fungal abundance was highest in May, when the soil was most contaminated with PAHs ( ). Gałązka et al. also reported an increase in fungal counts with a rise in anthracene levels in soil. Samanta et al. emphasized the important role of fungi in the biodegradation of PAHs in the soil environment and compared their activity with that of bacteria. In the current study, the total content of 16 PAHs was higher in soil amended with manure on all sampling dates.
3.4. Principal Component Analysis: Correlations between Selected Soil Properties
Correlations between selected properties of soil samples collected on four dates in 2015 were identified by principal component analysis (PCA). In April, the first two principal components explained 65% of total variance in the following variables: abundance of organotrophic bacteria, ammonifying bacteria, nitrogen-fixing bacteria, actinobacteria and fungi; activity of dehydrogenases, catalase, urease, acid phosphatase and alkaline phosphatase; content of total nitrogen and organic carbon; Hh and pH; content of LMW PAHs; content of HMW PAHs; total content of 16 PAHs ( , ). The analyzed parameters were grouped on one side of the PC1 axis and the variance explained by this component was very high at 48.3%. Microbial counts (organotrophic bacteria, ammonifying bacteria, N-fixing bacteria and actinobacteria) were strongly correlated with alkaline phosphatase activity in soil. An analysis of the first principal component (PC1) also revealed strong negative correlations between Hh values and the activity of catalase, dehydrogenases and urease; pH; total nitrogen content; organic carbon content; total content of 16 PAHs; and content of LMW PAHs.
An analysis of the second principal component (PC2) demonstrated that the negative correlation between acid phosphatase activity and the content of HMW PAHs explained 16.7% of total variance in the examined soil properties. The influence of enzymatic activity on the studied soil parameters increased in May. The strong correlations between the activity of dehydrogenases, catalase, urease, acid phosphatase and alkaline phosphatase, pH, organic carbon content, total nitrogen content and actinobacteria counts explained 41.4% of total variance ( , ). An analysis of PC2 revealed strong correlations between the total content of 16 PAHs, the content of LMW PAHs, the counts of organotrophic bacteria and hydrolytic acidity. In August, PC1 explained 42.1% of total variance in the examined soil parameters. The abundance of organotrophic bacteria and actinobacteria and soil enzymatic activity (dehydrogenases, urease and acid and alkaline phosphatase) were strongly linked with pH and with organic carbon and total nitrogen content ( , ). Similar to the previous sampling date, the studied parameters were strongly negatively correlated with Hh values. An analysis of PC2 revealed that the strong correlation between the counts of nitrogen-fixing bacteria and the content of HMW PAHs explained 17.0% of total variance. Soil samples collected in September were also characterized by high levels of microbial and enzymatic activity. High microbial abundance can be attributed to a higher content of organic matter supplied to soil with harvest residues. An analysis of PC1 demonstrated that strong correlations between all microbial counts (organotrophic, ammonifying and N-fixing bacteria and actinobacteria), enzymatic activity (dehydrogenases, catalase, urease and acid and alkaline phosphatase), pH (in 1 mol KCl) and the content of organic carbon and total nitrogen explained 50.5% of total variance ( , ). An analysis of PC2 also revealed that the total PAH content and the content of LMW and HMW PAHs in soil were strongly correlated. Microbial abundance increases under conditions favorable for microbial growth . According to Wielgosz and Szember , microbial counts tend to be higher in two periods of the year: in spring, when temperature and soil moisture content increase, and in fall, when fresh organic matter is supplied to the soil environment with harvest residues. Natywa et al. and Wielgosz and Szember also observed that the increase in the abundance of soil-dwelling microorganisms in fall is directly linked with the additional supply of organic matter in the form of harvest residues. Sosnowski et al. reported higher soil microbial counts in fall than in spring, regardless of the experimental factors, and attributed their findings to higher precipitation in fall. In the work of Lemanowicz and Bartkowiak , acid phosphatase activity was highly correlated with the organic carbon content of soil. In the present study, the above correlation was noted in soil samples collected between May and September. According to Dąbek-Szreniawska et al. , soil pH has a considerable influence on enzymatic activity. In the current experiment, hydrolytic acidity had a negative effect on the activity of soil enzymes, excluding acid phosphatase and catalase. Natywa et al. found that dehydrogenase activity was significantly affected by pH and by the content of organic carbon and total nitrogen in soil. Ciarkowska and Gambuś also reported a strong correlation between dehydrogenase activity and organic carbon content. In turn, Zaborowska et al.
found that dehydrogenase activity was strongly affected by soil pH. In a study by Maliszewska-Kordybach and Smreczak , soil contamination with PAHs inhibited dehydrogenase activity. Lipińska et al. observed that dehydrogenases were more resistant to PAH pollution than urease. According to Wyszkowska and Wyszkowski , Lipińska et al. and Lipińska et al. , urease activity is compromised in soils heavily contaminated with PAHs. The presence of correlations between LMW PAHs (fluorene, fluoranthene and anthracene) and dehydrogenase activity was also reported by Klimkowicz-Pawlas and Maliszewska-Kordybach and Oleszczuk et al. . The content of PAHs is determined by the concentration of organic carbon and total nitrogen in soil [ , , ]. In the present study, organic carbon and total nitrogen concentrations were strongly correlated with the total content of PAHs and the content of LMW PAHs in soil samples collected in early spring ( ). Maliszewska-Kordybach et al. , Wyszkowski and Ziółkowska and Jin et al. also observed significant correlations between organic carbon content and PAH levels in soil. In contrast, organic carbon content had no significant impact on PAH levels in soil in a study by Bi et al. . Gałązka et al. demonstrated that the content of HMW PAHs was negatively correlated with acid phosphatase activity. The above correlation was also noted in this study in soil samples collected in early spring. In turn, Gałązka et al. found that fungal abundance increased with a rise in anthracene levels in soil. According to Samanta et al. , fungi and bacteria play an equally important role in PAH biodegradation in soil. Lehmann et al. demonstrated that an increase in soil organic carbon content stimulated microbial activity and minimized the toxic effects of soil pollutants. Soil is a complex matrix whose physical, physicochemical, chemical and biological properties are correlated with microbial activity and the presence of pollutants such as PAHs. Weather fluctuations during the growing season also exert a strong influence on chemical and biochemical processes in soil. The relationships between the examined soil parameters were, at least partly, identified in PCA. The PCA revealed that biological processes in soil are determined mainly by carbon and nitrogen content in soil, soil pH and Hh values. Microbial proliferation rates affect soil enzymatic activity. However, the impact of specific microbial groups on PAH levels in soil could not be determined based on the results of a short-term study. Data covering a longer period of time are also needed to formulate reliable conclusions about the impact of PAHs on soil enzymatic activity. However, the present findings indicate that PCA should be used to evaluate the relationships between diverse soil parameters.
3.1.1. Soil pH and Hydrolytic Acidity
The growth of soil-dwelling microorganisms is determined by environmental conditions, including soil pH, which influences the microbiological and biochemical parameters of soil. Soil regularly amended with manure was characterized by higher pH values (in 1 mol KCl dm −3 ) and lower hydrolytic acidity than soil supplied with mineral fertilizers only ( ). Increasing nitrogen rates clearly decreased soil pH and increased hydrolytic acidity, and the greatest changes in these parameters were observed under the influence of the highest nitrogen rate. In a study by Lemanowicz , high nitrogen rates and the absence of liming also undesirably increased hydrolytic acidity in soil. As expected, regular liming considerably increased soil pH and reduced hydrolytic acidity. The effects of liming were more pronounced in soil amended with manure than in soil supplied with mineral fertilizers only.
3.1.2. Organic Carbon and Total Nitrogen Content
Carbon and nitrogen are essential for PAH degradation. Microorganisms have different nutritional requirements, and various values of C:N ratios have been reported as optimal in the literature. Fungi dominate in soils with high C content and limited N supply. In turn, bacterial growth is influenced by both C and N content . According to Amezcua-Allieri et al. , the C:N ratio affects the rate at which PAHs are removed from soil. Farahani et al. reported that the rate of PAH degradation in soil is determined by the C:N ratio in the growth medium and the chemical form of nitrogen. Organic carbon and total nitrogen are important indicators of soil fertility . In the present study, the content of organic carbon and total N in soil was significantly influenced by manure and mineral fertilization ( , ). Manure (M) exerted a significant effect, whereas varied mineral fertilization (Min) and the interaction between these factors (M × Min) exerted highly significant effects on the total nitrogen content of soil. Nitrogen fertilization clearly increased the total nitrogen content of soil relative to the control treatment and enhanced the accumulation of Corg in soil in 2015 ( ). The increase in soil Corg content in response to rising N rates can be attributed to the accumulation of biomass in soil after the harvest of each crop grown in rotation. Siwik-Ziomek and Lemanowicz also reported an increase in the total nitrogen content of soil in response to increasing rates of N fertilizer. The value of the Block (B) parameter was not significant, which indicates that soil variability in the experimental field had no effect on the content of Corg and total N.
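Because the C:N ratio recurs throughout this discussion as a control on PAH degradation, a minimal sketch of how it is derived from the two measured quantities is given below. This is an illustration only; the Corg and Ntot values in it are hypothetical placeholders, not results of this experiment.

```python
# Minimal sketch: C:N mass ratio from organic carbon (Corg) and total nitrogen
# (Ntot) contents. The sample values are hypothetical, not data from this study.

def cn_ratio(c_org_g_kg: float, n_tot_g_kg: float) -> float:
    """Return the C:N mass ratio of a soil sample."""
    return c_org_g_kg / n_tot_g_kg

samples = {
    "manure-amended plot": (9.8, 0.95),  # g C kg-1 soil, g N kg-1 soil (placeholders)
    "mineral-only plot":   (7.1, 0.80),
}
for label, (c, n) in samples.items():
    print(f"{label}: C:N = {cn_ratio(c, n):.1f}")
```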
3.2.1. Microbial Abundance
Organotrophic Bacteria
In soil sown with spring barley, the counts of organotrophic bacteria were 1.7 times higher in treatments that were regularly amended with manure than in treatments that were supplied with mineral fertilizers only ( a). The abundance of organotrophic bacteria increased with a rise in the N rate ( b). The highest N rate induced the greatest (1.4-fold) increase in the counts of organotrophic bacteria relative to the control treatment. The growth of organotrophic bacteria was also stimulated by higher potassium rates (66.4 and 99.7 kg∙ha −1 ). Liming decreased soil acidity and increased the availability of nutrients for organotrophic bacteria. Higher N and K rates induced similar effects. In 2015, the abundance of organotrophic bacteria in soil varied widely from 18 × 10 8 to 283 × 10 8 CFU kg −1 DM soil ( c). Bacterial counts were highest in soil samples collected in May (144 × 10 8 CFU kg −1 DM soil) and lowest in August (2.5 times lower). In April, the average abundance of organotrophic bacteria reached 69 × 10 8 CFU kg −1 DM soil and it was 19% lower than in September. May and September were characterized by the most favorable temperatures for bacterial growth (12.0 °C and 12.6 °C, respectively, during the 7-day monitoring period before sampling), which could explain the increase in the abundance of organotrophic bacteria in these months. According to Borowik et al. , organotrophic bacteria proliferate most rapidly at a temperature of around 15 °C.
Ammonifying Bacteria
Manure and mineral fertilizers significantly modified the abundance of ammonifying bacteria in soil ( a,b). The growth of these microorganisms was enhanced in treatments regularly amended with manure. Potassium exerted varied effects on the counts of ammonifying bacteria; a moderate K rate decreased their abundance, whereas the highest K rate stimulated the proliferation of ammonifying bacteria ( b). Magnesium supplied with N 2 P 1 K 2 had a minor influence on the counts of ammonifying bacteria. As expected, regular liming created the most favorable environment for the growth of ammonifying bacteria. The abundance of ammonifying bacteria varied during the growing season ( c), and it was highest in August (139 × 10 8 CFU kg −1 DM soil), a month characterized by highly unfavorable weather conditions during the 7-day monitoring period before sampling (drought and very high temperature, 19.7 °C). According to Dąbek-Szreniawska et al. , a decrease in soil moisture content stimulates the growth of ammonifying bacteria. In the present study, the counts of ammonifying bacteria were 8% lower in May than in August, and precipitation levels (3.9 mm) during the 7-day monitoring period before sampling were lower than in April and September ( ). The abundance of ammonifying bacteria was lowest in April (65 × 10 8 CFU kg −1 DM soil) and it was 15% higher in September (precipitation during the 7-day monitoring period before sampling reached 7.2 and 7.8 mm, respectively). The analyzed parameter was highest in May and August and considerably lower in April and September. These results could be attributed to optimal temperatures for microbial growth in May and August. Despite low precipitation in these months, soil water content was probably sufficient to promote the growth of ammonifying bacteria. Manure application increased the counts of ammonifying bacteria 1.4-fold relative to treatments supplied with mineral fertilizers only.
Nitrogen-Fixing Bacteria
Manure significantly increased the counts of N-fixing bacteria in soil ( a). The abundance of N-fixing bacteria was 1.4-fold higher in soil amended with manure every other year than in soil supplied with mineral fertilizers only. The decomposition of organic matter supplied with manure increased the content of mineral N. It should also be noted that manure creates favorable conditions for the growth of soil-dwelling microorganisms, including N-fixing bacteria. An increase in the content of mineral N and higher microbial counts promoted N immobilization in soil. Increasing N rates exerted a minor effect on the abundance of N-fixing bacteria in soil ( b). Potassium was a more influential factor, and higher K rates stimulated the proliferation of N-fixing bacteria. Magnesium decreased the abundance of N-fixing bacteria, whereas liming promoted their growth. The counts of N-fixing bacteria in soil varied widely from 18 × 10 8 to 247 × 10 8 CFU kg −1 DM soil ( c). Average microbial counts were similar in April and August. The abundance of N-fixing bacteria was nearly two-fold higher in May, and it was 21% lower in September than in May.
Actinobacteria
Long-term manure application as well as mineral fertilization significantly modified actinobacteria counts in soil ( a,b). Actinobacteria counts were twice as high in soil regularly amended with manure as in soil supplied with mineral fertilizers only ( a). Lower N rates (30 and 60 kg ha −1 ) did not have a highly stimulating effect on actinobacteria counts. Only the highest N rate (90 kg ha −1 ) induced a 15% increase in the abundance of actinobacteria relative to the control (without mineral fertilization). According to Vetanovetz and Peterson , mineral N fertilization increases actinobacteria counts in soil. In the current study, the growth of actinobacteria was stimulated by higher K rates (66.4 and 99.7 kg ha −1 ). The highest K rate induced the greatest (2-fold) increase in actinobacteria counts relative to the lowest K rate. Magnesium did not influence the abundance of the studied bacterial group. Actinobacteria counts clearly increased in regularly limed soil. Actinobacteria counts varied considerably during the growing season of 2015 ( c). The mean abundance of actinobacteria increased steadily between April and September. Barabasz and Vořišek and Natywa et al. reported the highest actinobacteria counts in summer, which could be attributed to high temperatures.
Fungi
Fungal abundance was 32% higher in soil amended with manure every other year than in soil supplied with mineral fertilizers only ( a). Fungal counts also increased in response to higher N rates ( b). Similar observations were made by Natywa et al. , Sosnowski et al. and Sosnowski and Jankowski . According to Niewiadomska et al. , N fertilization considerably increased fungal abundance relative to control soil. In the work of Wyszkowska , increasing urea rates also led to a significant increase in fungal counts in soil. In the present study, fungal abundance was 1.7-fold higher in soil supplied with the highest K rate than in soil fertilized with N 2 P 1 K 1 . Regular liming also enhanced fungal growth in soil.
In the growing season of 2015, fungal counts in soil ranged from 11 × 10 6 to 250 × 10 6 CFU kg −1 DM soil ( c). Mean fungal counts were highest in May (130 × 10 6 CFU kg −1 DM soil) and lowest in August. The analyzed parameter was similar in early spring (April) and late summer (September).
3.2.2. Enzymatic Activity
Dehydrogenases
Dehydrogenases (DHA) are regarded as reliable indicators of soil biochemical activity. Dehydrogenase activity is influenced by enzymes secreted by soil-dwelling microorganisms, both aerobic and anaerobic . Dehydrogenases determine soil quality and fertility . Ciarkowska and Gambuś reported a strong correlation between DHA activity and organic carbon content in soil. In the present study, manure and mineral fertilization modified DHA activity in soil ( a,b). Dehydrogenase activity was 1.8-fold higher in soil with manure application than in soil supplied with mineral fertilizers only ( a). Manure exerted similar effects on DHA activity in the work of Koper and Siwik-Ziomek and Saha et al. . According to Piotrowska and Koper and Natywa et al. , DHA activity in soil increased in response to organic amendments and decreased in response to mineral fertilizers (NPK+Ca). In turn, Kucharski and Wałdowska found that mineral fertilizers stimulated DHA activity, but to a smaller extent than organic amendments. A comparison of the observed changes in DHA activity revealed that the lowest N rate used in the study (30 kg ha −1 ) decreased the analyzed parameter by 8% relative to the control treatment ( c). DHA activity also decreased in response to higher N rates (60 and 90 kg N ha −1 ). Kucharski , Lemanowicz and Koper and Niewiadomska et al. also found that higher N rates suppressed DHA activity in soil. In contrast, potassium did not inhibit DHA activity and even increased the studied parameter. In a study by Koper and Siwik-Ziomek , comprehensive mineral and organic fertilization with calcium and magnesium enhanced the biochemical activity of soil-dwelling microorganisms, increased DHA activity and promoted microbial growth. In the current experiment, regular soil liming enhanced DHA activity by increasing soil pH and reducing hydrolytic acidity. Zaborowska et al. also reported that DHA activity decreased more than three-fold when soil pH was reduced from 7.1 to 6.4. Kalembasa and Kuziemska found that soil liming stimulated DHA activity. Dehydrogenase activity in soil was determined in the range of 2.13 to 9.65 µmol TFF kg −1 DM h −1 during the growing season ( c). This parameter peaked in August 2015 (5.75 µmol TFF kg −1 DM h −1 ) and was only somewhat lower in September (5.26 µmol TFF kg −1 DM h −1 ). In May, DHA activity was 20% higher than in April. The observed variations in the studied parameter could be attributed to changes in the moisture and oxygen content of soil ( ).
Catalase
Catalase is an antioxidant enzyme that protects plants against abiotic and biotic factors that cause oxidative stress . Manure amendment increased catalase activity in soil ( a). The value of this parameter was 17% higher in the second year after manure application than in soil supplied with mineral fertilizers only. In a study by Lemanowicz and Koper , catalase activity also increased in treatments where maize was amended with manure. In the present study, the lowest N rate (30 kg ha −1 ) had no significant effect on catalase activity in soil ( b). In turn, the highest N rate (90 kg ha −1 ) increased catalase activity.
Increasing N rates also stimulated catalase activity in the work of Lemanowicz and Koper . Potassium and magnesium fertilizers stimulated catalase activity in soil. Regular liming was particularly effective in enhancing catalase activity, and it increased the analyzed parameter 1.4-fold relative to the treatment fertilized with N 2 P 1 K 2 Mg. Catalase activity in soil varied during the growing season of 2015 ( c). The highest value was noted in September, followed by August; it was lowest in May.
Urease
Regular supply of manure increased organic matter content and stimulated urease activity in soil ( a). Kucharski et al. also found that manure application significantly enhanced urease activity in tested soils. In our study, urease activity was not significantly modified by mineral fertilization ( b). However, soil liming exerted a positive effect on urease activity. In the growing season of 2015, urease activity ranged from 0.02 to 0.46 mmol N-NH 4 kg −1 soil h −1 ( c). The studied parameter was highest in September (0.23 mmol N-NH 4 kg −1 soil h −1 ) and lowest in May (0.04 mmol N-NH 4 kg −1 soil h −1 ). Urease activity was 1.5 times higher in August (0.18 mmol N-NH 4 kg −1 soil h −1 ) than in April.
Acid Phosphatase
Acid phosphatase activity differed significantly between treatments amended with manure and treatments supplied with mineral fertilizers only ( a,b). In soil regularly amended with manure, acid phosphatase activity was 1.7 times higher than in soil supplied with mineral fertilizers only. Lemanowicz and Koper also found that acid phosphatase activity was lower when manure was not applied. In turn, mineral fertilizers had no significant influence on the activity of the discussed enzyme. However, higher N rates can stimulate acid phosphatase activity by increasing the concentration of H + in the soil solution as a result of nitrification and enhanced NH 4 + uptake by plants. The highest N rate applied (90 kg N ha −1 ) induced the greatest (20%) increase in acid phosphatase activity relative to the control treatment. In the work of Kucharski , very high N rates (240 kg ha −1 ) stimulated the activity of acid phosphatase. Lemanowicz and Koper , Lemanowicz and Siwik-Ziomek and Lemanowicz also reported an increase in acid phosphatase activity with a rise in mineral N rates. The cited authors observed that high N rates stimulated the activity of acid phosphomonoesterase. In contrast, liming induced a minor decrease in the studied parameter. Phosphomonoesterases are highly sensitive to changes in pH, and the optimal soil pH for acid phosphatase is 4.0–6.5 . Kuziemska et al. found that soil liming significantly decreased acid phosphatase activity regardless of year or sampling date. Acid phosphatase activity ranged from 2.94 to 14.66 mmol PN kg −1 h −1 in the growing season of 2015 ( c). The analyzed parameter was highest in August and September and much lower in April and May. According to Natywa et al. , acid phosphatase activity increases in fall due to the supply of fresh organic matter with harvest residues that stimulate microbial growth. Lemanowicz and Krzyżaniak observed that enzymatic processes are difficult to interpret during the growing season because they are largely influenced by changes in temperature and soil moisture content.
Alkaline Phosphatase
Long-term manure amendment and mineral fertilization modified alkaline phosphatase (AlP) activity in soil ( a,b).
In soil amended with manure every other year, this parameter was 2.3 times higher than in treatments supplied with mineral fertilizers only. According to the literature, organic phosphorus enhances alkaline phosphatase activity in soil . Sienkiewicz et al. found that prolonged manure amendment increased the content of available phosphorus in soil. Lemanowicz and Koper reported strong correlations between the content of organic and plant-available phosphorus and phosphatase activity. In their opinion, phosphatase activity is indicative of phosphorus levels in soil. Alkaline phosphatase activity was stimulated by the lowest N rate (30 kg N ha −1 ) and suppressed by higher N rates (60 and 90 kg ha −1 ). In the work of Lemanowicz and Koper , an N rate of 90 kg N ha −1 also induced a significant (13%) decrease in AlP activity. Higher N rates also inhibited AlP activity in a study by Lemanowicz . Kucharski reported that a very high N rate (240 kg ha −1 ) decreased the value of this parameter in soil. In the current experiment, regular soil liming increased AlP activity two-fold relative to the treatment fertilized with N 2 P 1 K 2 Mg. Similar results were reported by Kalembasa and Kuziemska and Kuziemska et al. . Liming enhances soil enzymatic activity because nutrients are more available in soils with a near-neutral pH . Lemanowicz found a correlation between AlP activity and hydrolytic acidity. Similar observations were made in the present study, where AlP activity decreased with a rise in hydrolytic acidity ( ). Alkaline phosphatase activity fluctuated in the growing season of 2015 ( c). This parameter was highest in April (1.75 mmol PN kg −1 h −1 ), and the values noted in May and August were similar. Higher AlP activity in spring could be associated with rapid phosphorus uptake by plant roots and the resulting decrease in the content of available phosphorus in soil. Such conditions support the secretion of phosphatases by plant roots, which catalyze the hydrolysis of organic phosphorus compounds to mineral compounds . According to Lemanowicz and Bartkowiak , phosphatase secretion by roots and microorganisms is determined by the plants' phosphorus requirements. In the present study, alkaline phosphatase activity was lowest in September (1.08 mmol PN kg −1 h −1 ).
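Sections 3.1 and 3.2 repeatedly report whether manure (M), mineral fertilization (Min), their interaction (M × Min) and block (B) had significant effects. A minimal sketch of how such a two-factor analysis of variance can be run is shown below; the CSV file and column names are hypothetical placeholders, and the statistical software actually used in the study may differ.

```python
# Minimal sketch of a manure (M) x mineral fertilization (Min) ANOVA with a
# block term, of the kind used to flag significant treatment effects above.
# The file name and column names are hypothetical, not the study's own.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("soil_measurements.csv")  # columns: manure, mineral, block, n_total

# Two-way ANOVA with interaction; block enters as an additive factor,
# mirroring the separately reported B (block) effect.
model = ols("n_total ~ C(manure) * C(mineral) + C(block)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)  # F statistics and p-values for M, Min, M x Min and B
```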
Statistical analyses revealed that manure (M), mineral fertilization (Min) and M × Min interactions significantly influenced the total content of 16 PAHs and the content of LMW PAHs (naphthalene, acenaphthene, acenaphthylene, fluorene, anthracene, phenanthrene, fluoranthene, pyrene and chrysene) and HMW PAHs (benzo(a)anthracene, benzo(a)pyrene, benzo(b)fluoranthene, benzo(k)fluoranthene, benzo(g,h,i)perylene, indeno(1,2,3-cd)pyrene and dibenzo(a,h)anthracene) ( ). In 2015, the total content of PAHs (16) and the content of LMW PAHs was higher in soil amended with manure than in soil supplied with mineral fertilizers only (the effect of manure) ( ). The content of PAHs in soil varied significantly across sampling dates ( ). 3.3.1. Content of LMW PAHs in Soil The content of LMW PAHs in soil differed significantly during the growing season ( , ); it was highest in May (384.7 µg kg −1 ) and lowest in August (119.8 µg kg −1 ). This value was significantly higher in April (259.5 µg kg −1 ) than in September (210.0 µg kg −1 ). In the growing season of 2015, the content of LMW PAHs (naphthalene, acenaphthene, acenaphthylene, fluorene, anthracene, phenanthrene, fluoranthene, pyrene and chrysene) was highly similar in soil treated with manure and in soil supplied with mineral fertilizers only ( ). The analyzed parameter was higher between April and August in soil treated by manure and in September in treatments supplied with mineral fertilizers. In April and September, the content of LMW PAHs was identical in soil supplied with mineral fertilizers only. 3.3.2. Content of HMW PAHs in Soil The content of HMW PAHs (benzo(a)anthracene, benzo(a)pyrene, benzo(b)fluoranthene, benzo(k)fluoranthene, benzo(g,h,i)perylene, indeno(1,2,3-cd)pyrene and dibenzo(a,h)anthracene) in soil differed significantly during the growing season ( , ). The analyzed parameter was highest in September (158.3 µg kg −1 ) and lowest in August (75.0 µg kg −1 ). The content of HMW PAHs was lower in April, May and August, and in September in soil supplied with mineral fertilizers only ( ). In April, the greatest difference in the analyzed parameter was observed between soil with manure treatment (81.5 µg kg −1 ) and soil supplied with mineral fertilizers only (117.0 µg kg −1 ). 3.3.3. Total Content of 16 PAHs The total content of 16 PAHs in soil varied significantly in the growing season of 2015 ( , ). The fluctuations in the analyzed parameter could have resulted from varied weather conditions. According to Eriksson et al. , low temperatures significantly decrease the rate of PAH degradation in soil. Wang et al. observed that, in periods of heavy rainfall, atmospheric PAHs are transported to soil and tend to accumulate in the soil environment. The total content of PAHs was lowest in August (194.8 µg kg −1 ) and highest in May (484.6 µg kg −1 ). The analyzed parameter was significantly lower in April (358.7 µg kg −1 ) than in September (368.3 µg kg −1 ). According to the IUNG system , soil can be classified as non-contaminated (i.e., with ∑13PAH concentrations < 600 µg kg −1 ). Microbial abundance and soil enzymatic activity undoubtedly influenced the observed fluctuations in the total content of 16 PAHs. The examined parameter was lowest in August when dehydrogenase activity in soil was much higher ( ). In a study by Maliszewska-Kordybach and Smreczak , high PAH levels inhibited the activity of dehydrogenases, which is highly sensitive to these pollutants. 
In the present study, fungal abundance was highest in May, when soil was most contaminated with PAHs ( ). Gałązka et al. also reported an increase in fungal counts with a rise in anthracene levels in soil. Samanta et al. emphasized the important role of fungi in the biodegradation of PAHs in the soil environment and compared their activity with that of bacteria. In the current study, the total content of 16 PAHs was higher in soil amended with manure on all sampling dates.
Correlations between selected properties of soil samples collected on four dates in 2015 were identified by principal component analysis (PCA). In April, the first two principal components explained 65% of total variance in the following variables: abundance of organotrophic bacteria, ammonifying bacteria, nitrogen-fixing bacteria, actinobacteria and fungi; activity of dehydrogenases, catalase, urease, acid phosphatase and alkaline phosphatase; content of total nitrogen and organic carbon; Hh and pH; content of LMW PAHs; content of HMW PAHs; total content of 16 PAHs ( , ). The analyzed parameters were grouped on one side of the PC1 axis, and the total variance explained by this component was very high at 48.3%. Microbial counts (organotrophic bacteria, ammonifying bacteria, N-fixing bacteria and actinobacteria) were strongly correlated with alkaline phosphatase activity in soil. An analysis of the first principal component (PC1) also revealed strong negative correlations between Hh values and the activity of catalase, dehydrogenases and urease; pH; total nitrogen content; organic carbon content; total content of 16 PAHs; and content of LMW PAHs. An analysis of the second principal component (PC2) demonstrated that the negative correlation between acid phosphatase activity and the content of HMW PAHs explained 16.7% of total variance in the examined soil properties.

The influence of enzymatic activity on the studied soil parameters increased in May. The strong correlations between the activity of dehydrogenases, catalase, urease, acid phosphatase and alkaline phosphatase, pH, organic carbon content, total nitrogen content and actinobacteria counts explained 41.4% of total variance ( , ). An analysis of PC2 revealed strong correlations between the total content of 16 PAHs, the content of LMW PAHs, the counts of organotrophic bacteria and hydrolytic acidity.

In August, PC1 explained 42.1% of total variance in the examined soil parameters. The abundance of organotrophic bacteria and actinobacteria and soil enzymatic activity (dehydrogenases, urease, and acid and alkaline phosphatase) were strongly linked with pH and with organic carbon and total nitrogen content ( , ). Similar to the previous sampling date, the studied parameters were strongly negatively correlated with Hh values. An analysis of PC2 revealed that the strong correlation between the counts of nitrogen-fixing bacteria and the content of HMW PAHs explained 17.0% of total variance.

Soil samples collected in September were also characterized by high levels of microbial and enzymatic activity. High microbial abundance can be attributed to a higher content of organic matter that was supplied to soil with harvest residues. An analysis of PC1 demonstrated that strong correlations between all microbial counts (organotrophic, ammonifying and N-fixing bacteria and actinobacteria), enzymatic activity (dehydrogenases, catalase, urease, and acid and alkaline phosphatase), pH (in 1 mol KCl) and the content of organic carbon and total nitrogen explained 50.5% of total variance ( , ). An analysis of PC2 also revealed that the total PAH content and the content of LMW and HMW PAHs in soil were strongly correlated. Microbial abundance increases under conditions that support microbial growth. According to Wielgosz and Szember, microbial counts tend to be higher in two periods of the year: in spring, when temperature and soil moisture content increase, and in fall, when fresh organic matter is supplied to the soil environment with harvest residues.
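The per-date PCA workflow described above can be reproduced in a few lines of scikit-learn. The sketch below is illustrative only; the column names stand in for the measured variables (microbial counts, enzyme activities, Corg, Ntotal, pH, Hh and the PAH fractions), since the actual layout of the dataset is not shown here.

```python
# A minimal sketch of a per-sampling-date PCA on standardized soil variables.
# Column names are hypothetical placeholders for the variables listed above.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

def pca_by_date(df: pd.DataFrame, variables, date_col="date"):
    """Run a separate PCA for each sampling date and report PC1/PC2."""
    results = {}
    for date, group in df.groupby(date_col):
        X = StandardScaler().fit_transform(group[variables])  # z-score each variable
        pca = PCA(n_components=2).fit(X)
        results[date] = {
            "explained_variance_%": 100 * pca.explained_variance_ratio_,
            # Loadings show how strongly each soil variable aligns with a PC.
            "loadings": pd.DataFrame(pca.components_.T,
                                     index=variables, columns=["PC1", "PC2"]),
        }
    return results

# Example call with hypothetical column names:
# out = pca_by_date(soil_df, ["organotrophs", "actinobacteria", "fungi",
#                             "dehydrogenases", "urease", "AlP", "AcP",
#                             "C_org", "N_total", "pH", "Hh",
#                             "LMW_PAH", "HMW_PAH", "PAH16"])
```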
Natywa et al. and Wielgosz and Szember also observed that the increase in the abundance of soil-dwelling microorganisms in fall is directly linked with the additional supply of organic matter in the form of harvest residues. Sosnowski et al. reported higher soil microbial counts in fall than in spring, regardless of the experimental factors, and attributed their findings to higher precipitation in fall. In the work of Lemanowicz and Bartkowiak, acid phosphatase activity was highly correlated with the organic carbon content of soil. In the present study, the above correlation was noted in soil samples collected between May and September. According to Dąbek-Szreniawska et al., soil pH has a considerable influence on enzymatic activity. In the current experiment, hydrolytic acidity had a negative effect on the activity of soil enzymes, excluding acid phosphatase and catalase. Natywa et al. found that dehydrogenase activity was significantly affected by pH and by the content of organic carbon and total nitrogen in soil. Ciarkowska and Gambuś also reported a strong correlation between dehydrogenase activity and organic carbon content. In turn, Zaborowska et al. found that dehydrogenase activity was strongly affected by soil pH. In a study by Maliszewska-Kordybach and Smreczak, soil contamination with PAHs inhibited dehydrogenase activity. Lipińska et al. observed that dehydrogenases were more resistant to PAH pollution than urease. According to Wyszkowska and Wyszkowski, Lipińska et al. and Lipińska et al., urease activity is compromised in soils heavily contaminated with PAHs. Correlations between LMW PAHs (fluorene, fluoranthene and anthracene) and dehydrogenase activity were also reported by Klimkowicz-Pawlas and Maliszewska-Kordybach and by Oleszczuk et al. The content of PAHs is determined by the concentration of organic carbon and total nitrogen in soil. In the present study, organic carbon and total nitrogen concentrations were strongly correlated with the total content of PAHs and the content of LMW PAHs in soil samples collected in early spring ( ). Maliszewska-Kordybach et al., Wyszkowski and Ziółkowska and Jin et al. also observed significant correlations between organic carbon content and PAH levels in soil. In contrast, organic carbon content had no significant impact on PAH levels in soil in a study by Bi et al. Gałązka et al. demonstrated that the content of HMW PAHs was negatively correlated with acid phosphatase activity. The above correlation was also noted in this study in soil samples collected in early spring. In turn, Gałązka et al. found that fungal abundance increased with a rise in anthracene levels in soil. According to Samanta et al., fungi and bacteria play an equally important role in PAH biodegradation in soil. Lehmann et al. demonstrated that an increase in soil organic carbon content stimulated microbial activity and minimized the toxic effects of soil pollutants. Soil is a complex matrix whose physical, physicochemical, chemical and biological properties are correlated with microbial activity and the presence of pollutants such as PAHs. Weather fluctuations during the growing season also exert a strong influence on chemical and biochemical processes in soil. The relationships between the examined soil parameters were, at least partly, identified by PCA. The PCA revealed that biological processes in soil are determined mainly by the carbon and nitrogen content of soil, soil pH and Hh values.
Microbial proliferation rates affect soil enzymatic activity. However, the impact of specific microbial groups on PAH levels in soil could not be determined based on the results of a short-term study. Data covering a longer period of time are also needed to formulate reliable conclusions about the impact of PAHs on soil enzymatic activity. Nevertheless, the present findings indicate that PCA is a useful tool for evaluating the relationships between diverse soil parameters.
The study revealed considerable seasonal variations in PAH levels in soil, depending on weather conditions and the activity of soil-dwelling microorganisms. The total content of 16 PAHs and the content of LMW and HMW PAHs were higher in soil amended with manure than in soil supplied with mineral fertilizers only. Manure application increased organic carbon and total nitrogen content, stimulated the activity of organotrophic, ammonifying and nitrogen-fixing bacteria, actinobacteria and fungi, and increased the activity of dehydrogenases, catalase, urease, and acid and alkaline phosphatase. Rising N rates increased the abundance of organotrophic bacteria and fungi and enhanced acid phosphatase activity in soil but inhibited the activity of dehydrogenases and alkaline phosphatase. Soil liming was most effective in increasing the counts of ammonifying bacteria, nitrogen-fixing bacteria, organotrophic bacteria and actinobacteria. Liming also enhanced the activity of catalase, urease and alkaline phosphatase and suppressed acid phosphatase activity. The study showed that manure is one of the important sources of polycyclic aromatic hydrocarbons in soil. Further research is therefore needed to investigate the effects of manure and mineral fertilizers applied under field conditions on the bioremediation of PAH-polluted soils.
A Model for Predicting and Grading the Quality of Grain Storage Processes Affected by Microorganisms under Different Environments

1. Introduction

Fusarium and Aspergillus are the main pathogenic fungi of grain and oil crops. In warm and humid areas, serious diseases reduce the quality of crops such as wheat and corn, posing a serious threat to grain and oil production. Once the temperature and humidity of the grain storage environment change, grain becomes susceptible to pathogenic fungi and molds, some of which produce mycotoxins, such as aflatoxin B1 (AFB1), produced by Aspergillus flavus, and deoxynivalenol (DON) and zearalenone (ZEN), produced by Fusarium. Therefore, a suitable environment is very important for ensuring the quality of grains during storage, and experiments by Zain et al. demonstrated that temperature and moisture are important factors affecting toxin production. Among other things, it is important to calculate the equilibrium moisture content in order to understand the behavior of the moisture content of grains in storage environments. In addition, aflatoxin B1 (AFB1), deoxynivalenol (DON) and zearalenone (ZEN) are common and important toxins in wheat and maize, and they pose serious risks to grain quality and human health. Other common non-toxigenic fungi rarely exceeded the limit values in historical testing records; therefore, in this paper, toxigenic fungi were selected as the main factors when predicting and grading the quality of grain storage processes affected by microorganisms in different environments. On this basis, temperature and moisture were selected as environmental variables, and aflatoxin B1 (AFB1), deoxynivalenol (DON) and zearalenone (ZEN) were chosen as monitoring indicators for this experiment and were sampled and tested regularly.

As the world economy develops and people's living standards continue to improve, food production increases year by year, and food quality issues are of increasing concern. It is noteworthy that global post-production grain quantity and quality losses due to harvesting, storage, deterioration, and insect and mold contamination account for 15% to 20% of the total. Grain storage suffers from serious quality losses, the main causes of which include moisture reduction, dry matter depletion, and pest infestation. Therefore, reducing food losses due to storage and improving food utilization and safety are urgent needs that must be addressed internationally. This is an important prerequisite for establishing a resilient and sustainable global agricultural food system. In the face of such complex challenges, scholars have conducted research on predicting and evaluating quality changes during grain storage in order to determine appropriate environmental settings, both to improve grain storage quality and to reduce grain quality losses during the storage phase. Coradi et al. developed six linear regression models to predict grain storage quality and evaluated the models, achieving high prediction accuracy. Faree et al. used multiple linear regression and an artificial neural network (ANN) to predict the quality of maize grains during storage and achieved better prediction results. Lutz et al. used a wireless sensor network and an IoT platform to monitor the equilibrium moisture content in real time and used an ANN to predict the quality of maize grains stored under different conditions.
Szwedziak et al. used a proprietary computer application based on the RGB model to assess the contamination status of maize grains. Xie et al. predicted public risk perceptions more accurately by building BP neural networks. Liu et al. constructed a bidirectional long short-term memory (BiLSTM) model and selected six influencing factors of municipal solid waste power generation as input indicators to achieve an effective prediction of municipal solid waste power generation. In current research, deep learning methods have gradually been applied to the prediction of quality changes in grain storage processes; however, because quality changes during grain storage depend closely on environmental factors such as temperature and humidity, which have temporal characteristics, simple artificial neural networks (ANNs) cannot solve the problems of gradient explosion and information distortion, and their prediction accuracy is often lower than that of deep learning methods that can learn these close dependencies.

In this study, we developed a FEDformer-based prediction model for quality changes in the grain storage process and a K-means++-based grading evaluation model for quality changes in the grain storage process. Firstly, in the prediction model, we use three factors affecting grain quality to predict the grain quality changes during storage and thus reduce the uncertainty of the prediction model. Secondly, in the clustering model, we set an evaluation index S based on the output of the prediction model, which integrates the current and predicted values of toxin content to grade and evaluate the quality changes during grain storage. The experimental results showed that the grain storage process quality change prediction model had the highest prediction accuracy and the lowest prediction error compared with other models. Finally, we make corresponding suggestions for optimizing grain storage.

The contributions of this study include three main aspects: (1) the establishment of a FEDformer-based model for predicting quality changes in the grain storage process; experiments show that the model predicts the quality changes of grain more accurately than several other deep learning models. (2) The establishment of a K-means++-based grading evaluation model for quality changes in the grain storage process; based on the experimental results of this model, a reasonable grading evaluation of grain quality can be obtained. (3) An analysis of how the microbial environment influences quality changes in the grain storage process, based on the changes in toxin content during storage and the grain quality grading obtained in the above study; corresponding suggestions are made for the optimization of grain storage. In addition, the environmental and quality change data from the grain storage process provide support for subsequent blockchain-based whole-process traceability of grain collection, storage and transportation.

The structure of the paper is as follows: Section 2 reviews the previous literature; Section 3 presents the prediction model and the clustering model proposed in this paper; Section 4 describes the experimental results and analysis; Section 5 is the discussion and implementation section. Finally, the paper is concluded.
2.1. Factors Affecting the Quality of Wheat and Corn during Storage

According to the survey, there are many factors that affect the quality of wheat and corn during storage, the most important of which are toxins. Grain contamination by mycotoxins such as deoxynivalenol (DON), aflatoxin B1 (AFB1) and zearalenone (ZEN), which are very stable and are not metabolized, is harmful to humans and animals; grain contamination by mycotoxins has been observed even in the absence of yield reduction, thus leading to quality loss; furthermore, several of these toxins are highly toxic to both humans and animals, depending on the type of toxin and the amount of food or feed consumed. Consumption of food or feed contaminated with mycotoxins can cause various diseases in humans and animals, generally known as mycotoxicoses, with strong toxic effects such as skin irritation, vomiting, diarrhea, weakness, loss of appetite, bleeding, neurological disorders and abortion, and it may even cause death. In addition, a study by Bennett et al. showed that some types of Fusarium toxins (zearalenone ZEN, etc.) are associated with an increasing number of cancers in humans. Therefore, toxins in grain seriously affect grain quality and endanger human health, and they must be given sufficient attention.

Toxin production in grain is a complex process, and the rapid reproduction and growth of microorganisms are responsible for its toxicity, while the growth of microorganisms is closely related to the environment and is mainly influenced by temperature and moisture. Toxin-producing fungi among microorganisms originate mainly from various fungi of the genera Aspergillus, Penicillium and Fusarium; these fungi are capable of producing various toxic secondary metabolites such as aflatoxin B1, zearalenone ZEN and deoxynivalenol DON, which accelerate the respiration rate of grain and increase the decomposition of carbohydrates, proteins and oils, thus seriously affecting the quality of grain. Storage conditions such as a climate suitable for fungal growth, moisture, temperature and oxygen levels are considered influencing factors for toxin production. Among them, temperature and moisture are the key factors affecting microbial growth, and both affect grain quality by influencing the activity of grain microorganisms. In a study by Saleemullah et al., aflatoxin content was measured and analyzed in grains stored for 18 months, and the aflatoxin content increased from 27.1 µg/kg to 31.9 µg/kg; this indicates that the aflatoxin content of grains was strongly influenced by the storage period, and subsequent experiments showed that storing grains in warehouses during heavy rains led to the increased formation of toxins such as aflatoxins. Kumar et al. demonstrated that high temperatures are considered to be an important determinant of fungal growth and the production of toxins such as AFB1. The activity of microorganisms is closely related to the environment in which they live; any change in the environment will affect their activity. Suitable environmental conditions can promote the growth and reproduction of microorganisms, while adverse environmental conditions can inhibit the growth of microorganisms and can even cause their death.
The impact of moisture on wheat and corn in storage is manifested as follows: once the water content reaches levels suitable for microbial growth, wheat and corn become susceptible to infiltration by plant-pathogenic molds, which produce fungal toxins such as aflatoxin, zearalenone and vomitoxin, thus affecting their quality. The influence of temperature on the storage of wheat and maize is mainly manifested in the fact that the change in grain temperature is closely related to the condition of the grain itself, microbial activity and many other factors. Baliukoniene et al. conducted experiments to determine toxins in maize and wheat at different storage temperatures; when the temperature in the silo was 15–25 °C, after one month of storage, wheat was strongly contaminated with microfungi: it contained 31.37 × 10³ cfu/g, which was 50% and 71% higher compared with other grain bins at different temperatures, and the zearalenone ZEN content was 2.89 µg/kg in corn and 5.01 µg/kg in wheat. In contrast, the zearalenone ZEN content of corn and wheat in other grain silos ranged from 40% to 64.6% of the levels mentioned above. The effect of grain temperature on grain storage quality is mainly expressed through its effect on pests, microorganisms and grain quality. Safe grain storage and the maintenance of grain quality can be achieved by controlling the temperature of the environment in which the grain pile organisms live, limiting the growth and reproduction of harmful organisms, and delaying the aging of grain quality.

2.2. Overview of Prediction Methods

The time series forecasting methods in existing studies can be classified into traditional linear regression methods, machine learning methods and deep learning methods. Traditional linear regression methods include moving average models (MA) based on historical white noise, autoregressive models (AR) based on historical time series, and autoregressive integrated moving average models (ARIMA) that combine the first two; these are widely used in time series forecasting tasks. However, none of the above models can capture nonlinear relationships. Machine learning can solve simple nonlinear relationships. Drucker et al. equipped a support vector machine (SVM) with regression prediction capability for nonlinear data by introducing soft intervals and distance loss functions, hence the name support vector regression (SVR). Yu et al. used a two-stage support vector regression (BI-SVR) based on Bayesian inference as a soft sensor to predict a fed-batch penicillin cultivation process, and the prediction results of BI-SVR were significantly improved compared with SVM. Jaques et al. used the decision tree algorithms REPTree and M5P, random forest and linear regression to predict the physical and physiological quality of soybean seeds and experimentally demonstrated an improved accuracy index compared with linear regression. Compared with the above methods, deep learning methods are able to solve more complex nonlinear problems. Artificial neural networks (ANN) mainly consist of input, hidden and output layers, with the multilayer perceptron (MLP) being the most commonly used. Asadollahfardi et al. applied an ANN in the MLP framework to predict total dissolved solids (TDS) in the Zayande Rud River, Isfahan Province, Iran, and obtained reliable prediction results.
Recurrent neural networks (RNN) are capable of learning dynamic temporal features using memory units, but the model is prone to vanishing gradients and has difficulty learning long-term dependencies. The long short-term memory network (LSTM) solves the problem of vanishing gradients when training on longer sequences by learning temporal dependencies through a gate mechanism; it can maintain temporal information in its state for a long time, and it is widely used in time series prediction. Kang et al. and Vo et al. applied the bidirectional long short-term memory network (Bi-LSTM) to time series prediction. Compared with LSTM, it considers both forward and backward sequences, which is advantageous for time or location data with context-dependent features. The gated recurrent unit (GRU) is a simplified version of LSTM; it merges the forget and input gates into a single update gate, and it has fewer parameters and reduced complexity compared with LSTM. Yang et al. proposed a BRNN-based method, Bi-GRU, for predicting the remaining charging time of trams in order to solve the problem of one-way prediction; compared with LSTM and SVR models, it performed better in terms of accuracy and stability. In 2017, Vaswani et al. proposed a novel architecture, the Transformer, which showed powerful modeling capabilities for long-term dependencies and interactions in time series data. Zhang et al. used a Transformer-based time series prediction model to predict the next hour's electricity consumption and achieved promising results. Many Transformer variants have been proposed to address the special challenges of time series forecasting tasks. Among them, Zhou et al. proposed the Informer framework, an efficiency-optimized long-sequence time series forecasting model based on the Transformer; it greatly reduces the time and space complexity of the Transformer.
In recent years, the Transformer has become a typical representative of the neural network models used in the field of time series prediction. FEDformer is an improved model based on the Transformer; it focuses on analyzing the relationships among data indicators, reducing time complexity, and improving the prediction accuracy and learning efficiency of the model, and it can thus reasonably and effectively predict toxin content.

3.1. Data Source

The grain storage monitoring data for this study covered more than 20 regions, with 139 wheat and corn samples totaling 2100 units of data; the wheat and maize originated from production areas in the middle and lower reaches of temperate river valleys. The datasets for training and testing in the experiment were divided as shown in . The microbial toxin limits selected for this paper were as follows: for maize, 20 μg/kg for aflatoxin B1, 500 μg/kg for zearalenone ZEN and 1000 μg/kg for deoxynivalenol DON; for wheat, 5 μg/kg for aflatoxin B1, 60 μg/kg for zearalenone ZEN and 1000 μg/kg for deoxynivalenol DON.
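These limits translate directly into a screening rule for the monitoring records. The sketch below shows one way to encode them; it is illustrative only, and the column names (sample_id, grain, AFB1, ZEN, DON, etc.) are assumptions about the data layout rather than the study's actual schema.

```python
# A minimal sketch of representing the monitoring records and the toxin
# limits from Section 3.1; column names are hypothetical.
import pandas as pd

# Limits (µg/kg) taken from the text above.
LIMITS = {
    "maize": {"AFB1": 20.0, "ZEN": 500.0, "DON": 1000.0},
    "wheat": {"AFB1": 5.0,  "ZEN": 60.0,  "DON": 1000.0},
}

def flag_exceedances(df: pd.DataFrame) -> pd.DataFrame:
    """Mark records whose toxin content exceeds the limit for the grain type.

    Expects columns: sample_id, grain ('maize'/'wheat'), date, temperature,
    moisture, AFB1, ZEN, DON (one row per day within a 30-day period).
    """
    out = df.copy()
    for toxin in ("AFB1", "ZEN", "DON"):
        limit = out["grain"].map(lambda g: LIMITS[g][toxin])
        out[f"{toxin}_over_limit"] = out[toxin] > limit
    return out
```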
3.2. FEDformer-Based Model for Predicting Quality Changes in Grain Storage Processes

3.2.1. Model Fundamentals

FEDformer combines the Transformer with seasonal-trend decomposition: the decomposition method captures the global pattern of the whole sequence, while the Transformer captures its more detailed structure. The main structure (backbone) uses an encoder-decoder architecture consisting of n encoders and m decoders, and it includes four internal submodules: a frequency domain learning module (Frequency Enhanced Block, FEB), a frequency domain attention module (Frequency Enhanced Attention, FEA), a period-trend decomposition module (MOE Decomp) and a one-dimensional convolution module (Conv1d). The MOE Decomp module decomposes the sequence into a periodic term (seasonal, S) and a trend term (trend, T). This decomposition is not performed only once, but in an iterative decomposition mode. In the encoder, the input passes through two MOE Decomp layers, each of which decomposes the signal into seasonal and trend components. The trend component is discarded, and the seasonal component is passed to the next layers for learning and finally to the decoder. In the decoder, the input also passes through three MOE Decomp layers and is decomposed into seasonal and trend components. The seasonal component is passed to the next layers for learning, where the Frequency Enhanced Attention layer learns the frequency domain correlation between the seasonal terms of the encoder and the decoder; the trend components are summed up and finally added back to the seasonal term to restore the original sequence.

The attention mechanism used in the Frequency Enhanced Attention (FEA) is of linear complexity, while the attention mechanism used in the traditional Transformer is of quadratic complexity. This has the advantage of greatly reducing the length of the input vector and thus the computational complexity; the frequency-domain sampling inevitably loses some of the input information, but experiments have shown that this loss has little impact on the final accuracy. This is because a typical signal is sparser in the frequency domain than in the time domain. Moreover, much of the information in the high-frequency part is noise, which can often be discarded in time series prediction problems, since noise usually represents a randomly generated component that cannot be predicted. In the learning phase, FEB uses a fully connected layer R as a learnable parameter. FEA, on the other hand, performs a cross-attention operation on the signals from the encoder and decoder in order to learn the intrinsic relationship between the two parts of the signal. The frequency domain complementation process is the counterpart of the earlier frequency domain sampling: to restore the signal to its original length, the frequency points not selected during sampling are zero-filled before the signal is projected back to the time domain, so that the restored signal has the same dimension as the original input. The FEDformer model is shown in .

The specific function of the encoder is given by the following equations:

$S_{en}^{1}, \_ = \mathrm{MOEDecomp}\left(\mathrm{FEB}\left(X_{en}^{0}\right) + X_{en}^{0}\right)$ (1)

$S_{en}^{2}, \_ = \mathrm{MOEDecomp}\left(\mathrm{Conv1d}\left(\mathrm{Conv1d}\left(\mathrm{FEB}\left(X_{en}^{0}\right) + X_{en}^{0}\right)\right)\right)$ (2)

The specific function of the decoder is given by the following equations:

$S_{de}^{1}, T_{de}^{1} = \mathrm{MOEDecomp}\left(\mathrm{FEB}\left(X_{de}^{0}\right) + X_{de}^{0}\right)$ (3)

$S_{de}^{2}, T_{de}^{2} = \mathrm{MOEDecomp}\left(\mathrm{FEA}\left(S_{de}^{1}, \mathrm{LayerNorm}\left(S_{en}^{2} + S_{de}^{1}\right)\right)\right)$ (4)

$S_{de}^{3}, T_{de}^{3} = \mathrm{MOEDecomp}\left(\mathrm{Conv1d}\left(\mathrm{Conv1d}\left(S_{de}^{2}\right)\right) + S_{de}^{2}\right)$ (5)
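The MOE Decomp step above is, at its core, a set of moving-average filters whose outputs are mixed with learned weights. The following PyTorch sketch illustrates that idea under stated assumptions: the kernel sizes and the feature width of six are our choices for illustration, not values given by the text.

```python
# A minimal PyTorch sketch of a mixture-of-experts decomposition (MOE Decomp):
# several moving-average "experts" extract candidate trends, and learned
# softmax weights mix them; the kernel sizes below are assumptions.
import torch
import torch.nn as nn

class MOEDecomp(nn.Module):
    def __init__(self, kernel_sizes=(7, 15, 31), d_model=6):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.AvgPool1d(k, stride=1, padding=k // 2, count_include_pad=False)
            for k in kernel_sizes
        )
        # One mixing weight per expert, produced from the input features.
        self.gate = nn.Linear(d_model, len(kernel_sizes))

    def forward(self, x):                                  # x: (batch, length, d_model)
        trends = []
        for pool in self.pools:
            t = pool(x.transpose(1, 2)).transpose(1, 2)    # smooth along the time axis
            trends.append(t[:, : x.size(1), :])            # crop any even-kernel overshoot
        trends = torch.stack(trends, dim=-1)               # (B, L, D, n_experts)
        w = torch.softmax(self.gate(x), dim=-1)            # (B, L, n_experts)
        trend = (trends * w.unsqueeze(2)).sum(-1)          # weighted mixture of trends
        seasonal = x - trend                               # residual = seasonal part
        return seasonal, trend
```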
3.2.2. Model for Predicting Quality Changes during Grain Storage

Our data are divided into six dimensions of information with a period of 30 days: time, temperature, moisture content, AFB1 content, ZEN content and DON content, where the AFB1, ZEN and DON contents are the predicted variables. To adapt the model to the application scenario of this paper, we improved the construction of the model's Encoder embedding and Decoder embedding. First, we set three dimensions, month, week and day, to represent the characteristics of the time dimension; this expands the time dimension from one-dimensional to three-dimensional information, correctly represents the time sequence information, and enhances the importance of the time dimension, making the model pay more attention to the characteristics of the time dimension during learning so as to predict the indicators more effectively. The construction of the Encoder embedding is shown in . The construction of the Decoder embedding is shown in . Second, in the model, we change the data reading method from sequential reading to reading with a 30-day period; this prevents different samples from being used to predict each other and thus suits the scenario of this paper. Finally, we use the data of the first seven days to predict the data of the next seven days. The specific improvement process is shown in .
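To make the windowing concrete, the sketch below builds (7-day input, 7-day target) pairs within each 30-day period and derives the month/week/day time features; it is a minimal illustration, and the column names and the period_id grouping key are assumptions about the data layout.

```python
# A minimal sketch of the sample construction in Section 3.2.2: records are
# grouped into 30-day periods so windows never cross samples, each timestamp
# is expanded into (month, week, day) features, and sliding windows map the
# past 7 days to the next 7 days. Column names are hypothetical.
import numpy as np
import pandas as pd

FEATURES = ["temperature", "moisture", "AFB1", "ZEN", "DON"]

def make_windows(df: pd.DataFrame, enc_len: int = 7, pred_len: int = 7):
    X_enc, X_time, Y = [], [], []
    for _, period in df.groupby("period_id"):          # one 30-day period per sample
        period = period.sort_values("date")
        ts = pd.to_datetime(period["date"])
        time_feat = np.stack([ts.dt.month,
                              ts.dt.isocalendar().week.astype(int),
                              ts.dt.day], axis=1)      # 3-D time representation
        vals = period[FEATURES].to_numpy(dtype=float)
        for start in range(len(period) - enc_len - pred_len + 1):
            mid, end = start + enc_len, start + enc_len + pred_len
            X_enc.append(vals[start:mid])
            X_time.append(time_feat[start:mid])
            Y.append(vals[mid:end, 2:])                # AFB1, ZEN, DON targets
    return np.array(X_enc), np.array(X_time), np.array(Y)
```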
3.3. Grading Evaluation of Quality Changes in Grain Storage Process Based on K-Means++

In order to evaluate the grade of quality changes during the grain storage process in grain silos, we set an evaluation index S, which integrates the current and predicted values of toxin content; the formula of the evaluation index S is shown in Equation (6):

$S = \left(y_{i}, \bar{y}_{i}\right)$ (6)

where $y_{i}$, i ∈ {1, 2, …, n}, is the true value, $\bar{y}_{i}$, i ∈ {1, 2, …, n}, is the mean of the predicted values over the next 7 days, and n is the number of indicator variables. In this paper, a clustering algorithm is used to grade all samples by quality variation and to construct a quality grading space based on the evaluation index S. Since the amount of data in this study is small and contains no dirty data, and since the K-means++ algorithm is fast and efficient and can achieve good clustering performance on sample spaces of arbitrary shape, the K-means++ algorithm was selected for grading grain quality changes in this paper. The K-means++ algorithm is an improvement of the K-means algorithm; the main difference lies in how the initial cluster centers are determined. The K-means algorithm determines the initial cluster centers randomly, whereas K-means++ assigns each sample point a probability of becoming the next cluster center based on its distance from the existing centers (the greater the distance, the greater the probability), extracts the next cluster center according to this probability, and repeats the process until K cluster centers have been extracted. The specific steps are shown in .
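A compact way to realize this grading step is shown below; the sketch standardizes the index S, clusters it with scikit-learn's k-means++ initialization, and selects the number of grades by the silhouette coefficient defined in Section 3.4.2. The array shapes and the candidate range of k are assumptions for illustration.

```python
# A minimal sketch of the K-means++ grading on the evaluation index S.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

def grade_samples(current: np.ndarray, pred_7day: np.ndarray, k_range=range(2, 7)):
    """current: (n_samples, 3) AFB1/ZEN/DON now; pred_7day: (n_samples, 7, 3)."""
    S = np.hstack([current, pred_7day.mean(axis=1)])   # evaluation index S
    S = StandardScaler().fit_transform(S)
    best = None
    for k in k_range:
        km = KMeans(n_clusters=k, init="k-means++", n_init=10,
                    random_state=0).fit(S)
        score = silhouette_score(S, km.labels_)        # larger is better, in [-1, 1]
        if best is None or score > best[0]:
            best = (score, k, km.labels_)
    return best   # (silhouette, chosen k, grade label per sample)
```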
3.4. Model Evaluation Metrics

3.4.1. Evaluation Metrics for Predictive Models

The evaluation metrics of the prediction models are the mean absolute percentage error (MAPE), the mean square error (MSE), the root mean square error (RMSE), the mean absolute error (MAE) and the symmetric mean absolute percentage error (SMAPE); they are used to evaluate the prediction performance and the degree of fit of the models. MAE, MSE, RMSE, MAPE and SMAPE measure the difference between the predicted data and the true data. A perfect model scores zero, which occurs when the predicted values exactly match the true values; the larger the error, the larger the value. The formula for calculating the mean absolute percentage error is shown in (7):

$\mathrm{MAPE} = \frac{100\%}{n} \sum_{i=1}^{n} \left| \frac{y'_{i} - y_{i}}{y_{i}} \right|$ (7)

The formula for calculating the mean square error is shown in (8):

$\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( y'_{i} - y_{i} \right)^{2}$ (8)

The formula for calculating the root mean square error is shown in (9):

$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( y'_{i} - y_{i} \right)^{2}}$ (9)

The formula for calculating the mean absolute error is shown in (10):

$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| y'_{i} - y_{i} \right|$ (10)

The formula for calculating the symmetric mean absolute percentage error is shown in (11):

$\mathrm{SMAPE} = \frac{100\%}{n} \sum_{i=1}^{n} \frac{\left| y'_{i} - y_{i} \right|}{\left( \left| y'_{i} \right| + \left| y_{i} \right| \right) / 2}$ (11)

where $y_{i}$, i ∈ {1, 2, …, n}, is the true value, $y'_{i}$, i ∈ {1, 2, …, n}, is the predicted value, and n is the number of indicator variables.

3.4.2. Evaluation Metrics for Clustering Models

The evaluation index of the clustering model is the silhouette coefficient S. The core idea of the silhouette coefficient is to compare the relative sizes of the inter-cluster and intra-cluster distances. Its value lies in [−1, 1], and the larger the value, the better the clustering result. The formula for the silhouette coefficient S is shown in (12):

$S = \frac{1}{N} \sum_{i=1}^{N} \frac{b_{i} - a_{i}}{\max\left( a_{i}, b_{i} \right)}$ (12)

where $a_{i}$ is the average distance from sample i to the other samples in its cluster, $b_{i}$ is the minimum of the average distances from sample i to the samples of each other cluster, and N is the number of samples.
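For completeness, a direct NumPy transcription of Equations (7)-(11) is given below; note that the SMAPE denominator uses the standard (|y'| + |y|)/2 form.

```python
# A minimal NumPy sketch of the error metrics in Equations (7)-(11); y_true
# and y_pred are arrays of the same shape.
import numpy as np

def mape(y_true, y_pred):
    return 100.0 * np.mean(np.abs((y_pred - y_true) / y_true))

def mse(y_true, y_pred):
    return np.mean((y_pred - y_true) ** 2)

def rmse(y_true, y_pred):
    return np.sqrt(mse(y_true, y_pred))

def mae(y_true, y_pred):
    return np.mean(np.abs(y_pred - y_true))

def smape(y_true, y_pred):
    denom = (np.abs(y_pred) + np.abs(y_true)) / 2.0
    return 100.0 * np.mean(np.abs(y_pred - y_true) / denom)
```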
The specific function of the decoder is expressed in Equations (3) S d e 1 , T d e 1 = M O E D e c o m p F E B X d e 0 + X d e 0 , (4) S d e 2 , T d e 2 = M O E D e c o m p F E A S d e 1 , L a y e r N o r m ( S e n 2 + S d e 1 ) , (5) S d e 3 , T d e 3 = M O E D e c o m p c o n v 1 d c o n v 1 d S d e 2 + S d e 2 . 3.2.2. Model for Predicting Quality Changes during Grain Storage Our data are divided into six dimensions of information with a period of 30 days; these are time, temperature, moisture content, AFB1 content, ZEN content, and DON content, where AFB1, ZEN, and DON content are predictors. Therefore, to be applicable to the application scenario of this paper, we improved the construction of the model Encoder embedding as well as that of the Decoder Embedding. First, we set the three dimensions of month, week, and day to represent the characteristics of the time dimension; this has the advantage of replacing the time dimension from one-dimensional information to three-dimensional information, and it can correctly represent the time sequence information and enhance the importance of the time dimension, making the model pay more attention to the characteristics of the time dimension in the learning process so as to predict the indicators more effectively. The construction of Encoder embedding is shown in . The construction of the decoder embedding is shown in . Second, in the model, we change the data reading method from sequential reading to reading with a 30-day period; this prevents the situation in which different samples predict each other, and thus it reasonably applies to the scenario in this paper. Finally, we set the data of the first seven days to predict the data of the next seven days. The specific improvement process is shown in .
FEDformer combines Transformer and seasonal-trend decomposition methods, capturing the global pattern of the world sequence with the seasonal-trend decomposition method, while capturing the more detailed structure with Transformer FEDformer’s. The main structure (backbone) uses an encoder–decoder structure, consisting of n encoders and m decoders, and it includes four internal submodules: a frequency domain learning module (Frequency Enhanced Block), a frequency domain attention module (Frequency Enhanced Attention), a period trend decomposition module (MOE Decomp), and a one-dimensional Convolution module (Conv1d). The MOE Decomp module decomposes the sequence into a periodic term (seasonal, S) and a trend line (trend, T). This decomposition is not performed only once, but in an iterative decomposition mode. In the encoder, the input passes through two MOE Decomp layers, each of which decomposes the signal into two components: seasonal and trend. The trend component is discarded, and the seasonal component is passed to the next layers for learning and finally to the decoder. In the decoder, the input of the encoder also passes through three MOE Decomp layers and is decomposed into seasonal and trend components. Among them, the seasonal component is passed to the next layers for learning, where the frequency domain Attention (Frequency Enhanced Attention) layer learns the frequency domain correlation between the seasonal term of the encoder and the decoder, and the trend component is summed up and finally added back to the seasonal term to restore the original sequence. Among the Frequency Enhanced Block (FEB) and the Frequency Enhanced Attention (FEA), the Attention mechanism used in the Frequency Enhanced Attention (FEA) is of linear complexity, while the Attention mechanism used in the traditional Transformer is of square complexity. This has the advantage of greatly reducing the length of the input vector and thus the computational complexity, but this sampling must be detrimental to the input information. This loss must be detrimental to the input information. However, experiments have shown that this loss has little impact on the final accuracy. This is because the general signal is sparser in the frequency domain compared to the time domain. Moreover, a large amount of information in the high frequency part is so-called noise; it can often be discarded in time series prediction problems, since noise often represents a randomly generated part and thus cannot be predicted. In the learning phase, FEB uses a fully concatenated layer R as a learnable parameter. FEA, on the other hand, performs a cross-attention operation on the signals from the encoder and decoder in order to learn the intrinsic relationship between the two parts of the signal. The frequency domain complementation process is relative to the previous frequency domain sampling. In order to cause the signal to revert to its original length, the frequency points not picked by the previous sampling need to be zeroed and projected back to the time domain, because the signal projected back to the frequency domain is the same as the previous input signal dimension by the complementation operation in the previous step. The FEDformer model is shown in . The specific function of the encoder is shown in the following Equations: (1) S e n 1 , _ = M O E D e c o m p F E B X e n 0 + X e n 0 , (2) S e n 2 , _ = M O E D e c o m p c o n v 1 d c o n v 1 d F E B X e n 0 + X e n 0 . 
The specific function of the decoder is expressed in Equations (3) S d e 1 , T d e 1 = M O E D e c o m p F E B X d e 0 + X d e 0 , (4) S d e 2 , T d e 2 = M O E D e c o m p F E A S d e 1 , L a y e r N o r m ( S e n 2 + S d e 1 ) , (5) S d e 3 , T d e 3 = M O E D e c o m p c o n v 1 d c o n v 1 d S d e 2 + S d e 2 .
Our data are divided into six dimensions of information with a period of 30 days; these are time, temperature, moisture content, AFB1 content, ZEN content, and DON content, where AFB1, ZEN, and DON content are predictors. Therefore, to be applicable to the application scenario of this paper, we improved the construction of the model Encoder embedding as well as that of the Decoder Embedding. First, we set the three dimensions of month, week, and day to represent the characteristics of the time dimension; this has the advantage of replacing the time dimension from one-dimensional information to three-dimensional information, and it can correctly represent the time sequence information and enhance the importance of the time dimension, making the model pay more attention to the characteristics of the time dimension in the learning process so as to predict the indicators more effectively. The construction of Encoder embedding is shown in . The construction of the decoder embedding is shown in . Second, in the model, we change the data reading method from sequential reading to reading with a 30-day period; this prevents the situation in which different samples predict each other, and thus it reasonably applies to the scenario in this paper. Finally, we set the data of the first seven days to predict the data of the next seven days. The specific improvement process is shown in .
In order to evaluate the grade of quality change during grain storage in silos, we set an evaluation index S that integrates the current and predicted values of the toxin content; the evaluation index is given in Equation (6): (6) $S = \left(y_i, \bar{y}_i\right)$, where $y_i, i \in \{1,2,\ldots,n\}$ is the true value, $\bar{y}_i, i \in \{1,2,\ldots,n\}$ is the mean of the values predicted for the next 7 days, and n is the number of indicator variables. In this paper, a clustering algorithm is used to grade all samples by quality variation and to construct a quality grading space based on the evaluation index S. Since the amount of data in this study is small and contains no dirty data, and since the K-means++ algorithm is fast and efficient and achieves good clustering performance on sample spaces of arbitrary shape, the K-means++ algorithm was selected for grading the grain quality changes in this paper. The K-means++ algorithm is an improvement of the K-means algorithm; the main difference lies in how the initial cluster centers are determined. The K-means algorithm chooses the initial cluster centers randomly, whereas K-means++ assigns each sample point a probability of becoming the next cluster center based on its distance to the already-chosen centers (the greater the distance, the greater the probability), draws the next center according to these probabilities, and repeats this until K cluster centers have been chosen. The specific steps are shown in .
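A compact sketch of the K-means++ seeding just described (distance-proportional sampling of successive centers) is shown below; scikit-learn's KMeans(init="k-means++") provides the production version, so this NumPy function only makes the probability step explicit.

```python
import numpy as np

def kmeanspp_init(X, k, rng=0):
    """Pick k initial centers: the first uniformly at random, then each next
    center with probability proportional to the squared distance from the
    nearest already-chosen center (the D^2 weighting of K-means++)."""
    rng = np.random.default_rng(rng)
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min(((X[:, None, :] - np.array(centers)[None]) ** 2).sum(-1), axis=1)
        probs = d2 / d2.sum()                  # farther points are likelier
        centers.append(X[rng.choice(len(X), p=probs)])
    return np.array(centers)

X = np.random.default_rng(1).random((200, 2))
print(kmeanspp_init(X, k=3).shape)  # (3, 2)
```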
3.4.1. Evaluation Metrics for Predictive Models The evaluation metrics of the prediction models are the mean absolute percentage error (MAPE), the mean square error (MSE), the root mean square error (RMSE), the mean absolute error (MAE), and the symmetric mean absolute percentage error (SMAPE); they are used to evaluate the prediction performance and the degree of fit of the models. MAE, MSE, RMSE, MAPE, and SMAPE measure the difference between the predicted data and the true data. Each metric equals zero when the predicted values exactly match the true values; the larger the error, the larger the metric. The mean absolute percentage error is calculated as shown in (7):
(7) $MAPE = \frac{100\%}{n}\sum_{i=1}^{n}\left|\frac{y'_i - y_i}{y_i}\right|$
The mean square error is calculated as shown in (8):
(8) $MSE = \frac{1}{n}\sum_{i=1}^{n}\left(y'_i - y_i\right)^2$
The root mean square error is calculated as shown in (9):
(9) $RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y'_i - y_i\right)^2}$
The mean absolute error is calculated as shown in (10):
(10) $MAE = \frac{1}{n}\sum_{i=1}^{n}\left|y'_i - y_i\right|$
The symmetric mean absolute percentage error is calculated as shown in (11):
(11) $SMAPE = \frac{100\%}{n}\sum_{i=1}^{n}\frac{\left|y'_i - y_i\right|}{\left(\left|y'_i\right| + \left|y_i\right|\right)/2}$
where $y_i, i \in \{1,2,\ldots,n\}$ is the true value, $y'_i, i \in \{1,2,\ldots,n\}$ is the predicted value, and n is the number of indicator variables. 3.4.2. Evaluation Metrics for Clustering Models The evaluation index of the clustering model is the silhouette coefficient S. The core idea of the silhouette coefficient is to compare the inter-cluster distance with the intra-cluster distance. Its value lies in [−1, 1], and the larger the value, the better the clustering result. The silhouette coefficient S is calculated as shown in (12):
(12) $S = \frac{1}{N}\sum_{i=1}^{N}\frac{b_i - a_i}{\max\left(a_i, b_i\right)}$
where $a_i$ is the average distance from sample i to the other samples in its own cluster, $b_i$ is the minimum over the other clusters of the average distance from sample i to the samples of that cluster, and N is the number of samples.
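The five error metrics and the silhouette coefficient are all one-liners; a NumPy sketch is given below (scikit-learn's silhouette_score implements Equation (12) per sample, averaged as here). The toy arrays are placeholders.

```python
import numpy as np
from sklearn.metrics import silhouette_score  # implements Eq. (12)

def mape(y, yhat):  return 100.0 * np.mean(np.abs((yhat - y) / y))   # Eq. (7)
def mse(y, yhat):   return np.mean((yhat - y) ** 2)                  # Eq. (8)
def rmse(y, yhat):  return np.sqrt(mse(y, yhat))                     # Eq. (9)
def mae(y, yhat):   return np.mean(np.abs(yhat - y))                 # Eq. (10)
def smape(y, yhat):                                                  # Eq. (11)
    return 100.0 * np.mean(np.abs(yhat - y) / ((np.abs(yhat) + np.abs(y)) / 2))

y = np.array([1.0, 2.0, 4.0])
yhat = np.array([1.1, 1.8, 4.4])
print(mape(y, yhat), rmse(y, yhat), smape(y, yhat))

# Silhouette for a labeled sample space:
X = np.vstack([np.random.default_rng(0).normal(m, 0.1, (20, 2)) for m in (0, 1, 2)])
labels = np.repeat([0, 1, 2], 20)
print(silhouette_score(X, labels))  # close to 1 for well-separated clusters
```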
4.1. Comparative Experiments of Models for Predicting Quality Changes during Grain Storage In order to effectively evaluate the performance of the FEDformer-based model in predicting quality changes during grain storage, several deep learning prediction methods were selected for comparison experiments, and a 5-fold cross-validation design was used to prevent overfitting. In addition, the FEDformer model contains several hyperparameters that affect its accuracy; our experiments showed that the learning rate and the number of future days to predict have the greatest impact on model performance, and several comparative experiments were conducted for these. The parameter settings of the proposed model are shown in . Among the eight prediction models, CNN has the largest prediction error, and the traditional LSTM is second only to CNN. The four prediction models LSTM, GRU, BILSTM, and BIGRU have similar prediction accuracy, with little difference in prediction error. Transformer, Informer, and FEDformer have significantly higher prediction accuracy and lower prediction error than the other prediction models. Among all models, the prediction error of FEDformer is the smallest: the MAE, MSE, RMSE, MAPE, and SMAPE of the FEDformer model on the maize test set are 0.01, 0.0005, 0.023, 0.01, and 0.09, respectively. The results are shown in . The MAE, MSE, RMSE, MAPE, and SMAPE of FEDformer on the wheat test set were 0.017, 0.0006, 0.025, 0.108, and 0.14, respectively. The experimental results for the wheat test set are shown in . 4.2. Comparison of Clustering Models for Quality Changes during Grain Storage In this paper, the clustering algorithm uses the S-value of the evaluation index for each sample per day as the clustering feature. , and show the clustering results of K-means++, K-means, and K-medoids for wheat and maize with their corresponding three toxins, together with line graphs of the silhouette coefficients for 3–7 clusters for wheat and maize. From the figures, the silhouette coefficients for 3–7 clusters of the three clustering models generally show a decreasing trend, and the silhouette coefficient for three clusters is the largest, indicating compact instances within the three clusters and large inter-cluster distances. In addition, the silhouette coefficient of the K-means++ clustering model is the maximum among the three models, so K-means++ was selected as the clustering model for quality change during the grain storage process in this paper. The grain quality change is divided into three levels, and the cluster centers, grain quality change, and the number of samples in each level for the three toxin levels corresponding to wheat and maize are shown in and . The distance of each cluster center from the origin was calculated based on the indicators, and categories 1–3 were defined as grain quality variation Levels 1–3, respectively; the indicators of the cluster centers increase sequentially with the quality level.
plots the clustering results of K-means++ for wheat and maize with their corresponding three toxins; the results show that three clusters have the largest silhouette coefficients and large inter-cluster distances, so the grain quality variation is classified into three classes. plots the clustering results of K-means for wheat and maize with their corresponding three toxins. Compared with K-means++, the silhouette coefficients of each clustering are generally lower, and the silhouette coefficient for three clusters is still the maximum. plots the clustering results of K-medoids for wheat and maize with their corresponding three toxins. Compared with K-means++ and K-means, the silhouette coefficients of each clustering are generally lower than the results of the above two algorithms, and the silhouette coefficient for three clusters is still the maximum. shows the cluster centers, the grain quality variation, and the number of samples in each level for the three toxin levels corresponding to wheat. The results show that most wheat samples had zearalenone (ZEN) and deoxynivalenol (DON) contents at Level 1, while most wheat samples had aflatoxin B1 contents at Level 2. shows the cluster centers, the grain quality variation, and the number of samples in each level for the three toxin levels corresponding to maize. The results show that the vast majority of maize samples contained aflatoxin B1, zearalenone (ZEN), and deoxynivalenol (DON) at Level 1.
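The model selection described above (scanning k = 3–7 and keeping the clustering with the highest silhouette coefficient) can be reproduced with scikit-learn; the feature matrix below is a placeholder for the per-sample, per-day S-values.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def best_kmeanspp(S_features, ks=range(3, 8), seed=0):
    """Fit K-means++ for each k and return the model with the highest
    silhouette coefficient, mirroring the 3-7 cluster comparison."""
    scores, models = {}, {}
    for k in ks:
        km = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=seed)
        labels = km.fit_predict(S_features)
        scores[k] = silhouette_score(S_features, labels)
        models[k] = km
    k_best = max(scores, key=scores.get)
    return k_best, scores, models[k_best]

# Placeholder S-values: rows = samples, columns = evaluation-index features.
S = np.random.default_rng(0).random((120, 2))
k_best, scores, model = best_kmeanspp(S)
print(k_best, {k: round(v, 3) for k, v in scores.items()})
```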
Food quality is related to human health, and a decline in food quality increases the risk of human illness ; in addition, the current global spread of COVID-19 is impacting international food supply chains and straining food supplies . Sudden outbreaks of desert locusts in some countries, superimposed on the global epidemic, make conventional disaster prevention and control difficult to implement, exacerbating concerns about food quality loss and food security. Therefore, predicting the quality of grain during storage is of great interest. The factors that lead to quality changes during storage are complex: the deterioration in grain quality is due to contamination by toxins produced by microorganisms during storage, which seriously affects people's health, and temperature and moisture are important factors that affect microbial activity. The toxin content is thus a decisive factor in grain quality, and our experiment therefore used it as the monitoring indicator, with temperature and moisture content as environmental variables. According to the experiment, the contents of aflatoxin B1 (AFB1), deoxynivalenol (DON), and zearalenone (ZEN) increased with increasing moisture content in the environment, and the contents of all three toxins were highest at a moisture content of 22%, meaning that a 22% moisture content was most suitable for the production of the three toxins; the three toxins also showed an increasing trend with increasing temperature. The most suitable temperature for the production of aflatoxin B1 (AFB1) and zearalenone (ZEN) is 25 °C, and the most suitable temperature for the production of deoxynivalenol (DON) is 30 °C. The results of this experiment are in agreement with those of Lutz et al. . In addition, we offer some comments on the maintenance of grain quality during storage, as follows. Large losses in grain quality endanger human health, increase environmental stress, and affect the sustainable development of agricultural food. Currently, improved storage equipment is the most commonly used method to reduce food quality losses in the storage chain. A considerable number of institutions and companies around the world are using and promoting sealed storage technology, including the WFP, FAO, GrainPro, etc., by providing sealed bags and other storage equipment, thus reducing losses in the food storage chain. Studies have shown that sealed bags can effectively reduce quality losses, are low in cost, and are suitable for economically underdeveloped areas. However, quality changes in grain are often not detectable by the senses; this can increase the safety risk of consuming the grain and cause unpredictable adverse effects on human health. Our approach is based on monitoring of the storage environment and the use of artificial intelligence technology to assist decision making. The use of artificial intelligence technology not only reduces the cost of manual sampling but also allows a longer sampling period. Based on our experiments, we believe attention should be paid to the influence of environmental factors on grain quality during storage, especially temperature and moisture content, where the equilibrium moisture is important . The stored grain interacts with the storage conditions, the air between the grains, and the storage structure, leading to variations in grain temperature, moisture content, and the relative humidity between the grains.
The combination of these factors characterizes the storage environment in which the equilibrium moisture content of the grain varies. The global market for quality grains is growing, and concerns and worries about grain quality are increasing. There is an urgent need to determine ways to reduce losses in the storage phase that will ensure food safety and agricultural sustainability. Many grain managers are investing more in efficient and reliable grain quality management technologies. Grain bin monitoring and artificial intelligence technology-assisted decision-making approaches are storage phase information systems that use information technology to ensure grain quality by controlling and monitoring environment-related factors. This is considered to be an intelligent approach that incorporates emerging information technology, and it is an efficient strategy to significantly reduce grain quality losses and labor costs, an issue that has attracted much attention worldwide. Grain bin monitoring and artificial intelligence technologies can be an option to assist decision making, and some countries have already started to invest in the development of related technologies and have put them to use. Thus, bin monitoring and AI technology-assisted decision making can play an important role in ensuring the safety and the high quality of grain. The development of a combination of IoT-based grain bin monitoring and AI-assisted decision-making technology will be the main trend in the development of storage technology in the coming years, and monitoring technology and AI technology may further develop into mainstream applications. Since there are many factors that affect grain quality, a more accurate quantitative and qualitative evaluation is the main challenge at present. Therefore, a more accurate assessment of grain quality has important positive implications for grain utilization and safety, for economic development, and for human health. In order to promote grain bin monitoring and AI technology-assisted decision-making methods, the implementation cost of this technology must be further explored. The ideal monitoring and AI-assisted decision-making technology should be an efficient and effective system that continuously improves grain safety and nutritional value, reduces quality losses, and addresses the international problem of grain storage losses in a sustainable manner. Therefore, a prudent policy should be adopted to enhance food safety and quality and to increase food utilization while promoting the development of information technology.
Environmental changes during storage are an important factor affecting grain quality. We determined that the levels of aflatoxin B1 (AFB1), deoxynivalenol (DON), and zearalenone (ZEN) increased with increasing moisture content in the environment and with increasing ambient temperature, and that environmental conditions with a 22% moisture content and temperatures between 25 °C and 30 °C were most suitable for the production of these three toxins. Unfavorable environmental conditions can therefore lead to a decrease in the quality of stored grain and an increase in toxins, which can cause significant health problems. Accurate prediction of the content of various toxins in grain under different environmental conditions, together with a reasonable definition of quality levels, can thus help grain managers provide early warning of grain quality problems and greatly reduce the labor cost of grain toxin detection. In this study, six indicator variables were first collected, and the contents of the three toxins were then input into a FEDformer-based model to predict aflatoxin B1 content, using zearalenone content and deoxynivalenol content as input variables, with K-means++ applied to grade the grain storage process. The evaluation index S integrating the current and predicted toxin values was set, and the quality grading model was constructed according to this evaluation index to evaluate the quality of the grain storage process in a graded manner.
People’s Willingness to Pay for Dental Checkups and the Associated Individual Characteristics: A Nationwide Web-Based Survey among Japanese Adults | f754554e-f76d-4f5c-a67c-0ba853769fe2 | 10001831 | Dental[mh] | Maintaining good oral health hygiene entails routine dental checkups and consulting dentists to diagnose underlying dental diseases. Apart from maintaining masticatory function, good oral health conditions contribute to an enhanced quality of life and reduce the impact of systemic non-communicable diseases . Therefore, creating health policy plans to promote the importance of regular dental checkups among the population is essential. However, access to dental services such as routine checkups is affected by the health insurance system in each country. In countries with extensive dental insurance policies, the use of insurance coverage contributes to an increase in dental service utilization [ , , ]. Moreover, studies in several countries have shown that an individual’s economic situation is one of the factors associated with access to dental services [ , , , ]. That is, even if they perceive the need for a specific dental service, their decision depends on their own income limitations and whether they are willing to spend resources for that dental healthcare service. The contingent valuation method (CVM) is used for measuring the benefits of healthcare services. This method evaluates the Willingness to Pay (WTP), which is “the maximum amount of monetary value that an individual would be willing to sacrifice to obtain the benefit of that healthcare service,” through questionnaires or face-to-face interviews based on a hypothetical scenario regarding the healthcare service [ , , ]. Several studies have reported on WTP for dental treatments, including preventive treatment of dental caries , periodontal disease , dental implant treatment , and orthodontic treatment . The findings of studies on WTP can be used to perform economic evaluations of desired healthcare services by the general population and are expected to be a resource for policy planning regarding oral health . As such, several dental WTP studies have demonstrated the distribution of WTP values to dental healthcare services based on the responses of survey participants and have shown that high/low WTP values are associated with individual characteristics such as socioeconomic status [ , , , , , ]. There have been few WTP studies on dental checkups . A Finnish study evaluated the WTP for dental checkups in 7-year-olds as one of the healthcare services targeted toward medical and dental students; however, the study did not target the general population and was limited to the evaluation of pediatric dental checkups. Further, a Japanese study evaluated the WTP for dental checkups in regular visitors and infrequent visitors; however, this study only surveyed patients in dental clinics, and the findings were not applicable to the general population. In Japan, a universal health insurance coverage system was established in 1961, and most dental treatments are covered by the medical insurance system, with patients paying a co-payment cost of 10–30% of the treatment cost . However, since the medical insurance system covers only the treatment of diseases, services such as dental checkups, wherein the presence or diagnosis of oral diseases is not certain, are not covered by the insurance system . 
The Ministry of Health, Labour, and Welfare of Japan released a report in 2022 on its oral health policy plan, which states that the most recently published rate of regular dental checkups among the population was 52.9%; therefore, it is necessary to continue to improve this rate in the future . In 2022, the Japanese government released a basic policy of “universal oral health checks,” which allows all citizens to receive dental checkups throughout their lives . Therefore, a study focused on understanding the WTP for receiving dental checkups to assess the economic value of targeting the general population on a nationwide scale, not just patients visiting dental clinics, would have policy implications. In particular, understanding the characteristics of individuals with low WTP values for dental checkups from among those who do not receive regular dental checkups will provide evidence for planning policies to further improve the rate of regular dental checkups; however, no such studies have been conducted. The purpose of this study was to obtain and compare the WTP values for dental checkups in two groups of study participants (those who received regular dental checkups and those who did not) by using data from a nationwide web-based survey and to analyze the study participants’ individual characteristics associated with high/low WTP values for each of the two groups. Therefore, the null hypothesis for this study was set as follows: (1) there is no difference in the WTP values for dental checkups between those who did and did not receive regular dental checkups, and (2) there is no association between the WTP values and individual characteristics of each study group.
2.1. Study Design and Study Participants This was a cross-sectional study conducted using a web-based survey in accordance with the STROBE statement. The study participants were recruited from among the registrants of a research company specializing in web-based surveys (Macromill, Inc.; Tokyo, Japan), which has approximately 1.3 million registered residents in Japan. The age criterion for the study participants was 20–69 years. Japan has a population of approximately 75 million residents aged 20–69 years. Based on an error margin of 2%, a 95% confidence coefficient, and a 50% population proportion, a minimum sample size of 2401 participants was required (n = z²p(1 − p)/e² = 1.96² × 0.5 × 0.5/0.02² ≈ 2401). Additionally, in a Ministry of Health, Labour and Welfare report, a web-based survey was used to assess the status of receiving dental checkups, which covered 3556 individuals aged ≥20 years on a nationwide scale . Accordingly, the study targeted a sample size of 3200 participants aged 20–69 years. Finally, a total of 3336 participants were randomly selected from the research company's database of registrants using a quota sampling method based on the Japanese national census population . The distribution of the study participants was divided according to gender (men: 50.3%, women: 49.7%), age group (20–29 years: 15.7%, 30–39 years: 18.3%, 40–49 years: 23.8%, 50–59 years: 21.7%, and 60–69 years: 20.6%), and regional category (Hokkaido region: 4.2%, Tohoku region: 6.8%, Kanto region: 35.9%, Chubu region: 18.0%, Kinki region: 15.9%, Chugoku region: 5.4%, Shikoku region: 2.8%, and Kyushu region: 11.0%), which reflects the representation of the Japanese population . As this study used a web-based survey, all study participants had to answer each question before they could proceed to the next question. Additionally, all participants completed the survey; thus, no missing values were obtained. All questions were asked in the Japanese language. The web-based survey was conducted over a 3-day period, from 12–14 October 2022. All participants agreed to participate in the study and answered the survey questions. Participants' personal information was protected by Macromill, Inc. . The participants were given points that could be converted into cash. This study was approved by the Research Ethics Committee of Nippon Dental University College in Tokyo before the web-based survey was conducted (9 August 2022, approval No. 293). 2.2. Outcome Variable (WTP Values for Dental Checkups) The outcome variable in this study was the WTP value for dental checkups. WTP values were obtained from the study participants based on the payment card method . (The questionnaire is provided in a .) The participants were asked about the maximum amount they would be willing to pay to receive one dental checkup. As a proviso to this question, the following description was provided to the study participants: (1) "Under the Japanese medical insurance system, healthcare services for disease prevention are not covered by insurance. Please answer this question by assuming full payment at your own expense." (2) "'Dental checkups' in this survey refers to a checkup by a dentist to assess the condition of the teeth for the purpose of early detection of dental caries and periodontal disease (radiographs are obtained, if necessary).
It does not include scaling of calculus or polishing of tooth surfaces.” The study participants were presented with the following range of amounts for their responses: 0 yen, 1000 yen, 2000 yen, 3000 yen, 4000 yen, 5000 yen, 6000 yen, 7000 yen, 8000 yen, 9000 yen, 10,000 yen, 11,000 yen, 12,000 yen, 13,000 yen, 14,000 yen, 15,000 yen, 16,000 yen, 17,000 yen, 18,000 yen, 19,000 yen, and 20,000 yen or more (as of February 2023, 1000 yen = 7.5 USD). These ranges were set based on a previous study . The questionnaire on WTP values for dental checkups was pretested before the actual survey was administered to the study participants. Participants who responded “0 yen” were given additional questions to determine whether their reason was “true zeros” or “protest zeros” . If the participant answered “The cost of dental checkups should be fully paid by the government, insurers, or other parties” as the reason for choosing 0 yen, this response was defined as a “protest zeros” response because it does not reflect an economic evaluation of healthcare services such as dental checkups . Hence, these “protest zeros” responses were excluded when conducting the statistical analysis. 2.3. Explanatory Variables The explanatory variables were set according to the individual characteristics of the study participants, which consisted of socioeconomic factors and oral health status. Socioeconomic factors included gender, age, household income, employment status, marital status, presence of children, and the municipality of residence. Variables related to oral health status included the number of teeth and frequency of tooth brushing. The participants’ ages were categorized into the following five groups: 20–29 years, 30–39 years, 40–49 years, 50–59 years, and 60–69 years. Household income was categorized into six groups: <2 million yen, 2–4 million yen, 4–6 million yen, 6–8 million yen, ≥8 million yen, and unknown (As of 2020, the average household income of Japanese people was 5.64 million yen, and the median was 4.4 million yen .) Employment status was categorized into four groups: regular workers, homemakers, part-time workers, and not working and others. Marital status was categorized as married or single. The presence of children variable was categorized as having children or no children. The municipalities in which study participants resided were categorized into four groups based on the Japanese municipality system: metropolises (ordinance-designated cities with populations of ≥500,000 and the 23 wards of Tokyo), core cities (ordinance-designated cities with populations of ≥200,000), other cities (cities with populations of ≥50,000 excluding metropolises and core cities), and towns/villages (small municipalities that do not meet the specifications of cities). The number of teeth in the study participants was categorized into three groups: <20, 20–27, and ≥28 teeth. The frequency of tooth brushing was categorized into four groups: ≥3 times daily, twice daily, once daily, and occasional/no brushing. 2.4. Statistical Analysis First, descriptive statistics were calculated for each variable. The outcome variable (WTP values for dental checkups) was used as quantitative data, and the explanatory variables were used as categorical data. In addition, study participants were divided into two groups based on whether they received regular dental checkups (those who received regular dental checkups: RDC group; those who did not receive regular dental checkups: non-RDC group). 
The criterion for whether or not the participants received regular dental checkups was "whether or not they received dental checkups at least once a year," based on a survey by the Ministry of Health, Labour, and Welfare . Second, to understand the distribution of WTP values for dental checkups, graphs were created for the RDC and non-RDC groups. In addition, the descriptive statistics of the WTP values for both the RDC and non-RDC groups were calculated, and the two groups were compared after excluding the "protest zeros" responses, using the Mann-Whitney U test; this test was used because the WTP values did not follow a normal distribution. Third, the association between the outcome variable (WTP values for dental checkups) and the explanatory variables (gender, age, household income, employment status, marital status, presence of children, municipality of residence, number of teeth, and frequency of tooth brushing) was evaluated using the Tobit regression model for the RDC and non-RDC groups. The Tobit regression model was used because the WTP values for dental checkups take only values of zero or more, never negative amounts, and therefore behave as censored data . In addition, the Tobit regressions were calculated using robust standard errors. With regard to the explanatory variables, univariate analyses were conducted for each variable, and multivariate analyses were conducted after adjusting for all variables. In all analyses, the "protest zeros" responses were excluded. In this study, Stata version 17 (StataCorp LLC, College Station, TX, USA) was used for statistical analysis. Statistical significance was set at p < 0.05.
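Since the analysis was run in Stata, the following Python sketch is offered only to make the Tobit likelihood explicit: for a left-censoring point at 0 (and, as an assumption here, an upper bracket at 20,000 yen for "20,000 or more" responses), uncensored observations contribute a normal density term and censored observations a normal CDF term. The variable names and toy data are illustrative, not the study's.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_negloglik(params, X, y, lower=0.0, upper=20000.0):
    """Negative log-likelihood of a two-limit Tobit model.

    y == lower -> P(latent <= lower) = Phi((lower - Xb)/sigma)
    y == upper -> P(latent >= upper) = 1 - Phi((upper - Xb)/sigma)
    otherwise  -> normal density of the observed value
    """
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)                  # keeps sigma positive
    xb = X @ beta
    ll = np.where(
        y <= lower, norm.logcdf((lower - xb) / sigma),
        np.where(y >= upper, norm.logsf((upper - xb) / sigma),
                 norm.logpdf((y - xb) / sigma) - np.log(sigma)))
    return -ll.sum()

# Toy data: intercept + one binary covariate (e.g., high household income).
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.integers(0, 2, n)])
latent = X @ np.array([2500.0, 600.0]) + rng.normal(0, 1500.0, n)
y = np.clip(latent, 0.0, 20000.0)              # censor as on the payment card

res = minimize(tobit_negloglik, x0=np.array([y.mean(), 0.0, np.log(y.std() + 1)]),
               args=(X, y), method="BFGS")
beta_hat, sigma_hat = res.x[:-1], np.exp(res.x[-1])
print(beta_hat, sigma_hat)  # estimates near (2500, 600) and 1500
```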
3.1. Demographic Characteristics of the Study Participants and the Number and Proportion of the RDC and Non-RDC Groups shows the demographic characteristics of the study participants ( n = 3336) and the number and proportion of each when divided into the RDC group ( n = 1785; 53.5%) and non-RDC group ( n = 1551; 46.5%). The Chi-squared test revealed statistically significant differences between the two groups in the following variables: gender, household income, employment status, marital status, municipalities, number of teeth, frequency of tooth brushing ( p < 0.001), and presence of children ( p = 0.004). 3.2. Distribution and Comparison of WTP Values for Dental Checkups between the RDC and Non-RDC Groups The distribution of WTP values for dental checkups is shown in for the RDC group and for the non-RDC group. shows the comparison of the descriptive statistics of WTP values for dental checkups in the RDC and non-RDC groups, excluding responses with protest zeros (RDC group: 22, non-RDC group: 49). In the RDC group (1763 participants), the median was 3000 yen (22.51 USD), the interquartile range was 2000–4000 yen (15.01–30.02 USD), and the mean was 3439.6 yen (25.81 USD). In the non-RDC group (1502 participants), the median was 2000 yen (15.01 USD), the interquartile range was 1000–3000 yen (7.50–22.51 USD), and the mean was 2713.0 yen (20.36 USD). The Mann-Whitney U test revealed a statistically significant difference between the RDC and non-RDC groups ( p < 0.001). 3.3. Association between WTP Values for Dental Checkups and Study Participants' Individual Characteristics in the RDC and Non-RDC Groups The multivariate Tobit regression model demonstrated the association between WTP values for dental checkups and characteristics of the study participants for both the RDC and non-RDC groups ( and ) (the results of the univariate Tobit analysis are shown in ). Regarding the WTP values for dental checkups in the RDC group ( ), age 50–59 years (coefficient: −515.14, 95%CI: −999.65 to −30.64), household income <2 million yen (coefficient: −543.95, 95%CI: −1081.14 to −6.77), homemaker and part-time worker employment status (homemaker, coefficient: −407.09, 95%CI: −780.41 to −33.78; part-time worker, coefficient: −408.48, 95%CI: −764.61 to −52.35), and having children (coefficient: −442.31, 95%CI: −779.65 to −104.98) were significantly associated with decreased WTP values, while male gender (coefficient: 329.48, 95%CI: 0.83 to 658.12), household incomes ≥8 million yen (coefficient: 600.55, 95%CI: 140.55 to 1060.56), and tooth brushing ≥3 times daily (coefficient: 473.18, 95%CI: 10.78 to 935.57) were associated with increased WTP values. Regarding the WTP values for dental checkups in the non-RDC group ( ), age ≥30 years (30–39 years, coefficient: −741.24, 95%CI: −1354.07 to −128.41; 40–49 years, coefficient: −1200.37, 95%CI: −1762.87 to −637.88; 50–59 years, coefficient: −1034.32, 95%CI: −1591.12 to −477.53; 60–69 years, coefficient: −696.59, 95%CI: −1332.38 to −60.80), household incomes <4 million yen (<2 million yen, coefficient: −897.82, 95%CI: −1580.95 to −214.69; 2–4 million yen, coefficient: −529.94, 95%CI: −994.76 to −65.12), and the presence of ≥28 teeth (coefficient: −362.48, 95%CI: −707.17 to −17.80) were significantly associated with decreased WTP values, while household incomes ≥8 million yen (coefficient: 661.82, 95%CI: 110.49 to 1213.14) were associated with increased WTP values.
4.1. Major Findings of This Study Using a nationwide web-based survey, the WTP values for dental checkups in the RDC and non-RDC groups were ascertained and analyzed to assess their association with the study participants' individual characteristics. As a result, two major points were revealed. First, the median WTP value for dental checkups was 3000 yen (22.51 USD) (mean: 3439.6 yen [25.81 USD]) in the RDC group and 2000 yen (15.01 USD) (mean: 2713.0 yen [20.36 USD]) in the non-RDC group, a statistically significant difference between the two groups. Second, in the RDC group, age 50–59 years, lower household income, homemaker and part-time worker employment status, and having children were significantly associated with lower WTP values, while male gender, higher household income, and tooth brushing ≥3 times daily were associated with higher WTP values. In the non-RDC group ( ), age ≥30 years, lower household income, and the presence of ≥28 teeth were significantly associated with lower WTP values, while higher household incomes were associated with higher WTP values. Therefore, the results of this study suggest that the WTP values for dental checkups were lower in the non-RDC group than in the RDC group and that socioeconomic factors were associated with WTP values in both groups. 4.2. WTP Values for Dental Checkups in the RDC and Non-RDC Groups Although many WTP studies have reported on dental treatment [ , , , , , ], few have focused on receiving dental checkups . In a related previous study, the WTP values for dental checkups were evaluated among patients in dental clinics, with a median WTP value of 2000 yen (mean: 2252.6 yen) for regular visitors and a median WTP value of 2000 yen (mean: 2124.9 yen) for infrequent visitors. The results of this study differ from those findings; however, they are not directly comparable, because the previous study surveyed patients visiting dental clinics, whereas in this study the sample was recruited from the general population, approximating the Japanese population through the quota sampling method and a web-based survey. Therefore, the results of this study are the first to determine WTP values for dental checkups in the general population nationwide and can be expected to contribute to health policy planning. This study found that the RDC group responded with a higher value than the non-RDC group regarding the maximum amount they could pay for a dental checkup (RDC group: median 3000 yen, mean 3439.6 yen; non-RDC group: median 2000 yen, mean 2713.0 yen). Several previous studies have suggested that those who habitually receive regular dental checkups have an increased awareness of oral health . Therefore, the results of this study may also have been influenced by the fact that the RDC group gave more importance to receiving dental checkups to maintain their oral health than the non-RDC group. Another possible factor affecting the WTP value is its association with household income, as described in . 4.3. Association between WTP Values for Dental Checkups and Individual Characteristics in the RDC and Non-RDC Groups There was a positive correlation between WTP values and household income in both the RDC and non-RDC groups. Those with lower household incomes were likelier to report lower WTP values for dental checkups. Of particular note, in the non-RDC group, these associations were observed across a wide range of age groups over 30 years.
In addition, the non-RDC group responded with significantly lower WTP values for dental checkups than the RDC group. Several previous studies have shown that income limitations are a barrier to regular dental attendance [ , , , ], and the results of this study support these previous findings from the perspective of economic evaluation of dental checkups. Based on these findings, it can be implied that compared to the RDC group, there is a limitation to the maximum amount that can be paid for dental checkups within a wide age range in the non-RDC group and that this may be associated with economic background factors; hence, this suggests the need for policy interventions. Further, homemakers, part-time workers, and those with children responded with lower WTP values only in the RDC group. Even if these participants were in the habit of receiving regular dental checkups, they might be limited in the amount of cost they can spend on dental checkups; therefore, it is possible that they reported low WTP values. Moreover, the results of this study showed that men had higher WTP values for dental checkups than women. Previous studies have shown that women are more likely to have an increased awareness of oral health than men . However, according to a report by Japan’s Ministry of Health, Labour and Welfare, the income of working men is about 2.1 times that of working women , and the reason for this is reportedly due to the differences in employment positions and length of service between men and women . Therefore, it is possible that men in the RDC group answered that they could afford to spend on dental checkups because of their financial stability rather than because of their awareness of oral health. Regarding oral health conditions, in the RDC group, those who brushed their teeth ≥3 times daily reported higher WTP values. This result suggests that participants in the RDC group had a high awareness of oral health and identified the importance of dental checkups. In contrast, in the non-RDC group, those with ≥28 teeth had lower WTP values. The reason for this may be that they have a full set of teeth and have no trouble chewing; therefore, they have little awareness of the need to protect their oral health. However, this causal relationship remains unclear and requires further investigation. Regarding the municipality in which the study participants resided, there was no statistically significant association in either the RDC or non-RDC group. Generally, there are reportedly more barriers in rural areas than in urban areas with regard to access to dental services . However, there is reportedly little inequality in the geographic distribution of the number of dental clinics in Japan . That is, there are fewer differences in barriers to access to dental services between rural and urban areas; as a result, there may have been less impact on WTP values for both groups in this study. 4.4. Implications for Health Policy of This Study Under the Japanese medical insurance system, most dental treatments, such as caries treatment, endodontic treatment, periodontal disease treatment, and prosthetic treatment, are covered by insurance. However, preventive practices such as dental checkups are not covered by insurance . The results of this study showed that in both the RDC and non-RDC groups, those with a lower household income were more likely to report a lower maximum amount they could pay for dental checkups. 
4.4. Implications for Health Policy of This Study

Under the Japanese medical insurance system, most dental treatments, such as caries treatment, endodontic treatment, periodontal disease treatment, and prosthetic treatment, are covered by insurance. However, preventive practices such as dental checkups are not covered. The results of this study showed that in both the RDC and non-RDC groups, those with a lower household income were more likely to report a lower maximum amount they could pay for dental checkups. This result raises concerns, particularly for the non-RDC group, who may face barriers to accessing dental services for economic reasons. Universal Health Coverage (UHC) is one of the Sustainable Development Goals (SDGs) advocated by the United Nations in 2015, and the Japanese medical insurance system may be considered to be achieving UHC. However, several Japanese studies have suggested that income limitations may affect access to dental services. Therefore, establishing a system that allows people to receive dental checkups without co-payment, using public funds and other financial resources, may improve access to dental services. It is necessary for policymakers to plan health policies that consider socioeconomic factors, such as people's incomes, to ensure equality of oral health status.

4.5. Limitations of This Study

This study has several limitations. First, in WTP studies, there are several methods to obtain WTP values from study participants, and each method has its advantages and disadvantages. This study used the payment-card method, which was appropriate because WTP values for dental checkups were obtained from study participants on a nationwide scale through a web-based survey. However, this method may have introduced range bias, since the study participants' choices were bounded by the amounts on the payment card presented to them. Second, although the study sample approximated the Japanese population using a quota sampling method, sampling bias cannot be completely ruled out because the study participants were selected from among those registered with a web-based survey company. Internet usage among the Japanese is increasing; however, the possibility of sampling bias remains a concern in web-based surveys. Third, this was a cross-sectional study conducted using a web-based survey. Although this study revealed the individual characteristics associated with high or low WTP values in both the RDC and non-RDC groups, it was not possible to determine a causal relationship between these factors and WTP values due to the cross-sectional design of the study.
5. Conclusions

Based on the results of this study, the null hypothesis stated in the objective was rejected, and the following conclusions were obtained: (1) the WTP values for dental checkups were lower in the non-RDC group than in the RDC group, and (2) there was a significant association between high or low WTP values and socioeconomic factors in both groups; in particular, in the non-RDC group, those aged ≥30 years with a lower household income were more likely to report lower WTP values for dental checkups. Hence, these results suggest the need for policy intervention to improve access to regular dental checkups.
Enhancing Medical Students' Knowledge and Performance in Otolaryngology Rotation through Combining Microlearning and Task-Based Learning Strategies

1. Introduction

Generation Z medical students, born between 1995 and 2012, have specific learning preferences and seek personalized learning opportunities that help them achieve optimal use of time and resources. They enjoy short activities and desire to access their learning needs independently, in the moment, with the help of technology, especially while performing tasks. In addition, they prefer to receive feedback on their performance just in time. Hence, clinical teachers have to implement on-the-job learning experiences that are technology-enhanced and suited to short attention spans.

Microlearning (ML) is one of the strategies recommended for Generation Z medical students, because it addresses their preferences with the help of technology. As these students are not engaged by long lectures or time-consuming learning activities, the best approach is to provide them with meaningful and concise chunks of material that are quickly and easily learned. The duration of these instructional units ranges from a matter of seconds to 15 min, and they can take the form of micro-videos, job aids, quizzes, assignments, and case studies. The benefits of implementing ML include improved retention of concepts, higher student motivation, more engagement in collaborative learning, and better student performance. Considering these advantages, ML is widely studied in health professional education, and it has been shown that this strategy enhances learners' engagement and has positive effects on their knowledge and confidence while performing clinical tasks. Furthermore, it is a foundation for promoting critical thinking and clinical reasoning. However, despite these benefits, ML is not a suitable strategy to be implemented alone for teaching complicated subjects to health professional students, and it should be applied within the context of a wider teaching–learning ecosystem, especially when teaching a specific task in working environments or a wide range of information. In these cases, a blended approach of ML and other teaching strategies is preferred.

Task-Based Learning (TBL) is a clinical teaching–learning strategy that is suitable to be combined with ML. In TBL, students visit patients in a real clinical setting and are guided to learn the related tasks through understanding the underlying concepts and mechanisms and then applying the acquired knowledge and skills in other situations. Indeed, TBL focuses not only on performing the task, but also on understanding the relevant basic and clinical medical knowledge, and moreover on developing generic competencies such as communication skills and problem solving. In this way, TBL supports the integration of medical knowledge with patient care, i.e., the amalgamation of theory and practice. It has been shown that TBL is an effective clinical teaching strategy for both undergraduate and postgraduate medical education. However, there are some potential limitations to implementing TBL in medical education. In TBL, the teaching topics and tasks are not systematically structured and organized. Therefore, it is difficult for some students, especially those with poor self-directed learning abilities, to gain a comprehensive understanding of task performance.
Hence, it has been recommended to modify the TBL strategy by adopting suitable methods for imparting the prerequisite knowledge to students while they are conducting the tasks, and ML can be an appropriate strategy for this purpose. Thus, given the emerging trend of using ML in clinical education and the recommendation to integrate it with workplace-based learning strategies on the one hand, and the need to modify TBL for better knowledge provision to students on the other, we assessed the effect of a combined ML–TBL method on final-year medical students' knowledge and performance of selected tasks in an otolaryngology rotation.
2. Materials and Methods

This quasi-experimental study with a non-equivalent pre- and post-test design included two control groups and one intervention group, and was conducted from October 24 to December 19, 2021. The study was approved by the Ethical Committee of Tehran University of Medical Sciences (reference number: ID.IR.TUMS.MEDICINE.REC.1399.608).

2.1. Participants and Setting

A total of 59 final-year medical students in their clerkship rotation to the otolaryngology ward of an educational hospital participated in the study. The hospital is an otolaryngology referral center that hosts a wide variety of patients with conditions ranging from simple to highly complicated. Medical students spend two weeks in this rotation, particularly in the clinics and the emergency unit, where they are the first-line contacts for patients. They have to manage the patients and perform routine clinical procedures under the supervision of residents and clinical teachers. Before this study was conducted, the teaching method in this rotation was based on learning to conduct assigned tasks; however, it was not designed as a TBL method to ensure students' learning. Routinely, about 10 students are allocated to each rotation. Therefore, we assigned two rotations to each of the study groups, i.e., 20, 20, and 19 students to the first control, second control, and intervention groups, respectively. We briefed students about the study purpose and design, and obtained their informed consent. They had the right to withdraw their participation in the study at any time. Meanwhile, they were assured that this decision would not affect their learning and assessment experience in the rotation.

2.2. Preparation Phase

Two medical educationists held a three-hour workshop for three otolaryngology clinical teachers to brief them on the study design, TBL, ML, and the teaching–learning process in each of the study groups. After this orientation, the otolaryngology clinical teachers reviewed the curriculum and selected five essential clinical tasks to be covered in the study. They considered the following criteria for selecting the tasks: (a) the task was either a common problem in the community or a complicated one; (b) the task was likely to be encountered by students during their clinical rotation; and (c) the task could be trained through TBL and ML. The final selected tasks were ear examination, ear irrigation, subcuticular suture, nasal packing, and epistaxis management. The teachers devised the learning objectives for each of these tasks. Then, they created a micro-video for each of the above-mentioned tasks. For this purpose, they first worked on the scenario of each video in order to concisely cover the main objectives, the task steps, and students' common mistakes. The scenarios were reviewed by another clinical teacher to ensure the appropriateness of the content and the coverage of the learning objectives. Moreover, an e-learning specialist helped the clinical teachers consider pedagogical aspects while creating the contents. The videos showed the performance of the task on real patients or manikins, alongside narration by the clinical teachers focusing on the main points. Whenever necessary, a few slides or images were also included. Each of the five videos was 7 to 10 min long, with an average of 8 min.

2.3. Instruments

We used two instruments to assess the students' knowledge and performance. For assessing knowledge, we developed a 15-item multiple-choice question test with a maximum score of 15. The test covered the intended learning objectives related to the five selected tasks at different levels of Bloom's taxonomy, i.e., taxonomy one (knowledge and understanding), taxonomy two (application and analysis), and taxonomy three (synthesis and evaluation). The test included one case-based scenario for each of the tasks. Two other otolaryngology clinical teachers who were not involved in the study confirmed the test with regard to the coverage of the objectives. The test reliability was 0.76 using the Kuder–Richardson 20 formula, which is considered acceptable. To assess students' performance, we used the Direct Observation of Procedural Skills (DOPS) assessment method. DOPS is a workplace-based assessment method for evaluating students' procedural skills. It encourages a deep approach to learning by helping students identify their areas of weakness and improve their performance. Evaluating each clinical procedure or task requires a standard DOPS checklist. In this study, we developed a separate checklist for each of the selected tasks, which included 5, 6, 7, 10, and 5 items for the ear examination, ear irrigation, epistaxis management, subcuticular suture, and nasal packing tasks, respectively. These checklists covered items relevant to each task, such as awareness of the anatomy, pre-procedure preparation, compliance with sterile principles, performance of the task steps, post-procedure actions, communication skills, obtaining patient consent, and complying with medical ethics. In addition, there was a question asking for a rating of the student's overall performance. Each student's performance on each item was rated by an observer (a clinical teacher or resident) on a five-point Likert-type scale consisting of "couldn't perform (incompetent)", "less than expected (needs further development)", "moderate", "acceptable (competent enough)", and "higher than expected (excellent)". The tool's content validity was checked by three otolaryngologists, and its reliability was confirmed by a Cronbach's alpha of 0.841.
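For readers unfamiliar with these reliability coefficients, their standard definitions are given below; the notation is ours and is not taken from the paper. KR-20 applies to dichotomously scored items (such as the multiple-choice knowledge test), while Cronbach's alpha generalizes it to polytomous items (such as the Likert-scored DOPS checklists).

```latex
% Kuder–Richardson 20 for a k-item test with dichotomous items, where
% p_i is the proportion answering item i correctly, q_i = 1 - p_i, and
% \sigma_X^2 is the variance of the total test scores:
\[
r_{\mathrm{KR20}} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma_X^2}\right)
\]
% Cronbach's alpha, with \sigma_i^2 the variance of item i:
\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_X^2}\right)
\]
```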
2.4. Study Design

To avoid contamination of the study groups, we conducted the study in the first control group, followed by the second control and intervention groups. Students in the otolaryngology rotation routinely participate in an orientation session at the beginning of their rotation to be briefed on the expected competencies, duties, and tasks. In this session, we explained the study aims, obtained students' informed consent, and administered the knowledge pre-test. Participants in each study group underwent their assigned teaching–learning method, namely routine, TBL, or combined ML–TBL. On the last day of the rotation, they once again took the knowledge test as the post-test. In addition, during the rotation, clinical teachers and residents completed the DOPS tool for each individual student performing each task and provided them with feedback on their performance. In the following, we explain the teaching–learning strategy in each of the study groups.

Routine teaching–learning method in the first control group: Students in this group experienced the routine program of the otolaryngology ward. They were responsible for visiting patients and performing routine procedures and tasks, including the five tasks selected for this study, in the clinics and emergency unit. Clinical teachers and residents supervised the students and were accessible for answering their questions and guiding them in patient management. However, with regard to the continuum from systematic to opportunistic clinical programs in the SPICES model, there was no systematic approach to monitor the variety of patients visited by students. They managed only those patients who opportunistically presented to the clinics and emergency unit. In addition, students attended some lecture-based classes on a variety of clinical conditions according to the curriculum.

TBL method in the second control group: In this study group, after being briefed about TBL, students received a study guide through WhatsApp social media including the list of tasks, their related learning objectives, and the important points for performing the tasks. The students visited and managed patients like the previous study group. Meanwhile, students attended a 20- to 30-min interactive lecture-based class for each task in the middle of the rotation, in which the clinical teachers reviewed the prerequisite knowledge and acquainted them with the expectations and the details of their roles and responsibilities for performing each task. Moreover, encounters with the tasks were systematically monitored to ensure correct performance of the tasks by the students.

Combined ML–TBL method in the intervention group: The students of this group experienced the same teaching–learning process as the second control group. In addition, they received the five micro-videos through WhatsApp social media on the first day of the rotation. They were briefed to first refer to the videos to resolve their problems while performing the tasks, and then, if necessary, to ask the clinical teachers or residents for further guidance.

2.5. Statistical Analysis

The data were analyzed using IBM SPSS Statistics for Windows, version 17 (IBM Corp., Armonk, NY, USA). We examined the normality of the data distribution using the Shapiro–Wilk test and the shape of the distribution. Parametric variables were analyzed with paired t-tests, ANOVA, and ANCOVA, and non-parametric variables with the Kruskal–Wallis test.
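As a hedged sketch of how this analysis plan could be reproduced outside SPSS (the study itself used SPSS; the Python translation, the simulated scores, and the variable names below are our illustrative assumptions), the core tests might look like this:

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Simulated pre/post knowledge scores (0-15 scale) for the three study
# groups; the group sizes match the paper, the scores do not.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": np.repeat(["control1", "control2", "intervention"], [20, 20, 19]),
    "pre": rng.integers(5, 11, size=59).astype(float),
})
gain = {"control1": 1.5, "control2": 1.0, "intervention": 2.5}
df["post"] = df["pre"] + df["group"].map(gain) + rng.normal(0, 1.2, size=59)

# Normality check of post-test scores per group (Shapiro-Wilk)
for name, sub in df.groupby("group"):
    print(name, stats.shapiro(sub["post"]))

# Paired t-test of pre- vs. post-test scores within each group
for name, sub in df.groupby("group"):
    print(name, stats.ttest_rel(sub["pre"], sub["post"]))

# ANCOVA: post-test score as outcome, group as factor, pre-test as covariate
model = ols("post ~ pre + C(group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Kruskal-Wallis test for scores that are not normally distributed
print(stats.kruskal(*[sub["post"].to_numpy() for _, sub in df.groupby("group")]))
```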
3. Results

All 59 students took the pre- and post-tests. There were no significant differences among the study groups with regard to gender and age. We compared the knowledge scores within and among the three study groups. In all cases, the normal distribution of the samples was checked using the Shapiro–Wilk test, the shape of the sample distribution, Q–Q plots, kurtosis, and skewness. The results showed that while there were no significant differences in knowledge pre-test scores among the three groups (p-value = 0.628), significant increases were observed between the pre- and post-test scores within each group (p-values of 0.001, 0.024, and 0.001 for the control 1, control 2, and intervention groups, respectively). We used analysis of covariance to compare the post-test scores among the three groups while eliminating the effect of the pre-test scores. To do so, we considered the study group, post-test score, and pre-test score as the independent variable, dependent variable, and covariate, respectively. The results showed a significant difference among the post-test scores (F = 3.423, p-value = 0.040). We applied the Kruskal–Wallis test for comparing the mean DOPS scores among the study groups due to the non-normal distribution of scores in the second control group. The DOPS scores in the intervention group were higher than those in the control groups for all the tasks. We conducted a repeated-measures ANOVA to assess the differences in mean DOPS scores for each task across the study groups and observed no significant differences. This finding indicates that the task type had no effect on the results.
4. Discussion

In this study, we assessed the effect of an integrated ML–TBL strategy on the knowledge and performance of final-year medical students in their otolaryngology rotation. In order to measure the effect of the integrated approach, the study was conducted in three groups: two control groups (group 1: routine teaching; group 2: TBL) and one intervention group (integrated ML–TBL). The results revealed significant increases in participants' knowledge and performance in the intervention group compared to the control groups. ML has some advantages that we believe were influential in our study. The asynchronous nature of ML provides students with the possibility of controlling the place and time of learning. This characteristic, alongside the short length and to-the-point content, makes ML a suitable strategy for learning quickly within a minimum time span, i.e., just-in-time learning. It allows students to access the information at the moment they need to learn, with the help of technology, and improves their levels of cognition and skill. These advantages can be useful in clinical education, where there is a high patient load and students need access to relevant and concise information while managing patients. There are some studies that have assessed the effect of ML in medical education, though few have assessed students' clinical performance in a real work setting, and most of them have evaluated students' reactions and knowledge acquisition. ML alone is not recommended for teaching complex tasks in real work environments, and it should be implemented in combination with other instructional strategies that are appropriate for learning in such settings. In this study, we combined ML with TBL, because medical students have to learn tasks that they will be expected to perform in their future jobs. This requires adopting clinical teaching methods that are appropriate for workplace-based learning, among which TBL is a recommended approach. This strategy provides the chance of just-in-time training while performing a task. To implement this strategy, after selecting and designing the tasks, the supporting information for students' learning should be identified and provided in a concise and to-the-point way. We covered this step by providing students with micro-videos. Our results are supported by Cheng et al., who compared the use of a just-in-time micro-video with reading textbooks for performing the task of splinting and found that watching a three-minute video immediately before performing the technique resulted in a shorter preparation time, higher performance assessment scores, and a higher rate of successful splint application in comparison with reading medical textbooks. We observed the same finding: significant increases in DOPS scores revealed better task performance in the intervention group, who had access to micro-videos. We could not find any other study focusing on an integrated ML–TBL approach. However, there are other studies that have separately examined the effect of learning through ML or TBL in medical education. Most of these studies have addressed the effect of ML on participants' learning (the second level in the Kirkpatrick model) and evaluated knowledge rather than skill acquisition. Their results showed higher knowledge scores with the help of ML. In one study, providing surgery clerkship students with an ML module resulted in higher knowledge scores compared to the control group.
These results support the present study's findings on the improvement of knowledge scores in the intervention group. Meanwhile, Tian et al. compared the effects of TBL and a conventional lecture-based method on the theoretical and practical scores of postgraduate medical students in a course on immunohistochemistry. They found significant differences in the mean score of the practical test, in contrast to the theoretical test scores, which showed no significant differences. The authors indicated that lecture-based learning could transfer knowledge to students systematically and comprehensively, although it was insufficient for practical problem solving while performing laboratory exercises. On the other hand, TBL is effective for problem solving while performing tasks, because of greater student engagement compared to lecture sessions. We observed the same pattern: the mean knowledge score was higher in the first control group than in the second one. Finally, there is a need to decide on an appropriate microlearning method for different educational subjects. In our experience, students' DOPS scores increased for all the tasks regardless of their type, which covered physical examination, patient management, and clinical procedures. This finding, alongside the study by Cheng et al., indicates that micro-videos that are accessible in the clinical setting enhance learners' performance in conducting a variety of medical tasks. We assume that ML is especially helpful in clinical disciplines like otolaryngology, where medical students are the first-line contacts for patients in emergency units and clinics and there is the challenge of time constraints. On the other hand, we suggest further research on the usefulness of ML in clinical contexts with more flexibility, where students have more time to learn the prerequisite knowledge and then perform clinical tasks. We recognize some limitations of our study. It was implemented in only a small number of clerkship students on a two-week otolaryngology rotation in one hospital. Hence, whether our findings are transferable to other clinical education settings is an area for future research. We assessed the second level of the Kirkpatrick model, including both knowledge acquisition and task performance. Moreover, among the different ML modalities and techniques, we implemented micro-videos. We recommend that future research in this field address higher levels of learning outcomes from various ML modalities.
5. Conclusions

ML can have an important place in clinical education, especially when integrated with other clinical teaching strategies such as TBL. ML facilitates students' learning through concise, short learning units that are integrated into the daily routine and accessed on demand while performing tasks. Such combined strategies can enhance both knowledge and skill acquisition in medical students.
Development of a Person-Centred Integrated Care Approach for Chronic Disease Management in Dutch Primary Care: A Mixed-Method Study

1. Introduction

Over the last decades, the increasing prevalence of chronic diseases has cast a huge burden on healthcare systems worldwide. Currently, chronic diseases are the leading cause of death globally, with cardiovascular diseases, diabetes, and chronic lung diseases causing the highest mortality. In the Netherlands, 59% of the population had one or more chronic diseases in 2020. In addition, between 2004 and 2017, the prevalence of patients with two or more chronic diseases (multimorbidity) in central Europe increased in adults aged 50 and over. Most importantly, chronic diseases have a major impact on patients' health-related quality of life, especially when patients have multiple chronic conditions.

To reduce the burden of chronic diseases on patients and healthcare providers, single-disease management programs (DMPs) have been developed. Based on Dutch primary care, we define DMPs as long-term chronic care programs in primary care that are predominantly run by general practice nurses (PNs) under the responsibility of a general practitioner (GP) and focus on assessing, monitoring, and treating a single chronic disease. DMPs for chronic obstructive pulmonary disease (COPD), cardiovascular diseases (CVD), and diabetes mellitus type 2 (DM2) are currently the most widely implemented. Although DMPs have shown some minor improvements in process indicators, such as coordination of care and communication between caregivers, they have failed to show improvement in patients' health-related quality of life (HRQoL). A possible explanation could be that DMPs mainly focus on the medical aspects of a specific condition, with less attention being paid to other chronic diseases or social problems that may also impact HRQoL. In addition, an organisation in which patients with multiple chronic diseases attend multiple DMPs provided by multiple healthcare professionals (HCP) is not desirable, from both an economic and a patient perspective. Patients may receive overlapping or conflicting treatment advice. Furthermore, the DMP approach seems to conflict with the core competencies of primary care professionals, i.e., medical generalism, community orientation, a focus on social determinants of health and societal factors, and working from a personal–professional relationship with patients.

An alternative approach to DMPs might be found in Person-Centred and Integrated Care (PC-IC), as increasingly advised by international guidelines on multimorbidity and chronic conditions. Instead of focusing on a standard set of disease management processes determined by health professionals, PC-IC aims to ensure that patients' values and concerns shape the way long-term conditions are managed. This approach encourages patients to select treatment goals and to work with clinicians to determine their specific needs for treatment and support of their chronic diseases. A PC-IC approach is believed to improve the quadruple aims of better patient and HCP experience, population health, and cost-effectiveness. Currently, several studies on such PC-IC approaches to managing chronic conditions in primary care are emerging, but descriptions of their scientific foundation are lacking.
In addition, in the Netherlands, a shift initiated by primary care HCP organizations is taking place from DMPs to PC-IC approaches. To scientifically support this movement, this paper describes the mixed-method, multiphase development of a PC-IC approach for the management of patients with one or more chronic diseases in Dutch primary care. We co-designed the approach with all stakeholders involved, i.e., academics, HCPs, patients, and healthcare insurers.
2. Materials and Methods

2.1. Design

A multiphase process to develop a PC-IC approach for patients with one or more chronic conditions, including at least DM, COPD, or CVD, was started in March 2019 and finished in July 2020. We conducted the process together with three large primary care cooperatives in the eastern part of the Netherlands, i.e., the Nijmegen region (168 GPs, approximately 290,000 inhabitants), the Arnhem region (193 GPs, ~440,000 inhabitants), and the Doetinchem region (116 GPs, ~150,000 inhabitants). We followed a four-phase process in which the information collected in each phase was commented on by stakeholders and used in the next phase. The four subsequent phases were all defined a priori by the project team, based on criteria for reporting the development of complex interventions in healthcare and on the inclusion of all relevant stakeholders. In short, in Phase 1 we conducted a scoping review and a document analysis to identify key elements for constructing a conceptual model for delivering PC-IC care. In Phase 2, national experts on DM2, CVD, and COPD and local HCPs commented on the conceptual model using online qualitative surveys. In Phase 3, patients with one or more chronic conditions commented on the conceptual model in individual interviews. To conclude the development process, in Phase 4 the conceptual model was presented to the local primary care cooperatives and finalized after processing their comments. We used the Standards for Reporting Qualitative Research (SRQR) guidelines to design and report the methods and results of the respective sub-studies. The medical ethics review board of the Radboud University Medical Center declared that ethics approval for the study was not required under Dutch national law (registration number: 2019-5756). All participants received written information about the study, and their written informed consent was obtained prior to their participation.

2.2. Scoping Review and Document Analysis (Phase 1)

In this phase, we aimed to identify which process elements and which interventions a PC-IC approach should contain. We identified the key process elements (e.g., history taking or discussing patients' goals) for successful (multiple) chronic disease management by conducting a scoping review. We identified the key interventions by conducting a document analysis. For the scoping review, we searched PubMed, EMBASE, Cochrane, the Turning Research into Practice (TRIP) Medical Database, and the Guidelines International Network (GIN) to identify key elements for the successful management of (multiple) chronic diseases in primary care. All eligible publications up until 27 August 2019 were included, and no lower limit with regard to publication date was applied. Forward citation tracking was used, and the reference lists of relevant publications were hand-searched for additional relevant publications. Two of the authors (LR and MW) independently screened the titles and abstracts of the publications and reviewed the full text of those that seemed eligible for the scoping review. Publications were included if the language was English or Dutch, if the target population consisted of patients with multiple chronic conditions, and if the target setting was primary care. Primary care was defined as a non-hospital community setting with medical care continuity by (the equivalent of) a GP. Publications were excluded if they were study protocols, commentaries, or cost-effectiveness analyses. Next, one author (LR) extracted data on publication details, methods used, and recommendations on important elements of clinical care from the included publications. The extracted details were cross-checked by a second author (MW). The results of the scoping review were used to create a conceptual model including key process elements for PC-IC. For the document analysis, we analysed all Dutch chronic disease care standards and GP guidelines relevant to the DMPs for COPD, CVD, and DM2 to identify all unique interventions that were used in the management of these conditions. The documents were analysed by two authors (LR and MW) using inductive thematic coding. Using an affinity diagram, a schematic overview of unique key interventions to be included in the PC-IC approach was developed. The resulting intervention model was combined with the process model from the scoping review to form our conceptual PC-IC approach, which was further adjusted in the subsequent phases.

2.3. Online Surveys with Healthcare Professionals (Phase 2)

We conducted online surveys among healthcare professionals using open-ended responses, with a thematic analysis of wordings, in order to further adjust the conceptual model of our PC-IC approach. This method was chosen because it enabled HCPs from different disciplines to give their individual opinions and gave them the flexibility to contribute to the study at a time that suited them. Each regional primary care cooperative purposively selected a heterogeneous group of 10 to 15 HCPs from the following professions or disciplines: GPs with a special interest in CVD, DM, or COPD; regular GPs; PNs; allied HCPs (e.g., physiotherapists, dieticians); social workers; and other HCPs involved in the care for patients with chronic diseases. In addition, six GPs with a special interest in CVD, DM, or COPD who were involved in national guidelines or health policy committees were asked to participate. All participants were monetarily compensated for their time and received written information on the conceptual model of the PC-IC approach before the online survey started. The online survey was performed in five subsequent parts, in which open-ended questions were sent to participants through an adapted, secured version of LimeSurvey (LimeSurvey GmbH, Hamburg, Germany). Each survey focused on a predetermined part of the conceptual model of the PC-IC approach. Questions concerned the strengths and limitations of different parts of the PC-IC approach. If there were doubts about the responses to the questionnaire items, we asked follow-up questions via e-mail or phone until the answers could be sufficiently interpreted. Analysis of the questionnaire data was performed by three researchers (LR, MW, and AO) using thematic coding, following the same approach as in the document analysis. To conclude this phase, we organized a virtual meeting with all participants in which we presented the results of the surveys and checked for agreement. This resulted in an adapted version of the conceptual model of the PC-IC approach.

2.4. Individual Interviews with Patients (Phase 3)

We then organized individual semi-structured telephone interviews with chronic disease patients to explore their opinions on the conceptual model of the PC-IC approach. Each primary care cooperative recruited patients with DM2 and/or COPD and/or CVD who received chronic disease management from their general practitioner. Participating patients received written information on the study and the conceptual model of the PC-IC approach by e-mail or postal mail before being interviewed. Patients were recruited until data saturation was reached. Patients did not receive financial compensation for their participation. The interviews were conducted by two researchers (LR and FB). The interviewer first explained the goal of the interview and presented the conceptual model before asking questions regarding the expected strengths, weaknesses, and points for improvement of the different elements and interventions. The interviews were audio-recorded, transcribed verbatim, coded, and analysed according to the thematic analysis approach described above. A summary of the results was offered for member checking. This resulted in an adapted version of the conceptual model of the PC-IC approach.

2.5. Finalization of PC-IC Approach (Phase 4)

In this last phase of the development process, we aimed to collect final feedback from the remaining stakeholders on the adapted version of the conceptual model of the PC-IC approach. Because of their vital role in the organisation and reimbursement of primary healthcare for chronic patients, representatives of the three primary care cooperatives involved and three healthcare insurance companies were invited to and participated in a joint meeting to give oral feedback on the adapted version of the PC-IC approach from their perspectives. Neither patients nor HCPs were invited to this meeting. After the presentation of the PC-IC approach by one of the authors (LR), an open discussion with the ten participants was moderated by another author (EB). Notes were taken by one of the authors (LR) during the discussion. Finally, to improve the comprehensibility of the approach for people with limited health literacy, two experts from the Dutch Centre of Expertise on Health Disparities (Pharos) were asked to provide written feedback on the comprehensibility of the conceptual model. Their feedback was collected and summarized by one of the authors (LR). All input from Phases 1 through 4 was processed by the research team into a report summarizing the feedback on the PC-IC approach. This report was shared with the participants, and a meeting was held with stakeholders of the primary care cooperatives for the finalization of the PC-IC approach.
A multiphase process to develop a PC-IC approach for patients with one or more chronic conditions, but at least DM, COPD, or CVD was started in March 2019 and finished in July 2020. We conducted the process together with three large primary care cooperatives in the eastern part of the Netherlands, i.e., the Nijmegen region (168 GPs, approximately 290,000 inhabitants), the Arnhem region (193 GPs, ~440,000 inhabitants), and the Doetinchem region (116 GPs, ~150,000 inhabitants). We followed a four-phase process in which the information collected in each phase was commented on by stakeholders and used in the next phase (see ). The four subsequent phases were all a priori defined by the project team based on criteria for reporting the development of complex interventions in healthcare and including all relevant stakeholders . In short, in Phase 1 we conducted a scoping review and a document analysis to identify key elements to construct a conceptual model for delivering PC-IC care. In Phase 2, national experts on DM2, CVD, and COPD and local HCPs commented on the conceptual model using online qualitative surveys. In Phase 3, patients with one or more chronic conditions commented on the conceptual model in individual interviews. To conclude the development process, in Phase 4 the conceptual model was presented to the local primary care cooperatives and finalized after processing their comments. We used the Standards for Reporting Qualitative Research (SRQR) guidelines to design and report the methods and results of the respective sub-studies . The medical ethics review board of the Radboud University Medical Center declared that ethics approval for the study was not required under Dutch National Law (registration number: 2019-5756). All participants received written information about the study and their written informed consent was obtained prior to their participation.
In this phase, we aimed to identify which process elements and which interventions a PC-IC approach should contain. We identified the key process elements (e.g., history taking or discussing patients’ goals) for successful (multiple) chronic disease management by conducting a scoping review. We identified the key interventions by conducting a document analysis. For the scoping review, we searched PubMed, EMBASE, Cochrane, Turning Research into Practice (TRIP) Medical Database, and the Guidelines International Network (GIN) to identify key elements for the successful management of (multiple) chronic diseases in primary care (see for the search strategies). All eligible publications up until 27 August 2019 were included, and no lower limit with regard to publication date was applied. Forward citation tracking was used and the reference lists of relevant publications were hand searched for additional relevant publications. Two of the authors (LR and MW) independently screened the titles and abstracts of the publications and reviewed the full text of those that seemed eligible for the scoping review. Publications were included if the language was English or Dutch, if the target population consisted of patients with multiple chronic conditions, and if the target setting was primary care. Primary care was defined as a non-hospital community setting with medical care continuity by (the equivalent of) a GP. Publications were excluded if they were study protocols, commentaries, or cost-effectiveness analyses. Next, one author (LR) extracted data on publication details, methods used, and recommendations on important elements of clinical care from the included publications. The extracted details were cross-checked by a second author (MW). The results of the scoping review were used to create a conceptual model including key process elements for PC-IC. For the document analysis, we analysed all Dutch chronic disease care standards and GP guidelines relevant to the DMPs for COPD, CVD, and DM2 [ , , , , , ] to identify all unique interventions that were used in the management of these conditions. The documents were analysed by two authors (LR and MW) using inductive thematical coding ( ). Using an affinity diagram, a schematic overview of unique key interventions to be included in the PC-IC approach was developed. The resulting intervention model was combined with the process model from the scoping review to form our conceptual PC-IC approach, which was further adjusted in the subsequent phases.
In Phase 2, we conducted online surveys with open-ended questions among healthcare professionals, followed by a thematic analysis of their responses, in order to further adjust the conceptual model of our PC-IC approach. This method was chosen because it enabled HCPs from different disciplines to give their individual opinions and offered them the flexibility to contribute to the study at a time that suited them. Each regional primary care cooperative purposively selected a heterogeneous group of 10 to 15 HCPs from the following professions or disciplines: GPs with a special interest in CVD, DM, or COPD; regular GPs; PNs; allied HCPs (e.g., physiotherapists, dieticians); social workers; and other HCPs involved in the care for patients with chronic diseases. In addition, six GPs with a special interest in CVD, DM, or COPD who were involved in national guideline or health policy committees were asked to participate. All participants were monetarily compensated for their time and received written information on the conceptual model of the PC-IC approach before the online survey started. The online survey was performed in five consecutive parts, in which open-ended questions were sent to participants through an adapted, secured version of LimeSurvey (LimeSurvey GmbH, Hamburg, Germany). Each survey focused on a predetermined part of the conceptual model of the PC-IC approach, and the questions concerned the strengths and limitations of the different parts of the approach. If there were doubts about the responses to the questionnaire items, we asked follow-up questions via e-mail or phone until the answers could be sufficiently interpreted. Analysis of the questionnaire data was performed by three researchers (LR, MW, and AO) using thematic coding, as described in . To conclude this phase, we organized a virtual meeting with all participants in which we presented the results of the surveys and checked for agreement. This resulted in an adapted version of the conceptual model of the PC-IC approach.
In Phase 3, we organized individual semi-structured telephone interviews with chronic disease patients to explore their opinions on the conceptual model of the PC-IC approach. Each primary care cooperative recruited patients with DM2 and/or COPD and/or CVD who received chronic disease management from their GP. Participating patients received written information on the study and the conceptual model of the PC-IC approach by e-mail or postal mail before being interviewed. Patients were recruited until data saturation was reached and did not receive financial compensation for their participation. The interviews were conducted by two researchers (LR and FB). The interviewer first explained the goal of the interview and presented the conceptual model before asking questions regarding the expected strengths, weaknesses, and points for improvement of the different elements and interventions (see ). The interviews were audio-recorded, transcribed verbatim, coded, and analysed according to the thematic analysis approach (see ). A summary of the results was offered to participants for member checking. This resulted in an adapted version of the conceptual model of the PC-IC approach.
In Phase 4, the last phase of the development process, we aimed to collect final feedback from the remaining stakeholders (see ) on the adapted version of the conceptual model of the PC-IC approach. Because of their vital role in the organisation and reimbursement of primary healthcare for chronic patients, representatives of the three primary care cooperatives involved and of three healthcare insurance companies were invited to and participated in a joint meeting to give oral feedback on the adapted version of the PC-IC approach from their perspectives. Neither patients nor HCPs were invited to this meeting. After the presentation of the PC-IC approach by one of the authors (LR), an open discussion with the ten participants was moderated by another author (EB), and notes were taken by one of the authors (LR) during the discussion. Finally, to improve the comprehensibility of the approach for people with limited health literacy, two experts from the Dutch Centre of Expertise on Health Disparities (Pharos) were asked to provide written feedback on the comprehensibility of the conceptual model. Their feedback was collected and summarized by one of the authors (LR). All input from Phases 1 through 4 was processed by the research team into a report on the feedback on the PC-IC approach. This report was shared with the participants, and a meeting was held with stakeholders of the primary care cooperatives to finalize the PC-IC approach.
3.1. Scoping Review and Document Analysis (Phase 1)

3.1.1. Scoping Review

We identified 203 unique publications, of which 18 were included in the review ( ). The included publications were published between 2007 and 2019, with 67% published in the last five years (2015–2019). All publications were in English, and most were from the United States or the United Kingdom. Most publications stated that there is still a lack of research and thus insufficient evidence for the optimal clinical management of people with multiple chronic diseases [ , , ]. Only a few of the included studies focused on person-centred outcomes . Nonetheless, authors generally agreed that interventions that are generic in nature (i.e., not specific to the underlying condition(s)) and person-centred are more likely to result in health benefits for patients with chronic diseases and multimorbidity than a single-disease approach [ , , , ].

Assessment of Multiple Domains—Integral Health Status

Besides the medical domain, authors recommended paying attention to other domains of life as well, i.e., to functional limitations, mental health, and social functioning [ , , , , , , , , , ]. Patients with limited physical, emotional, and financial capacities are the most disrupted by their chronic illness, but interventions to support these particular patient capacities have scarcely been studied . With regard to mental health, it is recommended to discuss this domain with patients and to actively monitor signs of anxiety, distress, and depression . For the social domain, social circumstances, including social support, living conditions, and financial constraints, should be considered . Health professionals are encouraged to involve relatives or other informal caregivers in key decisions about the management of the patient's health, if the patient so desires [ , , ]. In addition, the needs of these relatives should be considered as well . By including all of these domains, interventions have the potential to better address health inequalities in the population . We summarized the multiple domains in the concept of integral health status ( ).

Case Management

Case management is considered to be an effective way to support patients in achieving their goals and communicating with other HCPs . Case managers are advised to perform regular face-to-face assessments with the patient . Establishing a partnership between different disciplines (i.e., primary care physicians, medical specialists, nurses, mental health professionals, and social care workers) may provide the key to improving care for patients with multimorbidity and psychological distress . The patient should also be part of this team . Communication and coordination across health professionals are considered essential in providing multimorbidity care [ , , , , , ]. To improve partnership and communication between health professionals and the patient and family, it is recommended to work in small teams with dedicated contact persons on both sides .

Clinical Assessment

Multiple publications recommend assessing disease burden by determining how day-to-day life is affected by the patient's health problems and establishing how health problems and treatments interact . Examples of health problems influencing disease burden are chronic pain, depression and anxiety, and incontinence . Another recommendation is to assess the burden of treatment, because this can greatly influence patients' quality of life [ , , , , , ]. For example, NICE recommends discussing the number of healthcare appointments a patient has and the format in which they take place, the number of non-pharmacological treatments, the assessment of polypharmacy, and the effects of all treatments on mental health or well-being . An annual medication review is recommended to evaluate the risks, benefits, possible interactions, and treatment adherence for each drug the patient uses . Finally, Muth et al. noted that the management of risk factors for future disease can be a major treatment burden for patients with multimorbidity and should be carefully considered when optimizing care .

Patient Preferences and Priorities

Many studies described the importance of eliciting patients' preferences and priorities for care [ , , , , , , , ]. Addressing a patient's priorities helps to minimize the adverse effects of psychological distress . Using these preferences and priorities, together with the health professional's clinical expertise and based on the best available evidence, individual goals for care should be determined [ , , ]. In this conversation, health professionals should also explore, without any assumptions, to what extent a patient wants to be involved in decision-making . Another important factor to consider when discussing goals with patients with multimorbidity is life expectancy and the prognosis of the conditions .

Care Plan

After prioritizing the patient's problems, a care plan should be drafted through shared decision-making, setting out realistic treatment goals, monitoring, treatment, prevention, (self-)management advice, responsibility for the coordination of care, and the timing of follow-up [ , , ]. The plan should be shared with the other professionals involved, the patient, and the family . When choosing interventions, it is advised to use the best available evidence, but also to recognize the limitations of the evidence base for patients with multimorbidity and to check whether an intervention is effective in terms of patient-related outcomes . Possible interventions should be tailored and adapted to a patient's individual needs, and shared decision-making should be used to maximize the impact of interventions [ , , ]. The key process elements of the PC-IC approach that we could retrieve from the included publications are summarized in .

3.1.2. Document Analysis

For this phase, we analysed three clinical guidelines of the Dutch College of General Practitioners and three national care standards for DM2, COPD, and CVD [ , , , , , ]. The document analysis resulted in a list of categories with unique key interventions for disease-specific and holistic care ( ), which was converted into a draft conceptual intervention model for the PC-IC approach. After processing feedback from stakeholders as described in , and below, this resulted in a graphical representation of the final intervention model for use in daily practice.

3.2. Online Qualitative Surveys with Healthcare Professionals (Phase 2)

A total of 56 HCPs were invited to participate in the online qualitative survey study. Fifty-two (93%) responded, and 10 were asked follow-up questions to clarify their initial input. The majority of the participants were GPs (n = 16) and PNs (n = 15), but several other disciplines were also involved ( ).
The results of the survey were categorized as general comments on the PC-IC approach and comments on the individual phases of the care process (i.e., assessment; setting personal health goals; choosing interventions; individual care plan; evaluation).

3.2.1. General Comments

In general, most participants agreed with the underlying vision of the PC-IC approach, namely that person-centred and holistic care would improve the quality of care for patients with one or more chronic diseases (Q1, see ). It is likely to lead to more insight into the patient's health status and any underlying problems. Using the PC-IC approach could increase the patient's motivation for behavioural change and may therefore improve therapy compliance and health status. Many participants expected that this approach would initially take up more time, but that this time would be recouped later. In the long term, therefore, the approach could save time and lead to more efficient provision of care (Q2). Another anticipated advantage of the PC-IC approach is the cyclical aspect, which ensures that the process continues and the patient's health status is checked repeatedly. Some participants liked the fact that the PC-IC approach has a strong theoretical basis and would give patients more control and responsibility (Q3). According to some participants, a potential disadvantage of the approach could be that it may be too time-consuming, both for the HCP and for the patient (Q4). Therefore, some participants considered it not feasible to implement the approach in daily practice in its current form. In addition, some participants doubted the magnitude of the positive effects of the PC-IC approach on the quality of care and patients' health. Participants also questioned for which patients the care program would be suitable: some thought it would be useful for all patients, whereas others suggested using it only for the more complex patients, and still others indicated that the program may be too complicated for people with limited health skills (Q5).

3.2.2. Assessment of Integral Health Status

Assessing patients' integral health status was considered a positive development by almost all participants, who indicated that a broader assessment of health status may have positive effects for both the patient and the HCP. It provides both parties with insight into the connection between health problems and their underlying causes. This creates more awareness and motivation for change in patients, especially if the underlying cause of these health problems lies in a domain other than the medical domain (Q6). Involving family members or informal caregivers in discussing the overall health situation was also mentioned as a strong point: they can often provide useful additional information and may be supportive during treatment. Assessing and discussing integral health status also provides clear goals and priorities for the patient. Therefore, participants considered the integral health status a suitable way to map out complex patients. Filling out a questionnaire online (at home) helps the patient prepare better and saves time during the consultation. A disadvantage of focusing on integral health status instead of the disease-oriented approach might be that the medical aspects are not sufficiently addressed and the severity of individual chronic diseases becomes less clear to patients. In addition, HCPs feared a patient may not want to talk about other areas of life, as he/she may consider them irrelevant to the condition. It was also mentioned that a more elaborate assessment of the patient's health status could be confrontational for some, especially for those with many problems (Q7).

3.2.3. Setting Personal Health Goals

Most participants were enthusiastic about setting personal health goals through shared decision-making. The most important advantage mentioned was that it may motivate the patient toward behavioural change; contributing factors to this motivation were awareness, commitment, and responsibility on the side of the patient. Setting personal goals also benefits the HCP, who gains more insight into the patient's priorities and is more in tune with the patient, which could make interventions more effective (Q8). Participants also mentioned disadvantages and pitfalls of setting personal goals, such as the risk that the importance of disease control is overlooked (Q9). In addition, HCPs mentioned the risk that the patient sets unattainable goals, which can demotivate both the patient and the HCP.

3.2.4. Choosing Interventions

Although the graphical representation of the PC-IC conceptual model for use in daily practice and the accompanying schematic overview of existing key interventions to support the management of patients with chronic conditions (see ) were appreciated by many participants, the graphical representation in its initial form was deemed confusing by some of them because too much information was included in one visualisation. Without further explanation, this makes the model difficult to understand (Q10). Participants also mentioned that it is difficult to create a static model for the supply of interventions, which will usually vary between regions and possibly change over time.

3.2.5. Individual Care Plan

Participants saw many advantages of a care plan, both for the patient and for the HCP. The most important advantage is that the patient and the various HCPs involved may share the same specific personal goals, which makes communication between patient and caregiver and between different caregivers easier. The care plan provides a clear structure and facilitates the evaluation of personal goals. Participants also indicated that it fits well within a holistic approach (Q11). Disadvantages could be that drawing up the plan is time-consuming, that the conversation with the patient can become subordinate to the plan, that making a plan is not yet sufficiently integrated into the ICT systems, and that a care plan can lead to the medicalisation of non-somatic problems. Disadvantages for patients could be that it can be invasive, that it can evoke resistance, and that it can create ambiguity if not all HCPs are on the same page (Q12). Participants wanted to include the following information in the proposed individual care plan format: the patient's specific goals; the selected interventions; an overview of the HCPs involved and their responsibilities; and the time of evaluation.

3.2.6. Evaluation

Many participants found it unclear whether a patient-level evaluation had been included in the process of the PC-IC approach. They indicated that they missed this essential step and would prefer to add it (Q13). An advantage of an evaluation is that it provides new information which can be used in the next cycle. No disadvantages of an evaluation were mentioned.

3.3. Individual Interviews with Patients (Phase 3)

Twelve patients were invited for the interview study.
One patient did not want to participate, two were not eligible as they did not receive care in a DMP, and nine consented to be interviewed. Data saturation was reached after the first eight interviews. Eight patients (88.9%) were male, and their mean age was 65 years (range 58–79 years). One patient had COPD, three had CVD, two had DM2, and three had a combination of these chronic diseases. The median duration of participation in the DMP was ten years (range 2–10 years). The following main categories were identified during the analysis of the interview transcripts: personalized care, cooperation, patient role, and PN role.

3.3.1. Personalized Care

Patients were generally positive towards the presented manner of personalized care, especially regarding the integral health status assessment and the use of individual care plans. The integral health assessment may give patients and HCPs better insight into and focus on holistic well-being, and may account better for comorbidity, disease interaction, and psychological factors. It may detect issues affecting patients' well-being and identify those who require more support. It may also improve working relationships by shifting towards a more personal approach rather than a disease focus. Two participants were content with their current care and expected no benefits from the new approach. The PC-IC approach would invite patients to be more involved in their healthcare. Having the care plan at home could help remind and motivate them and allow easier involvement of informal caregivers/social support systems. The wording of the care plan should be easy to understand. Some participants called for more flexibility regarding individual care plans, so that they are adaptive to patients' needs and unexpected circumstances, and requested options to communicate their questions and concerns to their PN after the plan is formulated. Longer consultation time would allow for more personal attention and opportunities for better patient education, and was expected to benefit health outcomes. One participant believed that too much consultation time was reserved for patients. Some participants suggested adjusting the consultation frequency and duration according to each patient's individual needs. This may allow PNs to direct their efforts more efficiently (Q14).

3.3.2. Co-Operation

Participants saw the benefits of being equal stakeholders in their own care. This may improve care participation and help them carry responsibility for their own health. Greater equality may also improve the working relationship with HCPs. Giving patients the opportunity to prepare for care consultations was seen as a way to improve participation and equality. Using digital questionnaires for assessing integral health status was appreciated by the participants. Avoiding time constraints when answering health-related questions may also allow more reflection on health, better-quality answers, and more time during consultations to explore the answers. One participant noted that completing the questionnaire allows patients to share thoughts about their health with their informal caregivers/social support system more easily (Q15). Some participants worried that patients with low literacy, language barriers, insufficient health skills, or insufficient computer skills may have difficulties using the questionnaire. One participant affirmed this, saying his low literacy made him feel insecure and uncertain when filling out questionnaires. One participant thought that thirty minutes was too long to fill out a questionnaire. Participants gave several suggestions regarding accessibility: intelligible and straightforward questions were seen as important, and further suggestions were visual instead of numeric response scales, a paper-version alternative, and a narrator function. One participant suggested a shorter alternative to the questionnaire. Using the questionnaire results would support both the patient and the PN, as both may gain better insight into the patient's current health status and its long-term course. This may provide a sense of control and assurance for the patient, may help both sides prepare for consultations, and, when the responses yield unexpected results, may help discover previously unacknowledged problems that affect the patient's health. Several participants mentioned that the color-coding of the results made them more insightful, while one participant found this too confrontational and judgmental towards patients. Several participants saw potential flaws in using the questionnaire results. Two participants warned that this could lead to a search for non-existent problems. One participant thought attention to psychological stressors was lacking, even though these can cause or amplify illness. Participants mentioned that the results should be kept simple, some suggesting that a summary would suffice. Some participants suggested additional questions: one suggested a question about literacy, another the inclusion of socioeconomic background, and a third questions about what mattered most in patients' lives (Q16). Participants also provided advice on the quality of communication with their HCPs, from which five requirements for communication emerged: trust, authenticity, empathy, constructiveness, and specificity. Trust improves patient openness and working relationships and requires continuity of care. Authentic personal interest makes patients feel seen and heard and is conducive to developing trust. Being empathetic may provide a sense of safety and comfort and lets patients know that they are supported. Being constructive may create a positive and motivational focus on the patient. Finally, adjusting one's approach to specific patient abilities and needs may benefit mutual understanding and working relationships.

3.3.3. Role of the Patient

Participants thought that gaining ownership and self-management was important and that patients are ultimately responsible for their actions regarding their well-being, but may often be unaware of their potential influence on it. Being aware of this may stimulate self-management. They noted the potential benefits of self-management but also expressed thoughts on factors limiting its attainability. Participants thought that experiencing ownership in care may motivate better adherence to treatment and healthcare advice and may facilitate the acceptance of advice. They noted that any level of self-management may be beneficial for this. Similar to the HCPs, the participating patients also mentioned that formulating personal health goals may contribute to more personalized care. Patient-specific factors such as personality traits, acceptance, and knowledge were thought to have an important limiting influence on attaining self-management (Q17).
Several participants noted that patients' responsibility extended to communication with the PN, as patients may choose to withhold information on topics such as mental problems or illiteracy, which may prevent them from receiving optimal care. Three participants elaborated on involving informal caregivers/social support systems, primarily spouses, during consultations and at home. Patients bringing their spouses to consultations may provide a source of information for the PN. The spouse may help retain information and provide support at home, as well as develop a better understanding of the patient and their problems. One participant noted that this involvement should be balanced with professional care, as a patient might value the opinion of their spouse more than that of the PN (Q18).

3.3.4. Role of the Practice Nurse

Participants saw benefits in the proposed role of the PN in providing a patient-centred model of care, but also mentioned several limiting factors and provided feedback on their perception of PNs' responsibilities in the process. The PN taking on the case-manager role may provide a more central viewpoint of patient well-being, in line with the integral health assessment. Having a central point of responsibility may also benefit continuity of care. However, some PNs may lack the knowledge and skills to deal with complex cases or the affinity to handle certain aspects of patient well-being. Guiding patients toward appropriate care when faced with these limitations was marked as a responsibility of the PN. One participant noted that patients may still prefer GP visits for certain problems regardless of the PN's capability. Resource constraints, such as the available time per patient, were seen as a potential limitation. Other PN responsibilities mentioned were supporting self-management and communication within the healthcare team. Participants noted that improving self-management depends on the PN creating opportunities to do so, which might require them to develop flexibility in their approach according to patients' needs. Two participants suggested that HCPs could provide summaries of consultations as additional support when formulating personal goals. Sharing relevant information between HCPs was seen as important in keeping care teams informed on patient health and may prevent patients from having to repeat their story several times. Participants had different opinions on GP involvement in their care. Most participants indicated that their GP did not need to be 'visibly' involved in their care, one saying that he expected the PN to have more relevant expertise than the GP, and another saying that the GP should be involved when deemed necessary. One participant saw merit in some, but limited, GP visibility, even if only once a year (Q19).

3.4. Finalization of Recommended PC-IC Approach (Phase 4)

3.4.1. Health Insurers

In general, the health insurers found the PC-IC approach a positive development towards integral, holistic, tailor-made care.
Their suggestions for further improvement were: to describe the inclusion criteria for the PC-IC approach in practice more clearly (for example, every patient with two or more chronic diseases); to pay more attention to the consequences of a future shortage of HCPs, for example by having patients prepare their consultations at home and by using more e-health applications; and to pay more attention to the required change in organisations and practices, because the implementation of the intervention determines whether the intervention is successful.

3.4.2. Dutch Centre of Expertise on Health Disparities (Pharos)

The experts from Pharos felt positive about the PC-IC approach to health and treatment because, in their experience, for many people in vulnerable positions not only disease but also context, abilities, and possibilities influence health. A digital questionnaire for the assessment of health status that is already used in Dutch hospitals and general practices (the Nijmegen Clinical Screening Instrument, or NCSI) was tested with people with limited health skills, which led to suggestions for improving the language use and layout of this digital questionnaire. In addition, Pharos provided feedback on the conceptual intervention model, which was found to be an unsuitable way to visualise and discuss these treatments: the model was considered too complicated and interfered with the integral approach.

3.4.3. Finalization of the PC-IC Approach

Based on the scientific literature, current practice guidelines, and the input of a variety of stakeholders, the holistic PC-IC approach for the management of patients with (multiple) chronic diseases in primary care was finalized in a meeting with relevant stakeholders of each primary care cooperative ( and ).
3.1.1. Scoping Review We identified 203 unique publications, of which 18 were included in the review ( ). Included publications were published between 2007 and 2019, of which 67% were in the last five years (2015–2019). All publications were in English and most were from the United States or the United Kingdom. Most publications stated there is still a lack of research and thus insufficient evidence for optimal clinical management of people with multiple chronic diseases [ , , ]. Only a few of the included studies focused on person-centred outcomes . Nonetheless, authors generally agreed that interventions that are generic in nature (i.e., not specific for the underlying condition(s)) and with a person-centred approach are most likely to result in health benefits for patients with chronic diseases and multimorbidity, in comparison to a single disease approach [ , , , ]. Assessment of Multiple Domains—Integral Health Status Besides the medical domain, authors recommended paying attention to other domains of life as well, i.e., to functional limitations, mental health, and social functioning [ , , , , , , , , , ]. Patients with limited physical, emotional, and financial capacities are most disrupted by their chronic illness, but interventions to support these particular patient capacities have been scarcely studied . With regard to mental health, it is recommended to discuss this domain with patients and to actively monitor signs of anxiety, distress, and depression . For the social domain, social circumstances, including social support, living conditions, and financial constraints should be considered . Health professionals are encouraged to involve relatives or other informal caregivers in key decisions about the management of the patient’s health, if the patient so desires [ , , ]. In addition, the needs of these relatives should be considered as well . By including all of these domains, interventions have the potential to better address health inequalities in the population . We summarized the multiple domains in the concept of integral health status ( ). Case Management Case management is considered to be an effective way to support patients in achieving their goals and communicating with other HCPs . Case managers are advised to perform regular face-to-face assessments with the patient . Establishing a partnership between different disciplines (i.e., primary care physicians, medical specialists, nurses, mental health professionals, and social care workers) may provide the key to improving care for patients with multimorbidity and psychological distress . The patient should also be part of this team . Communication and coordination across health professionals are considered essential in providing multimorbidity care [ , , , , , ]. To improve partnership and communication between health professionals and the patient and family, it is recommended to work in small teams with dedicated contact persons on both sides . Clinical Assessment Multiple publications recommend assessing disease burden by determining how day-to-day life is affected by the patient’s health problems and establishing how health problems and treatments interact . Examples of health problems influencing disease burden are chronic pain, depression and anxiety, and incontinence . Another recommendation is to assess the burden of treatment because this can greatly influence patients’ quality of life [ , , , , , ]. 
For example, NICE recommends discussing the number of healthcare appointments a patient has and the format in which they take place, the number of non-pharmacological treatments, the assessment of polypharmacy, and the effects of all treatments on mental health or well-being . An annual medication review is recommended to evaluate the risks, benefits, possible interactions, and treatment adherence for each drug the patient uses . Finally, Muth et al. noticed that the management of risk factors for future disease can be a major treatment burden for patients with multimorbidity and should be carefully considered when optimizing care . Patient Preferences and Priorities Many studies described the importance to elicit patients’ preferences and priorities for care [ , , , , , , , ]. Addressing a patient’s priorities helps to minimize adverse effects of psychological distress . Using these preferences and priorities, together with the health professional’s clinical expertise and based on the best available evidence, individual goals for care should be determined [ , , ]. In this conversation, health professionals should also explore, without any assumptions, to what extent a patient wants to be involved in decision-making . Another important factor to consider when discussing goals with patients with multimorbidity is life expectancy and prognosis of the conditions . Care Plan After prioritizing the patient’s problems, a care plan should be drafted, which sets out realistic treatment goals, monitoring, treatment, prevention, (self-)management advice, responsibility for coordination of care, and timing of follow-up through shared decision-making [ , , ]. The plan should be shared with other involved professionals, the patient, and the family . When choosing interventions, it is advised to use the best available evidence, but to also recognize the limitations of the evidence base for patients with multimorbidity and to check if an intervention is effective in terms of patient-related outcomes . Possible interventions should be tailored and adapted to a patient’s individual needs and shared decision-making should be used to maximize the impact of interventions [ , , ]. The key process elements of the PC-IC approach that we could retrieve from the included publications are summarized in . 3.1.2. Document Analysis For this phase, we analysed three clinical guidelines of the Dutch College of General Practitioners and three national care standards for DM2, COPD, and CVD [ , , , , , ]. The document analysis resulted in a list of categories with unique key interventions for disease-specific and holistic care ( ) which was converted into a draft conceptual intervention model for the PC-IC approach. After processing feedback from stakeholders as described in , and below, this resulted in a graphical representation of the final intervention model for use in daily practice.
We identified 203 unique publications, of which 18 were included in the review ( ). Included publications were published between 2007 and 2019, of which 67% were in the last five years (2015–2019). All publications were in English and most were from the United States or the United Kingdom. Most publications stated there is still a lack of research and thus insufficient evidence for optimal clinical management of people with multiple chronic diseases [ , , ]. Only a few of the included studies focused on person-centred outcomes . Nonetheless, authors generally agreed that interventions that are generic in nature (i.e., not specific for the underlying condition(s)) and with a person-centred approach are most likely to result in health benefits for patients with chronic diseases and multimorbidity, in comparison to a single disease approach [ , , , ]. Assessment of Multiple Domains—Integral Health Status Besides the medical domain, authors recommended paying attention to other domains of life as well, i.e., to functional limitations, mental health, and social functioning [ , , , , , , , , , ]. Patients with limited physical, emotional, and financial capacities are most disrupted by their chronic illness, but interventions to support these particular patient capacities have been scarcely studied . With regard to mental health, it is recommended to discuss this domain with patients and to actively monitor signs of anxiety, distress, and depression . For the social domain, social circumstances, including social support, living conditions, and financial constraints should be considered . Health professionals are encouraged to involve relatives or other informal caregivers in key decisions about the management of the patient’s health, if the patient so desires [ , , ]. In addition, the needs of these relatives should be considered as well . By including all of these domains, interventions have the potential to better address health inequalities in the population . We summarized the multiple domains in the concept of integral health status ( ). Case Management Case management is considered to be an effective way to support patients in achieving their goals and communicating with other HCPs . Case managers are advised to perform regular face-to-face assessments with the patient . Establishing a partnership between different disciplines (i.e., primary care physicians, medical specialists, nurses, mental health professionals, and social care workers) may provide the key to improving care for patients with multimorbidity and psychological distress . The patient should also be part of this team . Communication and coordination across health professionals are considered essential in providing multimorbidity care [ , , , , , ]. To improve partnership and communication between health professionals and the patient and family, it is recommended to work in small teams with dedicated contact persons on both sides . Clinical Assessment Multiple publications recommend assessing disease burden by determining how day-to-day life is affected by the patient’s health problems and establishing how health problems and treatments interact . Examples of health problems influencing disease burden are chronic pain, depression and anxiety, and incontinence . Another recommendation is to assess the burden of treatment because this can greatly influence patients’ quality of life [ , , , , , ]. 
For example, NICE recommends discussing the number of healthcare appointments a patient has and the format in which they take place, the number of non-pharmacological treatments, the assessment of polypharmacy, and the effects of all treatments on mental health or well-being . An annual medication review is recommended to evaluate the risks, benefits, possible interactions, and treatment adherence for each drug the patient uses . Finally, Muth et al. noticed that the management of risk factors for future disease can be a major treatment burden for patients with multimorbidity and should be carefully considered when optimizing care . Patient Preferences and Priorities Many studies described the importance to elicit patients’ preferences and priorities for care [ , , , , , , , ]. Addressing a patient’s priorities helps to minimize adverse effects of psychological distress . Using these preferences and priorities, together with the health professional’s clinical expertise and based on the best available evidence, individual goals for care should be determined [ , , ]. In this conversation, health professionals should also explore, without any assumptions, to what extent a patient wants to be involved in decision-making . Another important factor to consider when discussing goals with patients with multimorbidity is life expectancy and prognosis of the conditions . Care Plan After prioritizing the patient’s problems, a care plan should be drafted, which sets out realistic treatment goals, monitoring, treatment, prevention, (self-)management advice, responsibility for coordination of care, and timing of follow-up through shared decision-making [ , , ]. The plan should be shared with other involved professionals, the patient, and the family . When choosing interventions, it is advised to use the best available evidence, but to also recognize the limitations of the evidence base for patients with multimorbidity and to check if an intervention is effective in terms of patient-related outcomes . Possible interventions should be tailored and adapted to a patient’s individual needs and shared decision-making should be used to maximize the impact of interventions [ , , ]. The key process elements of the PC-IC approach that we could retrieve from the included publications are summarized in .
Besides the medical domain, authors recommended paying attention to other domains of life as well, i.e., to functional limitations, mental health, and social functioning [ , , , , , , , , , ]. Patients with limited physical, emotional, and financial capacities are most disrupted by their chronic illness, but interventions to support these particular patient capacities have been scarcely studied . With regard to mental health, it is recommended to discuss this domain with patients and to actively monitor signs of anxiety, distress, and depression . For the social domain, social circumstances, including social support, living conditions, and financial constraints should be considered . Health professionals are encouraged to involve relatives or other informal caregivers in key decisions about the management of the patient’s health, if the patient so desires [ , , ]. In addition, the needs of these relatives should be considered as well . By including all of these domains, interventions have the potential to better address health inequalities in the population . We summarized the multiple domains in the concept of integral health status ( ).
Case management is considered to be an effective way to support patients in achieving their goals and communicating with other HCPs . Case managers are advised to perform regular face-to-face assessments with the patient . Establishing a partnership between different disciplines (i.e., primary care physicians, medical specialists, nurses, mental health professionals, and social care workers) may provide the key to improving care for patients with multimorbidity and psychological distress . The patient should also be part of this team . Communication and coordination across health professionals are considered essential in providing multimorbidity care [ , , , , , ]. To improve partnership and communication between health professionals and the patient and family, it is recommended to work in small teams with dedicated contact persons on both sides .
Multiple publications recommend assessing disease burden by determining how day-to-day life is affected by the patient’s health problems and establishing how health problems and treatments interact . Examples of health problems influencing disease burden are chronic pain, depression and anxiety, and incontinence . Another recommendation is to assess the burden of treatment because this can greatly influence patients’ quality of life [ , , , , , ]. For example, NICE recommends discussing the number of healthcare appointments a patient has and the format in which they take place, the number of non-pharmacological treatments, the assessment of polypharmacy, and the effects of all treatments on mental health or well-being . An annual medication review is recommended to evaluate the risks, benefits, possible interactions, and treatment adherence for each drug the patient uses . Finally, Muth et al. noticed that the management of risk factors for future disease can be a major treatment burden for patients with multimorbidity and should be carefully considered when optimizing care .
Many studies described the importance to elicit patients’ preferences and priorities for care [ , , , , , , , ]. Addressing a patient’s priorities helps to minimize adverse effects of psychological distress . Using these preferences and priorities, together with the health professional’s clinical expertise and based on the best available evidence, individual goals for care should be determined [ , , ]. In this conversation, health professionals should also explore, without any assumptions, to what extent a patient wants to be involved in decision-making . Another important factor to consider when discussing goals with patients with multimorbidity is life expectancy and prognosis of the conditions .
After prioritizing the patient’s problems, a care plan should be drafted, which sets out realistic treatment goals, monitoring, treatment, prevention, (self-)management advice, responsibility for coordination of care, and timing of follow-up through shared decision-making [ , , ]. The plan should be shared with other involved professionals, the patient, and the family . When choosing interventions, it is advised to use the best available evidence, but to also recognize the limitations of the evidence base for patients with multimorbidity and to check if an intervention is effective in terms of patient-related outcomes . Possible interventions should be tailored and adapted to a patient’s individual needs and shared decision-making should be used to maximize the impact of interventions [ , , ]. The key process elements of the PC-IC approach that we could retrieve from the included publications are summarized in .
For this phase, we analysed three clinical guidelines of the Dutch College of General Practitioners and three national care standards for DM2, COPD, and CVD [ , , , , , ]. The document analysis resulted in a list of categories with unique key interventions for disease-specific and holistic care ( ) which was converted into a draft conceptual intervention model for the PC-IC approach. After processing feedback from stakeholders as described in , and below, this resulted in a graphical representation of the final intervention model for use in daily practice.
A total of 56 HCPs were invited to participate in the online qualitative survey study. Fifty-two (93%) responded and 10 were asked follow-up questions to clarify the responses of their initial input. The majority of the participants consisted of GPs (n = 16) and PNs (n = 15), but several other disciplines were also involved ( ). The results of the survey were categorized as: general comments on the PC-IC approach and comments on the individual phases of the care process (i.e., assessment; setting personal health goals; choosing interventions; individual care plan; evaluation). 3.2.1. General Comments In general, most participants agreed with the underlying vision of the PC-IC approach, namely that person-centred and holistic care would improve the quality of care for patients with one or more chronic diseases (Q1, see ). It is likely to lead to more insight into the patient’s health status and any underlying problems. Using the PC-IC approach could increase the motivation of the patient for behavioural change and therefore may improve therapy compliance and health status. Many participants expect that this approach will initially take up more time, but that this time will be restored in the future. In the long term, therefore, the approach could save time and lead to more efficient provision of care (Q2). Another anticipated advantage of the PC-IC approach is the cyclical aspect, which ensures that the process continues and the patient’s health status is checked repeatedly. Some participants liked the fact that the PC-IC approach has a strong theoretical basis and would give patients more control and responsibility (Q3). According to some participants, a potential disadvantage of the approach could be that it may be too time-consuming, both for the HCP and for the patient (Q4). Therefore, some participants considered it not feasible to implement the approach in daily practice in its current form. In addition, some participants doubted the magnitude of the positive effects on the quality of care and patients’ health of the PC-IC approach. In addition, participants questioned which patients the care program would be suitable. Some thought it would be useful for all patients, whereas others suggested using it only for the more complex patients. Others indicated that the program may be too complicated for people with limited health skills (Q5). 3.2.2. Assessment of Integral Health Status Assessing patients’ integral health status was considered a positive development by almost all participants, who indicated that a broader assessment of health status may have positive effects for both the patient and the HCP. It provides insight into the connection between health problems and their underlying causes for both parties. This creates more awareness and motivation for change in patients, especially if the underlying cause of these health problems concerns a domain other than the medical domain (Q6). Involving family members or informal caregivers in discussing the overall health situation was also mentioned as a strong point. They can often provide useful additional information and may be supportive during treatment. Assessing and discussing integral health status also provides clear goals and priorities for the patient. Therefore, participants considered the integral health status a suitable way to map out complex patients. Filling out a questionnaire online (at home) helps the patient better prepare and saves time during the consultation. 
A disadvantage of focusing on integral health status instead of the disease-oriented approach might be that the medical aspects may not be sufficiently addressed and the severity of individual chronic diseases becomes less clear to patients. In addition, HCPs feared a patient may not want to talk about other areas of life as he/she may consider them irrelevant to the condition. It was also mentioned that making a more elaborate assessment of the patient’s health status could be confrontational for some, especially for those with many problems (Q7). 3.2.3. Setting Personal Health Goals Most participants were enthusiastic about setting personal health goals through shared decision-making. The most important advantage mentioned was that it may motivate the patient toward behavioural change. Contributing factors to motivation were awareness, commitment, and responsibility on the side of the patient. Setting personal goals also benefits the HCP, who gains more insight into the patient’s priorities, and is more in tune with the patient, which could make interventions more effective (Q8). Participants also mentioned the disadvantages and pitfalls of setting personal goals, such as that the importance of disease control might be overlooked (Q9). In addition, HCPs mentioned the risk that the patient sets unattainable goals, which can demotivate both the patient and the HCP. 3.2.4. Choosing Interventions Although the graphical representation of the PC-IC conceptual model for use in daily practice and the accompanying schematic overview of existing key interventions to support the management of patients with chronic conditions (see ) was appreciated by many participants, the graphical representation in its initial form as presented to the participants was deemed confusing by some of them due to the inclusion of too much information in one visualisation. Without further explanation, this makes the model difficult to understand (Q10). Participants also mentioned that it is difficult to create a static model for the supply of interventions, which will usually vary between regions and possibly change over time. 3.2.5. Individual Care Plan Participants saw many advantages of a care plan, both for the patient and for the HCP. The most important advantage is that the patient and the various HCPs involved may share the same specific personal goals, which makes communication between patient and caregiver and between different caregivers easier. The care plan provides a clear structure and benefits evaluation of personal goals. Participants also indicated that it fits well within a holistic approach (Q11). Disadvantages could be that it is time-consuming to draw up the plan, that the conversation with the patient can become subordinate to the plan, that making a plan is not yet sufficiently integrated into the ICT systems, and that a care plan can lead to the medicalisation of non-somatic problems. Disadvantages for patients could be that it can be invasive, that it can evoke resistance, and that it can create ambiguity if not all HCPs are on the same page (Q12). Participants wanted to include the following information in the proposed individual care plan format: the patient’s specific goals; the selected interventions; an overview of the HCPs involved and their responsibilities; and time of evaluation. 3.2.6. Evaluation Many participants found it unclear whether a patient-level evaluation had been included in the process of the PC-IC approach. 
They indicated that they missed this essential step and would prefer to add it (Q13). An advantage of an evaluation is that it provides new information which can be used in the next cycle. No disadvantages of an evaluation were mentioned.
In general, most participants agreed with the underlying vision of the PC-IC approach, namely that person-centred and holistic care would improve the quality of care for patients with one or more chronic diseases (Q1, see ). It is likely to lead to more insight into the patient’s health status and any underlying problems. Using the PC-IC approach could increase the motivation of the patient for behavioural change and therefore may improve therapy compliance and health status. Many participants expect that this approach will initially take up more time, but that this time will be restored in the future. In the long term, therefore, the approach could save time and lead to more efficient provision of care (Q2). Another anticipated advantage of the PC-IC approach is the cyclical aspect, which ensures that the process continues and the patient’s health status is checked repeatedly. Some participants liked the fact that the PC-IC approach has a strong theoretical basis and would give patients more control and responsibility (Q3). According to some participants, a potential disadvantage of the approach could be that it may be too time-consuming, both for the HCP and for the patient (Q4). Therefore, some participants considered it not feasible to implement the approach in daily practice in its current form. In addition, some participants doubted the magnitude of the positive effects on the quality of care and patients’ health of the PC-IC approach. In addition, participants questioned which patients the care program would be suitable. Some thought it would be useful for all patients, whereas others suggested using it only for the more complex patients. Others indicated that the program may be too complicated for people with limited health skills (Q5).
Assessing patients’ integral health status was considered a positive development by almost all participants, who indicated that a broader assessment of health status may have positive effects for both the patient and the HCP. It provides insight into the connection between health problems and their underlying causes for both parties. This creates more awareness and motivation for change in patients, especially if the underlying cause of these health problems concerns a domain other than the medical domain (Q6). Involving family members or informal caregivers in discussing the overall health situation was also mentioned as a strong point. They can often provide useful additional information and may be supportive during treatment. Assessing and discussing integral health status also provides clear goals and priorities for the patient. Therefore, participants considered the integral health status a suitable way to map out complex patients. Filling out a questionnaire online (at home) helps the patient better prepare and saves time during the consultation. A disadvantage of focusing on integral health status instead of the disease-oriented approach might be that the medical aspects may not be sufficiently addressed and the severity of individual chronic diseases becomes less clear to patients. In addition, HCPs feared a patient may not want to talk about other areas of life as he/she may consider them irrelevant to the condition. It was also mentioned that making a more elaborate assessment of the patient’s health status could be confrontational for some, especially for those with many problems (Q7).
Most participants were enthusiastic about setting personal health goals through shared decision-making. The most important advantage mentioned was that it may motivate the patient toward behavioural change. Contributing factors to motivation were awareness, commitment, and responsibility on the side of the patient. Setting personal goals also benefits the HCP, who gains more insight into the patient’s priorities, and is more in tune with the patient, which could make interventions more effective (Q8). Participants also mentioned the disadvantages and pitfalls of setting personal goals, such as that the importance of disease control might be overlooked (Q9). In addition, HCPs mentioned the risk that the patient sets unattainable goals, which can demotivate both the patient and the HCP.
Although many participants appreciated the graphical representation of the PC-IC conceptual model for use in daily practice and the accompanying schematic overview of existing key interventions to support the management of patients with chronic conditions (see ), some deemed the graphical representation in its initial form, as presented to the participants, confusing due to the inclusion of too much information in one visualisation. Without further explanation, this makes the model difficult to understand (Q10). Participants also mentioned that it is difficult to create a static model for the supply of interventions, which will usually vary between regions and possibly change over time.
Participants saw many advantages of a care plan, both for the patient and for the HCP. The most important advantage is that the patient and the various HCPs involved may share the same specific personal goals, which makes communication between patient and caregiver and between different caregivers easier. The care plan provides a clear structure and facilitates the evaluation of personal goals. Participants also indicated that it fits well within a holistic approach (Q11). Disadvantages could be that it is time-consuming to draw up the plan, that the conversation with the patient can become subordinate to the plan, that making a plan is not yet sufficiently integrated into the ICT systems, and that a care plan can lead to the medicalisation of non-somatic problems. Disadvantages for patients could be that it can be invasive, that it can evoke resistance, and that it can create ambiguity if not all HCPs are on the same page (Q12). Participants wanted to include the following information in the proposed individual care plan format: the patient’s specific goals; the selected interventions; an overview of the HCPs involved and their responsibilities; and the time of evaluation.
Many participants found it unclear whether a patient-level evaluation had been included in the process of the PC-IC approach. They indicated that they missed this essential step and would prefer to add it (Q13). An advantage of an evaluation is that it provides new information which can be used in the next cycle. No disadvantages of an evaluation were mentioned.
Twelve patients were invited for the interview study. One patient did not want to participate, two were not eligible as they did not receive care in a DMP, and nine consented to be interviewed. Data saturation was reached after the first eight interviews. Eight patients (88.8%) were male, and their mean age was 65 years (range 58–79 years). One patient had COPD, three had CVD, two had DM2, and three had a combination of these chronic diseases. The median duration of participation in the DMP was ten years (range 2–10 years). The following main categories were identified during the analysis of the interview transcripts: personalized care, cooperation, patient role, and PN role.

3.3.1. Personalized Care

Patients were generally positive towards the presented manner of personalized care, especially regarding the integral health status assessment and the use of individual care plans. The integral health assessment may give patients and HCPs better insight and focus on holistic well-being, and may account better for comorbidity, disease interaction, and psychological factors. It may detect issues affecting patients’ well-being and identify those who require more support. It may also improve working relationships by shifting towards a more personal approach rather than a disease focus. Two participants were content with their current care and expected no benefits from the new approach. The PC-IC approach would invite patients to be more involved in their healthcare. Having the care plan at home could help remind and motivate them and allow easier involvement of informal caregivers/social support systems. The wording of the care plan should be easy to understand. Some participants called for more flexibility regarding individual care plans, to be adaptive to patients’ needs and unexpected circumstances, and requested options to communicate their questions and concerns to their PN after the plan is formulated. Longer consultation time would allow for more personal attention and opportunities for better patient education, and is expected to benefit health outcomes. One participant believed that too much consultation time was reserved for patients. Some participants suggested adjusting the consultation frequency and duration according to each patient’s individual needs. This may allow PNs to direct their efforts more efficiently (Q14).

3.3.2. Co-Operation

Participants saw the benefits of being equal stakeholders in their own care. This may improve care participation and help them carry responsibility for their own health. Greater equality may also improve the working relationship with HCPs. Giving patients the opportunity to prepare for care consultations was seen as a way to improve participation and equality. Using digital questionnaires for assessing integral health status was appreciated by the participants. Avoiding time constraints when answering health-related questions may also allow more reflection on health, better-quality answers, and more time during consultations to explore the answers. One participant noted that completing the questionnaire allows patients to share thoughts about their health with their informal caregivers/social support system more easily (Q15). Some participants worried that patients with low literacy, language barriers, insufficient health skills, or insufficient computer skills may have difficulties using the questionnaire. One participant affirmed this, saying his low literacy made him feel insecure and uncertain when filling out questionnaires. One participant thought that thirty minutes was too long to fill out a questionnaire. Participants gave several suggestions regarding accessibility. Intelligible and straightforward questions were seen as important. Further suggestions were: visual instead of numeric response scales, a paper-version alternative, and a narrator function. One participant suggested a shorter alternative to the questionnaire. Using the questionnaire results would support the patient and the PN, as both may get better insight into the patient’s current health status and its long-term course. This may provide a sense of control and assurance for the patient, may help both sides to prepare for consultations, and, when the responses yield unexpected results, may help discover previously unacknowledged problems that affect the patient’s health. Several participants mentioned that the color-coding of the results made them more insightful, while one participant found this too confrontational and judgmental towards patients. Several participants saw potential flaws in using the questionnaire results. Two participants warned that this could lead to a search for non-existent problems. One participant thought attention to psychological stressors was lacking, while these can cause or amplify illness. Participants mentioned that the results should also be kept simple, some suggesting that a summary would suffice. Some participants suggested additional questions: one suggested a question about literacy, another suggested including socioeconomic background, and a third suggested questions regarding what mattered most in the life of patients (Q16). Participants also provided advice on the quality of communication with their HCPs, from which five requirements for communication emerged: trust, authenticity, empathy, constructiveness, and specificity. Trust improves patient openness and working relationships and requires continuity of care. Authentic personal interest makes patients feel seen and heard, and is conducive to developing trust. Being empathetic may provide a sense of safety and comfort, and lets patients know that they are supported. Being constructive may create a positive and motivational focus on the patient. Finally, adjusting one’s approach to specific patient abilities and needs may benefit mutual understanding and working relationships.

3.3.3. Role of the Patient

Participants thought that gaining ownership and self-management was important and that patients are ultimately responsible for their actions regarding their well-being, but may often be unaware of their potential influence on it. Being aware of this may stimulate self-management. They noted the potential benefits of self-management but also expressed thoughts on factors limiting its attainability. Participants thought that experiencing ownership in care may motivate better adherence to treatment and healthcare advice, and may facilitate acceptance of advice. They noted that any level of self-management may be beneficial for this. Similar to the HCPs, the participating patients also mentioned that formulating personal health goals may contribute to more personalized care. Patient-specific factors such as personality traits, acceptance, and knowledge were thought to have an important limiting influence on attaining self-management (Q17). Several participants noted that patients’ responsibility extended to communication with the PN, as patients may choose to withhold information on topics such as mental problems or illiteracy, but this may prevent them from receiving optimal care. Three participants elaborated on involving informal caregivers/social support systems, primarily spouses, during consultations and at home. Patients bringing their spouses to consultations may provide a source of information for the PN. The spouse may help retain information and provide support at home, as well as develop more understanding of the patient and their problems themselves. One participant noted that this involvement should be balanced with professional care, as a patient might value the opinion of their spouse more than that of the PN (Q18).

3.3.4. Role of the Practice Nurse

Participants saw benefits in the proposed role of the PN in providing a patient-centred model of care, but also mentioned several limiting factors and provided feedback on their perception of PNs’ responsibilities in the process. The PN taking on the case-manager role may provide a more central viewpoint of patient wellbeing, in line with the integral health assessment. Having a central point of responsibility may also benefit continuity of care. However, some PNs may lack the knowledge and skills to deal with complex cases or the affinity to handle certain aspects of patient well-being. Guiding patients toward appropriate care when faced with these limitations was marked as a responsibility of the PN. One participant noted that patients may still prefer GP visits for certain problems regardless of the PN’s capability. Resource constraints, such as available time per patient, were seen as a potential limitation. Other PN responsibilities mentioned concerned supporting self-management and communication within the healthcare team. Participants noted that improving self-management depends on the PN creating opportunities to do so, which might require them to develop flexibility in their approach according to patients’ needs. Two participants suggested that HCPs could provide summaries of consultations as additional support when formulating personal goals. Sharing relevant information between HCPs was seen as important in keeping care teams informed on patient health and may prevent patients from having to repeat their story several times. Participants had different opinions on GP involvement in their care. Most participants indicated that their GP did not need to be ‘visibly’ involved in their care, one saying that he expected the PN to have more relevant expertise than the GP, and another saying that the GP should be involved when deemed necessary. One participant saw merit in some, but limited, GP visibility, even if only once a year (Q19).
3.4.1. Health Insurers

In general, health insurers found the PC-IC approach a good and positive development to move towards integral and holistic tailor-made care. Their suggestions for further improvement were: to describe the inclusion criteria for the PC-IC approach in practice more clearly, for example, every patient with two or more chronic diseases; to pay more attention to the consequences of a future shortage of HCPs by having patients prepare their consultation at home and by using more e-health applications; and to pay more attention to the required change in organisations and practices, because the implementation of the intervention determines whether the intervention is successful.

3.4.2. Dutch Centre of Expertise on Health Disparities (Pharos)

The experts from Pharos felt positive about the PC-IC approach to health and treatment because they found that, for many people in vulnerable positions, not only disease, but also context, abilities, and possibilities influence health. A digital questionnaire for the assessment of health status that is already used in Dutch hospitals and general practices (the Nijmegen Clinical Screening Instrument, or NCSI) was tested with people with limited health skills, which led to suggestions for improving the language use and layout of this digital questionnaire. In addition, Pharos provided feedback on the conceptual intervention model, which was found to be an unsuitable way to visualise and discuss these treatments. The model was considered too complicated and interfered with the integral approach.

3.4.3. Finalization of the PC-IC Approach

Based on the scientific literature, current practice guidelines, and the input of a variety of stakeholders, the holistic PC-IC approach for the management of patients with (multiple) chronic diseases in primary care was finalized in a meeting with relevant stakeholders of each primary care cooperative ( and ).
4.1. Summary of Results

To our knowledge, this paper is the first to describe in detail the subsequent steps in the development of a person-centred and integrated care approach for people with (multiple) chronic conditions in primary care. In the first phase, the scoping review identified that a PC-IC approach for multimorbidity should comprise multiple domains of health status, a case manager, and a thorough assessment of patient preferences and priorities. These essential elements were incorporated into a conceptual model for the PC-IC approach. The document analysis resulted in a list of unique interventions. In the second phase, HCPs commented on the (dis)advantages of the conceptual model and provided suggestions for the improvement of the conceptual intervention model. The third phase consisted of a patient-level evaluation of the PC-IC approach. Patients commented on the conceptual model and indicated that this approach could have many advantages, such as being more responsible for their own health and having a partnership with the HCP. In the final phase, health insurers and the Dutch Centre of Expertise on Health Disparities (Pharos) provided feedback on the model, after which the PC-IC approach was finalized in a meeting with relevant stakeholders of each of the three primary care cooperatives involved.

4.2. Comparison to Existing Literature & Interpretation

Our findings are supported by other reported interventions to deliver personalized primary care for patients with chronic conditions [ , , ]. Similar to our approach, these interventions all include a PC-IC consultation, case management, personal goal setting, and network support. Differences between the respective approaches consist mainly of the targeted population and the way eligible patients are selected. The most recent interventions focus on targeting multimorbidity or ‘high-need’ patients. For example, Salisbury et al. developed and evaluated the 3D approach for people with multimorbidity in the UK, in which general practices offered greater continuity of care and biannual person-centred, comprehensive health reviews . They selected patients with at least three types of chronic diseases and, although patients experienced the provided care as more person-centred, no favourable effects on HRQoL, general well-being, or patients’ treatment burden were observed . Another intervention, also developed in the Netherlands, divides patients into low-, moderate-, and high-care-need subgroups, and only the high-care-need subgroup receives the intervention . The effects of this intervention have not been reported yet, but a likely advantage of targeting all patients with chronic conditions, as we aim to do with our intervention, is that it may reduce overtreatment in patients who actually need less care than they currently receive according to the strict DMP protocols. This may create more time for patients who need more attention from their primary care HCPs. The results from our interviews with patients suggest that the developed PC-IC approach may solve several problems in current chronic care. For example, Rimmelzwaan et al. found that people with multimorbidity missed an approach that focuses on the patient “as a whole” . These authors also observed that the participants in their study reported that HCPs should treat their patients as equals. Our study shows that patients believe that this new PC-IC approach could improve holistic care, time, and attention in consultations with the PN, as well as the partnership between patients and HCPs. Furthermore, our findings are similar to research by Rijken et al. , who found that people with multimorbidity have the following priorities in their chronic care: having one health record shared by all HCPs involved in their care, regular comprehensive assessments, and receiving support from their HCPs to self-manage their chronic conditions. In our study, we have predominantly focused on the micro-level service delivery aspects of PC-IC care. However, to support the PC-IC approach, other levels and components of integrated care, i.e., the meso and macro levels of service delivery, leadership and governance, workforce, financing, technologies and medical products, and information and research, have to be considered and studied as well . Regarding financing, Bour et al. have studied a complementary payment model to this PC-IC approach, which is published elsewhere in this journal .

4.3. Strengths & Limitations

A particular strength of our study was the rigorous and extensive development process per region with relevant stakeholders. Basing the development of the PC-IC approach on the existing literature and the input from stakeholders makes the foundation of the conceptual model as sound as possible before the scheduled feasibility study is executed, making the feasibility study more effective. Because the development process was finalized per region, it could be tailored to fit the regional situation. We did not, however, further analyse regional differences, which limits the generalisability of the results to other regions in the Netherlands or other countries. Another advantage of our study was that HCPs and patients could comment on a tangible conceptual model, which made their feedback more specific and useful for modifying the concept. A final strength of the study was the high participation rate of HCPs. This may be due to the method of online interviews, because of the advantages of online interviewing: significant savings in time for participants and the opportunity for participants to carefully formulate a response to a particular question . Another explanation could be the compensation HCPs received from the regional primary care cooperatives to participate in the study. We also acknowledge some limitations. First, at the beginning of the project we performed a scoping review on multimorbidity, but the scope of the project later expanded to people with one or more chronic diseases, partly because of the feedback from participating HCPs. Nonetheless, we think the findings are also relevant for patients with single chronic diseases, as problems may still arise in other areas of life, and PC-IC also seems effective in single-disease cases . In addition, the scoping review is currently somewhat outdated. However, we decided not to update the scoping review at this stage, as the intervention is based on the consecutive phases of the development process. Second, in the interview study (Phase 3), eight of the nine patients interviewed were male, which limited our ability to take the role of gender into account when adapting the draft conceptual PC-IC model from the patient perspective. This clearly reduced the diversity of the study sample and may also explain why data saturation was reached rather quickly. Third, due to the influence of COVID-19 restriction measures, the method of interviewing patients had to be revised. To limit the potential exposure of patients with chronic diseases to the SARS-CoV-2 virus, we chose to conduct the interviews by phone. The pitfall of this method is that non-verbal signals cannot be seen, which might lead to different conversations and different observations from the interviews. An advantage might be that the patient feels more anonymous and is more likely to respond frankly, although the topic of our study was not particularly sensitive. Fourth, HCPs and patients commented on a theoretical model. After actually experiencing it in their practices, their views and opinions may be different. Therefore, the experiences of patients and HCPs should also be examined after the model has been implemented in the upcoming feasibility study.

4.4. Implications

4.4.1. Recommendations for Future Research

Our next studies will focus on the feasibility and the actual effects of the developed PC-IC approach in terms of the Quadruple Aim, in which we will focus on health-related quality of life, self-management behaviour, and patient experience, as outcome variables in research on the effects of PC-IC should be tailored to be person-centred . As part of the cluster randomised trial that is currently underway, we assess barriers and facilitators of switching from the current to the new (PC-IC) approach in several domains (i.e., the professional, patient, organizational, and financial domains). The insights we gain from this will be part of the recommendations regarding the implementation of the PC-IC approach elsewhere. Furthermore, more research is needed on the acceptability of this approach in patients with limited health literacy.

4.4.2. Recommendations for Practice

Although this study offers some important insights for HCPs searching for a PC-IC approach to chronic care, the anticipated superiority of this approach relative to the current DMPs has yet to be studied.
Based on the scientific literature, current practice guidelines, and the input of a variety of stakeholders, we developed a holistic, person-centred and integrated approach for the management of patients with (multiple) chronic diseases in primary care. Future evaluation of the PC-IC approach will show if this approach leads to more favourable outcomes and should replace the current single-disease approach in the management of chronic conditions and multimorbidity in Dutch primary care.
Health Communication in the Time of COVID-19 Pandemic: A Qualitative Analysis of Italian Advertisements
The present work is specifically focused on this short and strategically planned communication and analyzes the entire Italian institutional advertising campaign broadcast during the COVID-19 emergency, through a qualitative investigation of its main characteristics.

1.1. Advertising during COVID-19

It is widely acknowledged that advertising acts as a “cultural operator” and, through micro-stories of everyday life, “it deliberately turns abstract notions into specific situations by precisely delineating features, contexts, dialogues and social interactions” (p. 3). Spots configure cultural imageries, stereotypes and tropes and are meticulously designed to portray and channel particular messages, shared by the involved agents, using specific visual, sound and textual methods. Specifically, in the world of advertising, connections and emotional links with the target audiences are usually guaranteed through production tactics involving identification and empathetic connections . These features remain even when public institutions promote advertising concerning both political issues and humanitarian goals . Taking into account the colossal scale of the social, economic and political events related to COVID-19, the attention of social scientists and scholars should be focused not only on the scrutiny of the events themselves but also on how they are narratively reported: cultural narratives generated by advertising can account for new forms of awareness and sensibilities . During the COVID-19 health emergency, in line with modified lifestyles and consumption habits, advertising had to adapt to these changing communities, even capitalizing on or adapting audiovisual spots. Especially in the first months of the pandemic, a change in content, language and images has been found, aimed at prioritizing an emotional message over product sponsorship . Thus, audiovisual spots played an essential role: brands gained a new social function and advertising changed its traditional role to offering key support in improving resilience, alleviating stress and catalyzing health and psychological management . Besides these already important features, institutional communication had to encourage healthy behaviors (such as social distancing, mask-wearing and vaccination) in a general effort to obtain individuals’ compliance . Thus, the search for methods of promoting behavior change becomes more and more time-sensitive. For at least the past 60 years, social psychology has widely acknowledged the importance of persuasive communication in trying to change people’s attitudes and behaviors, and these studies generated a consensus regarding the role of attitudes in affecting actions as well as concerning the existence of several variables moderating attitude–behavior relationships . These pathways were analyzed in line with several relevant public/private domains, such as tobacco use , sustainable holiday choices and HIV prevention , among other things. Specifically referring to this last issue, an interesting metanalysis has pointed out the several theoretical backgrounds implied in the explication of the relations between persuasive communication and changing actions, ranging from the health belief model and the protection–motivation theory to theories of reasoned action and of planned behavior , and from the social-cognitive theory to the information–motivation–behavioral skills model .
Therefore, several variables are invoked to explain and propose successful persuasive communications, including beliefs and emotions, perceived desirability and normative pressure, perceptions and behavioral intentions, knowledge and behavioral skills . In addition, even the combination of self-benefits and social norms has been identified as appealing in persuasive communications dealing with sustainable practices .

An integrative framework to analyze how different mechanisms in different situations can impact persuasion is the elaboration likelihood model (ELM; ). This model emphasizes the importance of motivation and the ability to elaborate a message as critical factors affecting how deeply individuals ultimately elaborate it, thus defining a dual route of persuasive message processing, namely the central and peripheral routes. Defined on the basis of full vs. reduced active engagement and evaluation of the information by the recipients, the central route involves complex cues requiring extensive cognitive effort, such as argument quality, rational appeals and informational cues. In contrast, the peripheral route encompasses more implicit and superficial aspects of the message, thus implying a more heuristic process, such as source attractiveness and prestige, emotional appeals, visual and sound effects and so on . In addition, this integrative model proposes “a link between the amount of elaboration people put into forming or changing an attitude and the strength of that attitude, with greater elaboration leading to greater strength” (p. 327). In other words, since not all persuasive communications are equal, the choice and use of different cues will imply different elaboration levels, thus producing different strength outcomes and, finally, different possibilities that attitudes will guide behaviors. A recent study concerning COVID-19 vaccination showed that both central and peripheral routes influenced individually perceived informativeness and perceived persuasiveness, in turn affecting attitudes towards vaccination and the intention to obtain the vaccine . Since public health communication and campaigns aim, especially during a sanitary emergency, to obtain massive adhesion to specific attitudes and behaviors, persuasive messages should be disseminated in the most conducive way. Consequently, the types of cues and the related levels of elaboration are of significant interest for the comprehension and improvement of persuasive processes.

1.2. The Current Study

This work aims to investigate how Italian public institutions focused on health communication by means of institutional spots during the different phases of the pandemic crisis. In this light, cultural issues and persuasive pathways were considered. Specifically, this work tried to answer the following research questions: (a) in line with the literature concerning persuasive communication, what were the main variables/factors that social advertising relied on when trying to affect health attitudes and behaviors, and (b) how were the different variables combined in order to propose specific communicative pathways following the different waves/phases of the COVID-19 pandemic. These research directions were explored in accordance with several variables, concerning: (i) the scopes of the spots; (ii) the major cultural narratives proposed by Italian institutional advertising; and (iii) in accordance with the elaboration likelihood model, the main types of central and peripheral cues.
An integrative framework for analyzing how different mechanisms in different situations can impact persuasion is the elaboration likelihood model (ELM; ). This model emphasizes the importance of motivation and the ability to elaborate a message as critical factors affecting how deeply individuals ultimately elaborate it, thus defining a dual route of persuasive message processing, namely the central and peripheral routes. Defined on the basis of full vs. reduced active engagement and evaluation of the information by the recipients, the central route involves complex cues requiring extensive cognitive effort, such as argument quality, rational appeals and informational cues. In contrast, the peripheral route encompasses more implicit and superficial aspects of the message, thus implying a more heuristic process, such as source attractiveness and prestige, emotional appeals, visual and sound effects and so on . In addition, this integrative model proposes “a link between the amount of elaboration people put into forming or changing an attitude and the strength of that attitude, with greater elaboration leading to greater strength” (p. 327). In other words, since not all persuasive communications are equal, the choice and use of different cues will imply different elaboration levels, thus producing different strength outcomes and, finally, different possibilities that attitudes will guide behaviors. A recent study concerning COVID-19 vaccination showed that both central and peripheral routes influenced individually perceived informativeness and perceived persuasiveness, in turn affecting attitudes towards vaccination and the intention to obtain the vaccine . Since public health communication and campaigns aim, especially during a health emergency, to obtain mass adherence to specific attitudes and behaviors, persuasive messages should be disseminated in the most conducive way. Consequently, the types of cues and the related levels of elaboration are of significant interest for the comprehension and improvement of persuasive processes. 1.2. The Current Study This work aims to investigate how Italian public institutions approached health communication by means of institutional spots during the different phases of the pandemic crisis. In this light, cultural issues and persuasive pathways were considered. Specifically, this work tried to answer the following research questions: (a) in line with the literature concerning persuasive communication, what were the main variables/factors that social advertising relied on when trying to affect health attitudes and behaviors, and (b) how were the different variables combined in order to propose specific communicative pathways following the different waves/phases of the COVID-19 pandemic. These research directions were explored in accordance with several variables, concerning: (i) the scopes of the spots; (ii) the major cultural narratives proposed by Italian institutional advertising; and (iii) in accordance with the elaboration likelihood model, the main types of central and peripheral cues. 2.1. Data We collected and analyzed 34 Italian spots (coinciding with the whole institutional campaign from March 2020 to December 2021), which were broadcast during the first four COVID-19 waves through the institutional national TV channels (RAI channels), social media (e.g., YouTube) and social networks (institutional official pages and profiles).
The whole corpus (see ) was available at the official Italian government website ( https://www.governo.it/it/node , accessed on 1 September 2022). 2.2. Coding Scheme and Procedure Spots were analyzed through qualitative multimodal content analysis. Based on the literature concerning institutional advertising during the pandemic and on the more general application of the ELM to advertising , a codebook on an Excel sheet was created (with each line as an item and each column as a variable). Two coders had a training session and were well-instructed on the different variables included in the research project; after having independently co-coded 20% of the sample, a joint discussion on disagreements was carried out and certain operational definitions were refined, thus obtaining a satisfactory reliability. In some cases, e.g., cultural narratives, values and gestures could be codified on more than one option. The coding activity was conducted in accordance with the following domains. (a) General (meta)data . In this domain, a categorization of the scopes of advertising was proposed, through bottom-up and top-down processes, as it enabled us to frame the main functions of spots. (b) Cultural narratives . We considered this variable essential since, through the scenarios and social interactions offered by the spots’ micro-stories, the representations of reality were transformed into cultural references, supporting how reality may be perceived and explained . (c) Central cues . As the favored route for accurate cognitive elaboration, the presence of information, the reference to morality and values and the type of argumentation were investigated. (d) Peripheral cues . The features of testimonials, images and soundtracks were considered significant but activated less elaborate reflections. These domains and variables were included in our analysis as they were able to explain the institutional and communicative efforts to activate both more general meaning-making processes and more contextualized pathways of content elaboration. The specific codifying variables are presented in .
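The reliability statistic behind the "satisfactory reliability" reported for the double coding is not specified in the text; for two coders assigning categorical labels, percent agreement together with Cohen's kappa is a common choice. The following minimal Python sketch shows how such agreement could be computed; the labels (a toy "cultural narrative" variable) and the choice of kappa are illustrative assumptions, not details taken from the study.

from collections import Counter

def percent_agreement(a, b):
    # Share of items on which the two coders assigned the same label.
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    # Chance-corrected agreement: (p_observed - p_expected) / (1 - p_expected).
    # Undefined (division by zero) if expected agreement is 1, i.e., both
    # coders always use the same single label.
    n = len(a)
    po = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical labels for the co-coded 20% subsample (one label per spot).
# Variables that allow more than one option per spot (e.g., cultural
# narratives) would need a set-based agreement measure instead.
coder1 = ["responsibility", "togetherness", "community", "resilience", "responsibility", "gratitude", "community"]
coder2 = ["responsibility", "togetherness", "resilience", "resilience", "responsibility", "gratitude", "community"]

print(f"Percent agreement: {percent_agreement(coder1, coder2):.2f}")
print(f"Cohen's kappa: {cohens_kappa(coder1, coder2):.2f}")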
In line with the broadcast date/time and with the main objects proposed by the audiovisual spots, three phases of the Italian institutional campaign concerning health communication during COVID-19 were identified. (1) Facing lockdown . Spots from 1 to 12 (broadcast from 11 February 2020 to 23 April 2020) were included in this round. The main scopes of these spots concerned hygiene rules for virus prevention, the presentation of supporting services to help manage the health emergency, the promotion of virtuous behaviors and emotional messages of union/solidarity. As for the cultural narrative domain (see ), the most recurring narratives concerned:
- call for collective responsibility and mutual protection (five spots);
- macro social changes (four spots) (telephonic and online services for retired people, doorstep pensions, online school activities and ads in opposition to violence against women);
- resilience and overcoming challenges (three spots);
- feeling of togetherness (three spots);
- sense of community (three spots);
- space and social atmosphere (two spots);
- gratitude (one spot) addressed to “our heroes” (physicians, security forces and other workers from various productive sectors).
Looking at the central cue domain (see ), five spots (out of 12) have no explicit informative claims and substantially coincide with union/solidarity messages. The others, concerning rational/informative issues, presented data in just one case (n. 11: “in Italy a woman is killed every three days […]. From 2000 to today, 3230 feminicides were committed: 1564 by the hands of their partner/ex-partner”). The moral domain is more other-oriented (eight spots) than self-oriented (four spots): in the first case, messages concerning reciprocity and mutual protection were proposed, while in the second case, references to vulnerable groups—e.g., older people, women at risk—or individual calls for self-protective measures were found. As for the values, eleven spots stressed the importance of responsibility and social justice for vulnerable people, accompanied by the value of national security (five spots); and the value of fighting against a war/challenge was conveyed in four spots. The argumentation slightly favors a one-sided perspective, i.e., it was preferred when practices, behaviors and services were proposed, whereas a two-sided argument was used in order to anticipate possible targets’ reactions (n.
3: “It is fair to stay home, these are the rules, and now, even if you are young, it’s time to comply with them”). As for the peripheral cue domain (see ), five spots either contained no testimonials or did not have well-identifiable ones (e.g., people are quickly shown and have no voice). In six spots, testimonials were celebrities, mainly from TV, cinema and theatre (four spots) and the music world (two spots). A single spot involving sportspeople included 10 top athletes from different sports. Testimonials were directly related to COVID-19 in a single video (spot no. 7), showing physicians, pharmacists and other professional figures. As for gestures, about half of the spots present iconic realistic items (e.g., washing hands and keeping distance): they accompany and strengthen what is said and serve to model the audience’s behaviors. A similar function was achieved by deictic (two spots) and batonic gestures (one spot), whereas a stronger emotional activation was promoted by symbolic ones (three spots). Spot no. 2 is “bilingual”, as it also uses Italian Sign Language, having a broader inclusive aim. As for the audiovisual domain, five spots show some kinds of moving images (e.g., empty public spaces and classrooms, children’s pictures); otherwise, almost neutral images and scenarios were shown. As for the soundtracks, eight spots were classified as emotional, four of which utilized well-known motifs/songs. (2) Living with COVID-19 . Spots from 13 to 24 (broadcast from 5 May 2020 to 28 October 2020) were included in this round. Even if officially set during the first Italian lockdown, spots from 13 to 15 were included as well, since they appeared as “transitional” messages aimed to convey public recommendations for facing the last days of lockdown. The main scopes proposed by the audiovisual spots concerned hygiene rules for virus prevention, supporting services for the health emergency and the promotion of virtuous behaviors. As for the cultural narrative domain (see ), we found:
- collective responsibility and mutual protection (nine times);
- sense of community (two times);
- having a resilient attitude (two times);
- social spaces and atmosphere (two times);
- macro social changes (one time) (high school qualification in person);
- the possibility of coming back (one time).
The central cue domain (see ) was constructed as follows: firstly, all the audiovisual spots contained informative cues, even if no data, graphs or percentages were proposed; secondly, half of them presented both self-oriented and other-oriented morality, with just two cases being self-oriented and three being other-oriented. Direct calls mostly utilized this mixed morality (e.g., “cover your mouth, nose and chin well”, spot no. 13), having both a subjective and public impact (e.g., “a simple precaution aimed to protect both your health and the health of others”, spot no. 13). As for values, responsibility and social justice for vulnerable people again played an essential role (nine times), accompanied by national security (six times) and fighting a war/challenge (two times). A reference to the research on COVID-19 (spot no. 16) was also shown. The argumentation was somewhat divided between one-sided and two-sided. When the second type of argumentation was used, it seemed to hint at possible public resistance to healthy attitudes and practices, as in the following example: “wearing the mask is not as easy as it seems” (spot no. 19).
As for the peripheral cue domain (see ), five spots have no testimonials, since they present stylized persons, a visual illustration of the COVID-19 virus or other images. The other videos show both ordinary persons (of different ages, roles and positions) and celebrities (in three spots). The celebrities came from the TV/theatre world; specifically, two were popular comedians. Spots involving these actors have a familiar atmosphere and present funny inserts (e.g., dialect words, mistakes in proposing a Latin adage). During this stage, no testimonial was directly related to the pandemic. As for gestures, five spots again presented iconic realistic gestures (e.g., washing hands, wearing the mask); in addition, both indexical (four times) and batonic gestures (three times) recurred. More generally, gestures appear intentionally marked to emphasize the advertising aims, which in this phase were similar to those of the first round. One spot also presented symbolic gestures, whereas spot no. 18 made use of realistic gestures with symbolic functions: opening the door, turning off the PC and taking off the mask all represent (prudently) regained freedom. As for the images, four spots presented emotional scenarios/activities (e.g., smiling, playing, empty spaces), whereas most contained relatively neutral images (e.g., everyday places and activities). Just three spots had an emotional soundtrack, whereas the others were classified as neutral. However, no popular songs/motifs were used. (3) The vaccine challenge . Spots from 25 to 34 (broadcast from 17 January 2021 to 29 December 2021) were included in this round. The scope of the spots concerned the vaccine awareness campaign and, in just one case, the supporting services for the health emergency (but again related to vaccine facilitation). As for the cultural narrative domain (see ), we observed the following:
- collective responsibility and mutual protection (five times);
- social spaces and atmosphere (eight times);
- resilience and overcoming challenges (seven times);
- togetherness (six times);
- sense of community (four times);
- the idea of coming back to past situations (three times);
- macro social changes (two times), referring to the risks of vaccine-related fake news;
- gratitude (one spot) addressed to members of the scientific field for their work with vaccines.
Looking at the central cue domain (see ), all the spots (except no. 25, which displayed explicit emotional elicitation) offered propositional content related to behavioral rules and vaccine application. However, data were provided in just two cases (e.g., “the vaccine reduces the risk of going to the intensive care unit by up to 90%”, spot no. 32). As for morality, five spots were other-oriented, three were self-oriented and three displayed mixed morality. The value of responsibility and social justice for vulnerable people again played a central role (seven times), accompanied by the values of fighting a war/challenge (seven times) and national security (six times). More than in the other rounds, references to the research on COVID-19 also recurred (four times). Argumentation was widely two-sided (nine spots): the legitimacy of doubting (spot no. 25) and the ease of being taken in by a hoax (spot no. 29) led to the addressing of socially widespread worries (e.g., “[vaccines] have passed all the testing procedures concerning safety”; spot no. 32).
As for the peripheral cue domain (see ), spots frequently made use of mixed testimonials (both celebrities and common people, seven times), whereas either popular or unknown persons were present in two spots. Celebrities were mainly from TV/theatre (six spots), sport (five spots) and music (four spots). In addition, the protagonists of four videos were also associated with the COVID-19 pandemic (physicians and/or researchers). As for gestures, two features were found: (a) the presence of (again marked) mixed gestures, categorized in all types; (b) the occurrence of a specific repeated and dynamic gesture, represented by the iconic–symbolic “V” (made through the conjoint opening of the index and middle fingers), that is first placed on one’s arm (to represent the act of vaccination iconically) and, in a second moment, is held in front of the body (to represent the victory symbolically). This value-charged gesture clearly communicates the association between the vaccination campaign and the defeat of the pandemic. As for the audiovisual cues, just two spots had neutral images and only a single one had a neutral soundtrack; the prevalent emotional soundtracks coincided with popular songs/motifs (two popular Italian singers specifically created one song for this situation). Therefore, broad emotional audiovisual activation was provided. During the health emergency related to the COVID-19 pandemic, extraordinary interventions and measures were taken. Health communication had to be “resilient” to face the changing global community and meet citizens’ needs and expectations, trying to maintain responsible relationships with the media and various strategic public institutions . Institutional communication attempted to lessen collective uncertainty and promote cooperative attitudes and behaviors by emphasizing individual and social responsibility and trust . Since persuading populations to adopt behavioral changes was a largely shared challenge, it was essential to understand how these persuasive efforts were organized in a nation with a privileged view of the situation, namely Italy. The qualitative multimodal content analysis of the Italian institutional advertising campaign enabled us to propose the following insights. In the earliest and most critical phase, during lockdown, institutional advertising had to accompany the severe restrictions characterizing that period, facing uncertainties, informational needs and emotional turmoil. Two main directions were found in public spots. First, a wide variety of messages and cues was evidenced, involving several cultural narratives and heterogeneous scopes, informative and conative aims, and individual and collective values. The well-blended narratives and the rather equally distributed central and peripheral cues, typical of this phase, appear oriented to emphasize the exceptionality of the situation. Second, the overall value of “inclusivity” was proposed, since: (i) scopes, narratives and contents were addressed to fragile populations (retired persons, at-risk women, students), as also testified by the use of Italian Sign Language; (ii) morality was mostly other-oriented and values were focused on responsibility and national security, thus emphasizing self-transcending and conservative orientations .
These features, together with one-sided argumentations and realistic gestures, supported the search for a widespread and popular elaboration of contents, involving multiple cultural narratives and not necessarily extensive cognitive efforts, to face the emergency pandemic period. On the one hand, this mixing and inclusive style recalls some features of the protection–motivation theory , especially concerning beliefs of personal susceptibility; on the other hand, it emphasizes the relevance of normative pressures, typical of more reasoned approaches . The second phase, focusing on the need for co-existence with the pandemic, had a transitional nature: some cultural narratives and scopes typical of the initial stage (e.g., togetherness, union/solidarity, gratitude) were set aside in favor of more functional messages and cues. In addition, certain features typical of the third phase began to appear. The configuration of narratives represented a first timid impulse toward the idea of a “normal” life. The scopes, mainly oriented to presenting hygiene rules, supporting services and virtuous behaviors; the contents, dealing with simplified and repeated behaviors and practices; the morality, both self- and other-oriented; and the values, again emphasizing self-transcending and conservative orientations, combined to promote a clear and univocal attitude toward health practices. The types of arguments, mixed between one-sided and two-sided but involving a larger number of issues, implied a more functional communicative style. In addition, the less frequently recurring peripheral cues, mainly images and soundtracks, outlined a more rational approach. In this context, testimonials were either missing or non-popular; when celebrities were used, three comic actors generated an atmosphere of familiarity and reassurance; therefore, they could be identified as non-biased testimonials . The presence of marked gestures, which have a modelling and strengthening function, moves this peripheral cue even closer to the central pathway. This stage appears to be more in line with a socio-cognitive approach : since people are more likely to perform a behavior once they acquire the relevant knowledge and behavioral skills, persuasive communications should try to successfully model behavioral skills . The third phase, essentially identified with the institutional pro-vaccine campaign, showed a more propositional aim and followed a new communicative scenario. The cultural narratives, again focused on responsibility and mutuality (representing the common thread of the whole campaign), were set in a changed context and appeared future-oriented: the memory of social spaces and atmospheres, the emphasis on resilience and national security, the battle against the pandemic and the trust in science converged to create a generally encouraging climate, promoting the opportunity both to come back and to overcome the challenges posed by the pandemic.
The main feature of the institutional campaign in this phase is “contamination”, which took the following forms: (1) the chronotope, since space and time, which were “suspended” in the previous stages, are proposed again, thus creating a connection between past practices, present care and future opportunities; (2) testimonials, with (i) both celebrity and non-celebrity figures appearing in the same spots and (ii) celebrities from all domains (TV, music and sport), as well as (iii) COVID-19-related non-celebrity figures; (3) central and peripheral pathways, involving values—not only highlighting self-transcendence and conservation but also openness and self-enhancement —and contents and arguments (mostly two-sided), as well as images and soundtracks (broadly emotional and popular). In addition, a specific kind of contamination was proposed by gestures, again playing an essential role in institutional advertising, specifically in their ad hoc created, dynamic and repeated configuration (from vaccine to victory). More generally, although this phase presented the highest number of peripheral cue occurrences (see ), hypothetically implying a reduced need for extensive cognitive activities, the presence of equally important central cues, such as informative references and two-sided argumentations, demonstrated the need to take into account the sensitivity and the variety of vaccine-related attitudes, making use of all the possible cues and persuasive pathways. Similarly to other stances, vaccine attitudes can be objects of ambivalence, one of the most promising moderators of the attitude–behavior link . Therefore, focusing on the perceived desirability of the behaviors and on social pressure can increase engagement related to COVID-19 vaccines. This work has some limitations, mainly related to the small and contextualized sample, even though it coincided with the whole Italian institutional advertising campaign. In addition, the variables, although they were selected from among the most representative in the recent international literature devoted to these matters and fit our sample well, were not exhaustive in terms of the central/peripheral cues in advertising. Most importantly, since pandemic communication was an important lever for mitigating the crisis’ effects, an approach oriented to investigating the effectiveness of the applied communication and social advertising could complement the insights obtained from our results, which mainly focused on the communicative features of the spots that the public institutions in Italy commissioned. This input represents an essential base for future investigation. Nonetheless, to the best of our knowledge, this is the first work matching the analysis of cultural narratives with the ELM framework and applying the ELM model to institutional advertising concerning the health emergency of COVID-19. In addition, some emerging results—e.g., the types of arguments and the reference to values, the importance of gestures, and the role of testimonials—offer innovative implications for health communication and literacy and new inputs for audiovisual institutional communication. Our study emphasized the importance of qualitative investigation as an opportunity to deepen understanding of the communicative efforts made by institutions in the battle against the COVID-19 pandemic and the infodemic, in line with strategic advertising communication and against a broader socio-cultural background.
Even though we propose a coding activity enabling us to compare the spots of each round with reference to the incidence of cultural narratives and central and peripheral cues, we believe that the most interesting insights concern the way in which the included variables offer specific inputs and, at the same time, combine into more holistic repertoires in different contexts. Therefore, similar narratives, as well as the preference for central/peripheral cues, can have different outcomes when they are differently blended, thus configuring communicative pathways regarding inclusivity (first round), functionality (second round) and contamination (third round). In addition, this study offers significant applicative opportunities in promoting overall and specific awareness about health communication. In particular, the comprehension of the configuration of the cultural narratives and of the persuasive (central/peripheral) cues can (a) inform about the levels of elaboration and attitude creation, including epistemic self-defense skills and health literacy ; (b) improve effective communicative patterns in public communication and social advertising by institutional actors, also enhancing targeted and contextualized messages and trying to restore trust in institutions; and (c) offer qualitative support for quantitative tools based on deep learning methods, as well as for contesting the spread of misinformation and the “crisis” of discerning disinformation from accurate news .
Availability of Medical Services and Teleconsultation during COVID-19 Pandemic in the Opinion of Patients of Hematology Clinics—A Cross-Sectional Pilot Study (Silesia, Poland)
Nonetheless, alternative modes of communication, such as online consultations and teleconsultation, have significant benefits in emergencies. Among other things, they provide patients with real-time information and professional advice from physicians when medical facilities are inaccessible . The purpose of the survey was to gather patients’ opinions on the quality and availability of specialized medical services during the pandemic. Based on the data collected on services provided via telephone systems, a picture was created of clinic patients’ opinions regarding teleconsultation, and attention was paid to emerging problems. It was assumed that the coronavirus pandemic negatively affected the quality and availability of medical services provided by public health care providers.
2.1. Study Organization The study included a 200-person group of patients, completing their visits to specialized hematology outpatient clinics in Bytom (Silesia, Poland), aged over 18 years, with various levels of education. To anonymize the study, only data on gender, age, and the fact of treatment were collected. All data were coded with appropriate symbols, preventing the identification of patients, in accordance with the Act of 29 August 1997, on the Protection of Personal Data (Journal of Laws of 1997, No. 133, item 883). The primary criteria for inclusion were the patient’s written consent, expressed through participation in the survey, and that the patients be aged 18 or over. Participation in the study was anonymous and completely voluntary. The study adhered to the provisions of the Declaration of Helsinki and received a positive opinion from the Bioethics Committee of the Silesian Medical University in Katowice (ID: PCN/0022/KB/211/20). 2.2. Research Tool A proprietary survey questionnaire was developed for the study, which was conducted on paper and used face-to-face interaction with patients. The survey questionnaire contained 17 closed questions. The first five questions (metric) were about gender, age, place of residence, education, and current occupational status. The remaining 12 questions were aimed at finding out the patients’ opinions on the teleconsultations conducted and assessing their availability and quality. The questionnaire was validated by administering it twice, two weeks apart, to a group of 30 people; the first time, respondents were given a chance to express their opinions and indicate comments on the content of the questionnaire, and the second time, the repeatability of responses was tested. The reliability of the questionnaire was assessed using Cronbach’s alpha coefficient and was shown to be 0.83, which in psychological research indicates good reliability. 2.3. Study Sample The study included 200 patients, most of whom were women (58%). The largest number of respondents belonged to the age group of 60 years and older (44%), and the smallest number belonged to the age group of 18–28 years (9%). Of the respondents, 94.5% were city residents and most had a secondary/vocational education (68%). The surveyed patients were mostly employed (50%) or retired (49%) . 2.4. Statistical Compilation Statistical analysis was carried out using Statistica software (Statsoft, Poland). Multivariate tables were used in the calculations, individual groups of respondents were compared, and relationships between variables were analyzed. Mann–Whitney U and Kruskal–Wallis tests were used in statistical inference. p-values < 0.05 were considered statistically significant. For the results of the statistical inference, the abbreviation T is adopted in the text.
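To illustrate the two computations named in the Research Tool and Statistical Compilation subsections (Cronbach's alpha for questionnaire reliability, and the Mann–Whitney U and Kruskal–Wallis tests for group comparisons), the following Python sketch uses NumPy and SciPy on synthetic 5-point answers. The study itself used Statistica, not Python; the group sizes are loosely derived from the reported sample structure (n = 200; 58% women; 9% aged 18–28; 44% aged 60 and over), and all values printed from random data are, of course, meaningless in themselves.

import numpy as np
from scipy import stats

def cronbach_alpha(items):
    # items: respondents x questions matrix of scored answers.
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(42)

# Hypothetical answers to the 12 opinion questions on a 5-point scale
# (1 = definitely bad ... 5 = definitely good); purely random answers give
# an alpha near 0, whereas the real questionnaire reached 0.83.
answers = rng.integers(1, 6, size=(200, 12))
print("Cronbach's alpha:", round(cronbach_alpha(answers), 2))

# Two groups (e.g., women vs. men) -> Mann-Whitney U test.
women = rng.integers(1, 6, size=116)  # 58% of n = 200
men = rng.integers(1, 6, size=84)     # 42% of n = 200
u, p = stats.mannwhitneyu(women, men, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")

# More than two groups (e.g., five age brackets) -> Kruskal-Wallis H test.
groups = [rng.integers(1, 6, size=n) for n in (18, 27, 31, 36, 88)]
h, p = stats.kruskal(*groups)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.3f}")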
In response to the question “How do you rate the availability of services provided during the COVID-19 pandemic?”, the largest share of respondents rated the availability of services during the COVID-19 pandemic as good (35%), and 25.5% rated it as definitely good. In contrast, 21.5% of respondents marked the answer “difficult to say”, 34 people (17%) rated the availability as bad, and only two people (1%) rated it as definitely bad. For the next question, i.e., “How do you rate the quality of services provided during the COVID-19 pandemic?”, 32% of respondents rated the quality of services provided as good, and 27% of people answered “hard to say”. Another 20% of respondents rated the quality as definitely good, 15.5% of respondents marked the answer “bad”, while only 5.5% of people answered “definitely bad”. When asked to evaluate the quality of the services provided through ICT systems, 30.5% of respondents thought that the introduction of teleconsultation and its quality were good, 27.5% had no opinion on the subject, while 21.5% of respondents rated the quality of services provided through ICT systems badly. Of the respondents, 17% gave a definitely good rating, and 3.5% gave a definitely bad rating. Furthermore, 56% of respondents indicated that the creation of teleconsultation during the COVID-19 pandemic was a good idea, while 44% indicated that it was not a good idea. In response to the question “What do you like best about the advice provided through telephone or online systems?” (respondents could indicate more than one answer), most respondents indicated the convenience of a visit without leaving home (49.5%), and 45.5% marked safety with regard to the risk of contracting the virus; however, 45% indicated the answer “I don’t like this type of visit”. Additionally, 30.5% of respondents indicated not having to wait in line, while only 17% of people marked the answer that they had better contact with the doctor. In a question about possible problems arising when providing advice via ICT systems (again, it was possible to mark more than one answer), the largest number of people (56%) indicated that they had not noticed any problems in this regard, 40.5% of respondents had problems with connectivity, 38.5% of people had problems understanding the information provided, 32.5% of respondents indicated poor contact with the doctor, and 26.5% of people indicated a lack of examination. To the question “Do you think it would have been a good idea to conduct visits via ICT systems without the pandemic?”, 54% said yes, while 46% of people indicated a “no” answer. The same proportions were recorded for the question “Are you willing to use the advice provided by the telephone method?”: 54% indicated “yes”, while 46% indicated “no”. Regarding the question about the attitude of medical personnel to the advice given by the telephone method, 34.5% of respondents answered “difficult to say”, 28.5% of people rated the attitude of medical personnel as good, and 20.5% of respondents rated it as definitely good. In contrast, the answer “bad” was marked by 14% of people, and “definitely bad” by 2.5% of respondents. In response to the question “Have you used other medical facilities that also provided telehealth appointments?”, 77.5% of people answered that they had used telehealth elsewhere, while 22.5% of people had not used this type of service elsewhere.
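Note that several of the questions above allowed more than one answer, which is why the reported shares (e.g., 49.5% convenience, 45.5% safety, 45% "I don't like this type of visit") can legitimately sum to more than 100%. A small Python sketch of how such multi-select answers are tallied; the response sets below are hypothetical, not the study's data:

from collections import Counter

# Hypothetical multi-select answers to "What do you like best about
# the advice provided through telephone or online systems?"
responses = [
    {"convenience", "no queue"},
    {"safety", "convenience"},
    {"dislikes this type of visit"},
    {"convenience", "safety", "better contact with the doctor"},
]

n = len(responses)
counts = Counter(option for answer in responses for option in answer)
for option, c in counts.most_common():
    # Each share is computed over all respondents, so the shares
    # across options may sum to more than 100% by design.
    print(f"{option}: {c / n:.0%}")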
The last question included only those who answered yes to the previous question, i.e., “Have you used other medical facilities where teleconsultation visits were also conducted?”, and referred to 155 people. This question concerned the evaluation of visits conducted at another facility via telehealth systems: 31.6% of people rated such visits badly, 29% of people did not comment, 26.5% of respondents rated the visits well, 11% of people marked the answer “definitely badly”, and 1.9% of people marked the answer “definitely well”. Referring to the question “How do you rate the availability of services provided during the COVID-19 pandemic?”, a breakdown was made in the responses in terms of the number of women and men . The answer “good” was given by 17.5% of women and by the same share of men (percentages relative to the whole sample); 15% of women rated this availability as definitely good, while only 10.5% of men gave this rating. A “bad” rating was given by 12.5% of women and 4.5% of men. The answer “definitely bad” was indicated by 1% of men and 0% of women. In contrast, 13% of women and 8.5% of men had no opinion. There was no relationship between the variable gender and the evaluation of the availability of medical services during the COVID-19 pandemic ( p > 0.05). For the same question—“How do you rate the availability of services provided during the COVID-19 pandemic?”—broken down by respondents’ age , in the age group of 60 and over, 14.5% of respondents rated the availability of services provided during the pandemic as bad. The answer “definitely good” was marked by 4.5% of people, and “definitely bad” by 1% of respondents. The same number, i.e., 12% of respondents, marked the answers “good” and “hard to say”. In the 50–59 age group, the largest number of respondents answered “good” (7.5% of people), 6% of respondents marked the answer “definitely good”, and “hard to say” was indicated by 4.5%. No one marked the answers “bad” or “definitely bad”. In the 40–49 age group, the most frequent response was “good” (7%); “definitely good” was marked by 4% of respondents, 2.5% of people had no opinion on the subject, and 2% of respondents indicated the answer “bad”. No one marked the answer “definitely bad”. Respondents in the 29–39 age group mostly indicated the answer “definitely good” (5.5%), 5% of people indicated the answer “good”, 0.5% indicated the answer “bad”, and 2.5% had no opinion; again, no one marked the answer “definitely bad”. In the 18–28 age group, only two ratings appeared, i.e., “definitely good” (5.5%) and “good” (2.5%). A statistically significant relationship was found between the variable age and the evaluation of the availability of services during the pandemic. Those over 60 were more likely to negatively evaluate the availability of medical services provided during the COVID-19 pandemic (T = 11.868; r = 0.632; p = 0.001). With regard to the professional status of the respondents, the answers to the above question—“How would you rate the availability of services provided during the COVID-19 pandemic?”—were as follows: among working people, as many as 20% of respondents rated the availability of services provided during the pandemic as good, 19% of working respondents indicated the answer “definitely good”, 3% “bad”, while 8% had no opinion. No one marked the answer “definitely bad”. Those on a pension, on the other hand, mostly (15%) marked the answer “good”.
Of these respondents, 14% marked the answer “bad”, while 13.5% had no opinion on the subject. In contrast, “definitely good” was marked by 5.5% of people, and “definitely bad” by 1%. Those who were pupils or students (1%) marked one answer—“definitely good”. There was a statistically significant relationship between the variable of occupational status and the assessment of the availability of services during the pandemic. Those who were pensioners/retirees were more likely to negatively evaluate the availability of medical services provided during the COVID-19 pandemic (T = 12.003; r = 0.614; p = 0.002). Another question asked “Are you willing to use telephonic advice?”, and respondents were grouped by age and gender . Reluctance to use telephone advice was shown overwhelmingly by women in the age group of 60 years and older (T = 10.099; r = 0.703; p = 0.001). The rest of the respondents’ answers were similar, so no differences were noted ( p > 0.05). The more frequent response was “yes” among both women and men, regardless of age.
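The results above report each significant finding as a test statistic T, an effect size r, and a p-value, but the text does not define how r was computed. One common convention for rank-based tests is r = |Z|/sqrt(N); the Python sketch below, on synthetic data, derives Z for a Mann–Whitney U comparison via the normal approximation (ignoring tie corrections) and converts it to r. It illustrates the convention only and is not a reconstruction of the study's calculation.

import math
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
older = rng.integers(1, 6, size=88)     # hypothetical 60-and-over group
younger = rng.integers(1, 6, size=112)  # hypothetical under-60 group

u, p = stats.mannwhitneyu(older, younger, alternative="two-sided")
n1, n2 = len(older), len(younger)

# Normal approximation of U (no tie correction):
mu = n1 * n2 / 2
sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
z = (u - mu) / sigma

# Effect size by the r = |Z| / sqrt(N) convention.
r = abs(z) / math.sqrt(n1 + n2)
print(f"U = {u:.1f}, z = {z:.2f}, r = {r:.2f}, p = {p:.3f}")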
The pandemic has changed the way healthcare services are delivered to patients around the world. To provide precautions and physical distancing during the COVID-19 pandemic, telephone consultation was offered as an alternative to face-to-face visits, primarily in primary care (PCP) . However, telemedicine also has some drawbacks, as it primarily focuses on the symptoms presented by the patient; patients are often not comprehensively examined, and visual cues are often lacking. In addition, there are issues regarding the relationship between doctor and patient, or problems regarding the quality of the information provided . Despite the drawbacks, telephone consultations were used during the pandemic because of their ability to deliver essential health care to patients remotely and to halt the spread of the virus . A study by Zammit et al. found that there was a significant improvement in patient satisfaction and an increased preference for telephone consultations . Telemedicine during the pandemic had a huge impact, mainly among older patients and patients with chronic diseases. The advantages of telephone telemedicine, in addition to preventing the transmission of infections, are convenience and saving time. However, the difficulty of checking and explaining the condition to patients, the possibly incomplete assessment of their health status, and the misunderstandings that can arise from a telephone consultation between a doctor and a patient negatively affect this type of medical service . The COVID-19 pandemic has proven that telemedicine is a very helpful and desirable tool in healthcare. It allows for a personalized approach on the part of healthcare professionals toward patients and the establishment of positive interactions between them. This represents a very valuable aspect from the perspective of both parties. The use of telemedicine has made it possible to access medications (so-called e-prescriptions, electronic prescriptions), make diagnoses, implement comprehensive treatment, and, in addition, carry out health education processes, including issues related to the prevention of chronic diseases. Studies related to teleconsultation that were conducted before the outbreak of the SARS-CoV-2 pandemic did not show a significant decrease in effectiveness compared with traditional visits conducted in person . A study that examined the role and importance of telemedicine in the initial wave of the COVID-19 pandemic was the original work carried out by Fatyga et al. . This study concerned elderly patients of a Silesian diabetes clinic and involved 86 patients, aged ≥60 years, whose leading disease was type 2 diabetes. The study did not include patients with microvascular complications of diabetes, those who had suffered a stroke, were struggling with depression or other mental disorders, or were consuming excessive amounts of alcoholic beverages. The results obtained by the authors show that a significant number of patients—despite complying with all restrictions related to the sanitary-epidemiological regime, i.e., engaging in preventive behaviors—declared frequent or constant feelings of fear of contracting coronavirus disease. Consequently, alternatives such as the use of telemedicine were far more favorable to them due to the lack of direct contact with other people, thereby offsetting the risk of potential illness due to COVID-19.
The conclusions of the survey demonstrate the validity of the use of telemedicine, although it is worth considering measures to improve it. In addition, it seems important to conduct further scientific research, including clinical research, focusing on the issue of telephone and electronic medicine from the point of view of patients, which will allow more accurate interpretations regarding the adequate management of medical personnel in this area, as well as strengthening behavioral health strategies among the elderly population. Patient satisfaction with the use of telemedicine can also vary depending on the availability of both face-to-face visits and teleconsultation . In a study conducted on the satisfaction with and importance of teleconsultation during the coronavirus pandemic among patients with rheumatoid arthritis, 62.3% said the quality of teleconsultation was not as satisfactory when compared with in-person consultations . In contrast, in another study on patients’ satisfaction with the quality of teleconsultation, patients in the surveyed PCPs rated communication with the doctor and the comprehensiveness of medical care the highest. The treatment used helped 47.5% of patients improve their health . Additionally, studies have been conducted on the use of telemedicine among asthma patients; however, the drawbacks brought to the fore regarding teleconsultation were the limited ability to perform tests and the lack of personal contact between doctor and patient . In a subsequent study conducted among 14,000 respondents on the satisfaction of patients using teleconsultation with their PCP during the pandemic, more than 40% of respondents were satisfied with the teleconsultation provided and said that the quality of services provided in this way was comparable to advice given in person. In contrast, 36.3% of people rated the quality of an in-person visit to a PCP higher than a teleconsultation . Thanks to telemedicine, people in high-risk groups, for example those with cardiovascular disease, diabetes, or Parkinson’s disease, were able to effectively monitor their health status during the pandemic while maintaining constant contact with medical personnel . The study also found that doctors and nurses showed lower satisfaction with teleconsultation than patients. Above all, medical personnel were concerned about emergencies that could occur due to the limited visualization of the patient during a telephone consultation. Telephone consultations tended to convey less information than video consultations; despite this, however, teleconsultation was preferred over video visits by both providers and patients, especially those who were less technologically advanced . The nature of telemedicine may limit a provider’s ability to perform a comprehensive physical examination, which is fundamental to a physician’s diagnostic arsenal. Of course, telemedicine does not apply in every situation, such as invasive procedures, dental procedures, or critically ill patients requiring in-person visits . The lack of easy access to PCPs and specialized treatment has also been associated with widespread and higher levels of perceived anxiety among patients . Inadequate access to reliable information has also fostered anti-vaccine movements . In an era of efforts to curb the epidemic, it is essential to safeguard the health needs of both COVID-19-infected patients and other patients.
It is also important that people who identify worrisome symptoms that may indicate a developing condition do not give up on early diagnosis . It should also be noted that the earlier a patient is diagnosed, the greater the chances of a faster recovery, which serves to minimize the treatment costs burdening the healthcare system. Therefore, it is recommended that health promotion and disease prevention activities be increased and that broader health education be provided, both for citizens as a whole and for patients suffering from various diseases . Undoubtedly, the e-health solutions implemented so far, such as e-prescriptions, e-referrals, teleconsultation, or video consultation with a doctor, have made it possible to secure the basic needs of patients to a large extent; nevertheless, it is necessary to improve them further, as doing so will make the healthcare system more resilient to emergencies (including further epidemics) in the future . Nevertheless, when implementing such solutions, intensified information and education campaigns should also be carried out, especially those that emphasize the development of digital competencies among senior citizens burdened with multiple diseases. The elderly, for example, have repeatedly reported difficulties in using the Internet Patient Account. It should also be pointed out that, in the future, hospitals should have procedures in place to take appropriate and proportionate action, particularly with regard to restricting the exercise of patient rights . This restriction should not be tantamount to a ban leading to the deprivation of patients’ rights, and should not prevent the realization of the rights of persons authorized by the patient, or relatives . There is an urgent need to further standardize the provision of health services using solutions that allow remote communication . Telemedicine or video consultations should not completely replace in-person highly specialized medical consultations; rather, they should be a form of support for the patient’s treatment process in emergencies, such as the next wave of COVID-19 or the emergence of a new pandemic. However, the development of telemedicine during the pandemic was undoubtedly necessary and essential, but it still needs to be refined . During the pandemic, telemedicine was an alternative method of diagnosing, treating, monitoring, and remotely supporting patients who did not require face-to-face contact with medical personnel . The study conducted by the authors of this paper indicates that patients’ attitudes toward the use of telemedicine services during the COVID-19 pandemic varied. Younger people rated the quality and accessibility of teleconsultation services well, in contrast to those over 60. Strengths and Limitations The study is not free of limitations. The first limitation of the conducted survey is the scope of the research sample, which includes only one specialist outpatient clinic provider from one country. However, this sample was sufficient to test and validate the research tool—a questionnaire to assess patient satisfaction with the quality of remote medical care. In addition, despite the pandemic, the survey was conducted using a face-to-face survey method, which helped reduce researcher error and the risk of “bot/fake responders”, as is the case with similar surveys conducted using the computer-assisted web interview (CAWI) method.
A survey of a larger number of respondents from across the country is planned for the follow-up survey stage, which will be conducted to finalize and update the results. The second limitation is that the very evaluation of the quality of remote advice came only from the point of view of patients, who are not qualified to substantively assess the effectiveness and selection of appropriate treatment methods. The indicated research limitation provides an interesting direction for further research that could address the evaluation of the quality of the treatment by qualified medical personnel or healthcare coordinators.
Patients' approaches to the use of teleconsultation services during the COVID-19 pandemic vary, primarily due to attitudes toward the new situation, the age of the patient, or the need to adapt to specific solutions not always understood by the public. The availability of medical services during the COVID-19 pandemic was rated significantly lower by the elderly (over 60) and by pensioners/retirees, with no gender variation in respondents' opinions. Telemedicine cannot completely replace inpatient services, especially among the elderly. Remote visits should therefore be refined and adapted to patients' needs in such a way as to remove any barriers and problems arising from this type of service, and thereby convince the public of its value. This system should also be introduced as a long-term goal, providing an alternative to inpatient services even after the pandemic ends.
|
Efficacy and Accuracy of Maxillary Arch Expansion with Clear Aligner Treatment | 0f94e149-f825-4521-a49a-dfa50e2d22c4 | 10002100 | Dental[mh] | The term “clear align therapy (CAT)” refers to the orthodontic technique with clear aligners for the treatment of dental malocclusions [ , , ]. Since its development in 1997, Invisalign ® technology has been established worldwide as an aesthetic alternative to labial fixed appliances . Since its first appearance on the market, the Invisalign ® system has seen significant development over time; many of its features have been continuously improved. New and different attachment designs have been developed, and the manufacturing material has been tested and improved. To allow for additional treatment biomechanics, the combined use of the clear aligner treatment with computer-guided piezocision and new auxiliaries, such as “precision cuts” and “Power Ridges”, has been proposed and used. According to the manufacturer, Invisalign ® is capable of effectively performing dental movements, such as bicuspid derotation, up to 50° and root movements of maxillary central incisors up to 4 mm. Despite the defended efficacy of the treatment, there is still controversy among professionals about the real clinical potency. On the one hand, the defenders are convinced and show cases of successful treatment, providing clinical evidence. In contrast, the opponents argue that there are significant limitations, especially when it comes to the treatment of cases with complex malocclusions [ , , , ]. Rossini et al., in their systematic literature review, found that the clear aligner treatment aligns and levels the arches and is effective in controlling anterior intrusion but not anterior extrusion. It is effective in controlling posterior buccolingual inclination but not anterior buccolingual inclination, and it is effective in controlling upper molar bodily movements of about 1.5 mm but is not effective in controlling the rotation of rounded teeth, in particular . Aligners are now commonly used, such as in fixed appliance therapy, for the treatment of malocclusions of all types and severity, particularly for transverse dento-alveolar problems requiring the expansion of one or both arches . In the evaluation of occlusion in the transverse plane, it is considered correct when the palatal cusp of the maxillary posterior teeth occludes with the central fossa of the mandibular posterior teeth . If the upper buccal cusp occludes with the central fossa of the posterior lower teeth, a malocclusion occurs, which is called a crossbite . This type of malocclusion may be of skeletal origin, whereby the dento-alveolar processes are correctly positioned in relation to the bony base, but the base presents maxillary skeletal hypoplasia or mandibular skeletal hyperplasia (or both) . When the malocclusion is skeletal, its early correction is recommended through maxillary expansion with an orthopedic appliance, which guarantees greater stability over time . When the malocclusion is of dental origin, the bone base has a correct transverse proportion, but dento-alveolar processes are altered [ , , ]. It has been observed that one in three patients presents with a posterior crossbite of at least one tooth . Arch expansion can be used to resolve crowding, correct dento-alveolar crossbite, or modify the arch shape . Single-tooth crossbite is an easy case to treat with clear aligners; the aligners function as bite-planes that eliminate occlusal interferences and help to correct the crossbite. 
A crossbite involving multiple teeth can be more complicated to treat . Aligners achieve expansion mainly by changing the torque of the posterior teeth through buccal crown movement. The expansion can be performed at the canine, premolar, and molar level, or differentiated by maintaining a stable sector . Several authors have observed that treatment with the Invisalign ® system achieves a significant increase in the transverse width of the arch as well as in the arch perimeter [ , , ]. Current knowledge on invisible aligners gives us a much clearer idea of the basic characteristics of aligner systems, but there remains a need to study systems other than Invisalign ® to provide greater evidence for the different aligners that are widespread on the market . The predictability of posterior expansion with aligners has been compared to the efficacy of the multibracket technique: treatment with self-ligating multibrackets has been shown to be effective in resolving mild crowding by increasing the width of the arch and correcting buccolingual tilt, occlusal contacts, and root angulations, whereas Invisalign ® treatment aligns and levels the arches by derotating the teeth but, due to its limited control of tooth movement, can easily tip crowns and be less effective in correcting transverse problems . There is precedent in the literature for the effectiveness of Invisalign ® clear aligners (Align Technology, Santa Clara, CA, USA) and the predictability of its software (Align Technology, Santa Clara, CA, USA) for planning treatment with arch expansion. Some authors have evaluated how effective clear aligners are in achieving the proposed treatment objectives ; others have compared the results of treatment with clear aligners with those obtained with fixed appliances. Most of these investigations were carried out with the previous EX30 system, which was recently replaced by SmartTrack (Align Technology, Santa Clara, CA, USA), so it is necessary to evaluate the characteristics of the updated system. Posterior expansion of up to 2 mm per quadrant is a predictable movement achievable with aligners, and predictability decreases as the planned expansion increases . In case of crossbite, it is advised to overcorrect the expansion in the ClinCheck ® programming until the palatal cusps of the upper molars contact the buccal cusps of the mandibular molars . Beyond 2 mm of expansion, cross elastics or other auxiliaries may be necessary to achieve the planned result . The predictability of maxillary expansion with clear aligners has shown wide variability over time. Several studies that have evaluated the expansion of dental arches suggest that, to minimize the risk of relapse and gingival recession, the expansion of the arch width should be limited to a maximum of 2–3 mm per quadrant. Invisalign ® may be indicated to achieve expansion in cases with crowding of 1 to 5 mm and in cases that require expansion to create space for blocked-out teeth. Arch expansion with Invisalign ® can also offer an aesthetic advantage for the patient because widening the dental arches improves the aesthetics of the smile by reducing the buccal corridors [ , , , ].
Considering this variability in the results obtained from studies in the literature concerning the predictability of maxillary expansion with clear aligners, the aim of this study is to evaluate the efficacy and the accuracy of maxillary arch transverse expansion using the Invisalign ® clear aligner system without auxiliaries other than Invisalign ® attachments.
This prospective study was approved by the Ethical Committee of Sapienza University of Rome (n° 1621/15 r. 3364), and the patients and/or their parents signed the informed consent for participation in the study. The patients were selected from a group of 140 subjects recruited in the UOC of Orthodontics of the Department of Odontostomatological and Maxillo-Facial Science of "Sapienza" University of Rome. A total of twenty-eight patients were included in the study. The patients were selected according to the following inclusion criteria: patients of both sexes, aged between 13 and 25 years old with complete permanent dentition; treatments performed with Invisalign ® aligners made from SmartTrack ® material; treatments that required transverse dento-alveolar expansion (2–4 mm) to correct the malocclusion; patients with sufficient clinical crown height (greater than 4 mm); and patients who followed the treatment with good compliance. The exclusion criteria were as follows: patients affected by systemic diseases or orofacial syndromes; patients with missing teeth in the posterior sectors; the need for extractive therapy; the presence of agenesis (excluding the third molar); excessive dental erosion at the cusp level (such that the tips of the dental cusps could not be identified) or multiple and/or advanced caries; patients with conoid teeth; patients with periodontal diseases; the need for auxiliaries to correct transverse problems (TADs, REP, criss-cross elastics); patients with implants, prosthodontic rehabilitation, or ankylosed teeth; and patients requiring orthognathic surgery. All the patients were treated with the Invisalign ® technique by a single Invisalign provider. The treatment protocol for all the selected patients included the application of the Invisalign ® clear aligner system without auxiliaries except for the Invisalign ® attachments. In no case was tooth extraction or interproximal enamel reduction (IPR) performed. Upper arch expansion was planned to correct crowding and transverse discrepancy. The patients were instructed on how to use the aligners: they were to be worn all day and all night, except during meals and oral hygiene, and were changed every 7 days. The fit of the aligners and the presence of all attachments were checked by the provider every four stages. It was explained to all the patients that they were part of a research protocol, and they or their parents accepted their participation by signing the informed consent; the patient's collaboration was recorded in the clinical record. For each patient, an intraoral scan of the pretreatment dental arches (T0) and a scan at the end of treatment (T1) were performed with the iTero Flex ® scanner. The final position of the corresponding ClinCheck ® representation (TC) was also collected to establish the accuracy of the final virtual model with respect to the movements observed in the post-treatment model. Three models were then collected for each patient according to the following timetable: a pretreatment STL model (T0) obtained by scanning the maxillary arch before starting Invisalign ® treatment; a post-treatment STL model (T1) obtained by scanning the maxillary arch at the end of the treatment with Invisalign ® ; and an STL model of the final arrangement programmed in the ClinCheck ® software (TC). All models of the maxillary arches were opened with the ExoCad ® program (DentalCad), and linear millimeter measurements were taken using the program's own measuring tool.
All measurements were performed by a single trained operator. The following transverse linear measurements were carried out on the upper arch for each T0 and T1 model and for the ClinCheck ® model (TC): Intercanine cusp width: linear distance in millimeters between the cusp of the maxillary canine of one hemiarch and the cusp of the maxillary canine of the contralateral hemiarch (A). Intercanine gingival width: linear distance in millimeters between the most apical point of the palatal surface of the crown of the maxillary canine of one hemiarch and the same point on the contralateral hemiarch (B). First inter-premolar width: linear distance in millimeters between the buccal cusp of the first premolar of one hemiarch and the buccal cusp of the contralateral first premolar (C). Second inter-premolar width: linear distance in millimeters between the buccal cusp of the second premolar of one hemiarch and the buccal cusp of the contralateral second premolar (D). First molar mesio-vestibular cusp width: linear distance in millimeters between the mesiobuccal cusp of the first molar of one hemiarch and the mesiobuccal cusp of the contralateral first molar (E). First molar gingival width: linear distance in millimeters between the most apical point of the palatal surface of the crown of the first molar of one hemiarch and the same point on the contralateral hemiarch (F). In addition, the following measurements were derived: the expansion obtained, calculated as the difference between the post-treatment width and the pretreatment width (T1-T0); the planned expansion, calculated as the difference between the width planned on the ClinCheck ® and the pretreatment width (TC-T0); and the accuracy of expansion, calculated as the difference between the expansion planned on the ClinCheck ® and the expansion obtained (TC-T1). Clinical accuracy (%) was computed for all measurements using the equation [(expansion obtained/planned expansion) × 100]. To estimate the size of the sample population for this study, a preliminary investigation was carried out to determine the power of the study (PS) and to establish the effect size (ES) (0.58) of the sampled population for the experimental study. Twenty-six patients were needed to estimate the expansion movement with a 95% confidence interval (CI), a power of 80%, and a significance level of 5% for detecting an effect size of 0.58. Intra-examiner reliability was evaluated: the same examiner performed the measurements on 10 patients and repeated them two weeks later. The reliability of all measurements was assessed using an intraclass correlation coefficient (ICC). Numerical variables were expressed as mean and standard deviation values. Descriptive statistical analysis was performed for all measurements separately to compare the T0-T1 changes and the T0-TC differences. The normality of the measurements was assessed using the Shapiro–Wilk test. To compare means between groups, a Student's t-test for independent data was performed once normality was validated; if normality was not met, the nonparametric Mann–Whitney U test was applied. The significance level applied in the analysis was 5% (α = 0.05). SPSS software (IBM Corp, Chicago, IL, USA) version 26 was used to analyze the data.
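For concreteness, the derived measures and the sample-size estimate can be reproduced with a few lines of Python. This is an illustrative sketch only: the width values are invented rather than taken from the study data, and the power calculation simply mirrors the reported parameters (effect size 0.58, α = 0.05, power 80%) under a one-sample/paired t-test model.

```python
from statsmodels.stats.power import TTestPower

# Hypothetical widths (mm) for one measurement, e.g., first inter-premolar width
t0 = 40.2   # pretreatment width (T0)
t1 = 42.9   # post-treatment width (T1)
tc = 43.1   # width planned on the ClinCheck model (TC)

expansion_obtained = t1 - t0                 # T1 - T0
planned_expansion = tc - t0                  # TC - T0
accuracy_of_expansion = tc - t1              # TC - T1 (shortfall vs. the plan)
clinical_accuracy = expansion_obtained / planned_expansion * 100  # in %

print(f"obtained = {expansion_obtained:.2f} mm, planned = {planned_expansion:.2f} mm")
print(f"clinical accuracy = {clinical_accuracy:.1f}%")

# Sample size for effect size 0.58, two-sided alpha 0.05, power 0.80
n = TTestPower().solve_power(effect_size=0.58, alpha=0.05, power=0.80)
print(f"required sample size ≈ {n:.1f}")   # ≈ 25.5, i.e., 26 patients
```

Running the power line reproduces the reported figure of 26 patients, which suggests the authors' calculation assumed a one-sample or paired design.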
The results showed a high degree of intra-observer reliability, with an intraclass correlation coefficient > 0.80 for all linear measurements. Twenty-eight patients (15 males, 18 females) with a mean age of 17 ± 3.2 years were evaluated. Descriptive statistics of all the measurements performed pretreatment (T0), post-treatment (T1), and on the ClinCheck ® model (TC) are reported in the corresponding table, together with the planned expansion (TC-T0), the expansion obtained (T1-T0), the difference between the expansion obtained and the planned expansion, and the clinical accuracy. The planned expansion (mm) increased progressively from anterior to posterior at the level of the cusps; i.e., the planned intercanine width was on average smaller than the planned width at the first premolar, which in turn was on average smaller than the planned width at the first molar. Furthermore, the planned expansions in millimeters for the intercanine and intermolar gingival widths were smaller than those for the corresponding cusp widths. On average, an expansion of between 5% and 7% of the initial width (between 1.6 mm and 3.5 mm) was planned. The maximum expansion was planned at the level of the first inter-premolar width (7.35%, 2.95 mm) and the minimum at the intercanine cusp width (4.86%, 1.6 mm). On average, an expansion of between 3% and 7% of the initial width was obtained. The maximum expansion was obtained at the first inter-premolar width (6.87%, 2.7 mm) and the minimum at the first intermolar gingival width (2.92%, 0.98 mm). The percentage of expansion obtained was less than the percentage planned for all measures. The T1-TC difference was less than 1 mm, except for the intermolar buccal cusp width, which reached it. The greatest differences between T1 and TC occurred at the level of the intermolar buccal cusp width (1.05 mm) and at the level of the gingival widths (intercanine gingival width 0.98 mm and first intermolar gingival width 0.78 mm). However, for the intercanine, inter-premolar, and intermolar measurements at the level of the cusps, the differences between the expansion obtained and the planned expansion were not statistically significant, while they were statistically significant for the gingival measurements (intercanine gingival width, intermolar gingival width). This result suggests that there is more vestibular tipping movement than bodily movement of the crowns at the level of the canines and first molars. The global clinical accuracy of the expansion treatment was 70.88%. The accuracy of the gingival measurements was low, around 50%, while for the cusp width measurements the accuracy was between 70% and 82%. Among the intercusp measurements, the expansion was most accurate for the first premolar (93.53%) and least accurate for the first molar (70.55%).
This study evaluated the possibility of effective transversal expansion of the upper arch through Invisalign ® treatment without the use of auxiliaries other than Invisalign ® attachments, as well as the differences at different levels of the arch. In addition, the accuracy of the virtual pretreatment model developed with ClinCheck ® was evaluated in relation to the results obtained from transversal expansion of the maxillary arch. Monitoring tooth movement in orthodontics is important for assessing the ability of appliances to achieve movement and for establishing protocols capable of achieving orthodontic treatment goals [ , , , ]. New technologies facilitate the evaluation of dental movement and allow for more precise measurements [ , , , ]. In this way, it was possible to evaluate the expansion achievable with Invisalign ® . The results show that a higher percentage of expansion is possible at the intercuspid level of the molar area and a lower percentage at the canine intercuspid level. These results are in line with Morales-Burruezo et al. , who analyzed transverse expansion using Invisalign SmartTrack and concluded that expansion is achievable when it is alveolar, with higher efficiency at the premolar level and lower efficiency at the canine level. However, Clemens et al. , who used the Peer Assessment Rating (PAR) index to evaluate 51 patients treated with aligners, observed that of the 25 patients who required transverse augmentation, 79% achieved it, while 17% remained stable and 4% worsened. To assess the accuracy of expansion, an effectiveness index was considered, i.e., how close the expansion obtained was to that predicted by the ClinCheck ® . Effectiveness was considered to be 100% if the expansion obtained was statistically equal to that predicted. The results of this study showed an average effectiveness of 70%. The differences in accuracy between the different measures (intercanine cusp and gingival width, first inter-premolar width, and first intermolar cusp and gingival width) were not statistically significant; therefore, the overall accuracy of the expansion treatment was 70%, regardless of tooth type. The present study showed that the effectiveness is lower when measured at the palatal side of the tooth, in agreement with Houle et al. , who claimed that what occurs is not bodily movement but rather a coronal inclination of the tooth. Furthermore, they stated that the accuracy of digital programming with aligners is 72.8% in the maxillary arch, in accordance with our results. In our study, the effectiveness was on average 55% at the intermolar gingival level and 43% at the canine gingival level; these results suggest, as reported in other studies [ , , ], that there is less movement of the root portion of the tooth compared to the cusp portion, at least at the canine and molar levels. It would therefore appear that, although a bodily movement is programmed in the ClinCheck ® , what is obtained is mainly a coronal tipping movement of the tooth. Kraviz et al. analyzed the predictability of Invisalign treatment with G3 material by superimposing initial and final models and showed that transverse expansion is not very accurate, with a predictability of 40.7%. The authors state that any type of movement has a predictability of 41%. However, it should be noted that these authors analyzed the effectiveness of expansion with aligners made of G3 material, while the present study analyzed the results obtained with the new SmartTrack ® material.
This difference could explain the better performance of the new material through which the expansive force is applied. Similar studies were performed by Lione et al. on the analysis of dental expansion movements in digital dental models. In agreement with the present study, they obtained greater expansion at the level of the upper first molars than at other teeth. In their study, linear and angular measurements were performed before treatment (T0), at the end of treatment (T1), and on the final virtual models (ClinCheck ® models), and significant differences were found for both linear and angular measurements of the maxillary canines, indicating little predictability . In another study, Lione et al. evaluated maxillary expansion with the Invisalign First System ® in growing subjects. Twenty-three patients with a mean age of 9.4 ± 1.2 years with a maxillary posterior transverse interarch discrepancy were included in the study. The discrepancy was obtained by calculating the difference between the maxillary intermolar width, measured between the central fossae of the maxillary first molars on each side, and the mandibular intermolar width, measured between the mesiobuccal cusps of the mandibular first molars on each side. Patients were treated without extraction with Invisalign First System ® clear aligners with no auxiliaries other than Invisalign ® attachments, and no interproximal enamel reduction (IPR) was planned during treatment, as in our protocol. The results of their study showed a significantly greater increase in width at the first primary molars compared to the second primary molars and primary canines. The maxillary first molars also showed the greatest expansion in mesial intermolar width, due to rotation of the tooth around its palatal root, which acts as a hinge during expansion. These results are consistent with ours in that the greatest expansion was obtained in the most posterior sectors and at the occlusal level; however, in our study, we did not consider both cusps of the molar, so it was not possible to assess whether rotation was present. This study has some limitations; for example, the amount of crowding, which could influence the effectiveness of the expansion treatment, was not considered, and the patients were not classified according to the amount of expansion needed in relation to crowding. For future research, it would be advisable to increase the sample size, consider different malocclusion groups, and include a control group treated with another type of appliance used for dento-alveolar expansion. In addition, other measures could be included to evaluate the vestibular inclination and rotation of the teeth as treatment effects, to confirm the promising results of the present study.
Experience has shown that certain movements cannot be achieved with aligners, but the actual limitations remain unclear, and previsualization of the result can often be misleading for clinicians and patients. In conclusion, the efficacy of maxillary arch transverse expansion averages 70% and is not related to the type of tooth considered but applies overall. Effectiveness is lower at the lingual level, with an average of 55% at the intermolar level and 46% at the canine level. Statistically significant differences were found between the efficacy at the cusp level and that measured at the most apical point of the palatal surface of the tooth, indicating that there is more tipping movement than bodily movement: the ClinCheck ® programs a bodily movement, whereas what is obtained is mainly a tipping movement.
|
TLIF Online Videos for Patient Education—Evaluation of Comprehensiveness, Quality, and Reliability | abac38bc-63d3-412e-bddb-41c79449ce22 | 10002268 | Patient Education as Topic[mh] | Transforaminal lumbar interbody fusion (TLIF) surgery is an established procedure to treat a wide range of lumbar spine pathologies, e.g., degenerative pathologies, trauma, and infection . Moreover, the volume of lumbar fusion surgery and consequently revision surgery is constantly increasing, thus imposing a significant socioeconomic burden . Martin et al. found that the volume of elective lumbar fusion increased by 62.3%, from 122,679 cases in 2004 to 199,140 in 2015, in the United States of America . As reviewed by Kaustubh et al., this may be attributed to a variety of reasons such as the aging of the population, patient expectations, improved anesthetic and perioperative management, as well as new surgical techniques leading to faster recovery and more favorable outcomes [ , , , ]. Over the last decade, technological advancements and, more recently, the COVID-19 pandemic have continuously led to the establishment and further development of telemedicine and internet-based patient education . As we have reviewed previously, numerous authors found that the majority of the North American population with access to the internet uses it to obtain information on health-related issues, and that the internet has become one of the most important sources of health education [ , , , , ]. Moreover, YouTube (Alphabet, Mountain View, CA, USA) has become one of the most influential websites in regard to health education and information, with more than 1 billion visitors monthly and almost three-quarters of the U.S. population using it [ , , ]. However, the content does not undergo peer-review before being uploaded; consequently, there is an immanent risk of obtaining wrong or misleading information [ , , , ]. Considering the increasing number of patients acquiring health-related information from YouTube (Alphabet, Mountain View, CA, USA), the company is taking actions to address this relevant issue . While various authors have assessed online information on spine surgical procedures and even lumbar fusion, to our knowledge, there is to date no study reviewing the online media content on TLIF surgery in particular [ , , , , ]. Given the variety of lumbar interbody fusion techniques such as TLIF, XLIF, OLIF, ALIF, and PLIF, and TLIF surgery being one of the most common surgical procedures for interbody fusion, the online media content on TLIF surgery needs to be carefully evaluated in regard to its quality, reliability, and even more importantly its comprehensiveness. Moreover, high-quality content strongly supports patients in making an informed decision as the videos can be re-watched, paused, etc., and may be watched in a less stressful environment, thereby enhancing adequate information uptake and processing. Eastwood et al. were able to show that extensive multidisciplinary preoperative patient education resulted in significantly better postoperative outcomes, thereby highlighting the crucial role of patient education in terms of successful surgical interventions . Moreover, we strongly believe that freely accessible online content that is comprehensive and of high quality would generally be a valuable tool to further improve patient education and subsequently postoperative outcomes. 
As observed by Phan et al., the majority of the most commonly accessed online patient education materials pertaining to surgical treatments of the spine exceeded the readability limits recommended by the American Medical Association and the National Institutes of Health, suggesting that patients would not be able to comprehend the provided content . This further highlights the need for high-quality yet easily comprehensible online content (a sketch of a standard readability metric is given below). Therefore, we aim to evaluate the comprehensiveness, reliability, and quality of online videos on TLIF surgery using established scoring tools for online media. Moreover, we aim to evaluate the suitability and reproducibility of these established online media scoring tools. Furthermore, we hypothesize that the currently available scoring tools are limited in their ability to provide an objective and reproducible assessment.
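Readability in such studies is typically quantified with grade-level formulas such as the Flesch–Kincaid grade level, checked against the roughly sixth-grade reading level recommended for patient materials. Below is a minimal sketch of that standard formula; the syllable counter is a crude heuristic and the sample sentence is invented, so this is illustrative rather than a validated implementation.

```python
import re

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)

    def count_syllables(word: str) -> int:
        # Crude heuristic: count vowel groups, discounting a trailing silent 'e'
        groups = len(re.findall(r"[aeiouy]+", word.lower()))
        if word.lower().endswith("e") and groups > 1:
            groups -= 1
        return max(1, groups)

    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

sample = ("The surgeon joins two bones in your lower back. "
          "This stops painful movement between them.")
print(f"estimated grade level: {flesch_kincaid_grade(sample):.1f}")
```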
Analogous to previous studies that evaluated online videos, this study's design was purely descriptive, and in a similar fashion, we used 3 search terms: "TLIF", "TLIF surgery", and "Transforaminal Interbody Fusion" on YouTube ( www.youtube.com , accessed on 25 October 2021) . The results were sorted by view count, and the 60 most-viewed videos from each search were evaluated further . Consequently, 180 videos were screened, and off-topic videos, duplicates, videos in a language other than English, and otherwise inadequate videos were excluded . Thirteen videos were excluded as they merely contained patient-reported outcome reports and could consequently be considered off-topic with regard to our study design; another five videos were excluded due to the use of a language other than English. Furthermore, we excluded 11 videos that either lacked any verbal or written description or explanation or merely depicted the surgical procedure. We further excluded two promotional videos and another two videos that focused on the comparison of different procedures. Another 18 videos were excluded because they covered surgical procedures other than TLIF surgery. Furthermore, all duplicates were eliminated. Finally, the screening and selection process resulted in 30 videos that met the inclusion criteria. We recorded the title of the video, the uniform resource locator (URL), the total number of views, the number of likes, the number of dislikes, the duration in seconds, and the source of each video. The sources were classified as MD (medical doctor), HC institution (healthcare institution), patient, and HC company (healthcare company). The resulting 30 videos were further assessed by 8 observers: 4 neurosurgeons and 4 orthopedic surgeons (R1-R8). All of the observers were trained and experienced spine surgeons and were fluent in spoken and written English. Analogous to previous studies, we used 3 acknowledged scoring and grading systems to assess the media content: the Global Quality Scale (GQS), the modified DISCERN tool, and the JAMA benchmark score [ , , ]. Additionally, the observers rated the videos on a subjective basis, as well as with regard to technical sound and video quality, using a scale ranging from 1 to 5 (1 = excellent to 5 = insufficient). The observers also determined whether advertisements were present (yes/no). In order to determine whether the videos could be suited for patient education, the observers evaluated the following 4 questions in a binary yes/no manner: "Is it easily understandable for laypersons?", "Are the risks/complications discussed?", "Is the procedure described adequately?", and "Is the rehabilitation described adequately?"
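The searches in this study were performed manually on the YouTube website. For reproducibility, an equivalent query could also be issued programmatically through the YouTube Data API v3, as sketched below; the API key is a placeholder, and programmatic results will not exactly match a logged-out browser search.

```python
from googleapiclient.discovery import build  # pip install google-api-python-client

youtube = build("youtube", "v3", developerKey="YOUR_API_KEY")  # placeholder key

# Mirror one of the study's searches: sort by view count, take the top results
response = youtube.search().list(
    part="snippet",
    q="TLIF surgery",
    type="video",
    order="viewCount",
    maxResults=50,  # API maximum per page; the study screened the top 60 per term
).execute()

for item in response["items"]:
    print(item["id"]["videoId"], item["snippet"]["title"])
```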
Data were stored and processed for further analysis in MS Excel (Microsoft Corporation, 2018. Microsoft Excel, available online: https://office.microsoft.com/excel (accessed on 6 June 2020)). Statistical analysis was carried out in GraphPad Prism (GraphPad Prism version 9.1.0 for Windows, GraphPad Software, San Diego, CA, USA, available online: www.graphpad.com (accessed on 6 June 2020)) and SPSS (IBM Corp. Released 2020. IBM SPSS Statistics for Windows, Version 27.0. Armonk, NY, USA: IBM Corp.). Descriptive analysis included means (metric variables) and percentiles (scores). Hypothesis testing (for differences in medians) employed Mann–Whitney U tests; association was estimated via Spearman's rho; and rater agreement was assessed using Fleiss' Kappa. Differences in distribution were tested via modified Chi-Square tests for multiple variables. An alpha of 0.05 was assumed to constitute statistical significance. Where appropriate, confidence intervals are reported, also using alpha = 0.05. In all cases, two-sided testing was performed.
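A minimal sketch of how the named tests line up in Python is shown below (using scipy and statsmodels rather than the Excel/GraphPad/SPSS toolchain the authors used). The ratings and view counts are random stand-ins, not the study data.

```python
import numpy as np
from scipy.stats import spearmanr, mannwhitneyu, kruskal
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(30, 8))   # 30 videos x 8 raters, GQS scores 1-5
views = rng.integers(9_000, 1_600_000, 30)   # stand-in view counts

# Fleiss' kappa expects per-subject category counts
counts, _ = aggregate_raters(ratings)
print("Fleiss' kappa:", round(fleiss_kappa(counts), 3))

# Association between median score and views (Spearman's rho)
rho, p = spearmanr(np.median(ratings, axis=1), views)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

# Difference between specialties (first 4 vs. last 4 raters)
print(mannwhitneyu(ratings[:, :4].ravel(), ratings[:, 4:].ravel()))

# Per-rater differences (Kruskal-Wallis across the 8 raters)
print(kruskal(*[ratings[:, j] for j in range(8)]))
```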
Eight surgeons (four orthopedic surgeons, four neurosurgeons) rated a total of 30 YouTube videos. At the time of rating, the videos had between 9188 and 1,530,408 views and between 0 and 3344 likes. The median rater assessment for all videos was "moderate quality" (GQS = 3). Poor quality (GQS = 1) was rated in 8.3%, "generally poor" (GQS = 2) in 26.7%, "moderate quality" (GQS = 3) in 35.0%, "good quality" (GQS = 4) in 27.1%, and "excellent quality" (GQS = 5) in 9.2% of cases. There was a significant difference between raters regarding their GQS (Kruskal–Wallis test p = 0.003), in that rater 4 (R4) assigned significantly lower scores than R2 (Dunn's correction for multiple comparisons p = 0.038), R5 (p = 0.015), and R8 (p = 0.012). In total, the neurosurgeons rated the videos higher than the orthopedic surgeons (Mann–Whitney U test p = 0.047). There was slight rater agreement on GQS, indicated by Fleiss' Kappa = 0.13 (p < 0.05). There was a strong and highly significant correlation between median GQS and the number of views (Spearman's rho = 0.48, p = 0.007), although only the individual ratings of rater 6 (Spearman's rho = 0.48, p = 0.007) and rater 7 (Spearman's rho = 0.49, p = 0.006) showed a significant and strong association with views. Median GQS also correlated with the number of likes, for which only the scores assigned by rater 6 were statistically significantly associated (Spearman's rho = 0.50, p = 0.005). Overall, there was a strong correlation between median GQS and views, suggesting that this YouTube metric could serve as a rough proxy when judging the quality of a video. The median DISCERN score was 2. In total, 19.7% of ratings were a score of 0, 30.0% a score of 1, 32.9% a score of 2, 13.8% a score of 3, 2.9% a score of 4, and only 1.3% a score of 5. Raters R6 (neurosurgeon) and R3 (orthopedic surgeon) differed significantly in their ratings from the others (Kruskal–Wallis test p < 0.001; multiple comparisons involving these two raters, corrected via Dunn's method, all p < 0.001). A Mann–Whitney U test showed no significant difference between neurosurgeons and orthopedic surgeons (p = 0.49). Rater agreement on the DISCERN score was poor (Fleiss' Kappa = −0.08). No association could be found between views or likes and the overall or individually assigned DISCERN scores. The median JAMA score assigned to the videos was 1. Overall, 10.0% of scores were 0, 48.3% were 1, 22.5% were 2, 11.7% were 3, and 7.5% were 4. All neurosurgeons and one orthopedic surgeon assigned median scores of 1, while two orthopedic surgeons assigned median scores of 3 and 4. These raters (R4 and R7) differed significantly from the other raters (Kruskal–Wallis test p < 0.001; Dunn's corrected multiple comparisons p < 0.001 for combinations with these two raters). Neurosurgeons assigned significantly lower JAMA scores than orthopedic surgeons (Mann–Whitney U test p < 0.001). Rater agreement on the JAMA score was poor (Fleiss' Kappa = 0.03). No associations were found between views or likes and the overall or individually assigned JAMA scores. When assessing the content based on the aforementioned questions, there was very poor rater agreement (Fleiss' Kappa for all questions < 0). Data are presented for each rater individually and as the median of all raters.
Based on median ratings, 80% of videos described the procedure adequately, and 55% were easily understandable by laypersons. However, only 10% of videos were perceived to describe rehabilitation adequately, and 83% to describe risks/complications adequately. There was no statistically significant correlation between the answers of individual raters or between surgical specialties. Subjective grades (1 = excellent to 5 = insufficient) were given to each video by each rater for overall, video, and sound quality. Overall quality grades had a median of 3 for both surgical specialties combined, with a 95% confidence interval of (2; 3). This grade was weakly correlated with both views (Spearman's rho = −0.39, p = 0.036) and likes (Spearman's rho = −0.39, p = 0.032) in its median value, and for rater 2 (rho = −0.43, p = 0.017), rater 4 (rho = −0.41, p = 0.022), and rater 5 (rho = −0.44, p = 0.023) individually. No significant difference between neurosurgeons and orthopedic surgeons could be detected (Mann–Whitney U test p = 0.21), but the histograms of the orthopedic surgeons appear more skewed to the right, indicating a higher number of worse grades. Overall, 7.5% of videos received a grade of 1, 33.3% grade 2, 33.3% grade 3, 19.2% grade 4, and 6.7% grade 5. Fleiss' Kappa for the subjective overall grade was 0.18, indicating slight agreement between raters. Spearman's rho indicated a very strong, highly significant correlation with GQS (rho = −0.90, p < 0.001). Median sound and video quality grades were both 2, with a 95% confidence interval of (1; 2). In total, 41.3% of videos were graded with "excellent" video quality, 32.9% with a grade of 2, 15.4% with grade 3, 7.5% with grade 4, and 2.9% with grade 5. In total, 45.8% were graded with "excellent" sound quality, 30.0% with grade 2, 11.3% with grade 3, 6.7% with grade 4, and 6.3% with grade 5. Both video (Spearman's rho = 0.62, p < 0.001) and sound (Spearman's rho = 0.49, p < 0.001) quality grades were strongly and highly significantly correlated with the subjective overall grade. When compared to the GQS, both video (Spearman's rho = −0.58, p < 0.001) and sound (Spearman's rho = −0.44, p = 0.015) quality showed a strong and statistically significant association with this form of scoring. In summary, both GQS and subjective grades showed a moderate to strong, statistically significant association with views and likes; these two criteria could be used by laypersons to judge the quality of video content. The subjective overall grade, and to a slightly lesser extent the GQS, showed strong correlations with video and sound quality.
Nowadays, patients do not rely merely on their doctors to obtain information on their medical conditions and, more specifically, on invasive procedures that they may be offered to treat those conditions. Although internet-based sources are easily and quickly accessible, their reliability and validity remain controversial and a major concern, especially for patients without any medical education [ , , , , ]. Considering the constantly increasing volume of lumbar fusion surgeries performed during the preceding decades, as well as concomitant technological advancements such as smartphones and the popularity of YouTube and other social media platforms, it is hardly surprising that patients use these new tools to acquire health-related information [ , , , ]. Nevertheless, the quality and reliability of the available online videos are highly variable due to the lack of a peer-review process. However, the company is starting to take action to address this relevant issue . Various authors have investigated the quality and reliability of YouTube videos on lumbar fusion or arthroplasty in general and, more specifically, on anterior lumbar interbody fusion (ALIF) or lateral lumbar fusion (LLIF); but to our knowledge, there are no studies that have evaluated online videos on TLIF surgery [ , , , , ]. Given that TLIF surgery is a very common surgical procedure, it is crucial to evaluate the available YouTube videos on TLIF with regard to quality, reliability, and comprehensiveness. Unlike other studies, we did not limit our scoring system to merely one or two scores. We used three acknowledged scoring and grading systems to assess the media content: the Global Quality Scale (GQS), the modified DISCERN tool, and the JAMA benchmark score [ , , ]. Furthermore, the videos were rated on a subjective basis as well as with regard to technical sound and video quality, and it was determined whether advertisements were present. Additionally, the videos were evaluated with regard to their suitability for patient education using the following four questions: "Is it easily understandable for laypersons?", "Are the risks/complications discussed?", "Is the procedure described adequately?", and "Is the rehabilitation described adequately?" All of these additional aspects gave us a substantial amount of information and profound insight into the current quality, reliability, and comprehensiveness of the available media. Another advantage of this study is the number of observers: four orthopedic surgeons and four neurosurgeons evaluated the videos, thereby providing a more objective overall assessment. We chose to involve a fairly large number of observers compared to other studies to compensate for any outliers. For example, R3 and R6 differed significantly in their ratings from the others. If we had been limited to these two raters, we would have come to completely different conclusions, which highlights the importance of a larger number of raters or observers. Interestingly, we found that orthopedic surgeons assigned significantly lower GQS and JAMA score ratings to the videos, which may be a coincidence or may reflect the specialties' different approaches to this topic. We found a strong and highly significant correlation between median GQS and the number of views, which may indicate good quality in this specific inquiry. However, we also observed that the GQS, as well as subjective grading, is strongly biased by technical sound and video quality.
These findings consequently suggest that there is a need for a more objective score that displays the information quality and the technical quality separately and more distinctly. In contrast to our study, various authors have found the information on YouTube on lumbar fusion to be of poor quality, while we determined the median rater assessment for the videos on TLIF surgery to be "moderate quality" [ , , ]. Interestingly, our literature review also yielded studies focusing on lateral lumbar interbody fusion, which found the corresponding media content on YouTube to be of moderate quality . We observed slight rater agreement on the GQS; however, we also found that neurosurgeons rated the videos significantly higher than the orthopedic surgeons. Unlike other studies that evaluated online videos, we found that the median GQS also showed a correlation with the number of likes . Regarding the DISCERN tool, we found that rater agreement was poor, and no association could be found between views or likes and the overall or individually assigned scores. Concerning the JAMA score, which evaluates aspects such as authorship, attribution, disclosures, and currency, we observed that the neurosurgeons rated the videos lower than the orthopedic surgeons. Analogous to our findings with the DISCERN tool, the rater agreement was poor, and no association could be found with views or likes. These findings, especially regarding rater agreement when using the different scoring systems, are highly interesting and, moreover, relevant for future studies in this field. The poor rater agreement on DISCERN reliability scores and JAMA scores appears especially crucial considering that various studies rely on either of these scores as the sole assessment tool. We consequently hypothesize that the outcome of such online media evaluation studies greatly depends on the chosen scores, which we found to show rather limited rater agreement. Consequently, we suggest that the GQS be included in future studies in this field and that the establishment of a new scoring system offering a more objective assessment be considered. We therefore conclude that the scores currently available to assess online media content are in fact not entirely suited for this task, and the corresponding results have to be interpreted with this limited capacity for reproducible assessment in mind. Moreover, the lack of suitable assessment tools is one of the main limitations of this study. Further limitations are that all of the applied scoring systems are based on the observer's subjective judgement, and that this study represents a snapshot in time, which may be controversial due to the dynamic structure of YouTube and the advances in online patient education . Additionally, we only included videos in English, and search results may vary due to demographic differences . Based on the content-related questions, we found that a median of 80% of videos described the procedure adequately, and 55% were easily understandable. However, only 10% of videos described rehabilitation. This indicates that there is a strong need for content that also addresses the rehabilitation process. Furthermore, we observed that potential risks and complications were discussed in only 10% of the videos, which we found to be a major pitfall that crucially limits the usability of these videos for patient education.
We observed that the majority of videos were graded as good or excellent in regard to their technical sound and video quality, and they showed a strong and statistically significant association with subjective overall grading and GQS.
Overall, we determined the available YouTube videos on TLIF surgery to be of moderate quality based on their GQS scores, which, like the overall subjective grades, showed a strong association with likes and views. In this specific case, a layperson may use the number of views or likes to identify good-quality content; however, this association seems to be topic-specific and rather coincidental. Furthermore, we found a strong correlation between technical video and sound quality and both the GQS and the overall subjective grade, which may, in this specific case, be used to identify high-quality content; however, it also indicates that the GQS may be relevantly biased by technical video and sound quality. This finding consequently emphasizes the need for a more objective grading system that potentially displays the technical quality and the information quality separately. The low inter-rater agreement with the JAMA score and the DISCERN reliability tool has to be taken into account in the design of future studies, especially if only one or two observers rate the content. Therefore, we advocate the use of the GQS, which at least showed slight rater agreement. Moreover, we found that the majority of videos address neither rehabilitation nor the potential risks and complications, which we consider crucial. Consequently, there is a need for more high-quality videos that also cover these aspects. Furthermore, we would like to highlight the importance of providing high-quality content that is easily comprehensible for the average patient. In conclusion, we advocate the establishment of a more suitable assessment tool for online media content. In the specific case of TLIF surgery, there is a relevant need for content that addresses the rehabilitation process in particular.
|
Organisational Impact of a Remote Patient Monitoring System for Heart Failure Management: The Experience of 29 Cardiology Departments in France | ebd68c4f-982d-4c13-af1d-cde39e5e3327 | 10002348 | Internal Medicine[mh] | Chronic heart failure (CHF) is a global pandemic that currently affects the lives of 64 million people worldwide and 2.3% of the French population . The condition is characterised by a high mortality rate in the post-discharge period and a high likelihood of hospital readmission for acute, decompensated heart failure . Remote patient monitoring (RPM) systems (such as Chronic Care Connect TM (CCC TM ) e-health solution) are now emerging as additional tools for CHF care management. In France, a national scheme for promoting and funding RPM (Expérimentations de Télémédecine pour l’Amélioration des Parcours en Santé (ETAPES)) was launched in 2014. For inclusion in ETAPES, RPM systems must meet strict specifications and must combine a digital device, medical telemonitoring, and therapeutic support . Although an evaluation of the device’s clinical benefit, impact on quality of life, and economic impact is obligatory, other consequences must also be explored—notably with regard to access to care, quality of care, and care organisation . Until now, these evaluations were mainly focused on assessing the medical and economic impacts . The assessment of a device’s organisational impact has lagged behind the other assessments because of the lack of a specific methodology and guidelines —even though this is essential for fully appraising the influence of medical technologies . In order to tackle this problem and include the organisational impact in the assessment process, the French National Authority for Health (Haute Autorité de Santé (HAS)) published a guide in December 2020 . The guide gave a definition of organisational impact (“an effect or consequence of the health technology on the characteristics and functioning of an organisation involved in the care process or the user’s life pathway”) and suggested criteria for measuring and justifying the RPM device’s effects in this respect. The organisational impact has now been included in the criteria evaluated by the French National Medical Device and Health Technology Evaluation Committee (Commission Nationale d’Evaluation des Dispositifs Médicaux et Technologies de Santé (CNEDiMTS)) . The organisational impact can be documented from various perspectives (that of the patient, the healthcare professional, the healthcare system, the hospitals, etc.) and by using various methods and data sources. The objective of the present study was to describe the organisational impact of CCC TM on CHF management from the perspective of healthcare professionals using the device.
2.1. Survey Design

An online survey was conducted among 31 French cardiology departments (CDs) known to use CCC™ for CHF management. The questionnaire was sent to all 31 public- or private-sector institutions known to have monitored at least 20 patients with CCC™ between 2018 and 2020. Data were collected between 16 April and 26 April 2021 using an online questionnaire. In each CD, only one questionnaire was completed; the survey was designed to be filled in by the team as a whole.

2.2. Chronic Care Connect™

Chronic Care Connect™ is a remote patient monitoring (RPM) solution for heart failure (HF) management composed of a class IIa medical device and non-medical human assistance. A connected scale allows daily weight collection, and a mobile application allows patients to record HF symptoms. One of CCC™'s unique features is the integration of a new, non-hospital-based stakeholder: a monitoring centre whose nurses are specially trained in remote monitoring and initially screen the alerts received. If an alert is judged to be relevant, it is sent to the patient's CD.

2.3. Study Measurements and Outcomes

The entire questionnaire was based on the organisational impact map for health technology assessment, where the standard criteria were applicable . This mapping process is a structured way of identifying and quantifying organisational impacts, in which three macro-criteria are divided into several sub-criteria. For macro-criterion 1, the sub-criteria covered time consumption, the speed or duration of care processes, the equipment and infrastructure used in the RPM process, and the skills needed by the stakeholders to implement the care process. For macro-criterion 2, the sub-criteria related to stakeholder training and skill transfers. Lastly, for macro-criterion 3, the sub-criteria covered impacts related to communication, society, and the environment. For each sub-criterion, the potential impact of CCC™ was assessed ; when a sub-criterion was not relevant for CCC™, "not applicable" was stated in the questionnaire. Several data sources and methods, including this survey among health professionals, are used to assess the organisational impact of each sub-criterion. All the results concern only the organisational impact of CCC™. The structure of this impact map is sketched below.
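To make the questionnaire's underlying structure explicit, the snippet below models the impact map as a simple Python data structure. The sub-criterion labels are paraphrased from the description above, not quoted from the official HAS guide, and the applicability filter mirrors the "not applicable" option in the questionnaire.

```python
# Hypothetical sketch of the HAS organisational-impact map used to build the questionnaire.
# Labels are paraphrased from the text above, not quoted from the official HAS guide.
IMPACT_MAP = {
    "Macro-criterion 1: impacts on the care process": [
        "time consumption",
        "speed or duration of the care process",
        "equipment and infrastructure used",
        "skills needed to implement the care process",
    ],
    "Macro-criterion 2: impacts on stakeholder abilities and skills": [
        "stakeholder training",
        "skill transfers",
    ],
    "Macro-criterion 3: wider impacts": [
        "communication",
        "society",
        "environment",
    ],
}

def applicable_sub_criteria(assessment):
    """Drop sub-criteria marked 'not applicable' for the device under evaluation."""
    return [name for name, applicable in assessment.items() if applicable]

# Example: one sub-criterion of macro-criterion 2 judged not applicable
example = {"stakeholder training": True, "skill transfers": False}
print(applicable_sub_criteria(example))  # ['stakeholder training']
```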
2.4. Statistical Analysis

Most of the statistical analyses were descriptive. All statistical analyses were performed with SAS ® software (version 9.4, SAS ® Institute Inc., Cary, NC, USA).
3.1. Participants

A link to the online survey was sent to the 31 CDs equipped with the CCC™ device and with sufficient experience of its use (i.e., RPM with at least 20 patients). Of the 31 CDs contacted, 29 (94%) completed the study questionnaire. In most CDs, several different healthcare professionals answered the survey: most were cardiologists (27 out of 29) and nurses (18 out of 29). The response rates per question were 100% unless otherwise stated.

3.2. Characteristics of the Participating Cardiology Departments

From the system's date of deployment in France (1 February 2018) to the survey closure date (26 April 2021), the 29 participating CDs monitored 63% of the patients remotely monitored by CCC™ over the same time frame. On average, 122 patients per CD were monitored . Twenty-four of the CDs (83%) were in public-sector institutions, and five (17%) were in private-sector institutions. The CDs had used the device for an average of 23 months; 14 (48%) had used it for more than 24 months, and 7 (24%) had used it for less than 12 months.

3.3. Impacts of the Health Technology on the Care Process (Macro-Criterion 1)

3.3.1. Impact on the Initiation of the Care Process

Healthcare professionals were asked about the time interval between cardiac decompensation and the initiation of medical care. Most of the respondents declared that RPM for patients with CHF was associated with a shorter time interval: 21 of the 29 (72%) CDs "absolutely agreed" and 7 (24%) "mostly agreed".

3.3.2. Impact on the Pace or Duration of the Process

The use of RPM might modify the pace and overall duration of the care process, as it includes alert management, which necessitates additional time. Alert management includes several phases: acknowledging the alert, making a diagnosis, responding to the alert, and triggering the intervention . The survey results showed that nurses performed most of the tasks in alert management: nurses were primarily involved in overseeing the first phase (acknowledgement of the alert) in 23 of the 24 (96%) CDs, whereas cardiologists were primarily involved to a lesser extent (in 16 of the 29 CDs (55%)). Twenty-two of the twenty-nine CDs (76%) chose to deal with the alerts Monday to Friday, during office hours. In the seven other CDs, the various management procedures were explained by the low number of remotely monitored patients. Depending on the number of patients and the type of healthcare professional involved, alert management may require dedicated time. In CDs with fewer than 50 patients being remotely monitored, the average time spent on this task was 4.1 h per week for nurses and 1.3 h per week for cardiologists. In CDs with more than 50 patients being remotely monitored, the average time spent was 14.3 h per week for nurses and 1.3 h per week for cardiologists. Regarding the overall duration of the care process, RPM was initially prescribed for six months. In 28 of the 29 CDs (97%), this prescription was renewed. In 12 CDs (41%), the proportion of patients with a renewed prescription was over 80%. Renewal of the prescription was prompted by unstable disease (in 23 of the 29 CDs (79%)) or a patient request (again in 23 of the 29 (79%)).

3.3.3. Impact on Process Timing or Content

The survey results demonstrated that all participating CDs had specifically changed their organisational structure in response to alert management and thus changed the care process content . Sixteen of the twenty-nine CDs (55%) had specifically allotted time for outpatient consultations after an emergency alert. Fourteen of the sixteen (88%) had started these consultations at the same time as (or shortly after) the introduction of RPM. Eighteen of the CDs (62%) had created an organisational structure dedicated to CHF medication titration ; only a third of the CDs had implemented it before RPM deployment. In 16 of the 18 CDs (88%), the medication was titrated during a face-to-face consultation; however, six of the 18 had the option to do it over the phone. To avoid admission to the emergency department (ED), 25 of the 29 CDs (86%) set up a procedure for direct emergency admission to the CD. In 21 of the 25 CDs (84%), this system had been implemented at the same time as (or shortly after) the introduction of RPM.

3.3.4. Impact on the Organisation of Human Resources

The implementation of RPM for patients with CHF requires qualified, trained human resources. In 24 of the 29 CDs (83%), a dedicated RPM team had been set up. Of the five CDs without one, two (40%) stated that they were not monitoring enough patients to justify a dedicated team, and two others (40%) declared that they lacked funding. Whether or not a team was assigned, at least one cardiologist was involved in RPM in all questioned CDs, and at least one nurse was involved in 24 of the 29 CDs (83%). On average, 1.3 healthcare professionals (full-time equivalents (FTEs), regardless of their role) per CD were involved in RPM (median: 1). For the period from June 2019 to June 2020, the mean number of remotely monitored patients per FTE healthcare professional was 74 (median: 38; range: 11–340).

3.3.5. Impact on the Allocation of Materials and Equipment

Along with human resources, the survey also included questions on equipment. According to the healthcare professionals, the use of RPM did not require any additional equipment because CCC™ relied on the CD's existing computers (according to 18 of the 24 (75%) healthcare professionals who answered this item), and the CD had implemented a dedicated telephone line (according to 16 of the healthcare professionals). Only four CDs (14%) had funded the creation of a dedicated remote monitoring room.

3.3.6. Impact on the Continuity of Care

The healthcare professionals were also asked about the impact of RPM on the continuity of care and mainly reported difficulties when team members were on leave. Twenty of the 29 (76%) respondents declared difficulties in ensuring care continuity: 16 (55%) encountered some difficulties, and 4 (14%) encountered many difficulties.

3.4. Impacts of the Health Technology on the Abilities and Skills Required of Stakeholders to Implement the Care Process (Macro-Criterion 2)

3.4.1. Impact on the Skills Required of Stakeholders

To manage RPM in a CHF setting, training was required for both healthcare professionals and patients . Patients were mainly trained to use the device (i.e., the smart scale and the tablet computer) but also had access to a disease management program supervised by a nurse to help them manage their disease daily. According to the survey respondents, this training took an average of 4 h per patient over a 12-month period. Thus, training in disease management for 50 patients would take a total of 6 weeks per FTE healthcare professional. Most of this training was delivered by nurses (in 22 of the 29 CDs (76%)). Disease management training was sometimes delivered by the device supplier (11 out of 29 (38%)), but this was mainly due to a lack of human resources in the CDs (mentioned by 10 of the 11 respondents (91%)). In 19 of the 29 CDs (66%), some or all of the healthcare professionals involved in RPM received specific training. In some cases, this training was delivered as part of continuing professional development on CHF or disease management.
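The six-week figure follows directly from the per-patient training time reported above. As a rough back-of-the-envelope check (assuming a 35-hour statutory French working week, an assumption not stated in the survey): 50 patients × 4 h = 200 h, and 200 h ÷ 35 h/week ≈ 5.7, i.e., roughly 6 weeks per FTE.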
3.4.2. Impact on Physician-to-Nurse Delegation of Duties

In 15 of the 29 CDs (52%), the organisational structure implemented for RPM allowed cardiologists to delegate certain medical procedures to nurses. In 6 CDs (21%), task delegation was part of a collaborative agreement with the health authorities: for example, nurses were allowed to manage alerts, refer patients, and perform follow-up consultations for medication titration in the absence of a cardiologist. In the other 9 CDs (31%), task delegation was mostly limited to alert screening (mentioned five times), the prescription of standard laboratory tests (mentioned three times), and changes in treatment supervised by a cardiologist (mentioned three times). The device also demanded skill transfers: the nurses contributed significantly to the administrative work and the coding of medical procedures.

3.4.3. Impact on the Coordination between the Stakeholders

A New, Non-Hospital-Based Stakeholder

One of CCC™'s specific features is the introduction of a new, non-hospital-based stakeholder into the care process. Twenty-six of the twenty-nine (90%) survey participants stated that the existence of a monitoring centre with a dedicated team of nurses was the main reason for choosing CCC™. The respondents considered that the alerts screened by the monitoring centre were relevant (mean relevancy score: 8.2 out of 10). The equipment provided and the manufacturer's experience in the field of RPM were also mentioned as factors that facilitated the implementation of this new organisational structure (according to 22 (76%) and 17 (59%) CDs, respectively).

Coordination between Ambulatory Care and Hospital Care

RPM requires coordination between hospital stakeholders and those in ambulatory care, including general practitioners (GPs). The respondents noted that GPs did not have direct access to the RPM software. The GPs were sent updates about the patients' follow-up by the hospital staff, usually by e-mail (according to 22 of the 28 respondents (79%)) or by phone (22 out of 28 (79%)). The respondents stated that the implementation of RPM helped to improve coordination with primary care: 8 of the 29 (28%) agreed absolutely with this statement, and 11 (38%) agreed somewhat.

Impact on Healthcare Professionals' Working Conditions and the Patients' Quality of Life

Fourteen of the twenty-nine (48%) respondents agreed that RPM could also impact the healthcare professionals' working conditions. All fourteen considered that the organisational structure implemented for RPM helped to improve the healthcare professional-patient relationship, gave the healthcare professionals more independence, and increased their level of satisfaction. However, twelve of the fourteen (86%) respondents also reported a decrease in the speed and quantity of work to some extent. According to the surveyed healthcare professionals, RPM had a positive impact on the patients' quality of life and level of satisfaction: 16 of the 29 (55%) respondents considered that quality of life was very much better, and 11 (38%) considered that it was somewhat better.
4. Discussion

The present survey of hospital-based healthcare professionals' opinions provided a description of the organisational implementation of CCC™ for RPM in CHF management. This survey stands out because it used a standardised method to directly address all the organisational impacts (whether positive or negative) perceived by a representative panel of professionals with significant expertise in CHF management and RPM. Furthermore, it should be possible to apply this method to other health technologies in other contexts. In previous studies, by contrast, the organisational impact was less explored and poorly documented, partly owing to the lack of a standardised method. Most studies on the impact of health technologies have referred only to certain organisational impacts, such as the effects on human resources and organisational structures .

The results show that the CDs restructured their working practices over time. Most had implemented a new organisational structure upon or shortly after the implementation of RPM for decompensations of CHF. Twenty-four of the CDs (83%) had created a dedicated RPM team. More than half of the CDs had created specific outpatient consultations for patients with an emergency alert; in fourteen of the sixteen (88%), this initiative was taken upon or shortly after the introduction of RPM. Eighteen of the twenty-nine CDs (62%) had introduced specific procedures for medication titration; twelve of the eighteen (67%) did this after RPM had been implemented. Twenty-five of the twenty-nine CDs (86%) admitted patients directly and thus avoided an ED admission. Even though the CDs had changed their organisational structure over time, the survey results highlighted differences from one department to another. These differences were mainly related to each team's level of familiarity with RPM and the number of remotely monitored patients. Healthcare professionals were generally satisfied with the introduction of a new stakeholder (a monitoring centre with a dedicated team of nurses) for some care management tasks. Moreover, RPM does not demand a substantial investment in equipment; only computers and a phone line are needed to manage the patients' alerts. However, in the short term, RPM requires dedicated staff time to be set aside, and this can differ from one healthcare professional to another. Cardiologists spent an average of 2 h a week on alert management, whereas nurses spent an average of 10 h a week on the same task. Nevertheless, the literature shows that, in the long term, RPM with an appropriate organisational structure is associated with shorter lengths of stay in the ED and in hospitals in general, which might free up time for patients and hospital staff . Our survey also demonstrated that RPM led to more training and more skill transfer for the participating healthcare professionals. Lastly, the healthcare professionals stated that the changes in organisational structure improved the quality of patient care. Most of the respondents reported that care provision took less time, with a smoother care pathway and a better quality of life for the patients.

The main limitation of the present study is embedded in its design: the results reflect the opinions of hospital-based healthcare professionals from a relatively small sample of French CDs. Indeed, only CDs already using CCC™ were targeted, which already limits the survey pool, and among these, only CDs monitoring at least 20 patients were included. Although this was the most relevant choice in this situation, the results cannot be extrapolated to RPM devices in general. Moreover, as there is no scoring system in the questionnaire, it is impossible to establish a hierarchy of the mentioned organisational impacts. Secondly, some of the HAS' criteria could not be adapted for the assessment of CCC™. The "mapping" technique is a robust tool for identifying organisational impacts but cannot be used directly to assess health technology devices. Lastly, impacts that were not thought to be relevant to CCC™ were not assessed (i.e., macro-criterion 3). Given the great variety of organisational structures highlighted by our survey, it will now be necessary to assess the corresponding clinical and economic impacts on patients with CHF and to disseminate the best-performing practices. To this end, a quantitative study is currently being performed using the French national healthcare database (Système National des Données de Santé), which should complement the qualitative data on healthcare professionals' perceptions collected in the present study. The quantitative study will enable the assessment of other organisational impacts (such as the hospital readmission rate and the lengths of stay in the ED or in the hospital), as well as the clinical and economic impacts, as a function of the CD's organisational structure and human resources.
5. Conclusions

The present survey is the first to have assessed the organisational impact of the implementation of the CCC™ RPM device for CHF management. The results highlighted the variety of organisational structures, which tend to become more structured with the use of the device.
Three Experimental Common High-Risk Procedures: Emission Characteristics Identification and Source Intensity Estimation in Biosafety Laboratory | 9626d27d-24b4-4feb-895a-9e727f2c3d53 | 10002466 | Microbiology[mh] | The frequent occurrence of infectious diseases such as novel coronavirus and SARS-CoV-2 has seriously affected human health and the social economy [ , , , ]. There are numerous reports on the transmission of infectious microbes in the form of aerosols [ , , ]. Biosafety laboratories have a set of preventive measures required for handling dangerous biological agents in a safe , reliable, and closed environment, and is the main location for studying unknown microbes . However, in the course of the research, due to accidents or the carelessness of operators , highly contagious microbes will spread to the surrounding environment in the form of aerosols , presenting a large exposure risk to researchers. Research has shown that 86.6% of operations can cause both microbe aerosols and unexplained laboratory infections that may be caused by the diffusion of microbe aerosols in the air, according to 276 types of operational testing in laboratories. Therefore, detailed information on the characteristics of aerosols in different experimental procedures is essential for assessing human health and the environment, as well as the source identification and apportionment of these particles. Risk assessment is the process of evaluating the risks associated with working with hazards. Factors such as the quantity and concentration of infectious substances, pathogenicity of biological agents, and possibility of aerosol generation in the working process should be considered in the evaluation process . A quantitative analysis of bioaerosol concentration produced by centrifuge centrifugation and freeze-dried powder being dropped in biosafety laboratories has been reported . To further reduce the risk of assessment, many studies chose Serratia marcescens, which is harmless to the human body, as a substitute for quantitative analysis for reduction experiment simulations . Zhuang et al. used Serratia marcescens as the model bacteria for experimental verification. Based on this, they used a numerical simulation method to explore the spatial transport and deposition behavior of a biosafety laboratory pollution source after leakage and determined the most serious pollution location in the vortex area and high-concentration area of pollutants. Long et al. carried out a biosafety risk assessment and control of laboratory tests, except for nucleic acid tests, in the clinical laboratory of a COVID-19-designated hospital, emphasizing the high risk of bioaerosol transmission. Wen used Serratia marcescens in a biosafety laboratory as a replacement for high-risk microbes and simulated experimental operations such as pipetting and high-speed centrifugation. The aerosol concentration generated in each experimental activity was quantitatively analyzed. Li et al. summarized the risk factors for the generation of bioaerosols in the experimental activities, where sample spill, sample drop, and injection were among the high-risk factors. However, there are few reports on a biosafety assessment based on the particle size segregation and source characteristics of bioaerosols produced by high-risk factors in advanced biosafety laboratories. In a laboratory environmental risk assessment, the source of pathogenic microorganisms is the decisive factor . 
Afshari detected 13 different particle sources in a 32 m³ full-size chamber and quantified the emission of ultrafine and fine particles for the first time . Clemente simulated the release of hazardous nanoparticle material in a specially designed 13 m³ stainless steel vessel under accidental conditions. Many studies have designed facilities to study and verify the indoor environments in which the treated material is released . However, none of these studies have addressed aerosols released from the source itself. Studies have pointed out that research on source intensity plays an important role in reducing air pollution . Mei established a probability model based on Markov chains to simulate the transport and diffusion of air pollutants released by pollution sources, but the model did not consider the effect of temperature and humidity on pollutant diffusion. A Gaussian plume model is often used to study the source intensity and spatial concentration of atmospheric pollutant emissions . Compared with other models, temperature and humidity are included among its independent variables, which further improves the accuracy of the spatial concentration prediction of bioaerosols [ , , , ]. However, there are few reports assessing the risk of experimental operations from the perspective of the source, and few studies have included aerosol source intensity in the risk assessment.

In this study, the characteristics of aerosol release sources and the spatial distribution of the concentrations produced by typical risk factors, such as sample spill, injection, and sample drop, were investigated through experiments in an exposure chamber. Concentration-monitoring experiments of the different risk factors over time were carried out in this chamber. The main work and contributions are as follows: (1) three common high-risk factors (sample spill, injection, and sample drop) were experimentally reproduced in a small chamber, and the concentration and particle size segregation of the bioaerosols they produced were sampled and monitored; (2) the relationship between risk factors, concentration, and particle size segregation was quantitatively analysed, and the characteristics of each risk factor's release source were determined; (3) a Gaussian plume model was used to calculate the pollution source intensity of each hazard factor; (4) a risk assessment of the high-risk factors in biosafety laboratories was conducted from the source, and effective protection suggestions were put forward, providing a reference for risk prediction.
2.1. Experimental Method

2.1.1. Measurement Instruments

Aerosol sampling. Bioaerosol samples were collected using the Anderson six-stage sampler, which is a common sampling method . The sampling device consists of 6 stages according to aerodynamic diameter: 0.65–1.1 μm, 1.1–2.1 μm, 2.1–3.3 μm, 3.3–4.7 μm, 4.7–7.0 μm, and >7.0 μm . Aerosol samples were collected on LB nutrient agar and incubated at 37 °C for 24 h, then cultured at 27 °C for a further 24 h until the colonies turned bright red, after which they were counted . During sampling, several colonies may be superimposed in a single sampling hole, biasing the count; we therefore applied a positive-hole correction . The bioaerosol concentration was calculated as follows:

$$P_r = N\left(\frac{1}{N} + \frac{1}{N-1} + \frac{1}{N-2} + \cdots + \frac{1}{N-r+1}\right) \quad (1)$$

$$C_a = \frac{P_r}{Q \times T} \times 1000 \quad (2)$$

where $P_r$ is the corrected colony number; $r$ is the actual number of colonies; $N$ is the number of holes at each stage of the sampler ($N$ = 400); $C_a$ is the aerosol concentration, CFU/m³; in Eq. (2), $P_r$ is the sum of the corrected colony counts over the six Petri dishes; $Q$ is the collection flow rate, 28.3 L/min; and $T$ is the sampling time, min.

In addition to impact sampling, settled bacteria were counted on settle plates, and the Omeliansky conversion formula was used to convert the settled count into an airborne concentration. This formula assumes that the number of bacteria deposited on 100 cm² of medium surface in 5 min is equivalent to that contained in 10 L of air . The formula is as follows:

$$C = \frac{100}{A} \times \frac{5}{t} \times \frac{1000}{10} \times N \quad (3)$$

where $C$ is the airborne concentration, CFU/m³; $A$ is the area of the plate used, cm²; $t$ is the exposure time of the plate, min; and $N$ is the number of colonies on the plate, CFU.

Particle sampling. In this study, a TSI optical particle sizer (model 3330) was used to monitor the released particles. The sampling flow rate was 10 L/min, and the 16 sampling channels were bounded at particle sizes of 0.3/0.4/0.5/0.6/0.7/0.9/1.1/1.4/1.7/2.1/2.7/3.3/4.1/5.2/6.5/8.1/10 μm.
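To make Eqs. (1)–(3) concrete, a minimal Python sketch is given below. It is illustrative only (not the authors' code); the stage counts and plate area in the example are hypothetical.

import math

def positive_hole_correction(r: int, n_holes: int = 400) -> float:
    """Eq. (1): corrected colony count P_r for r observed colonies."""
    if not 0 <= r <= n_holes:
        raise ValueError("observed colonies must lie in [0, n_holes]")
    # P_r = N * (1/N + 1/(N-1) + ... + 1/(N-r+1))
    return n_holes * sum(1.0 / (n_holes - k) for k in range(r))

def impactor_concentration(colonies_per_stage, t_min: float, q_lpm: float = 28.3) -> float:
    """Eq. (2): airborne concentration C_a (CFU/m^3) from the six stage counts."""
    p_r = sum(positive_hole_correction(r) for r in colonies_per_stage)
    return p_r / (q_lpm * t_min) * 1000.0  # convert CFU/L to CFU/m^3

def settled_concentration(n_colonies: float, area_cm2: float, t_min: float) -> float:
    """Eq. (3), Omeliansky conversion: CFU/m^3 from a settle-plate count."""
    return (100.0 / area_cm2) * (5.0 / t_min) * (1000.0 / 10.0) * n_colonies

if __name__ == "__main__":
    stages = [12, 25, 40, 33, 18, 9]  # hypothetical counts after a 10 min sample
    print(f"C_a = {impactor_concentration(stages, t_min=10):.0f} CFU/m^3")
    # hypothetical 10 cm diameter plate (~78.5 cm^2) exposed for 5 min with 30 colonies
    print(f"C   = {settled_concentration(30, area_cm2=78.5, t_min=5):.0f} CFU/m^3")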
2.1.2. Material

Serratia marcescens is a model bacterium commonly used in laboratories . Its diameter is approximately 0.5–0.8 μm; it causes very little harm to humans and animals, has no spores, and is easy to distinguish from other miscellaneous bacteria. In addition, the strain used in this study produces a blood-red pigment, prodigiosin, allowing easy tracking and identification . Serratia marcescens was stored at −80 °C, and 10 mL of the microorganism was activated in 20 mL of nutrient broth at 37 °C and 250 rpm; after 24 h, the culture was streaked onto nutrient agar plates and incubated at 28 °C for 24 h. A single colony was picked, inoculated into nutrient broth, and cultured at 37 °C. A standard inoculum of 1 × 10⁹ CFU/mL was established using nutrient broth and kept at −80 °C . Nutrient agar composition: peptone 1%, beef extract 0.3%, sodium chloride 0.5%, agar 1.5–2.0%, prepared with distilled water, pH 7.2–7.4. Nutrient broth composition: peptone 1%, beef extract 0.3%, sodium chloride 0.5%, prepared with distilled water, pH 7.2–7.4 .

2.1.3. Experimental Design

In this study, three high-risk experimental factors were investigated: (I) Sample spill: the operator used a pipette to aspirate and mix 30 mL of concentrated bacterial solution; when pipetting, the operator had to avoid forming bubbles and splashes, and pipetting was performed every 5 s for 5 min. (II) Injection: the operator ejected a syringe containing 2 mL of bacterial solution into the air once every minute, 5 times in total; this operation simulates accidental jetting during an animal injection. (III) Sample drop: a conical flask containing 30 mL of bacterial solution was dropped from 1.2 m at 45° to the ground.

Air samples were collected for 10 min with the Anderson six-stage sampler at a flow rate of 28.3 L/min ( ) [ , , ]. Samples obtained from the sampler were taken to the laboratory for culture less than 4 h after sampling. The particle counter sampled at 40 cm above the floor, as shown in , with measurements taken every minute. Each group of the above experiments was repeated 6 times.

This study was carried out in a glass chamber with dimensions of 1.5 m × 1.5 m × 2.0 m (length × width × height) ( ). The dispersal chamber is a qualified chamber with a tightly sealed door and walls, a specified inflow of high-efficiency particulate air (HEPA)-filtered air, and a controlled outflow. During the experiments, the clean-room temperature was set at 26.0 °C and the relative humidity at 50% . The environmental parameters of the chamber were monitored as shown in . The ventilation system effectively kept the experimental environmental parameters stable; in addition, the vertical ventilation system could quickly eliminate aerosols from the chamber. Before each experiment, the chamber was disinfected by ultraviolet lamp sterilisation and alcohol wiping, and the Anderson six-stage sampler was run for 10 min as a blank control.

2.2. Mathematical Method

2.2.1. Source Intensity

The source intensity (Q) is defined as the number of bacteria released by the infectious source per second (CFU/s) . The Gaussian plume model is commonly used to calculate the diffusion concentration of pollutants continuously discharged into the air from a point source . In this study, the diffusion concentration was measured experimentally, and the Gaussian plume model was used to retrieve the aerosol source intensity . The Gaussian diffusion model rests on four assumptions: (1) the pollution point source discharges uniformly, stably, and continuously; (2) air pollutants obey conservation of mass during diffusion; (3) the wind direction in the diffusion area is uniform and stable; and (4) the pollutant concentration follows a normal distribution in both the horizontal and vertical directions. The core formula is as follows:

$$C(x, y, z) = \frac{Q}{2\pi u \delta_y \delta_z} \exp\left[-0.5\left(\frac{y}{\delta_y}\right)^2\right] \cdot \left\{\exp\left[-0.5\left(\frac{z-H}{\delta_z}\right)^2\right] + \exp\left[-0.5\left(\frac{z+H}{\delta_z}\right)^2\right]\right\} \quad (4)$$

where $Q$ is the pollutant discharge rate per unit time, CFU/s; $H$ is the effective height of the pollution source, m; $u$ is the average wind speed at the pollution source, m/s; $y$ is the horizontal coordinate perpendicular to the x-axis, m; $z$ is the vertical coordinate, m; and $\delta_y$, $\delta_z$ are the diffusion coefficients in the horizontal (y) and vertical (z) directions, m.
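Because Eq. (4) is linear in Q, the source intensity can be retrieved by dividing a measured concentration by the unit-source concentration at the sampler location. The following is a minimal Python sketch of this inversion (not the authors' code; all numerical values are hypothetical illustrations, and the dispersion coefficients are taken as already evaluated at the sampler's downwind distance):

import math

def plume_concentration(q, y, z, u, h, sigma_y, sigma_z):
    """Eq. (4): concentration (CFU/m^3) at a receptor for source strength q (CFU/s)."""
    lateral = math.exp(-0.5 * (y / sigma_y) ** 2)
    vertical = (math.exp(-0.5 * ((z - h) / sigma_z) ** 2)
                + math.exp(-0.5 * ((z + h) / sigma_z) ** 2))  # ground reflection term
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

def source_intensity(c_measured, y, z, u, h, sigma_y, sigma_z):
    """Invert Eq. (4): since C is linear in Q, Q = C_measured / C(Q = 1)."""
    return c_measured / plume_concentration(1.0, y, z, u, h, sigma_y, sigma_z)

if __name__ == "__main__":
    # Hypothetical chamber values: sampler at 0.4 m height on the plume axis,
    # source at 0.8 m, measured concentration 3.0e3 CFU/m^3, air speed 0.1 m/s.
    q = source_intensity(c_measured=3.0e3, y=0.0, z=0.4, u=0.1, h=0.8,
                         sigma_y=0.2, sigma_z=0.15)
    print(f"Estimated source intensity Q = {q:.1f} CFU/s")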
2.2.2. Data Statistics

In this study, three parallel control experiments were conducted for each experimental group; the collected data were proofread using the 3-sigma method, and data with large errors were calibrated or rejected. SPSS 25.0 was used for statistical analysis; p < 0.05 was considered statistically significant, and all tests were two-tailed. One-way analysis of variance tests were used. For data and continuous variables that did not conform to a normal distribution, the non-parametric Wilcoxon rank sum test was used to test the aerosol-emission-related factors according to their distribution.
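As an illustration of this screening and testing pipeline, here is a minimal Python sketch (the study itself used SPSS 25.0; NumPy/SciPy are assumed here purely for demonstration, and all data are hypothetical):

import numpy as np
from scipy import stats

def three_sigma_filter(x):
    """Reject values lying more than 3 standard deviations from the mean."""
    x = np.asarray(x, dtype=float)
    mu, sd = x.mean(), x.std(ddof=1)
    return x[np.abs(x - mu) <= 3.0 * sd]

rng = np.random.default_rng(0)
series = rng.normal(1000.0, 50.0, size=20)  # hypothetical minute-by-minute counts
series[7] = 5000.0                          # simulated spurious reading
clean = three_sigma_filter(series)          # the spike falls outside mu +/- 3 sd

# Wilcoxon rank-sum test between two hypothetical risk-factor groups (CFU/m^3).
spill = [3100, 2950, 3200, 3050, 2980, 3120]
drop = [5200, 5500, 5100, 5400, 5300, 5250]
stat, p = stats.ranksums(spill, drop)
print(f"kept after 3-sigma screening: {clean.size} of {series.size}")
print(f"Wilcoxon rank-sum: statistic = {stat:.2f}, p = {p:.4f}")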
2.1.1. Measurement Instruments Aerosol sampling Bioaerosol samples are collected using the Anderson six-stage sampler, which is a common sampling methods . The sampling device consists of 6 stages according to the dynamic diameter: 0.65–1.1 μ m, 1.1–2.1 μ m, 2.1–3.3 μ m, 3.3–4.7 μ m, 4.7–7.0 μ m, and >7.0 μ m . Aerosol samples were collected using LB Nutrition Agar and incubated at 37 ∘ C for 24 h, then cultured at 27 ∘ C for 24 h until colonies turned bright red. After, they were counted . When sampling, the superposition of multiple colonies may occur in a sampling well, resulting in deviation in counting. Therefore, we performed a positive hole correction . The method used for calculating the bioaerosol concentration is as follows: (1) P r = N × 1 N + 1 N − 1 + 1 N − 2 + ⋯ + 1 N − r − 1 (2) C a = P r Q × T × 1000 where P r is the corrected colony number; r is the actual number of colonies; N Number of holes at each level of the sampler, N = 400; C a is aerosol concentration, P r is the sum of colony number corrected on six Petri dishes; Q is the collection flow rate 28.3 mL/min; T is the sampling time, min. In this study, impacted bacteria were sampled, and the austenitic transformation method was used to change the concentration of sedimentation bacteria. The method was used to investigate the amount of bacteria deposited on a 10 cm medium surface in 5 min, which was equivalent to 10 L of air . The formula is as follows: (3) C = 100 A × 5 t × 1000 10 × N where C is the number of airborne aerosols, CFU/m 3 ; A is the area of the plate used, cm 2 ; t is the exposure time of the plate, min; N is the number of the colony on the plate, CFU. Particle sampling In this study, TSI (3330) was used to monitor the released particles, the sampling flow was 10 L/min, and the particle sizes of the 16 sampling channels were 0.3 /0.4 /0.5 /0.6 /0.7 /0.9 /1.1 /1.4 /1.7 /2.1 /2.7 /3.3 /4.1 /5.2 /6.5 /8.1 /10 μ m. 2.1.2. Material Serratia marcescens is a model bacterium commonly used in laboratories . Its diameter is approximately 0.5–0.8 μ m it causes very little harm to humans and animals, it has no spores, and it is easy to distinguish from other miscellaneous bacteria. In addition, the strain used in this study can produce blood pigment—prodigiosin—for easy tracking and identification . Serratia marcescens was stored at −80 °C, and 10 mL of the microorganism was activated in 20 mL of Nutrient Broth at 37 °C and 250 rpm; after 24 h, media were striated in Nutrient Agar plates and incubated at 28 °C for 24 h. A single colony was taken and inoculated into the Nutrient Broth and cultured at 37 °C. A standard inoculum of 1 × 10 9 CFU/mL was established using NB and kept at −80 °C . Nutrient AGAR composition: peptone 1%, beef extract 0.3%, sodium chloride 0.5%, AGAR 1.5–2.0%, distilled water preparation. pH 7.2–7.4. Nutritious broth composition: peptone 1%, beef extract 0.3%, sodium chloride 0.5%, distilled water preparation, pH 7.2–7.4 . 2.1.3. Experimental Design In this study, three high-risk experimental factors were investigated: (I) Sample spill: the operator used a pipette to aspirate and mix 30 mL of concentrated bacterial solution. When pipetting bacterial solution, the operator must avoid the formation of bubbles and splashes. Pipetting was performed every 5 s for 5 min; (II) Injection: the operator injected a syringe containing 2 mL of bacterial solution into the air once every minute, 5 times in total. 
This operation is used to simulate an accidental jetting resulting from an animal injection; (III) Sample drop: a conical flask containing 30 mL of bacteria was dropped from 1.2 m at 45° to the ground. Air samples were collected for 10 min with an Anderson six-stage sampler, which was used at a flow rate of 28.3 L/min ( ) [ , , ]. Samples obtained from the Anderson six-stage sampler were taken to the laboratory for culture less than 4 h after sampling. The particle counter was sampled at 40 cm on the vertical ground, as shown in , and measured every minute. The above experiments were repeated 6 times for each group. This study was carried out in a glass chamber with dimensions of 1.5 m × 1.5 m × 2.0 m (corresponding, to length × width × height, respectively) ( ). A dispersal chamber is a qualified chamber with a tightly sealed door and walls, with a specified inflow of high-efficiency particulate air and filtered air and a controlled outflow. During the experiment, the clean room temperature was set at 26.0 °C and the relative humidity was set at 50% . The environmental parameters of the small room were monitored as shown in . The ventilation system can effectively keep the experimental environmental parameters stable. In addition, the vertical ventilation system can quickly eliminate aerosols in the chamber. Before the experiment, ultraviolet lamp sterilization and alcohol wiping were used for disinfection each time. Anderson’s six-stage sampler was sampled for 10 min before each experiment as a blank control.
Aerosol sampling Bioaerosol samples are collected using the Anderson six-stage sampler, which is a common sampling methods . The sampling device consists of 6 stages according to the dynamic diameter: 0.65–1.1 μ m, 1.1–2.1 μ m, 2.1–3.3 μ m, 3.3–4.7 μ m, 4.7–7.0 μ m, and >7.0 μ m . Aerosol samples were collected using LB Nutrition Agar and incubated at 37 ∘ C for 24 h, then cultured at 27 ∘ C for 24 h until colonies turned bright red. After, they were counted . When sampling, the superposition of multiple colonies may occur in a sampling well, resulting in deviation in counting. Therefore, we performed a positive hole correction . The method used for calculating the bioaerosol concentration is as follows: (1) P r = N × 1 N + 1 N − 1 + 1 N − 2 + ⋯ + 1 N − r − 1 (2) C a = P r Q × T × 1000 where P r is the corrected colony number; r is the actual number of colonies; N Number of holes at each level of the sampler, N = 400; C a is aerosol concentration, P r is the sum of colony number corrected on six Petri dishes; Q is the collection flow rate 28.3 mL/min; T is the sampling time, min. In this study, impacted bacteria were sampled, and the austenitic transformation method was used to change the concentration of sedimentation bacteria. The method was used to investigate the amount of bacteria deposited on a 10 cm medium surface in 5 min, which was equivalent to 10 L of air . The formula is as follows: (3) C = 100 A × 5 t × 1000 10 × N where C is the number of airborne aerosols, CFU/m 3 ; A is the area of the plate used, cm 2 ; t is the exposure time of the plate, min; N is the number of the colony on the plate, CFU. Particle sampling In this study, TSI (3330) was used to monitor the released particles, the sampling flow was 10 L/min, and the particle sizes of the 16 sampling channels were 0.3 /0.4 /0.5 /0.6 /0.7 /0.9 /1.1 /1.4 /1.7 /2.1 /2.7 /3.3 /4.1 /5.2 /6.5 /8.1 /10 μ m.
Serratia marcescens is a model bacterium commonly used in laboratories . Its diameter is approximately 0.5–0.8 μ m it causes very little harm to humans and animals, it has no spores, and it is easy to distinguish from other miscellaneous bacteria. In addition, the strain used in this study can produce blood pigment—prodigiosin—for easy tracking and identification . Serratia marcescens was stored at −80 °C, and 10 mL of the microorganism was activated in 20 mL of Nutrient Broth at 37 °C and 250 rpm; after 24 h, media were striated in Nutrient Agar plates and incubated at 28 °C for 24 h. A single colony was taken and inoculated into the Nutrient Broth and cultured at 37 °C. A standard inoculum of 1 × 10 9 CFU/mL was established using NB and kept at −80 °C . Nutrient AGAR composition: peptone 1%, beef extract 0.3%, sodium chloride 0.5%, AGAR 1.5–2.0%, distilled water preparation. pH 7.2–7.4. Nutritious broth composition: peptone 1%, beef extract 0.3%, sodium chloride 0.5%, distilled water preparation, pH 7.2–7.4 .
In this study, three high-risk experimental factors were investigated: (I) Sample spill: the operator used a pipette to aspirate and mix 30 mL of concentrated bacterial solution; when pipetting the bacterial solution, the operator had to avoid forming bubbles and splashes. Pipetting was performed every 5 s for 5 min; (II) Injection: the operator expelled 2 mL of bacterial solution from a syringe into the air once every minute, 5 times in total. This operation simulates accidental jetting during an animal injection; (III) Sample drop: a conical flask containing 30 mL of bacterial solution was dropped from 1.2 m at 45° to the ground. Air samples were collected for 10 min with an Anderson six-stage sampler operated at a flow rate of 28.3 L/min ( ) [ , , ]. Samples obtained from the Anderson six-stage sampler were taken to the laboratory for culture less than 4 h after sampling. The particle counter sampled at a height of 40 cm above the ground, as shown in , with measurements taken every minute. The above experiments were repeated 6 times for each group. This study was carried out in a glass chamber with dimensions of 1.5 m × 1.5 m × 2.0 m (length × width × height) ( ). The dispersal chamber is a qualified chamber with a tightly sealed door and walls, a specified inflow of high-efficiency particulate air (HEPA)-filtered air, and a controlled outflow. During the experiment, the clean room temperature was set at 26.0 °C and the relative humidity was set at 50% . The environmental parameters of the small room were monitored as shown in . The ventilation system can effectively keep the experimental environmental parameters stable. In addition, the vertical ventilation system can quickly eliminate aerosols in the chamber. Before each experiment, the chamber was disinfected by ultraviolet lamp sterilization and alcohol wiping. The Anderson six-stage sampler was run for 10 min before each experiment as a blank control.
2.2.1. Source Intensity
The source intensity (Q) is defined as the number of bacteria released by the infectious source per second (CFU/s) . The Gaussian diffusion model is commonly used to calculate the diffusion concentration of pollutants continuously discharged into the air from a point source . In this study, the diffusion concentration was measured experimentally, and the Gaussian diffusion model was used to retrieve the aerosol source intensity . The Gaussian diffusion model rests on four assumptions: 1. The pollution point source discharges uniformly, stably and continuously; 2. Air pollutants obey conservation of mass during diffusion; 3. The wind direction in the diffusion area is uniform and stable; 4. The pollutant concentration follows a normal distribution in both the horizontal and vertical directions. The core formula is as follows:

(4) $C(x, y, z) = \frac{Q}{2 \pi u \delta_y \delta_z} \exp\left[-0.5 \left(\frac{y}{\delta_y}\right)^2\right] \cdot \left\{ \exp\left[-0.5 \left(\frac{z - H}{\delta_z}\right)^2\right] + \exp\left[-0.5 \left(\frac{z + H}{\delta_z}\right)^2\right] \right\}$

where $Q$ is the pollutant discharge rate per unit time (CFU/s); $H$ is the effective height of the pollution source (m); $u$ is the average wind speed at the pollution source (m/s); $y$ is the horizontal coordinate perpendicular to the $x$-axis (m); $z$ is the vertical coordinate (m); and $\delta_y$, $\delta_z$ are the diffusion coefficients in the horizontal (y) and vertical (z) directions (m).
2.2.2. Data Statistics
In this study, we conducted three groups of parallel control experiments for each experimental group, screened the collected data using the 3-sigma method, and rejected data with large errors. SPSS 25.0 was used for statistical analysis; p < 0.05 was considered statistically significant, and all tests were two-tailed. One-way analysis of variance was used for normally distributed data. For continuous variables that did not conform to a normal distribution, the non-parametric Wilcoxon rank sum test was used to compare aerosol-emission-related factors according to their distribution.
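Because Equation (4) is linear in Q, the source intensity follows directly from a single measured concentration once the geometry, wind speed and diffusion coefficients are fixed. The Python sketch below illustrates this inversion; all parameter values are hypothetical placeholders, not the study's calibration.

```python
import math

def gaussian_plume(q, y, z, u, height, sigma_y, sigma_z):
    """Eq. (4): ground-reflected Gaussian plume concentration C(x, y, z)."""
    lateral = math.exp(-0.5 * (y / sigma_y) ** 2)
    vertical = (math.exp(-0.5 * ((z - height) / sigma_z) ** 2)
                + math.exp(-0.5 * ((z + height) / sigma_z) ** 2))
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

def invert_source_intensity(c_measured, y, z, u, height, sigma_y, sigma_z):
    """Retrieve Q (CFU/s) from one measured concentration (CFU/m^3)."""
    unit_response = gaussian_plume(1.0, y, z, u, height, sigma_y, sigma_z)
    return c_measured / unit_response

# Illustrative values only: concentration at a sampler 40 cm above the ground
q = invert_source_intensity(c_measured=1108.2, y=0.0, z=0.4,
                            u=0.1, height=0.9, sigma_y=0.3, sigma_z=0.3)
print(f"Q ≈ {q:.1f} CFU/s")
```

With several sampling positions, the same linearity allows a least-squares fit of Q over all measured concentrations instead of a single-point inversion.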
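A minimal sketch of the 3-sigma screening step, assuming replicates are compared against the mean and standard deviation of the whole series; the concentrations below are hypothetical.

```python
from statistics import mean, stdev

def three_sigma_screen(values):
    """Split replicates into kept (within mean ± 3*sigma) and rejected."""
    m, s = mean(values), stdev(values)
    kept = [v for v in values if abs(v - m) <= 3 * s]
    rejected = [v for v in values if abs(v - m) > 3 * s]
    return kept, rejected

# Hypothetical replicate aerosol concentrations (CFU/m^3)
replicates = [990, 1005, 1012, 998, 1003, 987, 1010, 995, 1001, 1008,
              992, 999, 1015, 985, 1002, 1007, 991, 1004, 996, 10000]
kept, rejected = three_sigma_screen(replicates)
print(f"kept {len(kept)} values, rejected {rejected}")
```

Note that with very few replicates a single outlier inflates the standard deviation enough that it can never exceed the 3-sigma bound, so the screen is only informative for longer series.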
3.1. Aerosol Emission Level
In this study, experimental operations involving three risk factors were simulated, and the emission characteristics of the bioaerosol sources were quantitatively analyzed. As shown in , the concentration range of aerosols formed by the sample spill was 10–10² CFU/m³, the total aerosol generated by the injection was 10²–10³ CFU/m³, and the aerosol distribution generated by the drop was 10–10³ CFU/m³. The three groups of samples conform to a normal distribution after taking logarithms. One-way analysis of variance showed that the aerosol concentrations differed significantly between the accidental drop of the conical flask and the other two experimental groups, whereas the difference in aerosol concentrations between the sample spill and the injection was not statistically significant. The average aerosol concentration generated by the sample drop was 1108.2 CFU/m³, and the upper quartile was 1808 CFU/m³. The average concentration was 78.3 CFU/m³, and the upper quartile was 988.4 CFU/m³. The aerosol concentration produced by the suction of the high-concentration bacterial liquid was 250.3 CFU/m³, and the upper quartile was 485.2 CFU/m³. The bioaerosol concentration produced by the sample drop differed significantly (p < 0.05), whereas the aerosol concentrations produced by the other two risk factors did not differ significantly from each other. By monitoring the three sampling points in each experimental operation, it was found that the bioaerosol concentration in front of the operator was markedly higher than at either side, see . The emissions from the injection and sample spill are jet-like, and the main flow direction varies according to the operator's practice. In line with the assumptions of the Gaussian diffusion model, we regarded the direction of the maximum concentration as the wind direction in the model. The source intensity is defined as the emission rate of pollutants. In this study, the Gaussian diffusion model was used to inversely calculate the source intensity of the three test factors, as shown in the table. The source intensity of the spill factor was 3.6 CFU/s, the source intensity of the injection was 78.2 CFU/s, and the source intensity of the aerosol caused by the sample drop was 664.1 CFU/s. There were significant differences in the aerosol source intensities caused by the three risk factors (p < 0.05), as shown in . The sample spill, injection, and sample drop source intensities differ by orders of magnitude, which may be related to the volume of bacterial solution released into the air during each high-risk operation.
3.2. Bioaerosol and Particle Size Segregation
We analyzed the size of the culturable bioaerosol collected. As shown in , the size distribution of the culturable bacterial bioaerosol produced by the three experimental schemes presented an n-type, with the particle size mainly distributed in the range of 3.3–4.7 µm, accounting for 54.6% (sample spill), 32.6% (injection), and 27.8% (sample drop) of the total amount of emissions, respectively. Centred on 3.3–4.7 µm, the number of aerosols decreased gradually as the particle size departed from this range. In addition, the main bioaerosol sizes produced by the injection were 2.1–3.3 µm (22.7%), 3.3–4.7 µm (32.1%), and 4.7–7.0 µm (23.5%), respectively.
The main particle size ranges of the aerosol in the sample drop were 2.1–3.3 µm (21.7%), 3.3–4.7 µm (27.6%), and 4.7–7.0 µm (24.9%). In the particle counter measurements, the main particle size range produced by each experimental operation was 0.3–0.65 µm, accounting for 28.9% (sample spill), 56.4% (injection), and 31.9% (sample drop) of the total emission, respectively. Particles in the range of 0.65–7.0 µm produced by the sample spill and injection made up less than 20% of the total monitored count, and the number of particles tended to decrease with increasing particle size. The particle size segregation generated by the sample drop was roughly consistent with that of the bioaerosol, with the particle size mainly ranging from 4.7 to 7.0 µm. Grain size distributions are traditionally described by sums of several lognormal distributions. The lognormal distributions of particle mass and particle number were obtained from the three groups of experiments, as shown in . The lognormal distribution of the overall mass of the particles released by the experimental activity is u-shaped: the particle mass is high at both endpoints and low at the middle, and the growth trend accelerates gradually with increasing particle size. The number of particles generated by the sample drop increased exponentially, and the number of small particles was lower than that generated by the spill.
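As a sketch of the lognormal description mentioned above, the count median diameter (CMD) and geometric standard deviation (GSD) of a single lognormal mode can be estimated from binned counts using channel midpoints. The counts below are hypothetical placeholders for particle-counter data.

```python
import math

# Hypothetical binned counts: channel midpoint (µm) -> particle count
bins = {0.35: 5200, 0.55: 3100, 0.8: 900, 1.25: 400, 1.9: 260,
        2.7: 310, 4.0: 520, 5.85: 430, 9.0: 150}

total = sum(bins.values())
# Count-weighted mean and variance of ln(d) give the lognormal parameters
mu = sum(n * math.log(d) for d, n in bins.items()) / total
var = sum(n * (math.log(d) - mu) ** 2 for d, n in bins.items()) / total
print(f"CMD = {math.exp(mu):.2f} µm, GSD = {math.exp(math.sqrt(var)):.2f}")
```

Fitting a sum of several lognormal modes, as the traditional description implies, would replace this single-mode moment estimate with a mixture fit (e.g., least squares over the binned counts).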
The experimental activities of the biosafety laboratory mainly involve sample collection, transportation, reception, processing, experimental operation and preservation, waste disposal, and other activities . If control measures are inadequate, there is a risk that pathogens may infect the laboratory staff or spread to the population outside the laboratory. The risks of experimental activities vary. A risk assessment of experimental activities, identification of risk sources, and corresponding personal protection measures to avoid accidental injury and contact with pathogenic microorganisms are necessary to ensure the safety of experimental personnel . The complexity of laboratory risk assessment and risk control activities depends on the actual hazard characteristics of the laboratory activities, and they should be carried out according to the characteristics and intensity of the risk sources . Using the Gaussian diffusion model, the source intensity was calculated by solving the equations for the aerosol concentration at the three measured positions. It has been reported that the aerosol concentration generated by an accidental injection is approximately 10⁴ CFU/m³, the aerosol concentration generated by an accidental drop of culture bottles or an accidental overflow of freeze-dried powder is approximately 10³ CFU/m³, and the aerosol concentration generated by a centrifuge tube rupture or by ultrasonic cracking during pipetting is approximately 10–100 CFU/m³. These reports are consistent with the results of the present study; thus, special attention should be paid to the bioaerosols generated by various experimental activities and accidents in the laboratory. The results of this study show that the source intensities of culturable bacteria produced by the spill, injection, and drop increased in that order, and that there were significant differences between the source intensities of the bioaerosols produced by the three experimental operations (p < 0.05). The reason may be that the perturbation of the bacterial solution increases with the intensity of the experimental operation. In this study, for example, the falling conical flask caused more droplets of bacterial suspension to splash out than the spill did. Droplets that splash into the air carry bacteria, which then nucleate to form bioaerosols, increasing the number of aerosols detected in the air. In this study, the main aerosol size range measured by the Anderson six-stage sampler was 3.3–4.7 µm, and the highest number of viable infectious bacteria was in the size range of 2.1–4.7 µm . Aerosols smaller than 5 µm readily enter the human lungs, causing harm to the human body . The main particle size range measured by the optical particle counter was 0.30–0.65 µm. The Anderson six-stage sampler cannot pick up these tiny particles (0.3–0.65 µm). Meanwhile, Serratia marcescens aerosols are mainly distributed in the size range of 3.3–4.7 µm, so there were almost no Serratia marcescens aerosols on these 0.3–0.65 µm particles. In general, infectious viruses range in size from 0.02 to 0.30 µm. Particles of 0.3–0.65 µm can therefore become carriers of viruses, presenting an exposure risk to workers.
Overall, when conducting experimental activities in the biosafety laboratory, the risk of microbial aerosol infection must be taken seriously: good experimental habits should be maintained, actions such as touching the face should be minimized, and the risk of secondary contact infection caused by microorganisms adhering to the surfaces of laboratory coats and gloves should be avoided. Personal protection should be properly implemented, and the use of respiratory protective equipment in high-risk laboratories should be tested. An appropriate airflow organization should be adopted to quickly remove carrier particles and thereby minimize or eliminate infections caused by experimental activities and accidents.
To explore the relationships between the risk factors, concentration, and particle size segregation, and to determine the release-source characteristics of a sample spill, injection, and drop, experimental operations were conducted in the exposure chamber. Furthermore, the pollution source intensity of each risk factor was calculated using the Gaussian diffusion model. This study provides a reference for risk prediction during experiments. The following conclusions are drawn: 1. There were significant differences in the aerosol release source intensities during common high-risk experimental procedures. The source intensities of the sample spill, injection, and drop were 3.6 CFU/s, 78.2 CFU/s, and 664.1 CFU/s, respectively. 2. The culturable aerosol size segregation is mainly within the range of 3.3–4.7 µm. At the same time, the sample spill and injection can produce a large number of particles with sizes ranging from 0.3 to 0.65 µm. 3. It is recommended to strengthen the elimination of aerosols generated by experimental operations, especially those that can produce fine particles, and to select an effective air distribution scheme.
Molecular Pathology, Diagnostics and Therapeutics: A Story of Success in 2022 | b530cd6a-e23e-4374-ad9e-c04dd29c2e33 | 10002953 | Pathology[mh] | Molecular pathology, diagnostics and therapeutics are three closely related topics of critical importance in medical research and clinical practice. Understanding the biology of diseases at the molecular level by identifying molecular and pathway alterations is important for several reasons: It can facilitate earlier and more accurate diagnoses of diseases, which are key to initiating appropriate treatments at the right time as well as reducing healthcare costs. It enables the development of new drugs and the design of more effective therapies, including the development of treatments that are tailored to the individual patient’s genetic makeup. It is useful for disease prevention by identifying individuals at risk of developing certain conditions, thus encouraging early targeted interventions and lifestyle changes. This field has transitioned into an extensive range of molecular and cell methodologies and techniques in the clinical arena that analyse an equally wide range of samples and diseases. A vast amount of complex data is generated and results in a taxing bioinformatic, bioanalytic and statistical workload. The molecular pathology, diagnostics and therapeutics section of the International Journal of Molecular Science (IJMS) aims to provide a go-to home for high quality, innovative and definitive publications in this critical area of health research. The significance of this field is reflected in the important contribution made by the molecular pathology, diagnostics and therapeutics section to the IJMS. This section includes contributions from basic research all the way to clinical and forensic applications and is especially open to studies that challenge existing concepts. It accounted for 14.4% of papers published in the journal during 2022, with the number of publications increasing to 2356 out of 4995 submissions (47%), resulting in 1.5 million downloads. Papers published in 2020/21 were cited nearly 15,000 times, with the most cited publications in the areas of cardiovascular diseases, cancer/tumour treatment, neurodegenerative diseases, endocrine diseases, and COVID-19-related research. Of these, 163 papers (6.9%) were cited more than five times and 20 of those more than 10 times. Consequently, papers published in this section have contributed to the steady rise of the journal’s impact factor, which currently stands at 6.208. This increase in quality and quantity is nurtured by the increased number of specialist editorial board and topical advisory panel members and is sustained by the many expert reviewers, whose comments and contributions are an essential part of this rise. Together this helps to ensure that the research presented in our section is of high quality, that methods are clearly described, results are fully reported and that the conclusions drawn are supported by the data. As a consequence, our section and the journal as a whole are respected by the scientific community and that the research is widely cited. Further to this topic, I would like to remind potential authors of the importance of method and data transparency, as both are critical components for evaluating the significance of reported results and any conclusions. 
For example, quantitative PCR (qPCR) and reverse transcription (RT)-qPCR are widely used technologies in this field, not least evidenced by the huge number of recent publications relating to SARS-CoV-2. Yet as an editor it continues to surprise me how little technical detail is generally included with the first submissions of manuscripts and how inadequately results are presented. It is as though many authors either regard this technology as so common and standardised that no detail is required, or they have no understanding of the complexity of its various components, such as RNA quality, PCR efficiency, primer specificity or the significance of fold-changes in biomarker expression levels. I would urge authors to consult an earlier editorial describing the essential criteria that should accompany any submission to IJMS . The same requirement for transparency is obviously important for any technique used in a published paper. The percentage of primary research articles has increased to 60.1% in 2022, up from 52.2% in 2021. This is probably a reflection of the increasing impact factor of the journal, which encourages more researchers to trust it with their latest and most important research results and, in turn, fuels a virtuous circle of increased quality resulting in further increases in the impact factor. Conversely, this increase has led to a decrease in the number of review articles (36.6% from 45.9% in 2021). Clearly there is a need to strike a balance, since review articles tend to attract more citations and increase the scholarly impact of the journal. Nevertheless, as an editor I find it promising and exciting that so many authors choose our section for the dissemination of their precious data. There has been an increasing emphasis on Special Issues, to which publications are invited by guest editors and panel members, with a consistent review process ensuring high quality as well as topicality. In 2022 there were 547 Special Issue launches, compared with 368 in 2021. Indeed, the vast majority of papers in 2022 were published as part of Special Issues. Nevertheless, the rise in the impact factor and the corresponding increase in our reputation saw an increasing number of regular submissions to the journal. I anticipate that this trend will continue for three reasons: (i) the IJMS impact factor is higher than that of competitor journals, (ii) the steady increase in its impact factor is not mirrored by those journals and (iii) their article processing charges are higher. The open access format combined with reasonable processing charges are two important reasons for the success of the journal. Papers can be accessed by anyone with an internet connection, regardless of their location or affiliation, and can thus reach a much wider audience. Combined with a rapid processing workflow, this makes research published in our section available to the scientific community as well as the public sooner, making our contributions more topical and relevant to current news cycles. The increased visibility and accessibility also help articles to be cited more frequently, so adding to their impact and influence. It follows, then, that the logistics of manuscript handling should be as clear, smooth, and rapid as possible, as every author likes their manuscript to be reviewed and processed in as short a time frame as possible. IJMS in general and this section in particular have clear policies and guidelines on how to prepare and submit manuscripts, for ethical behaviour, and for the peer review process.
The molecular pathology, diagnostics, and therapeutics section of the IJMS performs well, with a first decision provided to authors 16 days after submission and the median processing time being 36 days. This is remarkably swift, given that each manuscript is subject to strict peer review and considering the 16% increase in submissions since 2020. Feedback surveys indicate that authors appreciate our fast publication times, and the editors will strive to maintain this benefit for the future, whilst obviously ensuring that the section maintains the high paper quality. Amongst the top 15 countries publishing in this section, most continue to come from Europe (including Russia) at 41.5%, slightly down on the 45% recorded in 2021. Contributions from Asia (24.2% vs. 23.4%) and North America (13.7% vs. 14.8%) are roughly the same and the appearance of contributions from Australia (1.9%) is welcome. Within those groupings, there has been a sizeable increase in manuscripts submitted from China (+5.1%) and a decrease from South Korea (−4.4%). An analysis of author origin in the pathology category of the Web of Science reveals that of the top three countries, only China has a similar number of submissions to our journal (10.9% vs. 8.6%). The USA (41.3% vs. 12%) and the UK (6.2% vs. 2%) are under-represented. I also note that there are few submissions from India, which is home to 3.5% of submissions on the Web of Science. In contrast, the journal is rather popular with researchers from Japan (5.3% vs. 7.3%), Germany (4.8% vs. 6.6%) and, especially, Italy (4.1% vs. 13.6%). Clearly there is work to be done to encourage authors in countries such as India, South Africa and South America to consider IJMS as a destination for their research outputs. Author origin and online readership are closely aligned, suggesting a good cooperative relationship with the section. The readership extends way beyond the top contributor nations, though, with 26% of the online readership from countries with fewer than 2% views. Editorial board and topical advisory panel members are composed of experts in molecular pathology, diagnostics, therapeutics, and related fields. Their understanding of, and indeed personal contribution to, current research developments are important safeguards that allow unbiased and professional evaluation of the quality of the manuscripts submitted to the section. It is also important to note that the name of the academic editor accepting a manuscript after full peer review becomes associated with the published paper. This enhances the rigorous and unprejudiced review procedure, promotes maximum transparency for authors and readers alike, and provides a measure of responsibility if subsequent uncertainties arise. Board and advisory panel members play an essential role in ensuring that published manuscripts are of the highest quality, report innovative research results and are published with transparent methods and appropriate statistical analyses. They are from a wide range of countries, with most concentrated in two countries, Italy and the USA. Clearly one aim for next year must be to recruit more academic editors from a wider range of countries, especially China. With awareness of the current pandemic fading, it is essential to maintain the spotlight on the importance of molecular pathology and diagnostics as critical components of any country's public health infrastructure and their role in opening new therapeutic pathways.
Hence, the focus in 2023 will be to make our section an even better platform for authors and guest editors of Special Issues to publish relevant, influential, and conclusive research results. Ultimately, the basis of a journal's reputation is the quality of its primary research papers; it is enhanced by the publication of authoritative reviews that become influential in shaping the debate surrounding public health. Our aim must be to continue to provide a first-class publication experience to authors and encourage the submission of leading-edge research papers as well as of forward-looking and open-minded profiles of current work and future goals.
Diagnostic Performance of Immunohistochemistry Compared to Molecular Techniques for Microsatellite Instability and p53 Mutation Detection in Endometrial Cancer | 4a794e4c-97b7-4606-8ff2-eec124a74b9a | 10002995 | Anatomy[mh] | Endometrial cancer (EC) is the sixth most frequent female cancer, affecting mainly post-menopausal women . The incidence of EC increased by 132% between 1999 and 2019, with the highest progression in developed countries. In contrast to EC incidence, EC mortality rates significantly decreased in around half of countries, and the mortality-to-incidence ratio decreased worldwide. This is the result of a better understanding of EC pathology and more effective treatments . Recent integration of molecular analysis provides insights into disease biology and improves the diagnosis and risk stratification of patients with EC. This new approach allows clinicians to individualise therapeutic management, particularly for adjuvant treatments, but also in the metastatic setting . The stepwise diagnostic algorithm, approved by the World Health Organization (WHO) in 2020, categorises EC into 4 molecular subgroups . The first step of this diagnostic algorithm is to identify EC with a pathogenic mutation in polymerase-E exonuclease domain. This leads to the first “ POLE ultra-mutated” ( POLE mut) subgroup. This subgroup has an excellent prognosis . The second assessment amongst POLE wildtype ( POLE wt) tumours is the loss of expression in one or more of the 4 mismatch repair (MMR) proteins (MLH1, MSH2, MSH6 or PMS2) categorised as the MMR-deficient (MMRd) EC. This second subgroup, the MMRd, has an intermediate prognosis with good response to immune checkpoints inhibitors when the EC is at an advanced or recurrent stage (for early stages no conclusion has yet been published) . Finally, the abnormal expression of p53 is investigated on MMR-proficient (MMRp) tumours to determine EC with (p53-abnormal–3rd subgroup) or without (nonspecific molecular profile/NSMP–4th subgroup) p53 anomalies . The p53-abnormal subgroup is the most aggressive and lethal molecular subtype. Recent data suggest that patients with p53-abnormal EC benefit from adjuvant treatment intensification with chemoradiotherapy followed by adjuvant chemotherapy . Concerning analysis methods, the identification of POLE mut is exclusively performed using DNA sequencing methods . The MMRd can be identified by immunohistochemistry (IHC), by molecular methods such as polymerase chain reaction (PCR) or by next generation sequencing (NGS) method. The PCR method consists of identifying the molecular hallmark of MMRd: microsatellite instability (MSI) . To determine the p53 status, IHC or TP53 NGS methods can be used . In the case of the detection of MMR and p53 mutations, the IHC method could be preferred over molecular techniques because it is rapid, widely available, less expensive, requires less tumour material and is readily interpretable . Discrepancies between the results obtained by each method have been described in several studies . Given the importance of molecular classification at the time of diagnosis and its influence on all aspects of EC care (surgical decision and adjuvant/metastatic therapies) and research, further studies are needed to investigate the performance characteristics of the methods used in this molecular classification. The results provide clinicians and researchers with some evidence to thereby choose the most appropriate method depending on their goals and resources. 
The objective of the study was to assess the diagnostic performance of IHC in comparison with molecular analyses, considered as the gold standard, to determine MMR/MSI and p53 gene status.
2.1. Patients and Tumour Characteristics
Based on a retrospective medical chart review, 166 patients were enrolled in the study; however, 34 patients were excluded due to missing data. The remaining 132 patients constituted the study population. The clinicopathological characteristics of the 132 EC patients are summarised in . The median age was 69.0 years. The majority of the EC tumours had an endometrioid histology (82.3%), an International Federation of Gynecology and Obstetrics (FIGO) tumour grade of 1 or 2 (65.9%), nodal stage N0 (91.5%) and FIGO stage IB (44.6%).
2.2. Molecular Profile of EC Tumours
The molecular profile of the 132 EC patients is reported in . Eleven patients (9.7%) had a POLE mut tumour. MMR IHC analysis was performed on 131 EC tumours and revealed an MMRd status in 44 cases (33.6%). Among the 131 MMR IHC assays, 99 MSI PCR analyses were carried out and 34 patients presented an MSI-high status (34.3%). Agreement between the two methods was thus evaluated on 99 subjects. Concerning the determination of p53 status, p53 IHC testing was performed on 129 EC tumours and showed 34 cases with an abnormal p53 status (26.4%). TP53 sequencing was carried out on 106 EC tumours and identified 25 mutated cases (23.6%). Among the 129 p53 IHC tests, 104 cases were also examined by the TP53 sequencing method. These cases were used to determine the agreement between the IHC and molecular analyses.
2.3. Agreement between MMR IHC Status and MSI PCR Testing
With the PCR analysis, MSI-high was observed in 34 of the 99 cases. With the MMR IHC analysis, 39 of the 99 cases were classified as MMRd. Agreement between the MSI PCR and MMR IHC analyses was observed in 88 of 99 cases using a binary classification . The proportion of MMRd/MSI-high cases was not statistically different between the two analysis methods (34.3% vs. 39.4%, p = 0.13). Cohen's kappa coefficient was 0.76 (95% CI: 0.63–0.89). displays the performance of the MMR IHC method. Sensitivity was 91.2% (95% CI: 76.3–98.1) and specificity was 87.7% (95% CI: 77.2–94.5), yielding a global accuracy of 88.9% (95% CI: 81.0–94.3). The positive predictive value (PPV) was 79.5% (95% CI: 63.5–90.7) while the negative predictive value (NPV) was 95.0% (95% CI: 86.1–99.0). When the analyses were performed on POLE wt tumours alone, as per the WHO algorithm, the accuracy of the MMR IHC method was 88.0% (95% CI: 79.0–94.1) with a sensitivity of 89.3% (95% CI: 71.8–97.7) and a specificity of 87.3% (95% CI: 75.5–94.7). Cohen's kappa coefficient was 0.74 (95% CI: 0.59–0.89). The MLH1, PMS2, MSH6 and MSH2 profiles of the ten discordant cases are presented in . Among the seven cases considered MSS by the MSI PCR method, one case lost expression of the MSH6 protein and six cases lost expression of MLH1 and PMS2 according to the MMR IHC method. For the three remaining cases, although all proteins were present, MSI-high was detected by PCR.
2.4. Agreement between p53 IHC and TP53 NGS
With the sequencing analysis, TP53 mutation was observed in 24 of the 104 cases. With the p53 IHC analysis, 30 of the 104 patients were classified as having an abnormal status. Agreement between the TP53 NGS and p53 IHC analyses was therefore observed in 86 of 104 cases using a binary classification . The proportion of tumours with an abnormal status did not differ between the two methods (23.1% vs. 28.9%, p = 0.16). Cohen's kappa coefficient was 0.55 (95% CI: 0.37–0.73). displays the performance of the p53 IHC analysis.
Sensitivity was 75.0% (95% CI: 53.3–92.0) and specificity was 85.0% (95% CI: 75.3–92.0), yielding a global accuracy of 82.7% (95% CI: 74.0–89.4). The PPV was 60.0% (95% CI: 41.6–77.3) and the NPV was 92.0% (95% CI: 83.2–97.0). When the analyses were performed on POLE wt and MMRp cases (n = 48), the sensitivity increased to 92.3% (95% CI: 64.0–99.8) and the specificity decreased to 77.1% (95% CI: 59.9–89.6). The proportion of tumours with an abnormal status was lower with the TP53 analysis than with p53 IHC, and this proportion differed between the two analysis methods (27.1% vs. 41.7%, p = 0.020). Cohen's kappa coefficient was 0.59 (95% CI: 0.37–0.82). The p53 IHC pattern of the 48 cases was matched with the type of TP53 mutation in . The most prevalent abnormal p53 pattern was nuclear overexpression (14 out of 20; 70%), of which 10 cases were found with a missense mutation in TP53 and 4 cases with no TP53 mutation. Complete absence of p53 expression was observed in 5 out of 20 cases. Among them, one case presented a stop gain mutation; the other cases had no TP53 mutation. A subclonal pattern was observed in one tumour and was associated with a stop gain mutation. The false-negative case presented a missense mutation.
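The headline performance figures can be reproduced from a 2×2 table. The Python sketch below reconstructs the MMR IHC vs. MSI PCR counts from the reported figures (34 MSI-high and 65 MSS cases, 88/99 concordant), so the counts are derived from the published percentages rather than taken directly from the study's tables.

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, PPV, NPV and accuracy of an index test
    against a reference standard, from 2x2 counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fn + fp + tn),
    }

# MMR IHC (index) vs MSI PCR (reference): TP=31, FN=3, FP=8, TN=57
for name, value in diagnostic_metrics(tp=31, fn=3, fp=8, tn=57).items():
    print(f"{name}: {value:.3f}")  # matches the reported 91.2/87.7/79.5/95.0/88.9%
```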
This retrospective study included a cohort of unselected EC patients to assess the diagnostic performance of IHC compared with the molecular technique for the determination of MMR/MSI and p53 status. In the case of the detection of MMRd/MSI-high, our findings are in line with those observed in other studies regarding the agreement between both methods . A recent meta-analysis shows a pooled sensitivity of 96% (95% CI, 93–98%) with moderate heterogeneity among studies (I² = 74.7%) and a pooled specificity of 95% (95% CI, 93–96%) with minimal heterogeneity (I² = 22.7%) for the MMR IHC method. The overall accuracy, determined by area under the curve (AUC), is 99%. This meta-analysis concludes that IHC for the 4 MMR proteins is an accurate surrogate of MSI molecular testing in EC tumours . To reduce the cost of the four-MMR-protein test, a combination of only two antibodies, MSH6 and PMS2, has been proposed with an accuracy equivalent to testing all four proteins. However, this combination can lead to pitfalls in the interpretation of MMR expression due to the heterodimeric character of MLH1 pairing with PMS2, and MSH2 with MSH6 . The use of this combination is therefore discouraged , and for these reasons we did not assess its diagnostic performance. The IHC method is recognised as the preferred method for identifying MMRd/MSI. Recognised advantages of the IHC method are: (1) the short time-frame to obtain results (1–2 days); (2) its wide availability; (3) its low cost regarding all components used in the analysis; (4) the fact that it is readily interpretable by pathologists; (5) its ability to be performed on a limited amount of tissue; (6) its correlation with morphology; (7) its feasibility for all types of formalin-fixed paraffin-embedded (FFPE) specimens (biopsy and/or surgical samples); (8) its amenability to IHC external quality assurance schemes; and (9) its ability to identify which MMR gene is mutated, especially in the detection of MSH6 mutations that can potentially be missed in MSI testing. Additionally, the detection of mutations in MLH1, MSH2, PMS2 and MSH6 is of major importance for screening Lynch syndrome . However, the MMR IHC method is associated with pitfalls in its interpretation. Firstly, MMR IHC is a fixation-sensitive method. To avoid erroneous interpretation of one or more stains as loss of expression, it is important to adequately examine all fixed areas. Secondly, weak or focal MMR expression may be seen in the presence of MMR deficiency. In this case, comparison with the internal control is an essential step. If the expression of MMR proteins is not strong and diffuse when compared to the internal control, the MMR expression should be noted and reported as defective or equivocal. To solve this problem, repeat staining on different sections is recommended. Thirdly, subclonal expression, defined as focal loss of expression by 10% of the tumour cells, can be observed in a minority of cases and should be assigned to the MMRd group. Also, a low proportion of MLH1-loss cases can reveal punctate nuclear expression that may be erroneously interpreted as retained/normal expression. This pattern should be reported as a loss of expression and is thought to be a technical artifact. Additionally, the MMR proteins are localised in the nucleus. In some cases, possibly for technical reasons, there is relatively conspicuous cytoplasmic or membranous staining in the absence of nuclear staining; such cases should be reported as abnormal.
Finally, other patterns/problems may occur, such as loss of 3 or more proteins . PCR amplification of microsatellite markers to assess MSI status (BAT-25, BAT-26, NR-21, NR-24 and NR-27) provides rapid results (1–2 days) at low cost. In contrast, PCR requires a substantial tumour cell percentage (30%) in order to perform the analysis . Despite the substantial agreement between the MMR IHC and MSI PCR methods , a 12% discrepancy rate between the two methods was observed in our study. Discrepancies can be explained by tumour heterogeneity or by the incomplete sensitivity/specificity of either method: poor DNA quality, insufficient or heterogeneous antibody binding, and retained expression of mutated proteins . Indeed, analyses performed on gastrointestinal tract tumours indicate a sensitivity of around 90% for each method, and these numbers are lower for EC . Aware of these weaknesses, some experts recommend combining IHC with a molecular MSI method to achieve maximally sensitive and specific detection of MMRd/MSI-high tumours . Indeed, as either method shows a sensitivity of around 90%, the use of a single approach might miss 10% of Lynch syndromes. Owing to the risk of misclassified Lynch syndrome cases, molecular testing used alone for MSI in EC patients is currently insufficient . Even though the NGS method shows promising results in detecting MSI tumours in colorectal cancer, further studies are needed before this method can be recommended in other tumours of the Lynch syndrome spectrum such as EC . The present study was also conducted to assess the agreement between the IHC and NGS methods for the determination of p53 status amongst EC patients. As described above, the IHC method is quick, easy to perform, and less expensive than the NGS method . Our results show a moderate agreement between p53 IHC and TP53 NGS when the analyses are performed after exclusion of POLE mut and MMRd cases, as per the WHO algorithm, with an accuracy of 81.3%. This accuracy is lower than that observed in the studies of Singh et al. and Vermij et al., who noted accuracies of 95.1% and 94.5%, respectively . Biense et al. observed discrepancies between the two methods; in their study the risk of misclassification is in the order of 5% if the p53 status is determined only by IHC rather than NGS . A recent meta-analysis shows that "overexpression or complete absence" of p53 are highly accurate immunohistochemical surrogates of TP53 mutation detected by NGS, with an AUC of 0.97. The pooled sensitivity is 83% (95% CI, 71–91%) with high heterogeneity among studies (I² = 76.9%) and the pooled specificity is 94% (95% CI, 89–97%) with minimal heterogeneity (I² = 4.4%) . In order to achieve high diagnostic accuracy in predicting the presence of a TP53 mutation with IHC, it is important to have optimal internal and external controls to correctly interpret the p53 staining . Different p53 patterns are observed in EC tumours and may be divided into the "normal" or "wildtype" pattern and the "mutation-type", "mutant", "aberrant" or "abnormal" pattern. Abnormal patterns include overexpression of p53 in the nucleus, null or complete absence of p53 expression, and cytoplasmic and subclonal p53 expression . As shown in the results of our study, overexpression is most commonly associated with non-synonymous missense mutations in TP53 . The complete absence of p53 in tumour cells is the consequence of stop gain and splice site mutations.
Lastly, the accumulation of p53 in the cytoplasm of tumour cells, without nuclear overexpression, is related to C-terminal mutations . Nevertheless, discordant cases can be observed between p53 expression and TP53 mutation. Among the possible explanations for these discordant cases, the recently described subclonal pattern might not be detected by the NGS method because detection depends on the area of DNA extraction . It can explain a few discordant cases, but this was not the case in our study. Overexpression of the p53 protein without an underlying TP53 mutation can also explain discrepancies between IHC and NGS. This overexpression might be due to the dysregulation of factors such as the estrogen receptor (ER) isoform ERβ and MDM2 (mouse double minute 2) . Some other factors can lead to misclassified p53 patterns, such as: (1) the cellular state of differentiation and proliferation activity, which can show a wide range of staining (weak to strong) in the wildtype pattern; (2) preanalytical factors (fixation problems, antigen degradation) or splicing mutations, which can explain the "mosaic" pattern; and (3) technical artifacts, which result in a nonspecific nuclear or cytoplasmic blush. A nuclear blush could be misinterpreted as wildtype in the null pattern, and a cytoplasmic blush could be interpreted as p53 abnormal when it should be ignored . The interpretation of p53 is also affected by the simultaneous presence of two or three molecular signatures, which gives heterogeneous staining . About 3% of EC cases, called "multiple classifiers", can be classed as MMRd and p53 abnormal, POLE mut and p53 abnormal, or POLE mut and MMRd and p53 abnormal. In these cases the driver molecular subtype is determined as follows: POLE mut prevails over the MMRd and p53-abnormal signatures, and MMRd prevails over the p53-abnormal signature . To reduce the resources involved in molecular classification, in particular POLE NGS testing, Betella et al. propose a novel algorithm. This algorithm consists of analysing MMR proteins and p53 using the IHC method in early-stage (stage I–III) EC that does not require POLE mutation analysis by NGS. In their study, this new algorithm reduced the number of POLE sequencing tests by 67% and that of p53 IHC by 27% compared with the ESGO/ESTRO/ESP 2020 molecular classification for EC . Likewise, the British Association of Gynaecological Pathologists provides an algorithm to limit POLE testing to those cases where it is essential for patient care . Recently, Jamieson et al. proposed a one-step DNA-based molecular classifier, ProMisE-2, to assess mutations in POLE and TP53 and the presence of MSI. The first results show excellent agreement (Cohen's kappa: 0.93) with the initial ProMisE algorithm, which uses IHC for testing MMR and p53 proteins, and a conserved prognostic value. This one-step test could be performed on a pre-operative biopsy, with the combined advantages of having the molecular information available at the time of EC diagnosis and reducing the number of steps needed to define the molecular risk group of EC patients. Further investigations are needed before implementation in clinical practice . Furthermore, artificial intelligence is a promising solution for characterising the histomorphological EC molecular subtypes and their disease prognosis . Studies are ongoing in this specific field. Our study presents some limitations and strengths. The main limitation is related to its retrospective design and the limited sample size.
Additionally, the work had to be carried out with missing values, which is unavoidable in clinical research. Among the strengths, the pathology analyses were centralised. Furthermore, therapeutic management was constant throughout the inclusion period and followed international guidelines. In conclusion, for the determination of MMR/MSI status, IHC and PCR showed equivalent diagnostic performance. Nevertheless, these methods give complementary information for the effective management of EC. Therefore, as both methods are currently available in most cancer centres at a cost that is reasonable considering the total cost of EC care, we would recommend using both methods. Concerning the determination of p53 status, the moderate agreement observed between the IHC and NGS methods calls for further prospective studies to explore the prognostic and predictive values of each method and how they would affect the associated algorithm, and in turn to eventually choose one method in preference to the other. These future results could help healthcare providers and researchers to adopt a more efficient evidence-based practice.
4.1. Study Population and Data Collection
A retrospective cohort of EC patients who were treated in the Gynecological Department of the University Hospital of Liège in Belgium between January 2019 and December 2021 was analysed according to international guidelines. Eligibility criteria included all histological subtypes (endometrioid and non-endometrioid), all tumour grades, and all stages according to FIGO 2009. Patients with another concomitant cancer were excluded. Clinicopathological data included age, Body Mass Index (BMI), histological subtype, tumour grade, nodal staging and FIGO stage.
4.2. Immunohistochemistry and Molecular Analyses
Immunohistochemistry was performed on 4 µm thick formalin-fixed paraffin-embedded (FFPE) samples mounted on positively charged glass slides, using the VENTANA P53 (ROCHE-CLONE DO-7), MLH1 (ROCHE-CLONE M1), MSH2 (ROCHE-CLONE G219-1129), MSH6 (ROCHE-CLONE SP93) and PMS2 (ROCHE-CLONE A16-4) antibodies on an automated BenchMark instrument (Ultra, Ventana Medical Systems, Tucson, AZ, USA). IHC expression of p53 was reported as either "normal" or "abnormal". Normal p53 expression was defined as nuclear staining of variable intensity in 1–80% of the tumour. The p53 expression was considered "abnormal" in the following four cases: when strong nuclear staining was observed in more than 80% of the tumour (nuclear overexpression), when nuclear staining was totally absent (complete absence or null mutant), when cytoplasmic staining without nuclear overexpression was noticed (cytoplasmic overexpression), or when a combination of more than one staining pattern, each present in at least 5% of tumour cells, was observed (subclonal). An internal positive control was used to determine these patterns. shows examples of normal and abnormal p53 expression using IHC. MMRd was defined as the absence of expression of one or more of the four MMR proteins (MLH1, MSH2, MSH6 or PMS2) in the presence of an internal positive control (healthy stromal cells). If all four proteins were present, MMR was considered "stable" or "proficient" (MMRp). MSI status was determined using the pentaplex PCR assay described by Suraweera et al. (2002) and Buhard et al. (2004) . Briefly, fluorescent multiplex PCR was performed for five quasimonomorphic mononucleotide repeats (NR-27, NR-21, NR-24, BAT-25 and BAT-26). One primer in each pair was labelled with one of the fluorescent markers (FAM for BAT-26 and NR-21, HEX for BAT-25 and NR-27, and TET for NR-24). All PCR conditions and primer sequences are available upon request. PCR products labelled with fluorescent dyes were analysed on an ABI 3500XL Genetic Analyzer (Applied Biosystems by Thermo Fisher Scientific, Waltham, MA, USA). Tumours were classified as MSI-high when at least 3 out of 5 mononucleotide repeats showed instability, MSI-low when one or two mononucleotide repeats showed instability, and MSI-stable (MSS) when no mononucleotide repeat showed instability. Since MSI-low tumours should be considered MSS tumours , these MSI assay results were grouped. Therefore, two groups were distinguished: MSI-high and MSS. POLE and TP53 mutations were determined by NGS. Regions of interest were amplified by multiplex PCR using Qiagen Multiplex PCR Plus (Qiagen, Hilden, Germany). The regions of interest included the exonuclease domain (exons 3 to 14) of POLE (NM_006231.2) and the coding regions (exons 2 to 11) of TP53 (NM_000546.4). All PCR conditions and primer sequences are available upon request.
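The classification rules described above reduce to simple decisions. A minimal Python sketch follows; the marker names reflect the pentaplex panel, while the instability calls and pattern labels are illustrative inputs, not data from this study.

```python
def classify_msi(unstable_markers: int) -> str:
    """Pentaplex rule: MSI-high if >= 3 of the 5 repeats are unstable;
    MSI-low (1-2 unstable) is grouped with MSS."""
    return "MSI-high" if unstable_markers >= 3 else "MSS"

def classify_p53_ihc(pattern: str) -> str:
    """Binary p53 IHC call from the four abnormal staining patterns."""
    abnormal = {"nuclear_overexpression", "complete_absence",
                "cytoplasmic_overexpression", "subclonal"}
    return "abnormal" if pattern in abnormal else "normal"

run = {"BAT-25": True, "BAT-26": True, "NR-21": True,
       "NR-24": False, "NR-27": False}  # hypothetical instability calls
print(classify_msi(sum(run.values())), classify_p53_ihc("wildtype"))
```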
Molecular barcoding was performed with the MID kit for Illumina MiSeq (Agilent-Multiplicom, Niel, Belgium) according to the manufacturer’s recommendations. PCR products from each patient were purified using Agencourt AMPure XP beads (Beckman Coulter, Brea, CA, USA) and then quantified by qPCR using the KAPA Universal Library Quantification Kit (Roche, Basel, Switzerland) and the CFX Connect reader (Bio-Rad, Hercules, CA, USA). These individually tagged amplicon libraries were pooled in equimolar amounts to obtain the final library, which was then sequenced on the Illumina MiSeq platform using a MiSeq v2 cartridge (500 cycles). Data were finally analysed using the SeqNext module (version 4.1.1) (JSI Medical Systems, Ettenheim, Germany).

4.3. Statistical Analysis

Results were expressed as the median and interquartile range (IQR: P25–P75) for quantitative variables and as counts for categorical variables. The diagnostic capacity of the IHC method for the determination of MMR and p53 status relative to the molecular techniques was assessed in terms of sensitivity, specificity, accuracy, PPV, and NPV. All diagnostic characteristics were reported with their 95% confidence interval (95% CI). The McNemar test was used to compare paired proportions, and Cohen’s kappa coefficient with 95% CI was used to evaluate the agreement between the IHC and molecular techniques for the status of both indicators. For the first part of this study, the molecular algorithm was not applied; the maximum amount of available molecular data was used to determine the diagnostic performance of IHC versus the molecular techniques. For the second part, the WHO algorithm was applied. Thus, if tumours presented the POLE mutation, whatever the MMR/MSI and p53 status, they were allocated to the POLEmut subgroup. Tumours with wild-type POLE displaying MMRd, whatever the p53 status, were classified in the MMRd subgroup. Finally, the two last subgroups were determined according to the p53 status: either the p53-abnormal or the NSMP subgroup. If one molecular feature could not be determined, and thus the molecular subgroup could not be defined, the case was excluded from the study. Statistical calculations were always made on the maximum number of data available. Missing values were neither replaced nor imputed. Results were considered significant at the 5% critical level (p < 0.05). Statistical analyses were performed using SAS version 9.4 (SAS Institute, Cary, NC, USA).
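The agreement statistics used here can be reproduced from a 2 × 2 table of paired IHC and molecular calls. A minimal Python sketch, assuming the molecular result is taken as the reference and using Wilson intervals for the 95% CI (a common choice; the original SAS implementation may differ):

```python
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

def paired_agreement(a, b, c, d):
    """a = both positive, b = IHC+/mol-, c = IHC-/mol+, d = both negative."""
    n = a + b + c + d
    ratios = {"sensitivity": (a, a + c), "specificity": (d, b + d),
              "PPV": (a, a + b), "NPV": (d, c + d), "accuracy": (a + d, n)}
    out = {k: (num / den, *wilson_ci(num, den)) for k, (num, den) in ratios.items()}
    # Exact McNemar test on the discordant pairs (b vs c).
    m = b + c
    out["mcnemar_p"] = min(1.0, 2 * sum(math.comb(m, k)
                                        for k in range(min(b, c) + 1)) * 0.5**m)
    # Cohen's kappa: observed vs chance-expected agreement.
    po = (a + d) / n
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
    out["kappa"] = (po - pe) / (1 - pe)
    return out

print(paired_agreement(a=80, b=3, c=5, d=60))  # hypothetical counts
```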
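Likewise, the WHO surrogate-marker cascade applied in the second part of the analysis reduces to an ordered decision rule. A minimal sketch, assuming the three statuses have already been determined (None marks a missing feature, mirroring the exclusion of unclassifiable cases in this study):

```python
def molecular_subgroup(pole_mutated, mmr_deficient, p53_abnormal):
    """Assign POLEmut -> MMRd -> p53abn -> NSMP, in that order of priority.
    Returns None when a required feature is missing."""
    if pole_mutated is None:
        return None
    if pole_mutated:
        return "POLEmut"            # whatever the MMR/MSI and p53 status
    if mmr_deficient is None:
        return None
    if mmr_deficient:
        return "MMRd"               # POLE wild-type with MMR deficiency
    if p53_abnormal is None:
        return None
    return "p53abn" if p53_abnormal else "NSMP"

print(molecular_subgroup(False, False, True))  # -> p53abn
```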
Proteomics as a New-Generation Tool for Studying Moulds Related to Food Safety and Quality | a4fd87e1-593b-47dd-bf75-9b8fc290227d | 10003330 | Microbiology[mh] | Moulds are a key microbial group in the food industry, since they are capable of growing in a wide range of environmental conditions. Firstly, the application of moulds and derived products to produce and preserve food and food ingredients is very broad . Mould enzymes are ubiquitous, used in starch processing, in the bakery, and brewery industries, and to produce beverages, including wines and in food fermentation . Apart from producing beneficial effects, moulds are the most commonly found spoilage microorganisms at every stage of the food chain and could be the primary causes of significant financial losses in some foodstuffs . Additionally, this microbial group poses issues to human health because of their potential production of undesirable compounds, especially mycotoxins. Both harmful activities linked to mould development are a concern in the food industry since they can seriously damage the brand image . During the period 2018-2022 a total of 131 notifications related to mould contamination of different animal and vegetal food products and food supplements were accepted in the European Union . Within these notifications, 1 was classified as “alert”, 42 as “border rejection”, and 88 as “information”. The highest occurrence of moulds in such products was declared in cereals and bakery products. Regarding mycotoxin notifications during this period, more than 1000 have been stated, with aflatoxins being the most frequently found, followed by ochratoxin A . Most of the notifications concerned the categories “nuts, nut products and seeds”, “cereals and bakery products”, “herbs and spices”, and “fruits and vegetables”. To guarantee the quality and safety of foodstuffs in relation to undesirable moulds, different techniques for their detection and the characterisation of their mechanisms of action have been reported. The quick development of modern technology has encouraged the application of omics for such a purpose. In the last decades, an increasing number of proteomics approaches have been proposed to be used specially to discover the key mechanisms of action of moulds with interest in foodstuffs. This omics technique has some advantages when compared to other techniques working with the same aim, such as transcriptomics. Thus, the primary RNA transcript of eukaryotic genes can be processed in more than one way, resulting in more than one protein from a single gene . The rise in the use of this tool is the result of different advancements, for example, the increased number of available protein sequences, the technological developments with respect to the analysis of protein mixtures, and the improvement of bioinformatics tools to generate and process large biological data sets . The proteomic methodology has been successfully utilised for investigating the microbe–host interactions, the pathogenic processes and toxin biosynthesis, and the responses of microorganisms to environmental factors [ , , , , , , , , ]. Thus, proteomics could provide the knowledge for boosting strategies to avoid the issues caused by the mould spoilage and the hazard related to mycotoxins in food. This review presents approaches of the high-throughput technology proteomics applied to foodborne moulds for understanding how to address the food quality and safety challenges, showing their advantages and downsides. 
Apart from providing an overview of the proteomic techniques currently available for this purpose, this work aims to show how these techniques could be improved when applied to other scientific fields.
As stated before, two downsides associated with the mould contamination of food are of interest: spoilage and mycotoxin production, provoking food quality and food safety concerns, respectively. Regarding alteration, filamentous fungi are considered severe food pathogens due to their ability to penetrate and break down food components using extracellular enzymes. They thus cause different types of spoilage, including unwanted visible mycelium on the product surface and undesirable sensory characteristics in flavour, colour, odour, and texture, with the consequent consumer rejection. The Penicillium, Aspergillus, Rhizopus, Mucor, Geotrichum, Fusarium, Alternaria, Cladosporium, Eurotium, Botrytis, and Byssochlamys genera are involved in the spoilage of different foodstuffs. Most of the problems related to mould spoilage have been described in fruits, vegetables, grains, and cereal products. For instance, bread and bakery products can spoil rapidly, mainly due to the growth of Aspergillus, Penicillium, Rhizopus, and Mucor species. Botrytis cinerea is the main biological cause of pre- and post-harvest damage, since it is responsible for grey mould formation in many plant species, including tomatoes and table grapes. Indeed, this undesirable mould is ranked second in the “world top 10 fungal pathogens in molecular plant pathology” in terms of economic and scientific relevance, preceded only by Magnaporthe oryzae. Blue mould, produced predominantly by Penicillium expansum and, to a lesser extent, other Penicillium spp., provokes the most detrimental infection of stored apples. White mould disease, caused by Sclerotinia sclerotiorum, is a major problem in rapeseed oil production. Concerning food products of animal origin, black spot spoilage by moulds belonging to the Cladosporium genus (Cladosporium oxysporum, C. cladosporioides, and C. herbarum) has been reported in dry-cured ham and dry-cured fermented sausages. C. cladosporioides, C. herbarum, Penicillium hirsutum, and Aureobasidium pullulans were isolated from chilled meat spoiled by black spot. Considering the food safety issue associated with moulds, mycotoxins are a group of secondary metabolites with low molecular weight produced before and after the harvest of foodstuffs of vegetal origin and during the ripening and subsequent processing of those of animal origin. In the latter, mycotoxin contamination could also be due to their presence in the animal feed. These metabolites can provoke harmful effects, such as carcinogenic, immunosuppressive, teratogenic, and mutagenic ones. Hundreds of mycotoxins have been identified, but toxicity, frequency of outbreaks, and target organs differ among them. Mycotoxin contamination is a great challenge to food safety since many of these compounds cannot be eliminated using heat, physical, or chemical treatments. The main mycotoxin-producing moulds belong to the genera Fusarium, Aspergillus, and Penicillium, which include several species producing the toxins of greatest concern worldwide, such as aflatoxins, ochratoxins, and fumonisins. Other genera, such as Claviceps and Alternaria, can also be involved. Others, such as the Fusarium toxins beauvericin, enniatins, and moniliformin, are so-called emerging mycotoxins, and their serious risk to human and animal health has been stated despite the fact that a proper risk assessment has not yet been performed.
On the other hand, not every strain belonging to a mould species produces mycotoxins, and those that do usually produce them only under particular conditions. Risks associated with mycotoxins depend on both hazard and exposure. The hazard of mycotoxins to human beings is probably universal (while other factors are, occasionally, also important, for instance hepatitis B virus infection in relation to the hazard of aflatoxins). Exposure to mycotoxins is present worldwide, although there are geographic and climatic differences in their production and occurrence as well as different dietary habits in various parts of the world. However, the implication of global climate change in toxigenic mould ecology and their pattern of mycotoxin production has been stated. As a result, the number of crops damaged by insects will increase because of global warming, rendering them more susceptible to mould infection; global warming could also modify the diversity of diseases invading crops: certain moulds might disappear from an environment and appear in new regions previously considered safe, with the consequent economic and social implications. Global warming will make crop growth impossible in some areas and, where growing crops will still be possible, plants will be subjected to suboptimal climatic conditions, resulting in increased susceptibility to mould contamination. Furthermore, warmer climates will favour thermotolerant species, leading to the prevalence of Aspergillus over Penicillium species. Thus, climate change remains the primary factor for high levels of mycotoxins in African foods. Many countries have regulated maximum limits and guidelines for relevant mycotoxins, such as aflatoxins, ochratoxin A, deoxynivalenol, zearalenone, fumonisins, T-2 toxin, HT-2 toxin, citrinin, ergot sclerotia, ergot alkaloids, and patulin. Current regulations are based on scientific opinions of authoritative bodies, such as the FAO/WHO Joint Expert Committee on Food Additives of the United Nations (JECFA) and the European Food Safety Authority (EFSA), which base their assessments on the known toxic effects. Control of mould contamination is a major concern for the food industry and for scientists, who are looking for efficient solutions to prevent and/or limit not only mould growth, but also mycotoxin production. Chemical fungicides and good hygiene practices are the primary strategy for the treatment of undesirable foodborne moulds. Nonetheless, there is a growing demand from consumers for food free of synthetic fungicides and with a minor impact on the environment. Among the problems described for such products are the development of resistance to fungicides and the presence of residues in food, apart from causing allergies or side effects in some consumers. As a result, major progress is being made in finding more sustainable and safer alternatives to such preservatives, including biopreservation using microorganisms as well as legally permitted ingredients, and physical treatments. These alternatives generally do not have as wide a spectrum of activity as the synthetic fungicides and, consequently, their combined application has been suggested. Biological control using microorganisms has been reported for different food products. For instance, Candida intermedia provoked a significant reduction of ochratoxin A production when applied against Aspergillus carbonarius. Both yeasts and bacteria have been investigated as biocontrol agents against grey mould decay in table grapes.
Biopreservation by lactic acid bacteria is considered the most promising alternative to chemical fungicides in the dairy industry due to their Generally Regarded as Safe (GRAS) and Qualified Presumption of Safety (QPS) statuses in the United States and the European Union, respectively. Antimicrobial compounds of biological origin have also been investigated against undesirable foodborne moulds. Natural antimicrobials, including plant extracts, edible coatings, and putrescine, amongst others, have been investigated against grey mould decay in table grapes. Within plant extracts, essential oils from many plants have shown remarkable potential as biocontrol agents. Thus, numerous essential oils have been examined as antifungal agents for enhancing the shelf life of bread, showing different degrees of impact, although the consumer does not always appreciate the flavour and aroma they provide. For instance, tea tree oil (TTO) inhibited the spore germination and mycelial growth of B. cinerea. Against this undesirable mould, the inhibitory biological effects of wuyiencin produced by Streptomyces albulus var. wuyiensis have also been reported.
Traditionally, the study of mould transcripts has been employed to unveil the mould response under certain environmental conditions and in the presence of biocontrol agents or antifungals. However, proteomics is a more robust technique than transcriptomics, because proteins, and not genes or transcripts, are responsible for the cellular phenotypes. Indeed, gene expression alone does not provide information on post-translational modifications (PTM) or even protein expression itself, whilst proteomics offers the possibility to directly explore the expressed proteins. These can be modified by the covalent attachment of substances, such as sugars, fats, phosphate groups, and others, affecting the function they perform for the cell. This could explain why some studies did not find a correlation between transcriptomics and proteomics in moulds. For instance, the changes in aflatoxigenic Aspergillus flavus protein profiles showed low congruency with alterations in the corresponding transcript levels, indicating that post-translational processes play a critical role in regulating the protein level in this mould species. Similarly, a proteomic investigation of Aspergillus fumigatus ∆gliT, related to gliotoxin production, did not reflect the large set of transcriptome changes. Barker et al. found transcripts decreased in abundance in two functional categories, glycolysis and amino acid metabolism, while the related proteins were enhanced in the spoilage mould A. fumigatus. A low correlation of transcriptome and proteome data was obtained in a toxigenic A. flavus grown in maize and peanut substrates. Another study on Tolypocladium guangdongense, used in medicine, showed a low correlation between the transcriptomic and proteomic data, suggesting the importance of post-transcriptional processes in its growth. For all these reasons, together with the higher affordability of proteomic analyses in the last decades, proteomics has emerged as the preferred omics tool in the study of toxigenic mould physiology.
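The degree of transcriptome–proteome concordance reported in these studies is typically summarised as a rank correlation between matched fold changes. A minimal Python sketch, with hypothetical gene names and log2 fold-change values used purely as placeholders:

```python
# Rank correlation between matched transcript and protein log2 fold
# changes; a low rho reproduces the "low congruency" pattern above.
from scipy.stats import spearmanr

transcript_l2fc = {"aflM": 2.1, "aflP": 1.8, "catA": -0.4, "hsp70": 0.9}
protein_l2fc = {"aflM": 0.6, "aflP": 1.5, "catA": 0.1, "hsp70": -0.2}

shared = sorted(set(transcript_l2fc) & set(protein_l2fc))
rho, pval = spearmanr([transcript_l2fc[g] for g in shared],
                      [protein_l2fc[g] for g in shared])
print(f"Spearman rho = {rho:.2f} (p = {pval:.3f}) over {len(shared)} genes")
```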
Despite the benefits of using this technique in food, it has some limitations, as it depends on the matrix complexity, the high ranges of protein concentrations needed, and the performance of multiple steps. To reduce some of these limitations, model systems have been used to simplify the experiments and avoid interference with the food matrix and the native microbial population. In parallel, and focusing on mould physiology, model systems also facilitate the identification of mechanisms that could be hidden in a complex ecosystem. Several studies have used commercial media to explore the mechanisms involved in the antifungal action of different compounds. For example, the ochratoxigenic Aspergillus ochraceus was grown in yeast extract sucrose broth to determine the influence of citral on its proteome. Potato dextrose agar was used as a substrate for the growth of the ochratoxin A-producing A. carbonarius in the presence of the volatile compound 2-phenylethanol, and for B. cinerea treated with the antifungal wuyiencin, before protein extraction. Nevertheless, the use of food-based artificial media in proteomics reduces contamination with proteins from the food itself or its native microbial population, bringing the experimental design closer to the real product than a commercial culture medium. In this sense, Xia et al. employed apple juice heated at 80 °C for 30 min to denature the proteins present in the juice as a food-based model system for P. expansum studies. Delgado et al., also using apple as a substrate, employed this fruit lyophilised together with agar and water to be sterilised; subsequently, a layer of sterile cellophane was placed onto the solid medium to prevent cross-contamination between proteins and P. expansum. Li et al. used different broths made with crop powder substrates (maize, rice, and peanut) that were autoclaved before the inoculation of toxigenic A. flavus. A dry-cured fermented sausage-based medium was used to identify the modes of action of biocontrol agents against the ochratoxigenic Penicillium nordicum and Aspergillus westerdijkiae. Similarly, a medium elaborated with dry-cured ham was inoculated with biocontrol agents, and a cellophane sheet was laid over the surface of the agar before the inoculation of ochratoxigenic P. nordicum to prevent cross-contamination between the different microorganisms. Once the study matrix has been selected, different techniques can be carried out for the proteomic analyses of moulds of interest in food. The most affordable techniques include one- or two-dimensional gel electrophoresis, the latter (2-DE) being the most common one due to the separation of proteins by two properties, molecular mass and isoelectric point (pI). However, the protein profile obtained from eukaryotic cells, such as moulds, is too complex to be resolved by 2-DE alone. In general, 2-DE works in a limited range of pI, excluding the most cationic or anionic proteins from the resolved part of the gel. The appearance of spots in the gels can provide information for protein identification, acting like a map, while the intensity of those spots provides quantitative information about protein levels. For this, the spots should be analysed with image comparison software. This methodology is slow and labour-intensive, which can contribute to a loss of sensitivity, such as that reported when it is used in parallel with other more advanced methods, as discussed later. Thus, this approach usually entails a low efficiency in protein identification and discrimination between batches, and even the appearance of human errors. Subsequently, spot identification entails the conversion of the mould proteins individually excised from the 2-DE gel into peptides by digestion and their analysis by mass spectrometry (MS). Before this, the samples must be purified to remove gel contaminants. The peptides can be analysed using different equipment for mass analysis, such as an ion trap, which cannot offer high resolution, although it usually achieves enough sequence coverage to identify proteins. All these drawbacks related to protein separation, automatisation, or efficiency in identification have led to the development of other advanced techniques using cutting-edge technologies, such as time-of-flight (TOF), Orbitrap, and Fourier transform ion cyclotron resonance mass spectrometers (FT-ICR-MS), all of them categorised as high-resolution mass spectrometry (HRMS) equipment. TOF instruments have some advantages with respect to the Orbitrap analyser due to their speed in performing full scans, allowing them to match well with ion mobility technologies. Despite this, TOF only achieves mass resolutions between 60,000 and 200,000 full width at half maximum (FWHM), while Orbitrap analysers reach up to 1 million FWHM. However, the time for analysis with the Orbitrap is longer than that of a TOF, which can lead to lower performance.
The FT-ICR-MS has not yet been used in the study of the mould proteome, despite its high accuracy, achieving resolutions exceeding 2.7 million FWHM. Currently, ion mobility separation techniques are being used in other fields, such as human medicine, as a further separation step in the mass spectrometer, improving the measurement sensitivity and multiplexing capability. During a trapped ion mobility spectrometry (TIMS) separation event, trapped peptide ions are concentrated and eluted, and, coupled with a TOF, TIMS can analyse multiple targets per accumulation in a short time without compromising sensitivity. This is a promising tandem-measurement approach to be coupled to HRMS, as will be discussed later.
In general, proteomic techniques that allow the monitoring of protein levels can better reveal the metabolic processes of moulds and can be used both to detect mould diversity in a microbial community and to contribute to a better understanding of the mechanisms of action of different antifungal treatments, preventing the risk of mycotoxin contamination or food spoilage.

4.1. Detection and Identification of Moulds

Proteomics for assessing food quality and safety has been applied for a long time with the use of analytical methods, achieving rapid and reliable analysis of food throughout the food chain. One of the strategies to improve food quality and safety related to moulds involves their identification. The detection of moulds by proteomic strategies focuses on two main techniques: the traditional and widely used matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOF MS) and the omics approach metaproteomics, which allows not only the identification of proteins, but also the observation of their PTM. MALDI-TOF MS is based on the acquisition of protein mass spectrum fingerprints from an unknown isolate, which is then identified by comparing its mass spectrum data with those from a reference library. Mass spectrometric peptide/protein profiles of moulds display peaks in the m/z region of 1000–20,000, where a unique set of biomarker ions may appear, facilitating a differentiation of samples at the level of genus, species, or strain. MALDI-TOF MS analysis of subproteomic mass spectra has been shown to be a promising tool for species identification and differentiation in moulds. Reliable species identification by MALDI-TOF MS has been reported for both food spoilage and foodborne moulds. Furthermore, its usefulness has also been demonstrated in the identification of moulds in various types of foods, such as ripened cheeses and asparagus. Metaproteomics is an omics technique able to detect microorganisms in complex microbial communities, such as some foods. Proteins constitute the largest amount of cellular material, and therefore, the total protein per species can be quantified to assess the biomass of every member of the microbial community. In addition, these proteins can be assigned to individual species or higher taxa using a protein sequence database, leading to an understanding of the functional roles and interactions of individual members in the community. Metaproteomics provides “snapshots” of microbial populations and can be used to directly study the nature of microbial function in specific environments and states as well as to understand complex substrate–microbiome interactions. Although metaproteomics is a very powerful method, some problems in the bioinformatics evaluation impede its large-scale application, since this analysis is a key part of this omics methodology. Protein identification requires software and platforms, such as Unipept, MetaLab, ProteoStorm, or Galaxy. Commonly used functional information databases include Clusters of Orthologous Groups (COG), Gene Ontology (GO), and the Kyoto Encyclopaedia of Genes and Genomes (KEGG). In particular, the construction of databases for protein identification, the clustering of redundant proteins, and taxonomic and functional identifications pose great challenges. This is one of the main factors why its application to mould detection remains relatively underutilised.
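The library-matching principle behind MALDI-TOF MS identification described above can be approximated by binning peak lists and scoring spectra by cosine similarity. A simplified Python sketch; the spectra are hypothetical (m/z, intensity) pairs, and real systems use more elaborate scoring:

```python
import numpy as np

def binned_vector(peaks, mz_min=1000, mz_max=20000, width=5):
    """Turn a peak list into a normalised, binned intensity vector."""
    vec = np.zeros(int((mz_max - mz_min) / width))
    for mz, intensity in peaks:
        if mz_min <= mz < mz_max:
            vec[int((mz - mz_min) // width)] += intensity
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def best_match(unknown, library):
    """Cosine similarity of the unknown fingerprint against each reference."""
    u = binned_vector(unknown)
    scores = {name: float(u @ binned_vector(ref)) for name, ref in library.items()}
    return max(scores.items(), key=lambda kv: kv[1])

library = {
    "Penicillium expansum": [(3448.0, 80.0), (5120.5, 100.0), (7580.2, 40.0)],
    "Aspergillus flavus": [(2995.3, 60.0), (6050.1, 100.0), (9210.7, 55.0)],
}
unknown = [(3449.1, 75.0), (5121.0, 90.0), (7581.0, 35.0)]
print(best_match(unknown, library))  # -> best-scoring reference species
```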
Metaproteomic techniques used for the detection of microorganisms in foods have been applied mainly in fermented foods to describe not only the microbial composition and succession, but also its role in the process and the relationship of microorganisms with flavour development. Metaproteomics can be used as a tool to optimise food fermentations; for example, by knowing the metabolic pathways, it is possible to choose the starter strains to produce specific metabolites of interest, to identify the best nutrients to supplement the medium, and to enhance the performance of the starter or choose intermediate strains and know the appropriate moment to introduce them. Overall, metaproteomics has facilitated the detection of moulds belonging to the Aspergillus, Mucor, Rhizopus, Penicillium, and Geotrichum genera in various types of soy sauce foods and fermented beverages.

4.2. Study of Mould Growth and Physiology

Proteomic studies provide a comprehensive vision of the differential protein accumulation during mould growth and the generation of mould secondary metabolites, arising as an important contribution to the identification of new proteins and genes linked to the biosynthesis of mycotoxins. These findings contribute to a deeper knowledge of the pathways linked to mycotoxin production and can be very helpful for designing preventive actions to minimise mould spoilage and mycotoxin production in food. The identification of the proteins involved in this metabolite synthesis requires a comparison between the proteomes of mycotoxin-producing and non-producing strains, and between the proteomes of producing strains under conditions of production and non-production of the toxin. The proteome analysis of A. flavus showed that, under aflatoxin-producing conditions, some aflatoxin biosynthetic enzymes, such as O-methyltransferase A (OmtA), AflK/vbs/VERB synthase, ver-1, norA, and aflatoxin B1-aldehyde reductase GliO-like, are prevalent in the mycelium, together with proteins from metabolic processes. In this sense, proteins related to aflatoxin biosynthesis, such as AflR, nonribosomal peptide synthetase 10, the α and β subunits of fatty acid synthase, sterigmatocystin biosynthesis P450 monooxygenase, polyketide synthase (PksA), noranthrone synthase, noranthrone monooxygenase, NOR reductase, averantin hydrolase, oxidase, esterase, desaturase, and alcohol dehydrogenase, are expressed when the mould grows on a favourable substrate, such as corn flour. On the other hand, most of the proteins involved in aflatoxin biosynthesis (O-methyl sterigmatocystin oxidoreductase, sterigmatocystin 8-O-methyltransferase, P450 monooxygenase AflN, versicolorin B desaturase, averufin oxidase A, averantin hydroxylase, and noranthrone synthase), as well as those involved in carbohydrate metabolism, cell wall biogenesis, mitogen-activated protein kinase (MAPK) signalling pathways, heat shock proteins, autophagy, and dicer-like proteins, are already expressed at the germinating conidial stage. These data suggest that the MAPK signalling pathway could be crucial in cell wall modulation and secondary metabolite synthesis, and that the biosynthesis of aflatoxins could start at early germination stages under favourable conditions. In the proteome of two strains of A. carbonarius differing in their ochratoxin A-producing potential, nine proteins (seven increased and two reduced in quantity) were detected as potentially involved in several biological functions, such as regulation, amino acid metabolism, oxidative stress, and sporulation.
Among them, a protein homologous to CipC showed the highest relative abundance in the ochratoxin A-producing strain. Although the function of this protein is still unknown, it was concluded that it is probably involved in ochratoxin A biosynthesis. The composition of the substrate and the environmental conditions have a major impact on mould physiology, and consequently, these changes should also be apparent in the proteome. Thus, proteomics has been applied to explore the impact of different external factors, such as water activity (a_w), temperature, pH, nutrient substrate, salt content, or light, on several foodborne moulds and their mycotoxin production. The response to different a_w in A. flavus resulted in variations in the relative amount of 837 proteins: 403 were more abundant at 0.99 a_w and 434 more abundant at 0.93 a_w. The osmotic stress-related proteins Sln1 and Glo1, belonging to the Hog1 pathway, showed higher levels at 0.99 a_w. These results are consistent with the fact that A. flavus grows better under high a_w conditions. The secretion of extracellular hydrolases increased as a_w rose, suggesting that they may play a critical role in the induction of aflatoxin biosynthesis. Furthermore, the export protein KapK may downregulate aflatoxin biosynthesis through the translocation of NirA, a specific transcription factor in the nitrate assimilation pathway. In addition, the abundances of 11 proteins directly related to aflatoxin biosynthesis (aflE, aflF, aflH, aflJ, aflK, aflM, aflO, aflP, aflQ, aflY, and aflYa) were higher at 0.99 a_w, and just one (aflYc) was more abundant at 0.93 a_w. The aflE and aflF genes, encoding ketoreductases that convert norsolorinic acid to averantin in the aflatoxin synthesis pathway, were expressed only under aflatoxin-supportive conditions (0.99 a_w). These data are valuable for understanding the impact of water stress on aflatoxin production and for the design of preventive measures for its control in foods. In the proteome of A. flavus growing at temperatures of 28 °C and 37 °C, analysed using iTRAQ labelling, 664 proteins were found in different relative abundances, especially those belonging to translation-related pathways, metabolic pathways, and the biosynthesis of secondary metabolites. The growth, but not the production of aflatoxins by A. flavus, is favoured at 37 °C, while the opposite occurs at 28 °C. In this sense, 12 aflatoxin biosynthesis-related proteins (aflE, aflW, aflC, aflD, aflO, aflP, aflK, aflM, aflY, aflJ, aflS, and aflH) showed a higher abundance at 28 °C than at 37 °C. By SILAC tagging, 31 proteins were found in higher amounts at 37 °C (including AflM and AflP) and 18 were more abundant at 28 °C (including AflD, AflE, AflH, and AflO). The shift in the expression of the aflatoxin pathway enzymes is closely related to the strong repression of both aflatoxin biosynthesis and the transcription of the aflatoxin pathway genes observed at 37 °C. The pathway-specific regulatory aflR gene, required for the activation of most aflatoxin pathway genes, was upregulated at 28 °C, but the AflR protein was not detected in the proteomic profiles of A. flavus at either 28 °C or 37 °C. Likewise, the AflR protein was not detected in A. flavus grown under different a_w conditions, regardless of whether or not they favoured aflatoxin production. These results lead to the conclusion that there is a low correlation between proteome and transcriptome data, suggesting that post-transcriptional gene regulation affects distinct biological pathways and secondary metabolite gene clusters.
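Differential-abundance comparisons like those above (label-free, iTRAQ, SILAC) typically reduce to per-protein fold changes with a replicate-level significance test and multiple-testing correction. A minimal label-free-style sketch in Python; protein names and intensities are hypothetical:

```python
import numpy as np
from scipy.stats import ttest_ind

def differential(control, treated, alpha=0.05):
    """log2 fold change + Welch's t-test per protein, BH-adjusted."""
    names = sorted(set(control) & set(treated))
    l2fc, pvals = [], []
    for name in names:
        a, b = np.log2(control[name]), np.log2(treated[name])
        l2fc.append(b.mean() - a.mean())
        pvals.append(ttest_ind(a, b, equal_var=False).pvalue)
    # Benjamini-Hochberg step-up adjustment, largest p-value first.
    m = len(pvals)
    adj = np.empty(m)
    prev = 1.0
    for rank, idx in enumerate(np.argsort(pvals)[::-1]):
        prev = min(prev, pvals[idx] * m / (m - rank))
        adj[idx] = prev
    return [(n, f, q) for n, f, q in zip(names, l2fc, adj) if q < alpha]

ctrl = {"catalase": [1.0e6, 1.2e6, 0.9e6], "sodA": [2.0e5, 1.8e5, 2.2e5]}
trt = {"catalase": [2.3e6, 2.6e6, 2.1e6], "sodA": [2.1e5, 1.9e5, 2.0e5]}
print(differential(ctrl, trt))  # -> catalase flagged as more abundant
```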
In a transcriptome and proteome analysis conducted to clarify the mechanisms explaining the higher production of aflatoxin B1 by A. flavus in maize and rice broth than in peanut broth, fewer differences in gene expression and protein abundances were observed between the maize and rice substrates than between these substrates and peanut. Most of the proteins with different amounts are involved in metabolic processes, cellular processes, catalytic activity, binding, cell, and cell part, but the limited variations suggest that the growth and metabolism of A. flavus in these substrates are similar, mainly in rice and maize. The expression of genes linked to the early phase of aflatoxin biosynthesis (aflA, aflB, and aflC) and of the accA gene was significantly increased in the maize and rice substrates. Genes related to carbon metabolism were upregulated in maize broth, while those involved in acetyl-CoA accumulation and consumption were up- and downregulated, respectively. Several genes involved in the regulation of aflatoxin biosynthesis, namely veA, ppoB, snf1, and G protein-coupled receptor (GPCR) genes, were differentially expressed in the three substrates, indicating that they may also be involved in sugar signal sensing, transfer, and regulation. Notably, correlation analyses of the transcriptome and proteome showed that the trehalose metabolism genes, the aldehyde dehydrogenase gene, and the tryptophan synthase gene are important in the regulation of aflatoxin yield in different substrates. A low correlation of transcriptome and proteome data was obtained, similarly to the abovementioned studies regarding the effect of a_w and temperature on the production of aflatoxin by A. flavus. This finding could be due to the insufficient number of recovered proteins, the different synthesis and turnover rates of proteins and mRNAs in various cell stages, and post-transcriptional or post-translational modifications. The addition of salt in the processing of a variety of foods, such as dry-cured meat and dairy products, favours the growth of both beneficial and toxigenic moulds on their surface. Particularly in meat products, the proliferation of ochratoxin A-producing moulds is of special concern. Therefore, from a food safety perspective, the study of the influence of the salt added to meat products on the growth of these moulds and the production of toxins is of great interest. The addition of 20 g/L NaCl to a culture medium induced the spore production of Aspergillus ochraceus, while 70 g/L NaCl repressed it. Comparative proteomic analysis of A. ochraceus growing with 20 or 70 g/L NaCl revealed significant changes in the abundance of proteins involved in nutrient uptake, cell membrane integrity, the cell cycle, energy metabolism, intracellular redox homeostasis, protein synthesis and processing, autophagy, and secondary metabolism. The latter activity, including ochratoxin A production, was stimulated by the addition of 20 g/L NaCl, with an increase of non-ribosomal peptide synthetases (NRPS), and repressed by 70 g/L NaCl. At the highest concentration, an increased extracellular hydrolase production was observed, probably as an adaptation to nutrient starvation due to a decrease in energy metabolism.
A higher concentration of reactive oxygen species (ROS) was also detected, which was harmful to protein synthesis and even triggered autophagy. The extracellular proteome of Aspergillus niger differed considerably depending on the carbon substrate, xylose or maltose. When the medium was supplemented with xylose, a variety of plant cell wall-degrading enzymes were identified, with xylanase B and ferulic acid esterase as the most abundant ones. In cultures with maltose, high levels of catalases were found, and glucoamylase was the most abundant protein. However, the intracellular proteome was not significantly changed. Interestingly, other culture conditions, such as pH control, aeration, stirring, or shaking, strongly influenced the abundance of glycolytic and tricarboxylic acid (TCA) cycle enzymes, flavohemoglobin, the CipC protein, superoxide dismutase, NADPH-dependent aldehyde reductase, and ER-resident chaperones and foldases in the intracellular proteome. On the other hand, the addition of lactate to a medium containing starch and nitrate provokes an increase in the production of fumonisin B2, but not of ochratoxin A, by A. niger. The proteome of A. niger was affected mainly in the abundance of proteins related to the intracellular level of acetyl-CoA or NADPH, such as enzymes of the pentose phosphate pathway, pyruvate metabolism, the TCA cycle, ammonium assimilation, fatty acid biosynthesis, and oxidative stress protection. These data support the hypothesis that fumonisin production by A. niger is regulated by acetyl-CoA. On the other hand, some compounds from foods can stimulate the germination of undesirable moulds. For example, limonene, a dominant volatile constituent in the oil glands of most citrus, promotes spore germination, germ tube elongation, and mycelial growth of the citrus pathogen P. digitatum. Limonene alters the abundance of 340 proteins in P. digitatum, including proteins related to energy metabolism and antioxidant proteins, such as glutathione S-transferases, superoxide dismutase, and catalases. Limonene thus induces the growth of P. digitatum, probably through the regulation of energy metabolism and ROS homeostasis. P. expansum commonly causes blue mould rot and postharvest decay in apples, pears, and other pome fruits, and is the main producer of patulin. Proteomics has been used for studying the molecular mechanisms involved in the interaction of this mould with apple fruit. In an apple substrate, 28 proteins of P. expansum were found in higher relative abundance. These proteins were mainly associated with pathogenesis, such as glyceraldehyde-3-phosphate dehydrogenase, catalase, and peptidase, and with secondary metabolism and patulin biosynthesis regulation, for instance, glucose dehydrogenase and FAD-binding monooxygenase. These changes in the proteome might be responsible for the observed medium acidification and patulin production. In the proteome of P. expansum growing on apple juice, up to 148 proteins were found in high quantity, including cell wall-degrading enzymes and peptidases/proteases, especially a serine carboxypeptidase (PeSCP) required for conidiation, germination, fungal growth and morphology, tolerance to environmental stresses, extracellular carboxypeptidase activity, and fungal virulence. The influence of pH on the production of fumonisin by Fusarium proliferatum has also been explored by proteomic analysis.
The increase in fumonisin production at pH 10 was related to the higher abundance of proteins, such as polyketide synthase, cytochrome P450, S-adenosylmethionine synthase, and O-methyltransferase, involved in the modification of the fumonisin backbone. In contrast, at pH 5, the higher abundance of L-amino-acid oxidase, isocitrate dehydrogenase, and citrate lyase was linked to the inhibition of the condensation of the fumonisin backbone and the concurrent decrease in mycotoxin production. The exposure to light of short wavelengths induces oxidative stress in Penicillium verrucosum, together with a marked decrease in the synthesis of ochratoxin A and a significant increase in the production of citrinin. Through a proteomic analysis combining two-dimensional SDS-PAGE with HPLC-ESI-TOF-MS/MS, 56 significantly differential proteins between cultures grown in light versus dark were detected. Most of them are presumably involved in the stress response, such as antioxidant proteins or heat shock proteins, and in general metabolic processes, for example, glycolysis or ATP supply. Neosartorya pseudofischeri is a heat-resistant fungus and can contaminate several juices. The cellular process of heat resistance has been studied in ascospores subjected to heat treatment at 93 °C for 0, 1, or 8 min. A total of 150 proteins significantly altered in abundance were identified, of which 126 showed decreased abundance after heat treatment, mainly involved in central carbon metabolism, heat stress responses, the elimination of reactive oxygen intermediates, and translation events. These proteins are potential targets to evaluate the efficiency of thermal treatments for processed food products.

4.3. Mode of Action of Antifungal Agents against Foodborne Moulds

Several antifungal agents have been proposed to control the growth of undesirable moulds and mycotoxin accumulation in foods, including microorganisms and chemical compounds. To study the efficacy of these antifungal agents, deciphering both their mechanisms of action and their cellular targets in the moulds of interest is a crucial issue. A proper understanding of the target can provide valuable information on the spectrum of activity of the control agents and the possible sensitivity of the different toxigenic moulds. Moreover, information on possible modes of resistance can be obtained, which can guide the design of strategies using combinations of different control agents that affect distinct targets. Potential side effects, such as the generation of unwanted by-products of treatment, for example mycotoxins or other undesirable secondary metabolites, can also be elucidated. Proteomic studies have provided valuable knowledge about the systems disturbed in response to antifungal agents, and they have been applied to characterise the behaviour of both resistant and susceptible moulds, allowing for the recognition of mechanisms of resistance as well as the identification of promising susceptible targets. Overall, these methods underline the range of tools available to provide a global overview of the molecular targets and biological pathways impacted by antifungal agents. These comprehensive perspectives can support further targeting with complementary techniques, such as biochemical analysis, targeted gene disruption, or metabolite profiling. Proteomic analyses have been conducted to clarify the mode of action of antifungal agents against several toxigenic moulds and their mycotoxin production, for example, P. nordicum, Aspergillus westerdijkiae, A. flavus, A. carbonarius, P. digitatum, P. expansum, P. italicum, and Fusarium oxysporum.
Concretely, P. nordicum and A. westerdijkiae have undergone many studies employing a variety of biocontrol agents, as they have been described as the main producers of ochratoxin A in meat products. A. flavus is the subject of numerous control studies since it is the main producer of the highly toxic and carcinogenic aflatoxins. Penicillium chrysogenum and Debaryomyces hansenii repressed ochratoxin A production by P. nordicum in a dry-cured ham-based medium, likely by nutritional competition. According to proteomic data, both agents inhibited P. nordicum through cell wall integrity (CWI) impairment, and they hampered the secondary metabolism, including ochratoxin A synthesis, by lowering the levels of MAPK and the carbon catabolite repression (CCR) pathway. Rosemary essential oil decreased the abundance of proteins involved in the polyketide synthase enoylreductase (PKS ER) domain in P. nordicum, which would explain the ochratoxin A reduction, and the combination of rosemary leaves with D. hansenii lowered the abundance of proteins linked to the CWI and purine pathways. D. hansenii, singly or in combination with rosemary or its essential oil, causes a large reduction in the production of ochratoxin A by A. westerdijkiae, lowering the abundance of proteins involved in ochratoxin A production, such as PKS ER and NRPS, and in the CWI pathway. On the other hand, the combination of rosemary leaves and its essential oil decreases ochratoxin A production by disturbing the abundance of proteins of the PKS ER domain and CWI pathway of A. westerdijkiae. Volatile compounds generated by yeasts have demonstrated inhibitory effects against toxigenic moulds. The volatilome of Candida intermedia reduces the growth, sporulation, and ochratoxin A biosynthesis of Aspergillus carbonarius. Both the volatilome of C. intermedia and its major component, 2-phenylethanol, affected a variety of metabolic targets, the most concerned routes being central metabolism, energy production, and the stress response. The volatilome had a stronger effect on protein biosynthesis. Although 2-phenylethanol impacts some metabolic traits, other unidentified volatile components may involve a plurality of metabolic targets, which may result in the higher effectiveness of the volatilome. Several compounds have been considered for the control of A. flavus, such as the antifungal protein PgAFP, quercetin, and Perilla frutescens essential oil (PEO). PgAFP, an antifungal protein secreted by a strain of P. chrysogenum, has been studied as a biocontrol agent against toxigenic moulds on dry-cured foods. PgAFP provoked apoptosis and necrosis in A. flavus hyphae, with a reduction of energy metabolism, alteration of CWI, and an increase of ROS. Label-free mass spectrometry-based proteomics showed changes in the proteome of A. flavus, with higher glutathione and heat shock protein concentrations and a lower relative quantity of Rho1 and the β subunit of the G protein. However, PgAFP did not alter the metabolic capability, chitin deposition, or hyphal viability of A. flavus grown on cheese, due to the calcium content.
A total of 125 proteins were increased in the presence of calcium, including oxidative stress-related proteins, whereas 70 proteins were found at lower abundance, mainly involved in metabolic pathways and the biosynthesis of secondary metabolites. The resistance conferred by calcium to A. flavus appears to be mediated by calcineurin, G protein, and γ-glutamyltranspeptidase, which combat oxidative stress and impede apoptosis. On the other hand, a strain of Penicillium polonicum is natively resistant to PgAFP, increasing the chitin content of its cell wall. Proteome changes allow this resistance to be attributed to a higher abundance of glucosamine-6-phosphate N-acetyltransferase and the Rho GTPase Rho1, which would lead to the increased chitin deposition via the CWI signalling pathway. Therefore, proteomics has shed light on the mode of action of the antifungal protein PgAFP and on some native or acquired mechanisms of mould resistance. This information is useful to design strategies to improve the PgAFP activity against toxigenic moulds in foods. Quercetin induces various oxidative stress response proteins but suppresses the MAPK pathway and the expression of several enzymes involved in aflatoxin biosynthesis, such as AflR, acetyl-CoA synthetase, noranthrone synthase, noranthrone monooxygenase, NOR reductase, averantin hydrolase, sterigmatocystin biosynthesis polyketide synthase, and O-methyltransferase A. A comparative global proteomic analysis of A. flavus using the TMT labelling method revealed that PEO inhibits the growth of A. flavus by blocking the antioxidative defence, reducing the expression of the superoxide dismutase and catalase associated with the elimination of ROS. Moreover, several proteins, such as ATP-dependent 6-phosphofructokinase, triosephosphate isomerase, glyceraldehyde-3-phosphate dehydrogenase, and phosphoglycerate kinase, were found in lower relative quantities, repressing the glycolysis pathway and leading to a disturbance of energy metabolism that cannot be overcome by A. flavus, even though additional energy-producing pathways, for instance, fatty acid degradation, amino acid metabolism, pyruvate metabolism, and glyoxylic acid metabolism, were activated. The essential oil citral, comprising a mixture of the terpenoids geranial and neral, inhibits A. ochraceus growth and ochratoxin A production through the accumulation of ROS, resulting in damage to the mould cell membranes and cell walls. Treatment with subinhibitory concentrations of citral altered the amounts of 218 proteins in the A. ochraceus proteome studied by iTRAQ, perturbing proteins involved in fungal growth and development, nutrient intake, and energy metabolism. Conversely, proteins associated with cell wall maintenance, membrane integrity, antioxidative defence, and secondary metabolism were increased. Nevertheless, this response proved to be insufficient to overcome the stress resulting from citral-mediated ROS accumulation and the repression of cell growth, resulting in a lower accumulation of ochratoxin A. The yeast Pichia caribbica has been proposed as a biocontrol agent against P. expansum in apples, and its effects are enhanced by vitamin C, which increases the abundance of proteins related to glucose metabolism, such as glyceraldehyde-3-phosphate dehydrogenase and alcohol dehydrogenase. These changes allowed the growth increase of P. caribbica and thus enhanced its inhibitory effect on P. expansum. P. digitatum is responsible for the postharvest decay of citrus.
A proteomic approach based on isobaric labelling and nano-LC tandem mass spectrometry was used to explore changes in the mould in response to treatments with the antifungal proteins α-sarcin and beetin 27, inhibitors of protein synthesis, compared with those triggered by the chemical fungicide thiabendazole. The results showed differentially expressed proteins between treatments, mainly including cell wall-degrading enzymes, stress response proteins, antioxidant and detoxification mechanisms, and metabolic processes, such as thiamine biosynthesis, suggesting the existence of particular responses to each treatment. P. italicum is considered the principal cause of blue mould of citrus. The natural compound 2-methoxy-1,4-naphthoquinone (MNQ), isolated from the traditional Chinese medicinal plant Impatiens balsamina, had an anti-P. italicum effect. Analysing the proteome under different MNQ treatments, 129 proteins with differential quantities were identified, mainly related to energy generation (mitochondrial carrier protein, glycoside hydrolase, acyl-CoA dehydrogenase, and ribulose-phosphate 3-epimerase), NADPH supply (enolase and pyruvate carboxylase), oxidative stress (catalase and glutathione synthetase), and the pentose phosphate pathway (ribulose-phosphate 3-epimerase and xylulose 5-phosphate). Thus, the inhibition of P. italicum by MNQ may be attributed to the disruption of metabolic processes, especially energy metabolism and the stimulus response. Pinocembrin is a flavonoid from propolis active against P. italicum. In the proteome of P. italicum (studied by iTRAQ), the treatment provokes alterations in the relative abundance of proteins of the mitochondrial respiratory chain (MRC) complexes I and V and an increase in proteins related to programmed cell death, resulting in ROS accumulation and ATP depletion, which may lead to cell death through apoptosis, autophagy, and necrosis mechanisms. Chitosan is a natural biocompatible, biodegradable, and non-toxic polysaccharide derived from the chitin obtained from crustacean shells. Chitosan has been used both to inhibit pathogenic moulds, such as F. oxysporum, and to enhance the antifungal activity of some biocontrol agents, such as Rhodotorula mucilaginosa. F. oxysporum f. sp. cucumerinum, which causes yield losses in cucumber plants, is sensitive to chitosan, which restricts plant disease severity. A proteomic approach using 2-DE coupled with LC-MS/MS analysis identified 62 differentially abundant chitosan-responsive proteins, most with proteolysis and hydrolase activity, involved in metabolism and defence. Chitosan-treated F. oxysporum showed a lower abundance of proteins responsible for virulence, such as plant cell wall-degrading enzymes, structural and functional protein and DNA biosynthesis, and transporter proteins. Moreover, a decrease in the ROS-degrading enzymes glutathione peroxidase and catalase-peroxidase may result in ROS accumulation that can induce apoptosis, reducing mould virulence. The efficacy of R. mucilaginosa against the grey mould B. cinerea, which causes a postharvest disease of fruits and vegetables, can be enhanced by previously culturing the yeast in a medium containing chitosan. Chitosan triggered in R. mucilaginosa a higher quantity of proteins involved in growth and reproduction, energy metabolism, the antioxidant response, the response to abiotic stress, and the degradation of the pathogen cell. These changes can increase the growth rate of R. mucilaginosa and improve its capability to withstand and survive diverse abiotic stresses, allowing it to better compete for nutrients and space against B. cinerea.
mucilaginosa and improve its capability to withstand and survive diverse abiotic stresses, allowing it to better compete for nutrients and space against B. cinerea . The spore germination and mycelial growth of B. cinerea is also inhibited by TTO, which alters the relative abundance of 85 proteins identified by label-free proteomic. The analysed data suggests that the TTO inhibits the TCA cycle, pyruvate metabolism, amino acid metabolism, and membrane-related pathways in mitochondria, and promotes sphingolipid metabolism, which may accelerate cell death in B. cinerea . The yeast Wickerhamomyces anomalus significantly reduces the natural decay of pear fruit. The proteome of pear fruit, analysed using 2-DE and MALDI-TOF/TOF, indicated that W. anomalus induces the accumulation of resistance-related proteins, such as PR family proteins, chitinase, and β-1,3-glucanase, which can inhibit the infection of the moulds whose cell walls contain β-1,3-glucan or chitin . Therefore, multiple approaches through proteomic tools have substantially contributed to the unravelling of those mechanisms beyond the biocontrol agents’ effect on foodborne moulds. This information has served to achieve a better understanding of the moulds’ cellular and pathway targets to improve their control. However, there is a wide window of proteomic applications to be introduced in the flowchart of the foodborne moulds analyses that clearly surpass the currently applied ones, to further maximise the results obtained in line with those already achieved.
Proteomics for assessing food quality and safety has long been applied with the use of analytical methods, achieving rapid and reliable analysis of food throughout the food chain. One of the strategies to improve food quality and safety related to moulds involves their identification. The detection of moulds by proteomic strategies focuses on two main techniques: the traditional and widely used matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOF MS) and the omics approach, metaproteomics, which allows not only the identification of proteins, but also the observation of their PTMs .

MALDI-TOF MS is based on the acquisition of protein mass spectrum fingerprints from an unknown isolate, which is then identified by comparing its mass spectrum data with those from a reference library . Mass spectrometric peptide/protein profiles of moulds display peaks in the m/z region of 1000–20,000, where a unique set of biomarker ions may appear, facilitating the differentiation of samples at the level of genus, species, or strain . MALDI-TOF MS analysis of subproteomic mass spectra has been shown to be a promising tool for species identification and differentiation in moulds [ , , ]. Reliable species identification by MALDI-TOF MS has been reported for both food spoilage and foodborne moulds . Furthermore, its usefulness has also been demonstrated in the identification of moulds in various types of foods, such as ripened cheeses [ , , ] and asparagus .

Metaproteomics is an omics technique able to detect microorganisms in complex microbial communities, such as some foods. Proteins constitute the largest amount of cellular material, and therefore, the total protein per species can be quantified to assess the biomass of every member of the microbial community . In addition, these proteins can be assigned to individual species or higher taxa using a protein sequence database, leading to an understanding of the functional roles and interactions of individual members in the community . Metaproteomics provides “snapshots” of microbial populations and can be used to directly study the nature of microbial function in specific environments and states, as well as to understand complex substrate–microbiome interactions . Although metaproteomics is a very powerful method, some problems in the bioinformatics evaluation, a key part of this omics methodology, impede its large-scale application. Protein identification requires software and platforms, such as Unipept, MetaLab, ProteoStorm, or Galaxy. Commonly used functional information databases include Cluster of Orthologous Groups (COG), Gene Ontology (GO), and Kyoto Encyclopaedia of Genes and Genomes (KEGG) . In particular, the construction of databases for protein identification, the clustering of redundant proteins, and taxonomic and functional identifications pose great challenges . This is one of the main reasons why its application to mould detection remains relatively underexploited. Metaproteomic techniques used for the detection of microorganisms in foods have been applied mainly in fermented foods to describe not only the microbial composition and succession, but also its role in the process and the relationship of microorganisms with flavour development . 
Metaproteomics can also be used as a tool to optimise food fermentations: by knowing the metabolic pathways, it is possible to choose starter strains that produce specific metabolites of interest, to identify the best nutrients to supplement the medium, and to enhance the performance of the starter, or to choose intermediate strains and determine the appropriate moment to introduce them . Overall, metaproteomics has facilitated the detection of moulds belonging to the Aspergillus , Mucor , Rhizopus , Penicillium , and Geotrichum genera in various types of soy sauce foods [ , , ] and fermented beverages .
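To make the fingerprint-matching principle behind MALDI-TOF MS identification (described above) concrete, the toy sketch below compares a hypothetical unknown isolate against a two-entry reference library by cosine similarity of binned peak lists in the m/z 1000–20,000 region. All peak lists, the bin width, and the similarity measure are illustrative assumptions; commercial biotyping platforms use their own, more sophisticated scoring, so this is a conceptual illustration only.

```python
import numpy as np

def binned(peaks, lo=1000, hi=20000, width=20):
    """Bin a list of (m/z, intensity) peaks into a fixed-length vector."""
    vec = np.zeros((hi - lo) // width)
    for mz, intensity in peaks:
        if lo <= mz < hi:
            vec[int((mz - lo) // width)] += intensity
    return vec

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Hypothetical reference fingerprints (species-specific biomarker ions).
library = {
    "Penicillium nordicum": [(3421, 1.0), (5480, 0.6), (7752, 0.4)],
    "Aspergillus westerdijkiae": [(3390, 0.9), (6011, 0.7), (9120, 0.3)],
}
unknown = [(3425, 0.9), (5485, 0.5), (7755, 0.5)]  # invented isolate spectrum

u = binned(unknown)
scores = {sp: cosine(u, binned(ref)) for sp, ref in library.items()}
print(max(scores, key=scores.get), scores)
```

In this invented example, the unknown spectrum shares all three binned biomarker ions with the first library entry and none with the second, so the highest similarity score identifies it at the species level.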
Proteomic studies provide a comprehensive vision of the differential protein accumulation during mould growth and the generation of mould secondary metabolites, arising as an important contribution to the identification of new proteins and genes linked to the biosynthesis of mycotoxins ( ). These findings contribute to a deeper knowledge of the pathways linked to mycotoxin production and can be very helpful for designing preventive actions to minimise mould spoilage and mycotoxin production in food. The identification of the proteins involved in this metabolite synthesis requires a comparison between the proteomes of mycotoxin-producing and non-producing strains, and between the proteomes of producing strains under conditions of production and non-production of the toxin.

The proteome analysis of A. flavus showed that, under aflatoxin-producing conditions, some aflatoxin biosynthetic enzymes, such as O -methyltransferase A (OmtA), AflK/vbs/VERB synthase, ver-1, norA, and aflatoxin B1-aldehyde reductase GliO-like, are prevalent in the mycelium, together with proteins from metabolic processes . In this sense, proteins related to aflatoxin biosynthesis, such as AflR, nonribosomal peptide synthetase 10, subunits α and β of fatty acid synthase, sterigmatocystin biosynthesis P450 monooxygenase, polyketide synthase (PksA), noranthrone synthase, noranthrone monooxygenase, NOR reductase, averantin hydrolase, oxidase, esterase, desaturase, and alcohol dehydrogenase, are expressed when the mould grows in a favourable substrate, such as corn flour. On the other hand, most of the proteins involved in aflatoxin biosynthesis ( O -methyl sterigmatocystin oxidoreductase, sterigmatocystin 8- O -methyltransferase, p450 monooxygenase AflN, versicolorin B desaturase, averufin oxidase A, averantin hydroxylase, and noranthrone synthase), as well as in carbohydrate metabolism, cell wall biogenesis, mitogen-activated protein kinase (MAPK) signalling pathways, heat shock proteins, autophagy, and dicer-like proteins, are already expressed at the germinating conidial stage. These data suggest that the MAPK signalling pathway could be crucial in cell wall modulation and secondary metabolite synthesis, and that the biosynthesis of aflatoxins could start at early germination stages under favourable conditions .

In the proteome of two strains of A. carbonarius differing in their ochratoxin A-producing potential, nine proteins (seven increased and two reduced in quantity) were detected as potentially involved in several biological functions, such as regulation, amino acid metabolism, oxidative stress, and sporulation . Among them, a protein homologous to CipC showed the highest relative abundance in the ochratoxin A-producing strain. Although the function of this protein is still unknown, it was concluded that it is probably involved in ochratoxin A biosynthesis .

The composition of the substrate and the environmental conditions have a major impact on mould physiology, and consequently, these changes should also be apparent in the proteome. Thus, proteomics has been applied to explore the impact of different external factors, such as water activity (a w ), temperature, pH, nutrient substrate, salt content, or light, on several foodborne moulds and mycotoxin production. The response to different a w in A. flavus resulted in variations in the relative amount of 837 proteins: 403 at higher abundance at 0.99 a w and 434 more abundant at 0.93 a w . 
The osmotic stress-related proteins Sln1 and Glo1, belonging to the Hog1 pathway, showed higher levels at 0.99 a w . These results are consistent with the fact that A. flavus grows better under high a w conditions . The secretion of extracellular hydrolases increased as a w rose, suggesting that they may play a critical role in the induction of aflatoxin biosynthesis. Furthermore, the export protein KapK may downregulate aflatoxin biosynthesis with the translocation of NirA, a specific transcription factor in the nitrate assimilation pathway. In addition, 11 proteins directly related to aflatoxin biosynthesis (aflE, aflF, aflH, aflJ, aflK, aflM, aflO, aflP, aflQ, aflY, and aflYa) were more abundant at 0.99 a w , and just one (aflYc) was more abundant at 0.93 a w . The aflE and aflF genes, encoding ketoreductases that convert norsolorinic acid to averantin in the aflatoxin synthesis pathway, were expressed only in aflatoxin-supportive conditions (0.99 a w ) . These data are valuable for understanding the impact of water stress on aflatoxin production and for the design of preventive measures for its control in foods .

In the proteome of A. flavus growing at temperatures of 28 °C and 37 °C, studied using iTRAQ labelling, 664 proteins were found at different relative abundances, especially those belonging to translation-related pathways, metabolic pathways, and the biosynthesis of secondary metabolites . The growth, but not the production of aflatoxins by A. flavus , is favoured at 37 °C, while the opposite occurs at 28 °C. In this sense, 12 aflatoxin biosynthesis-related proteins (aflE, aflW, aflC, aflD, aflO, aflP, aflK, aflM, aflY, aflJ, aflS, and aflH) showed a higher abundance at 28 °C than at 37 °C . By SILAC tagging, 31 proteins were found in higher amounts at 37 °C (including AflM and AflP) and 18 were more abundant at 28 °C (including AflD, AflE, AflH, and AflO). The shift in the expression of the aflatoxin pathway enzymes is closely related to the strong repression of both aflatoxin biosynthesis and transcription of the aflatoxin pathway genes observed at 37 °C . The pathway-specific regulatory aflR gene, required for the activation of most aflatoxin pathway genes, was upregulated at 28 °C , but the aflR protein was not detected in the proteomic profiles of A. flavus at either 28 °C or 37 °C . Likewise, the aflR protein was not detected in A. flavus grown under different a w conditions, regardless of whether or not they favoured aflatoxin production . These results lead to the conclusion that there is a low correlation between proteome and transcriptome data, suggesting that post-transcriptional gene regulation affects distinct biological pathways and secondary metabolite gene clusters .

In a transcriptome and proteome analysis conducted to clarify the mechanisms explaining the higher production of aflatoxin B 1 by A. flavus in maize and rice broth than in peanut broth, fewer differences in gene expression and protein abundance were observed between the maize and rice substrates than between these substrates and peanut . Most of the differentially abundant proteins are involved in metabolic processes, cellular processes, catalytic activity, binding, cell, and cell part, but the limited variations suggest that the growth and metabolism of A. flavus in these substrates are similar, mainly in rice and maize. The expression of genes linked to the early phase of aflatoxin biosynthesis ( aflA , aflB , and aflC ) and of the accA gene was significantly increased in maize and rice substrates. 
Genes related to carbon metabolism were upregulated in maize broth, while those involved in acetyl-CoA accumulation and consumption were up- and downregulated, respectively. Several genes involved in the regulation of aflatoxin biosynthesis, namely veA , ppoB , snf1 , and G protein-coupled receptor (GPCR) genes, were differentially expressed in the three substrates, indicating that they may also be involved in sugar signal sensing, transfer, and regulation. Notably, correlation analyses of the transcriptome and proteome showed that the trehalose metabolism genes, the aldehyde dehydrogenase gene, and the tryptophan synthase gene are important in the regulation of aflatoxin yield in different substrates . A low correlation of transcriptome and proteome data was obtained, similarly to the abovementioned studies regarding the effect of a w and temperature on the production of aflatoxin by A. flavus . This finding could be due to the insufficient number of recovered proteins, the different synthesis and turnover rates of proteins and mRNAs in various cell stages, and post-transcriptional or post-translational modifications .

The addition of salt in the processing of a variety of foods, such as dry-cured meat and dairy products, favours the growth of both beneficial and toxigenic moulds on their surface. Particularly in meat products, the proliferation of ochratoxin A-producing moulds is of special concern. Therefore, from a food safety perspective, the study of the influence of the salt added to meat products on the growth of these moulds and the production of toxins is of great interest. The addition of 20 g/L NaCl in a culture medium induced spore production by Aspergillus ochraceus , while 70 g/L NaCl repressed it . Comparative proteomic analysis of A. ochraceus growing with 20 or 70 g/L NaCl revealed significant changes in the abundance of proteins involved in nutrient uptake, cell membrane integrity, cell cycle, energy metabolism, intracellular redox homeostasis, protein synthesis and processing, autophagy, and secondary metabolism. The latter activity, including ochratoxin A production, was stimulated by the addition of 20 g/L NaCl, with an increase in non-ribosomal peptide synthetases (NRPS), and repressed by 70 g/L NaCl. At the highest concentration, an increased extracellular hydrolase production was observed, probably as an adaptation to nutrient starvation due to a decrease in energy metabolism. A higher concentration of reactive oxygen species (ROS) was also detected, which was harmful to protein synthesis and even triggered autophagy .

The extracellular proteome of Aspergillus niger differed considerably depending on the carbon substrate, xylose or maltose . When the medium was supplemented with xylose, a variety of plant cell wall-degrading enzymes were identified, with xylanase B and ferulic acid esterase as the most abundant ones. In cultures with maltose, high levels of catalases were found, and glucoamylase was the most abundant protein. However, the intracellular proteome was not significantly changed. Interestingly, other culture conditions, such as pH control, aeration, stirring, or shaking, strongly influenced the abundance of glycolytic and tricarboxylic acid (TCA) cycle enzymes, flavohemoglobin, CipC protein, superoxide dismutase, NADPH-dependent aldehyde reductase, ER-resident chaperones, and foldases in the intracellular proteome. 
On the other hand, the addition of lactate to a medium containing starch and nitrate provokes an increase in the production of fumonisin B 2 , but not of ochratoxin A, by A. niger . The proteome of A. niger was affected, mainly in the abundance of proteins related to the intracellular levels of acetyl-CoA or NADPH, such as enzymes of the pentose phosphate pathway, pyruvate metabolism, the TCA cycle, ammonium assimilation, fatty acid biosynthesis, and oxidative stress protection. These data support the hypothesis that fumonisin production by A. niger is regulated by acetyl-CoA .

Conversely, some compounds from foods can stimulate the germination of undesirable moulds. For example, limonene, a dominant volatile constituent in the oil glands of most citrus, promotes spore germination, germ tube elongation, and mycelial growth of the citrus pathogen P. digitatum . Limonene alters the abundance of 340 proteins in P. digitatum , including proteins related to energy metabolism and antioxidant proteins, such as glutathione S-transferases, superoxide dismutase, and catalases. Limonene thus induces the growth of P. digitatum , probably through the regulation of energy metabolism and ROS homeostasis .

P. expansum commonly causes blue mould rot and postharvest decay in apples, pears, and other pome fruits, and is the main producer of patulin. Proteomics has been used for studying the molecular mechanisms involved in the interaction of this mould with apple fruit. In an apple substrate, 28 proteins of P. expansum were found in higher relative abundance. These proteins were mainly associated with pathogenesis, such as glyceraldehyde-3-phosphate dehydrogenase, catalase, and peptidase, and with secondary metabolism and patulin biosynthesis regulation, for instance, glucose dehydrogenase and FAD-binding monooxygenase . These changes in the proteome might be responsible for the observed medium acidification and patulin production . In the proteome of P. expansum growing on apple juice, up to 148 proteins were found in higher quantity, including cell wall-degrading enzymes and peptidases/proteases, especially a serine carboxypeptidase (PeSCP) required for conidiation, germination, fungal growth and morphology, tolerance to environmental stresses, extracellular carboxypeptidase activity, and fungal virulence .

The influence of pH on the production of fumonisin by Fusarium proliferatum has been explored by proteomic analysis. The increase in fumonisin production at pH 10 was related to the higher quantity of proteins, such as polyketide synthase, cytochrome P450, S-adenosylmethionine synthase, and O -methyltransferase, involved in the modification of the fumonisin backbone. In contrast, at pH 5, the higher abundance of L-amino-acid oxidase, isocitrate dehydrogenase, and citrate lyase was linked to the inhibition of the condensation of the fumonisin backbone and the concurrent decrease in mycotoxin production .

The exposure to light of short wavelengths induces oxidative stress in Penicillium verrucosum , together with a marked decrease in the synthesis of ochratoxin A and a significant increase in the production of citrinin. Through a proteomic analysis combining two-dimensional SDS-PAGE with HPLC-ESI-TOF-MS/MS, 56 proteins significantly differing between cultures grown in light versus dark were detected. Most of them are presumably involved in the stress response, such as antioxidant proteins or heat shock proteins, and in general metabolic processes, for example, glycolysis or ATP supply . 
Neosartorya pseudofischeri is a heat-resistant fungus that can contaminate several juices. The cellular processes underlying heat resistance have been studied in ascospores subjected to heat treatment at 93 °C for 0, 1, or 8 min. A total of 150 proteins significantly altered in abundance were identified, of which 126 showed decreased abundance after heat treatment, mainly proteins involved in central carbon metabolism, heat stress responses, the elimination of reactive oxygen intermediates, and translation events. These proteins are potential targets to evaluate the efficiency of thermal treatment for processed food products .
Several antifungal agents have been proposed to control the growth of undesirable moulds and mycotoxin accumulation in foods, including microorganisms and chemical compounds. To study the efficacy of these antifungal agents, deciphering both their mechanisms of action and their cellular targets in the moulds of interest is a crucial issue. A proper understanding of the target can provide valuable information on the spectrum of activity of the control agents and the possible sensitivity of the different toxigenic moulds. Moreover, information on possible modes of resistance can be obtained, as well as guidance for designing strategies that combine different control agents affecting distinct targets. Potential side effects, such as the generation of unwanted treatment by-products like mycotoxins or other undesirable secondary metabolites, can also be elucidated.

Proteomic studies have provided valuable knowledge about the systems disturbed in response to antifungal agents, and they have been applied to characterise the behaviour of both resistant and susceptible moulds, allowing for the recognition of mechanisms of resistance as well as the identification of promising susceptible targets ( ). Overall, these methods underline the range of tools available to provide a global overview of the molecular targets and biological pathways impacted by antifungal agents. These comprehensive perspectives can support the further targeting of complementary techniques, such as biochemical analysis, targeted gene disruption, or metabolite profiling. Proteomic analyses have been conducted to clarify the mode of action of antifungal agents against several toxigenic moulds and their mycotoxin production, for example, in P. nordicum , Aspergillus westerdijkiae , A. flavus , A. carbonarius , P. digitatum, P. expansum, P. italicum, and Fusarium oxysporum . Concretely, P. nordicum and A. westerdijkiae have undergone many studies employing a variety of biocontrol agents, as they have been described as the main producers of ochratoxin A in meat products. A. flavus is the subject of numerous studies for its control, since it is the main producer of the highly toxic and carcinogenic aflatoxins.

Penicillium chrysogenum and Debaryomyces hansenii repressed ochratoxin A production by P. nordicum in a dry-cured ham-based medium, likely by nutritional competition. According to proteomic data, both agents inhibited P. nordicum through cell wall integrity (CWI) impairment, and they hampered secondary metabolism, including ochratoxin A synthesis, by lowering the levels of the MAPK and carbon catabolite repression (CCR) pathways . Rosemary essential oil decreased the abundance of proteins involved in the polyketide synthase enoylreductase (PKS ER) domain in P. nordicum , which would explain the ochratoxin A reduction, and the combination of rosemary leaves with D. hansenii lowered the abundance of proteins linked to the CWI and purine pathways . D. hansenii , singly or in combination with rosemary or its essential oil, causes a large reduction in the production of ochratoxin A by A. westerdijkiae , lowering the abundance of proteins involved in ochratoxin A production, such as PKS ER and NRPS, and in the CWI pathway . 
On the other hand, the combination of rosemary leaves and its essential oil decreases ochratoxin A production by disturbing the abundance of proteins from the PKS ER domain and the CWI pathway of A. westerdijkiae . Volatile compounds generated by yeasts have also demonstrated inhibitory effects against toxigenic moulds . The volatilome of Candida intermedia reduces the growth, sporulation, and ochratoxin A biosynthesis by Aspergillus carbonarius . Both the volatilome of C. intermedia and its major component, 2-phenylethanol, affected a variety of metabolic targets, the most affected routes being central metabolism, energy production, and the stress response. The volatilome has a stronger effect on protein biosynthesis. Although 2-phenylethanol impacts some metabolic traits, other unidentified volatile components may involve a plurality of metabolic targets that may result in a higher effectiveness of the volatilome .

Several compounds have been considered for the control of A. flavus , such as the antifungal protein PgAFP, quercetin, and Perilla frutescens essential oil (PEO). PgAFP, an antifungal protein secreted by a strain of P. chrysogenum , has been studied as a biocontrol agent against toxigenic moulds on dry-cured foods . PgAFP provoked apoptosis and necrosis in A. flavus hyphae, with a reduction of energy metabolism, alteration of CWI, and an increase of ROS. Label-free mass spectrometry-based proteomics ( ) showed changes in the proteome of A. flavus , with higher concentrations of glutathione and heat shock proteins, and a lower relative quantity of Rho1 and the β subunit of G-protein . However, PgAFP did not alter the metabolic capability, chitin deposition, or hyphal viability of A. flavus grown in cheese, due to the calcium content. A total of 125 proteins were increased in the presence of calcium, including oxidative stress-related proteins, whereas 70 proteins were found at lower abundance, mainly involved in metabolic pathways and the biosynthesis of secondary metabolites. The resistance conferred by calcium to A. flavus appears to be mediated by calcineurin, G-protein, and γ-glutamyltranspeptidase, which combat oxidative stress and impede apoptosis . On the other hand, a strain of Penicillium polonicum is natively resistant to PgAFP through an increased chitin content of its cell wall. Proteome changes allow for the attribution of this resistance to a higher abundance of glucosamine-6-phosphate N-acetyltransferase and the Rho GTPase Rho1, which would lead to increased chitin deposition via the CWI signalling pathway . Therefore, proteomics has shed light on the mode of action of the antifungal protein PgAFP and on some native or acquired mechanisms of mould resistance. This information is useful to design strategies to improve the PgAFP activity against toxigenic moulds in foods.

Quercetin induces various oxidative stress response proteins, but suppresses the MAPK pathway and the expression of several enzymes involved in aflatoxin biosynthesis, such as AflR, acetyl CoA synthetase, noranthrone synthase, noranthrone monooxygenase, NOR reductase, averantin hydrolase, sterigmatocystin biosynthesis polyketide synthase, and O -methyl transferase A . A comparative global proteomic analysis of A. flavus using the TMT labelling method revealed that PEO inhibits the growth of A. flavus by blocking the antioxidative defence, reducing the expression of superoxide dismutase and catalase associated with the elimination of ROS. 
Moreover, several proteins, such as ATP-dependent 6-phosphofructokinase, triosephosphate isomerase, glyceraldehyde-3-phosphate dehydrogenase, and phosphoglycerate, were found in lower relative quantities, repressing the glycolysis pathway and leading to a disturbance of energy metabolism that cannot be overcome by A. flavus , even though additional energy-producing pathways, for instance, fatty acid degradation, amino acid metabolism, pyruvate metabolism, and glyoxylic acid metabolism, were activated .

The essential oil citral, composed of a mixture of the terpenoids geranial and neral, inhibits A. ochraceus growth and ochratoxin A production through the accumulation of ROS, resulting in damage to mould cell membranes and cell walls. Treatment with subinhibitory concentrations of citral altered the amount of 218 proteins of the A. ochraceus proteome, studied by iTRAQ, perturbing proteins involved in fungal growth and development, nutrient intake, and energy metabolism. Conversely, proteins associated with cell wall maintenance, membrane integrity, antioxidative defence, and secondary metabolism were increased. Nevertheless, this response proved insufficient to overcome the stress resulting from citral-mediated ROS accumulation and repression of cell growth, resulting in a lower accumulation of ochratoxin A .

The yeast Pichia caribbica has been proposed as a biocontrol agent against P. expansum in apples, and its effects are enhanced by vitamin C, which increases the abundance of proteins related to glucose metabolism, such as glyceraldehyde-3-phosphate dehydrogenase and alcohol dehydrogenase. These changes allowed increased growth of P. caribbica , thereby enhancing its inhibitory effect over P. expansum .

P. digitatum is responsible for the postharvest decay of citrus. A proteomic approach based on isobaric labelling and nanoLC tandem mass spectrometry was used to explore changes in the mould in response to treatments with the antifungal proteins α-sarcin and beetin 27, both inhibitors of protein synthesis, compared with those triggered by the chemical fungicide thiabendazole. The results showed differentially expressed proteins between treatments, mainly including cell wall-degrading enzymes, stress response proteins, antioxidant and detoxification mechanisms, and metabolic processes, such as thiamine biosynthesis, suggesting the existence of peculiar responses to each treatment .

P. italicum is considered the principal cause of blue mould of citrus. The natural 2-methoxy-1,4-naphthoquinone (MNQ), isolated from the traditional Chinese medicinal plant Impatiens balsamina , had an anti- P. italicum effect. Analysing the proteome under different MNQ treatments, 129 differentially abundant proteins were identified, mainly related to energy generation (mitochondrial carrier protein, glycoside hydrolase, acyl-CoA dehydrogenase, and ribulose-phosphate 3-epimerase), NADPH supply (enolase and pyruvate carboxylase), oxidative stress (catalase and glutathione synthetase), and the pentose phosphate pathway (ribulose-phosphate 3-epimerase and xylulose 5-phosphate). Thus, the inhibition of P. italicum by MNQ may be attributed to the disruption of metabolic processes, especially energy metabolism and the stimulus response . Pinocembrin is a flavonoid from propolis active against P. italicum . The treatment provokes, in the proteome of P. italicum (studied by iTRAQ), alterations in the relative abundance of proteins from the mitochondrial respiratory chain (MRC) complexes I and V and an increase in proteins related to programmed cell death, resulting in ROS accumulation and ATP depletion, which may lead to cell death through apoptosis, autophagy, and necrosis mechanisms .

Chitosan is a natural biocompatible, biodegradable, and non-toxic polysaccharide derived from chitin obtained from crustacean shells. Chitosan has been used both to inhibit pathogenic moulds, such as F. oxysporum , and to enhance the antifungal activity of some biocontrol agents, such as Rhodotorula mucilaginosa . F. oxysporum f. sp. cucumerinum , which causes yield losses in cucumber plants, is sensitive to chitosan, which restricts plant disease severity. A proteomic approach using 2-DE coupled with LC-MS/MS analysis identified 62 differentially abundant chitosan-responsive proteins, most with proteolysis and hydrolase activity, involved in metabolism and defence. Chitosan-treated F. oxysporum showed a lower abundance of proteins responsible for virulence, such as plant cell wall-degrading enzymes, structural and functional protein and DNA biosynthesis, and transporter proteins. Moreover, a decrease in the ROS-degrading enzymes glutathione peroxidase and catalase-peroxidase may result in ROS accumulation that can induce apoptosis, reducing mould virulence . The efficacy of R. mucilaginosa against the grey mould B. cinerea , which causes a postharvest disease of fruits and vegetables, can be enhanced by previously culturing the yeast in a medium containing chitosan. Chitosan triggered in R. mucilaginosa a higher quantity of proteins involved in growth and reproduction, energy metabolism, the antioxidant response, the response to abiotic stress, and the degradation of the pathogen cell. These changes can increase the growth rate of R. mucilaginosa and improve its capability to withstand and survive diverse abiotic stresses, allowing it to better compete for nutrients and space against B. cinerea . The spore germination and mycelial growth of B. cinerea are also inhibited by tea tree oil (TTO), which alters the relative abundance of 85 proteins identified by label-free proteomics. The data suggest that TTO inhibits the TCA cycle, pyruvate metabolism, amino acid metabolism, and membrane-related pathways in mitochondria, and promotes sphingolipid metabolism, which may accelerate cell death in B. cinerea . The yeast Wickerhamomyces anomalus significantly reduces the natural decay of pear fruit. The proteome of pear fruit, analysed using 2-DE and MALDI-TOF/TOF, indicated that W. anomalus induces the accumulation of resistance-related proteins, such as PR family proteins, chitinase, and β-1,3-glucanase, which can inhibit the infection by moulds whose cell walls contain β-1,3-glucan or chitin .

Therefore, multiple approaches through proteomic tools have substantially contributed to unravelling the mechanisms behind the biocontrol agents' effects on foodborne moulds. This information has served to achieve a better understanding of the moulds' cellular and pathway targets in order to improve their control. However, there is a wide window of proteomic applications, clearly surpassing those currently applied, still to be introduced into the workflow of foodborne mould analyses to further maximise the results obtained in line with those already achieved.
As discussed before ( ), several major advantages have been pointed out for HRMS when compared with 2-DE. Nevertheless, different massive data acquisition methods within HRMS are available: data-dependent acquisition (DDA), based on the isolation and fragmentation of the "n" most intense signal peptides, and data-independent acquisition (DIA), which fragments all peptides, disregarding their intensities. Both acquisition methods entail advantages and drawbacks .

The main advantage of DDA is the ease of data processing after sample HRMS analysis for identifying peptides and proteins. The available software packages base their procedures on peptide-spectrum matches, which greatly simplifies them, and the number of protein/peptide quantifications corresponds to those peptides having measurable peak intensities . A FASTA file containing all the proteins from the target mould is required for these peptide-spectrum matches, and is usually available if the microorganism's genome has been sequenced. The relative affordability of this data processing is probably the main reason why most of the reported mould proteomics linked to foodstuffs using HRMS, gathered in this review, have used this approach. However, this approach is not exempt from disadvantages, such as the difference of several orders of magnitude between the most and the least abundant proteins in a complex matrix, which makes the detection of many proteins impossible . This translates into numerous proteins, probably relevant ones for mould physiology in foodstuffs, remaining hidden from the researchers' eyes when their abundances fall below the lower threshold of the dynamic range.

DIA overcomes DDA concerning dynamic range coverage, since it relies not on precursors selected individually, but on systematically defined windows of precursors and the fragmentation of all peptide ions contained in these windows , oversampling in comparison to DDA . However, the main disadvantage linked to this approach, although recently overcome as discussed below, has been the requirement of library building, which is time- and cost-consuming and only affordable for robust and powerful research groups. Nevertheless, different algorithms have recently been published/released, allowing the analysis of DIA raw data without the necessity of library building, assisted by artificial intelligence (AI). Among the open-source tools, MaxDIA, developed by Max Planck and based on machine learning, has been postulated as an accurate alternative for DIA analyses without requiring library building . Another interesting open-source tool is DIA-NN , which is included in different commercial software and provides the possibility of library-free DIA analyses. Furthermore, DIA-Umpire can perform this task, although its library-free mode has not achieved relevant results. Among them, DIA-NN could be considered the most robust tool for library-free DIA analyses . Looking at commercial tools, Spectronaut and ScaffoldDIA are able to successfully fulfil these tasks . Additionally, some of them integrate a new and promising measurement parameter, as yet unexploited in fungal proteomics in foodstuffs, and considered a fourth dimension in MS, namely ion mobility.

Beyond instrumental techniques based on HRMS, as well as software assisted by AI, to analyse the complexity of the fungal proteome in terms of depth and coverage, PTMs are still a somewhat unexplored field in food mycology. These comprise mechanisms that enhance the diversity of protein species and functions involved in a wide range of cellular processes . 
In other, more distant scientific fields, the study of PTMs has contributed outstandingly to scientific advances and the development of state-of-the-art technologies, mainly in cancer research . These approaches are slowly being implemented in less cutting-edge fields, such as food mycology. So far, the vast majority of the studies involving moulds not related to foodstuffs that have evaluated PTMs relied on phosphorylation [ , , ], and they were mainly focused on mould–host interactions . Although PTMs comprise a huge variety of possible variables, some of them offer new perspectives from the physiological point of view. This is the case of phosphorylation on serine, threonine, or tyrosine residues, a PTM that reveals the regulation of signalling metabolic pathways, kinase cascade activation, membrane transport, gene transcription, and motor mechanisms , which would otherwise remain hidden. Again, this PTM measurement has not been extensively exploited in foodborne moulds. However, it has revealed critical information about the signalling of key pathways, complementary to relative protein quantification, in non-foodborne moulds, such as A. fumigatus or Aspergillus nidulans [ , , ]. This approach would allow for the extraction of alternative information from moulds grown in foods or food systems, in relation to deciphering mycotoxin metabolic pathways and the mechanisms of action of different antifungal strategies.

Regarding PTM analyses, the aforementioned ion mobility, a new and promising parameter, is gaining interest. It is based on the size/shape features of an ion as it passes through a gas flow and an electric field, the two acting in opposite directions. The new dimension that ion mobility offers makes it possible to distinguish signals from peptides that would otherwise be co-fragmented, thus obtaining cleaner spectra . This new spectrometric feature therefore allows for a better discrimination between peptides with PTMs when more than one similar residue in a given peptide is susceptible to modification, since the peptide sizes would be similar, but not their shapes. To the best of our knowledge, this fourth dimension has not been applied to foodborne moulds, and it would be of utmost interest for deciphering PTMs, which currently represent hidden signalling pathways.
An overview has been given of the recent advancements of proteomics applied to foodborne moulds, as well as of the potential of approaches based on this high-throughput technology not yet used for such moulds. Details about the preparation of samples and the techniques applied to evaluate mould proteins, for their identification and for the characterisation of the mechanisms of action involved in their negative effects on foodstuffs, have been discussed. Metaproteomics seems to be the most powerful method for mould identification, despite the current problems related to bioinformatics tools. Proteomics can reveal the molecular mechanisms critical for mould adaptation to their ecological niche. Furthermore, it allows for an understanding of how different external factors, such as environmental conditions, as well as the presence of other microorganisms, may influence mould development and mycotoxin production. Different HRMS tools have been useful for evaluating the proteome of foodborne moulds, although many authors have combined them with 2-DE, despite the disadvantages of the latter in comparison with whole-proteome analysis performed by HRMS alone. To overcome some of the limitations of high-throughput proteomics applied to foodborne moulds, approaches used in other scientific fields could be valuable. Concretely, these analyses would greatly benefit from library-free DIA analyses, the implementation of ion mobility, and PTM analysis, both as isolated approaches and in combination. All of these are already available and, assuming their economic cost can be afforded, comprise a varied portfolio of improvements to be gradually implemented in this field. Research effort is thus required to address challenges related to the elucidation of key mechanisms of action of foodborne moulds. This knowledge is crucial for the future development of strategies to avoid the presence of unwanted moulds in foodstuffs.
A Semi-Automatic Method for the Quantification of Astrocyte Number and Branching in Bulk Immunohistochemistry Images | 22545ba4-1073-4470-b8e0-63f0a97bc6c0 | 10003611 | Anatomy[mh] | Astrocytes are glial cells representing a prominent component of the central nervous system (CNS), which participate in tissue homeostasis throughout the several stages of development, injury, and infection, as well as in the modulation of neuronal and immune responses . Astrocytes have a star-shaped morphology and express the glial fibrillary acidic protein (GFAP). GFAP is increasingly expressed upon an insult to the tissue . This protein thus may be used as a suitable biomarker in immunohistochemistry (IHC) to assess the homeostasis of the tissue, by allowing the quantification of the number of astrocytes, as well as their branching, and branches’ length. Nevertheless, quantification of IHC photomicrographs is challenging, as (1) the distinction between cells and branches is hindered by the large number of cells per area, and the superposition of astrocytes’ branches superposition with other cells’ bodies and branches ( a–c); (2) 3,3′-Diaminobenzidine (DAB) staining is not stochiometric, since the brown staining is a product of DAB oxidation by horseradish peroxidase, and its amount is correlated to the reaction time (which may, in turn, be influenced by factors like temperature or enzyme amounts); (3) sample groups have to be large enough to attain any statistical significance; and (4) it is highly dependent on photomicrographs’ quality and magnification, as in, the above mentioned, DAB quality reaction, target density, and microscope quality/settings, may influence the image characteristics, which, in turn, directly influence a user’s visual scoring. Classical quantification methods are usually based on visual scoring, rendering them highly subjective, time-consuming, and relying on features such as area occupancy , while branch quantification is highly dependent on the photomicrograph’s magnification and quality . Altogether, these issues hamper the proper analysis of large amounts of photomicrographs, often taken with a 20× magnification. Previously, Young & Morison developed a method based on the use of the ImageJ software and its plugin Skeletonize ( d), that may be applied to both IHC and immunofluorescence involving DAB staining . Still, this method was not developed to be applied to low magnification photomicrographs, presenting a high number of cells, nor does it explore the data trimming post-image analysis. Here, we describe a reliable, and inexpensive method, adapted from Young & Morison, to specifically quantify, in a semi-automatic manner, DAB-stained astrocytes in IHC, by applying a threshold algorithm fitter to astrocytes’ quantification . Of note, this improved method requires minimal image post-processing before its analysis, as it does not require any changes to brightness and contrast. Here, we present a detailed explanation of the steps taken to quantify GFAP-stained astrocytes, from creating a quantifiable mask in the photomicrographs, using ImageJ, to the mathematical analysis of ImageJ data and its graphical representation.
2.1. Analysis of Photomicrographs from Each Brain Area

The analysis run by AnalyzeSkeleton is presented within two data tables per image ("Branch information" and "Results"), which may be saved in spreadsheet-based software, such as Microsoft Excel or MATLAB. In our case, only the "Branch information" table was considered of interest for further analysis. From each "Branch information" table, the "Skeleton ID" and "Branch length" columns were exported to a new spreadsheet . Each sheet in the spreadsheet document should represent a single image of the same anatomical area and animal. A filter was applied to the "Skeleton ID" column [select column B > Data tab > Filter, define filter for double entries] to select and remove non-duplicated values. This process allowed us to identify and remove all singular entries, as cells with only one branch were not considered to be astrocytes. Afterward, data were organized into 3 columns:

A column with the remaining Skeleton IDs, without the duplicates ( , column G) [copy column B, paste it on column G, select the new column > Data tab > Delete duplicates];
A column with the number of branches each Skeleton ID has ( , column H) [insert the function COUNTIF, in which the first condition comprises the values in column B, and the second one is the cell identification from column G, e.g., =COUNTIF($B$2:$B$5000;G2)];
And a column for the sum of each skeleton's branch lengths ( , column I) [insert the function SUMIF, in which the first condition is the values in column B, the second is the cell identification from column G, and the third condition is column C, e.g., =SUMIF($B$2:$B$5000;G2;$C$2:$C$5000)].

In our analysis, each zone comprised several images, so we combined the information from each image analysis into a sum representing the zone. For this purpose, a 'key' table was designed to retrieve and organize the information from the sheet of each image into a table comprising :

The sum of the number of cells from each image ( a);
The mathematical mode distribution of the branches' size, normalized to the number of cells ( b);
The sum of the total length of the branches ( c).

2.2. Groups' Analysis Per Area

The comparison of the information per animal was performed in a table , in which we further determined:

The number of branches per cell ( , column N), given by the [total branch number (column F) divided by the number of astrocytes (column E)];
The number of cells per area ( , column P), calculated as the [number of astrocytes (column E) divided by the area (column D)];
And the normalization of the total branch length per cell ( , column R), obtained by dividing the [total branch length (column M) by the astrocyte number (column E)].

A scripted equivalent of this aggregation is sketched below, after Section 2.3.

2.3. Graphical Representation Data

Based on the information from the spreadsheet analysis, four main parameters can be represented as:

A mathematical mode ( a) representing the most frequent branch lengths, distributed by length intervals, and normalized by the cell number. An increased branch size compared to the control group is a potential marker of an inflammatory state, since branching and branch size are known to increase upon astrocyte activation ;
The number of astrocytes ( b), normalized per area. The fluctuation in the number of cells present in a tissue is a marker for changes in tissue homeostasis;
The mean branch length per cell ( c). The general increase in the branch length is another indicator of possible changes to tissue homeostasis;
A correlation between the number of branches and their length, which can be projected in a scatter plot ( d) as a virtual cell size. This graphical representation provides a spatial perception of the increasing astrocyte size/activation among treatments, visually providing more information about a possible change in the homeostasis of the tissue.
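For readers who prefer to script the aggregation of Sections 2.1 and 2.2 rather than use spreadsheet formulas, the following minimal pandas sketch reproduces the same steps. It assumes that each per-image "Branch information" table has been exported to CSV with the "Skeleton ID" and "Branch length" column names used above; the file-name pattern and the imaged area are illustrative placeholders.

```python
import glob
import pandas as pd

def summarise_image(csv_path):
    """Per-skeleton branch count and total branch length for one image."""
    df = pd.read_csv(csv_path)
    # Discard skeletons with a single branch: one-branch objects are not
    # counted as astrocytes (the duplicate filter of Section 2.1).
    df = df[df.duplicated("Skeleton ID", keep=False)]
    return df.groupby("Skeleton ID")["Branch length"].agg(
        n_branches="count", total_length="sum")

# Combine every image of one anatomical zone (file names are placeholders).
files = glob.glob("zone1_image*.csv")
zone = pd.concat((summarise_image(f) for f in files), ignore_index=True)
n_cells = len(zone)                      # astrocytes detected in the zone
area_um2 = len(files) * 450_000.0        # hypothetical total imaged area, in µm²

summary = {
    "cells_per_area": n_cells / area_um2,
    "branches_per_cell": zone["n_branches"].sum() / n_cells,
    "mean_branch_length_per_cell": zone["total_length"].sum() / n_cells,
}
print(summary)
```

The plots of Section 2.3 follow directly from these tables, for example, by binning the individual branch lengths (divided by n_cells) for the mode distribution, or by plotting n_branches against total_length per skeleton for the virtual cell size scatter plot.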
Throughout the last decades, astrocytes have evolved from being considered stationary cells of the central nervous system (CNS) into being paramount supporters of neuronal architecture and function. They have been demonstrated to actively participate in, for example, blood-brain barrier control, neuronal development, neurotransmitter turnover, synaptogenesis, and the immunological response . With such a broad spectrum of functions, it becomes clear that the assessment of astrocytes' number and branches is a relevant biomarker to assess the homeostasis of the tissue. In this sense, the development of a quantification method that allows the analysis of a large number of samples of DAB-stained GFAP photomicrographs in a fast, reliable, and reproducible way has gained the utmost importance.

Here, we presented a method that relies on free, straightforward, macro-friendly, open-source programs (e.g., ImageJ) and a spreadsheet platform (e.g., Microsoft Excel, MATLAB). Of note, ImageJ is a well-established tool within the scientific community that does not require knowledge of complex programming languages, thus being more accessible to a larger population of researchers. This method gathers important characteristics, such as the ability to determine the number and branching of cells in an accurate manner, without the need to rely on an expensive microscope equipped for stereology or on user-biased manual quantification, being reliable even when applied to photomicrographs with a magnification as low as 20×. This results in a tool capable of accurately quantifying samples that arise from commonly used laboratory staining techniques, only requiring a basic light microscope coupled to a camera, without needing any specific extra hardware or software. The main advantages of the method herein developed, relative to the conventional analysis of photomicrographs, are summarized in .

As with any other method, we are aware that this method presents some limitations. For example, it is especially adapted to quantify DAB-stained GFAP photomicrographs, in which cell size does not change throughout activation. Therefore, it is not ideal to quantify, for example, CD11b-expressing glial cells (i.e., microglia), since these cells change their shape upon activation. Nevertheless, it is possible to adapt it to allow the accurate quantification of such cells. Also, the branch quantification component is not compatible with non-branching stained targets, such as receptors or intracellular components.

In sum, this method is a reliable, fast, and reproducible tool that can be used to quantify DAB-stained GFAP samples in bulk. It is a straightforward, free, and macro-friendly adaptation of hardware and software already existing in a functioning laboratory. With it, the quantification of the number and branching of astrocytes in sample tissue becomes a simple task, which should not be underrated, since astrocytes are extremely important participants in tissue development and homeostasis. Moreover, this method can be applied to accurately quantify any branching cells that do not change shape upon activation.
4.1. Samples All experiments were performed on drug-naïve male Sprague-Dawley rats (250–300 g, Taconic, Lille Skensved, Denmark) according to guidelines from the Swedish National Board for Laboratory Animals approved by the Animal Ethics Commit-tee of Uppsala, Sweden (ethical approval Dnr 5.8.18-12230-2019). All rats were housed in groups at 20 to 22 °C under a 12-h light/dark cycle with ad libitum access to food and water. Brain tissue samples, isolated from Sprague-Dawley rats, were obtained from Prof. Miroslav Savić’s lab (Belgrade University, Beograd, Serbia). Briefly, whole brains were collected following the animals’ perfusion with a saline solution, followed by 4% paraformaldehyde. The animals were decapitated, and the brains removed from the skull, washed three times in a phosphate buffer, for 10 min each. The brains were then post-fixed in the same solutions for 30 min, placed in a series of sucrose solutions (with increasing concentrations up to 30%) and then stored at −20 °C in OLMOS solution . For immunohistochemistry, the hemispheres were separated by a mid-sagittal cut and 35 µm-thick coronal sections were sliced from each hemisphere in a vibratome, according to a protocol previously described by Dias-Carvalho et al. (2022) . 4.2. Immunohistochemisstry Immunohistochemistry processing was done as previously described . Briefly, samples were recovered from OLMOS solution, washed with 0.01 M phosphate-buffered saline (PBS), treated for endogenous peroxidase inactivation with 10% hydrogen peroxidase (H 2 O 2 ) in PBS and blocked with 5% normal serum (Vector Laboratories). In free-floating, primary antibody incubation was performed at 4 °C for 72 h, 1:1000 dilution, in PBS with 0.5% Triton X-100 (polyclonal antibody rabbit anti-GFAP; Z0334, AB_10013382, Agilent Dako, Carpinteria, CA, USA). At room temperature, the secondary antibody, anti-rabbit IgG biotinylated antibody (BA-1100, Vector Laboratories, Burlin-game, CA, USA) was incubated for 1 h, followed by avidin-biotin complex (PK-6100, Vectastain Elite ABC Kit; Vector Laboratories) for 1 h, and 0.05% DAB/ 0.01% H 2 O 2 in PBS revelation. DAB reaction was stopped with PBS and sections were mounted in gelatin-coated slides, finishing the preparations with histomount mounting media (National Diagnostics, Atlanta, GA, USA). 4.3. Image Analysis Photomicrographs of different brain areas (e.g., hippocampal formation, prefrontal cortex and nucleus accumbens) were acquired using a Zeiss AXIO Imager 2, with a 20× objective, and the Axiovision 40v software, and further converted to JPG format. Photomicrograph analysis was performed using ImageJ software (FIJI or ImageJ https://imagej.net ) and the plugin Skeletonize [AnalyzeSkeleton (2D > 3D) < http://imagej.net > AnalyzeSkeleton> (accessed on 24 February 2022)] . After transforming the photomicrographs of GFAP-stained astrocytes into automatically quantifiable masks to extract the max amount of information possible, this analytical software presents the proper tools to process these images using the subsequently described steps. To calibrate ImageJ into micrometers, the image scale was set using the straight-line selection tool to draw a line over an existing scale bar in the image, and then selecting [Analyze > Set Scale]. The option [global] was selected to apply the scale settings to the whole set of images. This step ensures that the area and branch length information is accurately measured. 
The photomicrographs are loaded into ImageJ, converted to an 8-bit format [Image > Type > 8-bit], processed with the FFT bandpass filter [Process > FFT > Bandpass Filter], and transformed to grey-scale [Image > Lookup Tables > Greys]. Some of the image processing performed is only possible with 8-bit, grey-scale converted images. The FFT bandpass filter clears small features (i.e., noise) from the image without changing the larger features (e.g., cells and branches). Of note, ImageJ's default settings are adequate for DAB-stained GFAP photomicrographs, so no changes in the FFT settings are required. The Unsharp Mask [Process > Filters > Unsharp Mask] and Despeckle [Process > Noise > Despeckle] were then applied. Once again, the default ImageJ settings are appropriate: changing the Unsharp Mask settings may result in a more fragmented mask of the areas of interest, while Despeckle removes salt-and-pepper noise. After this image pre-processing, the Threshold tool was applied [Image > Adjust > Threshold]. The algorithm was set to MaxEntropy, the black background option was selected, and the image was further converted to a mask. The MaxEntropy algorithm is the most suitable, among the available algorithms, for astrocyte analysis, as suggested by Siritantikorn et al. (2012) . Setting the black background option ensures that the background of the image is not considered by the Skeletonize plugin. After conversion of the photomicrographs into the mask, Despeckle [Process > Noise > Despeckle] was reapplied, followed by the Close option [Process > Binary > Close]. The second application of Despeckle clears the salt-and-pepper noise resulting from the mask conversion, whereas the Close option connects dark pixels that are separated by up to 2 white pixels, uniformizing the open particles. Then, outliers were removed [Process > Noise > Remove Outliers]. The pixel radius was set to 2 and the threshold to 50. ImageJ calculates the median pixel value within the radius and replaces any pixel whose value deviates from that median by more than the defined threshold. Finally, Skeletonize [Process > Binary > Skeletonize] was applied, followed by AnalyzeSkeleton [Plugins > Skeleton > AnalyzeSkeleton] .
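As a rough cross-check of the pipeline logic, the following Python sketch approximates the described ImageJ steps with scikit-image and SciPy. It is not the authors' implementation: the difference-of-Gaussians stands in for the FFT bandpass filter, the median filter stands in for Despeckle, the Remove Outliers step is omitted, and the file name and filter radii are assumptions.

# Approximate re-implementation of the described pipeline (a sketch, not the
# authors' macro). Assumes DAB-positive structures are darker than background.
import numpy as np
from scipy import ndimage as ndi
from skimage import exposure, filters, io, morphology
from skimage.util import img_as_ubyte

def max_entropy_threshold(gray):
    """Kapur's maximum-entropy threshold (ImageJ's 'MaxEntropy' method)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 255))
    p = hist / hist.sum()
    c = p.cumsum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        pb, pf = c[t], 1.0 - c[t]
        if pb <= 0 or pf <= 0:
            continue
        b = p[: t + 1] / pb
        f = p[t + 1 :] / pf
        hb = -np.sum(b[b > 0] * np.log(b[b > 0]))
        hf = -np.sum(f[f > 0] * np.log(f[f > 0]))
        if hb + hf > best_h:
            best_h, best_t = hb + hf, t
    return best_t

gray = img_as_ubyte(io.imread("gfap_dab.jpg", as_gray=True))            # 8-bit, greys
band = filters.difference_of_gaussians(gray, 1, 40)                     # ~FFT bandpass
band = exposure.rescale_intensity(band, out_range=(0.0, 1.0))
sharp = filters.unsharp_mask(band, radius=1, amount=0.6)                # Unsharp Mask
clean = ndi.median_filter(img_as_ubyte(np.clip(sharp, 0, 1)), size=3)   # Despeckle
mask = clean < max_entropy_threshold(clean)                             # threshold + mask
mask = morphology.binary_closing(mask)                                  # Close
skel = morphology.skeletonize(mask)                                     # Skeletonize

# Branch statistics from the 8-connected neighbour count of each skeleton pixel
neigh = ndi.convolve(skel.astype(int), np.ones((3, 3), int)) - skel.astype(int)
endpoints = int(np.sum(skel & (neigh == 1)))
junctions = int(np.sum(skel & (neigh >= 3)))
print("endpoints:", endpoints, "junction points:", junctions)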
Pharmacokinetic Markers of Clinical Outcomes in Severe Mental Illness: A Systematic Review
Mental and substance use disorders are leading causes of disability on a global level , with a significant portion of this burden deriving from severe mental illnesses (SMIs) . Collectively, SMI represents an ill-defined category which has been inconsistently reported in the literature in the field but that, as a bare minimum, comprises conditions such as schizophrenia (SCZ), bipolar disorder (BD), and major depressive disorder (MDD) . Among individuals affected by SMI, life expectancy has been reported to be reduced by 20 years among males and up to 15 years among females . In the past, this gap in life expectancy was frequently attributed to suicide risk. However, over the years, it has become increasingly evident that cardiovascular and infectious disorders also represent significant causes of death in this population . The toll associated with SMI is not limited to the affected individuals but extends to their relatives and communities . Carers of individuals affected by SMI may indeed report lower employment levels and social and economic difficulties, with higher levels of food insecurity and care-related expenditures . Individuals affected by SMI represent a severely underserved population, despite significant advancements in their management. For example, only 41% of individuals affected by MDD may receive treatment at a minimal standard of care . Even for the minority of individuals receiving treatment, finding the most effective therapeutic option can be challenging for healthcare providers and service users. In fact, even when the most updated protocols are employed, the treatment choice is based on a "trial-and-error" approach, which ultimately may result in frequent treatment failures and significant healthcare costs . Numerous factors should be considered when discussing the basic underpinnings of the observed heterogeneity in treatment response (HTR), such as the nosological classification systems used for the diagnoses , age of onset, co-morbidities, and clinical course. These factors likely represent a source of HTR intrinsic to the current standards of practice . Notwithstanding the previously mentioned limitations, this framework has produced most of the evidence for treatments (either pharmacological or psychotherapy) in psychiatry, since clinical trials testing the efficacy and tolerability of a particular intervention have indeed selected study patients based on a categorical nosological system . Waiting for the development of more accurate diagnostic tools , one possible way to address HTR would be to tailor treatments to the individuals identified through the current nosological classification systems by matching the right treatment to the right patient . In this setting, a growing body of evidence suggests that pharmacogenomics (PGx) may represent a useful tool for enabling personalized treatments. PGx is the research area dedicated to evaluating how multiple genetic variations may interact and influence the metabolism and action of a particular pharmacological treatment . With very few notable exceptions (e.g., lithium salts, gabapentin), nearly all medications currently employed for the treatment of psychiatric disorders are metabolized in the liver. The major metabolic reactions involved in the process are oxidation (phase I) and conjugation (phase II).
Genetic variations in transporters expressed at different locations, such as the brain, gut, and liver, can also influence the pharmacokinetic profile of the different compounds employed in treatment, but their clinical impact has not been established . The metabolic system that has been most extensively studied is cytochrome P450 (CYP450), comprising 57 genes and 58 pseudogenes . The two isoenzymes of CYP450 most extensively studied for psychiatric treatments are CYP2D6 and CYP2C19, as there is significant evidence that these two can significantly influence psychotropic metabolism , with CYP2D6 being involved in the metabolism of almost half of the most prescribed psychotropics . It has long been known that single-nucleotide polymorphisms (SNPs) can be associated with differential gene expression profiles and that these, in turn, can be studied to help estimate the risk of developing adverse effects or to quantify treatment response to a particular medication in a subgroup of individuals . Allelic variants of CYP genes are indicated with an asterisk ; genotypes are then coded based on their projected metabolic activity, and the corresponding phenotypes are typically subdivided into Ultrarapid, Rapid, Normal, Intermediate, and Poor Metabolizer . Genes supposedly associated with the postulated mechanism of action at the biochemical, cellular, and physiological level are instead associated with the pharmacodynamics of a particular compound. In psychiatry, attention has been focused on possible allelic variants of genes involved in neurotransmitter receptors, signal transmission, gene transcription, or protein folding, among others . Gene variations in human leukocyte antigens or in proteins regulating immune mechanisms have also been the subject of research in the area and have yielded guidance on the projected risk of developing adverse reactions upon exposure to certain compounds . To improve the accessibility of treatment-informing guidance based on PGx, several scientific bodies have developed clinical practice guidelines, with the most significant being summarized on easily accessible platforms such as PharmGKB . In theory, PGx holds great promise in terms of improving the personalization of treatments, as it would aid clinicians in streamlining pharmacological treatment selection based on the expected efficacy and tolerability of the different available pharmacological treatments . However, in psychiatry the clinical application of this tool has lagged behind due to concerns regarding its efficacy and a lack of knowledge on interpreting its results among a sizeable portion of healthcare providers. In the present study, we performed a systematic review of the literature in the field probing the use of PGx for SMI, specifically reporting on pharmacokinetic markers of treatment response, as defined by the authors. Importantly, we applied for the first time a transdiagnostic approach to explore whether we could identify PGx markers associated with similar patterns of response across disorders. The main objective of this project is to review the existing evidence for pharmacokinetic markers in predicting pharmacological treatment response in individuals affected by SMI, focusing on the comparison with the usual standard of care when available. A double-blind systematic review was performed on Scopus, PubMed, and Web of Science according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) .
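To make the star-allele-to-phenotype translation concrete, the Python sketch below maps CYP2D6 diplotypes to predicted metabolizer phenotypes via the activity-score system; the allele values and cut-offs follow commonly cited CPIC conventions but are simplified here and should be treated as illustrative rather than authoritative.

# Simplified, illustrative CYP2D6 activity-score translation (not a clinical tool).
ACTIVITY = {"*1": 1.0, "*2": 1.0, "*9": 0.5, "*10": 0.25, "*17": 0.5,
            "*41": 0.5, "*3": 0.0, "*4": 0.0, "*5": 0.0, "*6": 0.0}

def cyp2d6_phenotype(allele1: str, allele2: str) -> str:
    score = ACTIVITY[allele1] + ACTIVITY[allele2]
    if score == 0:
        return "Poor Metabolizer"
    if score <= 1.0:
        return "Intermediate Metabolizer"
    if score <= 2.25:
        return "Normal Metabolizer"
    return "Ultrarapid Metabolizer"

print(cyp2d6_phenotype("*1", "*4"))  # activity score 1.0 -> Intermediate Metabolizer

In practice, commercial panels apply considerably more elaborate translation tables, including copy-number variants and hybrid alleles, which this sketch deliberately omits.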
In this project, we considered including articles published in English probing the association of PGx tests with pharmacological treatment outcomes for SMI (i.e., BD, MDD, SCZ) and reporting on pharmacokinetic markers. We defined treatment outcomes as the response to the practiced treatment regimen, as reported by the authors. Accepted study designs included: (1) open-label trials, (2) randomized controlled trials, (3) cross-sectional studies, (4) retrospective cohort studies, (5) prospective cohort studies, and (6) studies recruiting human subjects ≥ 18 years old and assessing treatment outcomes as defined by the study authors. We excluded: (1) meta-analyses, (2) systematic reviews, (3) case reports, (4) case series, (5) letters to the editor, and (6) editorials. No time restriction was applied based on the year of publication. Pharmacodynamic markers and studies assessing the safety or tolerability profile of pharmacological treatments were excluded. The following search strategy was employed: ("pharmacogenomic" OR "pharmacogenomics" OR "pharmacogenetics" OR "pharmacogenetic") AND ("signature" OR "biomarkers" OR "marker" OR "determinants") AND ("severe mental illness" OR "severe mental disorders" OR "schizophrenia" OR "psychosis" OR "schizoaffective disorder*" OR "bipolar disorder *" OR "major depressive disorder *"). Two reviewers independently screened the records identified through the primary search strategy. With the objective of reviewing the existing evidence for pharmacokinetic markers in predicting pharmacological treatment response in individuals affected by SMI, we focused on extracting the following data from the included studies: (1) study design, (2) sample composition, (3) main objective, (4) inclusion and (5) exclusion criteria, (6) country where the selected study was performed, and (7) reported outcomes pertinent to our project. The qualitative data extraction was performed independently by two authors (P.P.; L.B.), and whenever a discrepancy was found, a third senior author was involved to reach a consensus. Rayyan, a semi-automated tool, was employed to facilitate the screening process . The primary search was further augmented using a comprehensive pearl-growing strategy. ROB 2 was employed for the assessment of bias for randomized controlled trials by two independent raters. Again, discrepancies were solved through discussion and, if needed, with a third author's judgement. The last search was performed on 17 September 2022. All tables are available in interactive mode on GitHub ( https://github.com/claudiapis/tables_pharmacokinetic_markers , accessed on 25 February 2023). Further, the main input set is available on GitHub ( https://github.com/pasqualeparibell/Pharmacokinetic-markers-of-clinical-outcomes-in-severe-mental-illness-a-systematic-review.---source/tree/main , accessed on 25 February 2023). 3.1. Search Results and Bias Assessment The selected search strategy resulted in the identification of 1975 records. After duplicate removal, 1456 records were assessed through abstract and title screening, leading, in turn, to the identification of 587 records. Among them, 42 papers were selected for the qualitative analysis, summarized in three different tables dedicated to (1) SCZ, (2) MDD, and (3) BD. A complete description of the selection process is reported in the PRISMA flow diagram. A total of 13 studies originated in the USA and 14 in Asia. The remaining studies were carried out mainly in European countries.
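For readers who wish to reproduce the PubMed leg of such a search programmatically, the sketch below submits an abridged version of the Boolean query to NCBI's public esearch endpoint; the endpoint and parameters follow the documented E-utilities interface, while the shortened query term is our own simplification, not the exact string used by the authors.

# Hypothetical sketch: run an abridged version of the review's Boolean query
# against PubMed via NCBI E-utilities (esearch).
import requests

term = ('(pharmacogenomic OR pharmacogenetics) AND (biomarkers OR markers) '
        'AND ("severe mental illness" OR schizophrenia OR "bipolar disorder" '
        'OR "major depressive disorder")')
resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "pubmed", "term": term, "retmode": "json", "retmax": 200},
    timeout=30,
)
ids = resp.json()["esearchresult"]["idlist"]
print(len(ids), "candidate PubMed records")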
Among the included studies, eleven were randomized controlled trials (RCTs), with 10 recruiting individuals affected by MDD and only one focusing on individuals affected by SCZ . One RCT on MDD recruited a mixed sample of individuals affected by MDD and/or anxiety, but no description of the anxiety disorder was included . Only three of the included studies reported on individuals affected by BD, with one of the three including a heterogeneous population comprising MDD, BD, and post-traumatic stress disorder (PTSD) . Overall, the risk of bias of the included RCTs appears limited, save for three studies judged at high risk of bias . The bias assessment for the included RCTs according to ROB 2 is summarized in a dedicated table. 3.2. PGx Outcomes Reported outcomes included service use reduction, symptom change from baseline, and rates of remission or response to treatment. In line with the literature in the field, there was significant heterogeneity in the scales employed to report symptom changes. The description of the sample composition of the included studies also appears inconsistent, with the vast majority providing the gender composition and the age range or average age of the recruited sample. A discrete heterogeneity also emerged regarding the employed inclusion or exclusion criteria, even considering the heterogeneity of the analyzed diagnostic categories. Numerous different alleles and genotypes have been assessed, but no specific efficacy pattern emerged for a particular marker across the various studies. A relatively limited number of studies reported on the results of combinatorial PGx testing, introducing a further layer of complexity in the interpretation of the results of the included studies. 3.2.1. Schizophrenia Seventeen papers reported on studies comprising individuals affected by SCZ, with only one randomized controlled trial (RCT) . Among them, ten papers reported on the possible association between CYP2D6 and treatment outcomes as described by the authors , and four papers described the association between ABCB1 genotypes and treatment outcomes, with three out of four reporting a positive association . Overall, a significant heterogeneity of assessed outcomes is apparent. Two papers used retention in antipsychotic (AP) treatment as the primary outcome . As for symptom severity assessment, seven papers used the Positive and Negative Syndrome Scale (PANSS) score as a primary outcome measure, either focusing on total percent change or on changes in some of its subscales , whilst seven papers used the Brief Psychiatric Rating Scale (BPRS) percent change . Seven out of a total of seventeen papers reporting on SCZ described a positive association between PGx markers of efficacy and treatment outcomes . One study assessed the association of pharmacodynamic together with pharmacokinetic markers of efficacy. An additional paper focused on the association of PGx tests with the change in BPRS-defined cognitive symptoms of SCZ. These results are summarized in the corresponding table. 3.2.2. Major Depressive Disorder Twenty-three of the included studies focused on individuals affected by MDD , and ten of them were RCTs . Seven papers specifically reported on the association of CYP2D6 polymorphisms with treatment outcomes as defined by the authors . Four papers focused on the association between ABCB1 genotypes/alleles and treatment outcomes.
Sixteen studies employed the Hamilton Depression Rating Scale (HDRS) as the primary outcome measure; seven of them defined remission as HDRS ≤ 7 , whilst a single paper defined remission as HDRS ≤ 10 . One paper employed the Structured Interview Guide for the Hamilton Depression Rating Scale (SIGH-D17) as the primary outcome measure , and three papers used the mean HDRS change . Other symptom rating scales employed as primary outcome measures included the Quick Inventory of Depressive Symptomatology-Self Report (QIDS-SR) , the Patient Global Impression of Improvement (PGI-I) , and the Patient Health Questionnaire-9 (PHQ-9) . Two additional papers reported on the association between PGx testing and hospital stay duration . One study recruited a mixed population of MDD and an anxiety disorder (with the anxiety diagnosis left unspecified) , and another one recruited individuals affected by MDD, BD, or PTSD . These findings are illustrated in the corresponding table. 3.2.3. Bipolar Disorder Three papers reported on PGx's association with clinical outcomes in individuals affected by BD. One paper described the association between CYP2D6 and symptom improvement as defined according to the Clinical Global Impression Efficacy Index (CGI-E). An additional paper reported the association of CGI changes with PGx testing in a mixed population comprising BD, PTSD, and MDD. One paper probed the potential cost savings associated with PGx-guided pharmacological therapy changes, focusing on emergency service access. These results are listed in the corresponding table.
A growing amount of evidence points to the potential that PGx holds for treatment personalization in medicine , with notable examples of its applications in cardiology , oncology , pediatrics , and primary care , among others. With the right type of information support, PGx may further enhance shared decision-making between service users and healthcare providers .
Great efforts have been invested in testing PGx's efficacy in pharmacological treatment selection for SMI, and our results seem to confirm our impression regarding its potential value. Meta-analyses of RCTs assessing the effectiveness of gene-guided treatment (GGT) versus treatment as usual (TAU) for MDD point to a modest but statistically significant benefit in terms of a higher remission rate for GGT as compared with TAU . However, the clinical adoption of PGx testing in psychiatry appears somewhat delayed . Over the years, several reasons have been proposed to explain this phenomenon. Among them are a relative lack of RCTs exploring PGx efficacy , a lack of knowledge on how to interpret its results among a sizeable portion of healthcare providers , inconsistencies in the guidance provided by different clinical practice guidelines , and an apparent lack of confidence in the overall value of PGx testing in clinical practice . The results of our review point to a significant heterogeneity in the assessed outcomes and in the testing panels. Only three papers included in the present project reported on PGx testing in BD, with only one RCT . Considering the current relatively limited number of papers dedicated to the topic, the evidence regarding PGx testing for treatment selection in BD appears particularly scarce. In our data synthesis, less than half of the total studies dedicated to SCZ reported a positive association between PGx and treatment outcomes; among them, three focused on ABCB1 polymorphisms and three additional papers reported on CYP2D6 polymorphisms. The only RCT included in this project dedicated to assessing PGx testing in SCZ was negative . At this stage, the evidence supporting the use of PGx testing alone to predict treatment outcomes in SCZ does not appear particularly compelling. Blood drug monitoring may represent an additional resource in guiding pharmacological treatment dosing, with clinical practice guidelines specifically dedicated to optimizing its use . Arguably, PGx testing may be synergistically integrated with psychotropic blood monitoring to fully exploit these two different sources of information in optimizing the therapeutic and safety profile of each medication trial. Our study selection did not include any study employing combinatorial pharmacogenomic testing for predicting pharmacological treatment outcomes in SCZ. Fourteen of the twenty-three included studies focusing on MDD described a positive association between PGx testing and pharmacological treatment outcomes . Five of the ten RCTs dedicated to MDD described a positive association between PGx testing and treatment outcomes , but considering the significant heterogeneity of the testing panels involved, no firm conclusion can reasonably be drawn from our results. Furthermore, a sizeable portion of the available evidence for PGx efficacy presents some financing biases, introducing additional complexity in the overall interpretation of the data . Even pondering the results of the available meta-analyses may be a daunting task, as the proprietary nature of the algorithms employed in the involved studies hinders an accurate assessment of the relative impact of each approach . Assessing the cost-effectiveness of PGx testing also needs careful consideration and individualized analyses.
Commercial PGx costs vary significantly, and there might be differing reimbursement schemes depending on the geographic location, with different corresponding healthcare systems and differing frequencies of actionable genotypes in the local population . All these factors lead to the necessity of assessing cost-effectiveness profiles in the specific context where PGx testing is to be employed . The use of ethnicity as a guiding variable for treatment selection has been subjected to intensified scrutiny during recent decades. However, several clinical practice guidelines use the supposed ethnicity of origin as a possible element on which to base the decision on whether to perform PGx testing or not . Ethnicity-based guidance for screening HLA-B*1502 among individuals of Asian ancestry prior to the use of carbamazepine, as an example, appears misguided and a potential source of confusion, as HLA-B*1502 is nearly absent in South Korea and Japan . Indeed, ethnicity represents a poor surrogate for the underlying biology. Therefore, such guidance should be abandoned in favor of more evidence-based, practical screening guidance . Notwithstanding the previously mentioned limitations, a progressive cost reduction and a growing number of tested alleles may expand the number of individuals who may benefit from actionable treatment guidance. These factors, taken together, may increase PGx adoption in clinical practice . Future efforts need to be devoted to improving the standardization of the tested algorithms and clinical practice guidelines, boosting educational programs on how to capitalize on PGx technologies in clinical care, and assessing next-generation sequencing in PGx tests to address some of the lasting concerns surrounding PGx use . Limitations The present paper focused on the association of pharmacokinetic markers, as the available evidence appears to be more solid as compared with pharmacodynamic markers. However, numerous papers have been published on the latter markers, and it would be worthwhile exploring the subject in future review projects. Indeed, the number of published studies in the field is far too great to be covered in a single paper. We did not include papers probing the eventual association between PGx testing and the safety or tolerability of pharmacological treatments. This might have led to the exclusion of a substantial part of the literature and of evidence supporting PGx testing in clinical practice. The search was limited to three databases and to articles written in English, which could also have impacted the extensiveness of our analyses. Finally, the lack of consistency in SMI's clinical definition might have hindered our capacity to fully grasp the significance of PGx testing for predicting pharmacological treatment response in psychiatry.
A growing amount of evidence points to the potential that PGx testing holds for improving pharmacological treatment selection in psychiatry. PGx should be seen as an essential tool of an integrated approach, which should take advantage of robust and standardized algorithms to support (but not replace) the decision-making process regarding pharmacological interventions. Another neglected approach is therapeutic drug monitoring, largely underutilized in SMI, which could further boost the utility of PGx testing if adequately integrated with it. Future efforts will have to address lasting concerns surrounding the lack of standardization in the field and its practical implementation.
Predicting response to neoadjuvant therapy with glucose transporter-1 in breast cancer
Breast cancer (BC) is the most common tumor worldwide, with a high mortality rate among women. Some parameters, such as tumor stage, molecular subtyping, and hormone receptor status, are used in the selection of treatment and in predicting the prognosis . Molecular subtyping is the most important parameter that predicts the response to neoadjuvant therapy (NT) . However, molecular subtyping alone is insufficient to guide treatment, and additional parameters are needed. Therefore, it is important to investigate different biomarkers that will shed light on new agents, help predict patients' prognosis and response, and even guide the choice of treatment method. Glucose transporters are membrane transporter proteins that catalyze the facilitative bidirectional transfer of their substrates across membranes . Glucose transporter-1 (Glut-1) is the first identified member of the glucose transporter family, as well as the most common of all membrane transport proteins . It is highly expressed in the endothelium of tissues where selective glucose transfer from blood to tissues is important, such as the central nervous system, retina, iris, ciliary muscle, and endoneurium. Moreover, Glut-1 is also expressed physiologically in erythrocytes and, pathologically, it mediates basal glucose transport in cancer cells, which require considerably higher energy levels than normal cells, providing glucose for energy metabolism . Various studies have also investigated whether insulin resistance, which regulates glucose metabolism in the body, is a risk factor in BC. Some of these studies have identified a high risk of BC in obese and diabetic patients, although the mechanisms are not clear . As a result, Glut-1 has been found to be overexpressed in various types of cancer, including prostate, stomach, lung, and breast cancer, as well as squamous cell carcinoma of the head and neck , and its overexpression is a poor prognostic parameter . Therefore, it has been thought that tumor progression could be prevented by targeting the Glut-1 mechanism. In the present study, we aimed to investigate the potential use of the Glut-1 antibody in tru-cut biopsy (TCB) as a new biomarker to predict the response and prognosis before NT. In addition, we studied the relationship between Glut-1 expression and clinicopathological parameters, such as hormone receptor status and the Ki-67 labeling index (LI).
Study design and case selection In our retrospectively planned study, patients with a diagnosis of breast carcinoma who received NT between 2010 and 2021 were retrieved from the hospital electronic system. Patient data The age, details of the NT protocol, the status of recurrence or distant metastasis, and survival status were retrieved from the hospital and national electronic databases. Tumor size, the status of hormone receptor and Her2 expression, Ki-67 LI, and the presence of lymphovascular and perineural invasion were obtained from pathology reports. Histopathological and immunohistochemical staining Hematoxylin and eosin-stained slides of both TCB and resection were retrieved from the pathology archive. Cases that did not have tumor slides or clinical data were excluded. H&E and immunohistochemical slides were re-evaluated by three different pathologists (SDÖ, ÇÖ, and GA). All cases were classified according to their molecular and histological subtypes according to the World Health Organization classification . The cutoff value for Ki-67 LI was accepted as 14%. The best representative tumor block was selected from both TCB and resections, and 4-μm sections were obtained. The Ventana Medical Systems (SN: 714592, Ref: 750-700, Arizona, USA) automated immunohistochemistry device was used. Immunohistochemical staining was performed using the Ultra-view Universal DAB Detection Kit (REF: 760-500, Ventana) and a Glut-1 antibody (PA1-46152, 1/200 diluted, Glut-1 rabbit polyclonal antibody). An established scoring system that evaluates both the pattern and intensity of staining was used. Membranous and cytoplasmic staining were considered positive. Briefly, the staining pattern was scored according to the percentage of cells that showed cytoplasmic and/or membranous staining as follows: 0 = less than 1%, 1+ = 1–10%, 2+ = 11–50%, 3+ = 51–80%, and 4+ = over 80%. The intensity was scored as 1: weak, 2: moderate, and 3: strong. Blinded assessment was done by two different observers (SDO and OO). The overall score was then calculated as (1 + intensity/3) × pattern . Tumor cells were scored as negative if no immunopositive cells were present after immunostaining. The total score was based on the percentage of positive tumor cells and the degree of immunostaining intensity . Statistically, the median value for the staining score was 3.9. A score <4 was accepted as low, while a score ≥4 was accepted as high .
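The scoring rule described above translates directly into code; the short Python sketch below implements the pattern bins, the (1 + intensity/3) × pattern formula, and the median-based dichotomization at 4 (the function and variable names are our own, not the authors').

# Transcription of the described Glut-1 immunohistochemical scoring rule.
def glut1_score(percent_positive: float, intensity: int) -> float:
    """percent_positive: 0-100; intensity: 1 weak, 2 moderate, 3 strong."""
    if percent_positive < 1:
        pattern = 0
    elif percent_positive <= 10:
        pattern = 1
    elif percent_positive <= 50:
        pattern = 2
    elif percent_positive <= 80:
        pattern = 3
    else:
        pattern = 4
    return (1 + intensity / 3) * pattern

score = glut1_score(60, 2)                # pattern 3, intensity 2 -> 5.0
label = "high" if score >= 4 else "low"   # median-based cut-off of 4
print(score, label)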
Ethics committee approval for our study was obtained from the non-interventional clinical research ethics committee of the Recep Tayyip Erdogan University Faculty of Medicine (E-40465587-050.01.04-352). The study was conducted in accordance with the Declaration of Helsinki, the ethical standards of the institutional research committee, and the Reporting Recommendations for Tumor Marker Prognostic Studies (REMARK) guideline .
Statistical post-hoc power and effect size were calculated using the G*Power version 3.1.9.7 software . Statistical analyses were performed using IBM SPSS Statistics, Version 22.0 (SPSS Inc., Chicago, USA). Each group's descriptive statistics were reported as frequencies and percentages within the group (n, %). Associations between the groups in terms of categorical variables were evaluated using the chi-square (Pearson's chi-square) and Fisher's exact tests. The Kaplan-Meier method was used for survival analysis and was evaluated with the log-rank test. For statistical significance, the p-value was accepted as <0.05.
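A sketch of how the reported categorical and survival analyses could be reproduced outside SPSS is given below, using SciPy and the lifelines package; all counts, follow-up times, and event flags are invented placeholders, not the study data.

# Illustrative re-run of the reported analyses with placeholder data.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# 2x2 table (placeholder counts): rows = Glut-1 low/high, cols = no response / response
table = np.array([[8, 26], [1, 30]])
chi2, p_chi, dof, _ = chi2_contingency(table)
odds, p_fisher = fisher_exact(table)

# Placeholder follow-up times (months) and event flags (1 = relapse or progression)
t_low, e_low = [5, 12, 20, 36, 40], [1, 1, 1, 0, 0]
t_high, e_high = [24, 36, 50, 60, 72], [0, 0, 1, 0, 0]
km = KaplanMeierFitter().fit(t_low, e_low, label="Glut-1 low")  # DFS curve, one group
res = logrank_test(t_low, t_high, event_observed_A=e_low, event_observed_B=e_high)
print(f"chi2 p={p_chi:.3f}; Fisher p={p_fisher:.3f}; "
      f"median DFS (low)={km.median_survival_time_}; log-rank p={res.p_value:.3f}")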
Clinicopathological parameters A total of 65 cases were included, and the median age was 58 years (range, 33–84 years). Estrogen receptor (ER) positivity was observed in 45 (69%) cases, while progesterone receptor (PR) negativity was observed in 41 (63%) cases. In all, 50 (77%) cases had a high Ki-67 LI (≥15%). Complete and partial pathologic responses were observed in 25 (38%) and 31 (48%) cases, respectively, while 9 (14%) had no response to NT. Association of glucose transporter-1 expression with clinicopathological parameters in tru-cut biopsy before neoadjuvant therapy High Glut-1 expression was present in 31 of 65 cases. Glut-1 expression was high in cases with no expression of ER and PR (p=0.016 and p=0.004, respectively). There was a statistically significant relationship between Glut-1 expression and high Ki-67 LI (p=0.001). Glut-1 expression was statistically higher in cases classified as luminal A and luminal B compared to Her2 and triple-negative (TN) ones (p=0.032). Glut-1 expression was statistically low in cases with lymphovascular invasion (p=0.002) and lymph node metastasis (p=0.017). Cases with high Glut-1 expression had either a complete or a partial pathologic response, and the result was statistically significant (p=0.028). Relationship between glucose transporter-1 expression and prognosis The median follow-up for the entire cohort was 36 months (range, 1–88 months). Notably, seven (11%) cases died of disease, and two (29%) of them had high Glut-1 expression. Distant organ metastases were observed in 14 (22%) cases, and Glut-1 expression was low in 12 (86%) of them. Statistically, Glut-1 expression was found to be associated with disease-free survival (DFS), but no correlation was found with overall survival (OS) (log-rank p=0.014 and p=0.469, respectively).
Glut-1, a member of the glucose transporter family, has its expression controlled by different transcription factors. For example, hypoxia-inducible factor 1-alpha (HIF-1α) has been reported to regulate Glut-1 expression under hypoxic conditions. Moreover, c-Myc plays a role in Glut-1 expression in many different tumors . Abnormal expression of Glut-1 is also affected by the PI3K/Akt pathway. Changes in the stability of Glut-1 transcription are associated with changes in glucose concentration, the structure of growth factors, cytokines, and some hormones . Glut-1 expression reflects increased glycolytic metabolism, so Glut-1 is upregulated in many cancers to maintain high glucose uptake in neoplastic cells . Glut-1 has been shown to be an optimal biomarker in various types of cancer , and different studies have reported that agents providing Glut-1 inhibition in BC can be used in targeted therapy . BC is the most common type of cancer, with a high mortality rate among women . Some parameters, such as tumor stage, molecular subtype, and hormone receptor status, have been used in daily practice to choose the treatment method and predict the prognosis. To the best of our knowledge, there has been no previous study regarding Glut-1 expression in BC patients receiving NT, and this is the first such study. According to Deng Y et al., Glut-1 expression was associated with higher tumor grade and ER and PR negativity in BC patients who did not receive NT (1). In the current study, overexpression of Glut-1 was significantly related to hormone receptor negativity. In addition, higher expression was found in Her2 and TN BCs compared to the luminal subtypes. As a result, high expression of Glut-1 may indirectly be a sign of poor prognosis, since it is associated with hormone receptor negativity. In our study, there was a statistically significant relationship between high Glut-1 expression and high Ki-67 LI. In a study by Alba et al., BC patients with a high LI had a complete response to NT. As in the studies of Alba et al., other studies advocate the predictive use of the Ki-67 LI to identify patients with a pathological complete response to chemotherapy. In this way, Ki-67 is very useful in identifying the patient group with a favorable prognosis . In contrast, the Ki-67 LI in breast carcinomas is assessed by the eyeballing method, choosing three hotspot areas, counting 10 different high-magnification fields, and averaging the values. Therefore, this assessment is highly subjective among pathologists. In our study, Glut-1 expression was high in almost all of the cases with a complete response to treatment. With these results, we suggest that the evaluation of Glut-1 expression, an objective parameter that can easily be assessed in routine practice, can be used to predict the response to treatment, as can Ki-67. In the meta-analysis by Yu Deng et al., the prognostic role of Glut-1 in BC was widely investigated, but the results were reported to be inconsistent . Hussein et al. reported that Glut-1 expression was not associated with OS in BC . However, other researchers have presented significant associations between Glut-1 expression and poor prognosis in BC , . In our study, there was a statistically significant relationship between Glut-1 expression and DFS, but no relationship was found between its expression and OS.
Glut-1 expression has not been studied in neoadjuvant patients before, and we think that higher expression can be used as a good prognostic marker in patients receiving NT. A significant correlation was found between low Glut-1 expression and lymphovascular invasion, perineural invasion, lymph node metastasis, and distant organ metastasis in patients receiving NT. This result also supports the idea that high Glut-1 expression can be used indirectly as an indicator of good prognosis in patients receiving NT. There were some limitations to our study; for example, our cases did not show a homogeneous distribution in terms of molecular subtype, hormone receptor status, or response to treatment, and the follow-up time was short. Another limitation of our study is the small number of cases. In conclusion, cancers with high Glut-1 expression have a better response to NT. This is the first study regarding Glut-1 expression in BC patients receiving NT. As a result, we suggest that Glut-1 could be used as an alternative biomarker to Ki-67 in the objective evaluation of treatment response among BC patients.
LC-MS/MS Application in Pharmacotoxicological Field: Current State and New Applications
This review aims to analyze the applications of liquid chromatography combined with mass spectrometry (LC-MS) in honor of the inventor of this powerful analytical instrument, Professor Gérard Hopfgartner. Analytical chemistry has a crucial role in preclinical and clinical studies, primarily regarding deaths and toxicity, because of its capability to develop accurate (precise and true) methods that allow for the quantification of drugs and illicit drugs from different biological matrices (both conventional and non-conventional) with a high level of confidence. The use of liquid chromatography (LC) is growing rapidly, especially in the research and development studies of pharmaceutical industries. Particular attention is devoted to the instrument configuration that combines LC and mass spectrometry (MS) because, in this way, the central figures of merit of an analytical method can be achieved (selectivity and sensitivity from the chosen detector, MS, and separation from LC) . Thanks to its high sensitivity and selectivity, many studies have reported liquid chromatography-tandem mass spectrometry (LC-MS/MS) as the primary analytical instrument . In pharmacotoxicology, LC-MS/MS is the "gold" choice, even if in some cases it is not interchangeable with other types of instrumentation . It assures excellent versatility and, even though trained personnel are required, reduced analysis time and low resource consumption remain significant benefits . In this scenario, current laws continue to evolve, and the procedures and analytical methods must remain up to date . Pharmacotoxicology coupled with analytical chemistry can give a sort of "instant photograph" of the situations around us, from screening procedures to quantitative applications in forensic toxicology. It is imperative to have a wide range of information to prevent lawlessness, overdose deaths, and other avoidable unwanted situations . Only analytical chemists can develop methods to determine different drugs starting from different and complex matrices, such as whole blood, urine, plasma, saliva, etc., obtaining the maximum advantage from the instrumentation in terms of sensitivity, selectivity, reproducibility, and ruggedness. Based on quantitative analysis, analytical chemistry can unravel forensic cases, as has been overwhelmingly evident in recent years . As Seger reported in his manuscript, minor issues can be surmounted through studies, innovation, and continued research, since LC-MS/MS requires particular attention and trained personnel . The present review paper discusses the applications of LC-MS/MS in pharmacotoxicological cases because it is impossible to ignore the importance of this powerful instrument in the rapid development of advanced pharmacological and forensic research in recent years. On the one hand, pharmacology is fundamental for drug monitoring, helping people find the so-called "personal therapy" or "personalized therapy". On the other hand, in toxicology and forensics, LC-MS/MS represents the most critical instrument configuration applied for drug and illicit-drug screening and research, giving valuable support to law enforcement. Often the two areas overlap, and for this reason, many methods include analytes attributable to both fields of application. In this manuscript, drugs were divided into separate sections.
Particular consideration is given in the first section to therapeutic drug monitoring (TDM) and the clinical approaches generally applied in pharmaceutical studies, with a focus on the central nervous system (CNS), while the second section focuses on methods developed in recent years for the determination of illicit drugs, often in combination with CNS drugs. All references considered herein cover the last 3 years, except for some specific applications for which older but still recent articles have been considered.
Frequently, LC-MS/MS plays a vital role in pharmacokinetics (PK) and pharmacodynamics (PD) studies. Thanks to research progress, people with different types of cancer can take oral antineoplastic drugs. These oral drugs enable patients to avoid hospitalization, allowing a reduction in care costs. The main drawback is that patients must be able to adhere to the prescribed therapy. Additionally, some oral antineoplastic drugs display particular pharmacokinetic characteristics. These features make individual therapeutic drug monitoring (TDM) important, with the aim of avoiding sub-therapeutic or toxic drug concentrations. In TDM, LC-MS/MS is one of the most used technologies, far surpassing older ones. In reference to these goals, Llopis et al. (2021) developed a rapid method to estimate nine kinase inhibitors, two of their metabolites, and two antiandrogen drugs used for different types of cancer. Indeed, cobimetinib, dasatinib, ibrutinib, imatinib, nilotinib, palbociclib, ruxolitinib, sorafenib, and vemurafenib (kinase inhibitors) are used mainly for the treatment of hematological cancer and solid gastrointestinal tumors, but are also administered for the treatment of renal cell carcinoma and hepatocellular carcinoma. Abiraterone acetate and enzalutamide, antiandrogenic drugs, were approved in clinical practice for metastatic prostate cancer treatment. The method was very fast, requiring just 2.8 min with a non-linear mobile-phase gradient. After a single sample pre-treatment step, protein precipitation (PP) of the selected matrix (plasma), 10 µL was directly injected into the LC-MS/MS instrumentation. Ferrari et al. validated a “quick and robust LC-MS/MS method” to quantify four antibiotics: piperacillin, meropenem, linezolid, and teicoplanin. This choice came from research on hospitalized patients and their antibiotic treatment. Eighty plasma samples from 49 patients were considered and pre-treated with liquid–liquid extraction (LLE), and 5 μL was analyzed employing LC gradient elution. The mobile phases were, respectively, water and methanol (both with 0.1% formic acid to improve ionization efficiency and thus the sensitivity of the instrumental method). The gradient returns rapidly to the starting conditions at the end of each run, keeping the total analysis time short. In addition, in this case, the group highlighted the importance of tailoring therapy to the specific characteristics of every single patient, following the concept of so-called “personal therapy” or “personalized therapy”. Mazaraki et al., following Green Analytical Chemistry (GAC) principles, developed a method combining fabric phase sorptive extraction (FPSE) with UHPLC-MS/MS to quantify six beta blockers: atenolol, nadolol, metoprolol, oxprenolol, labetalol, and propranolol. In addition to drug monitoring, this study also aimed to quantify these drugs in doping cases. FPSE minimizes the consumption of time, analytes, and solvents in sample pre-treatment, which is very useful for these applications. A binary gradient allows quantitative results to be obtained within 15 min. Another field of application concerns specific or emerging diseases. Mathis et al. developed a rapid method to quantify 12 metabolites to diagnose nine types of inborn errors of metabolism that cause epilepsy.
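To make the gradient programs mentioned above easier to follow, the short Python sketch below models a binary gradient as (time, %B) breakpoints and interpolates the mobile-phase composition at any time point. The breakpoints are hypothetical placeholders, not the exact program of Llopis et al. or any other cited method.

```python
# Minimal sketch: a binary LC gradient expressed as (time_min, percent_B)
# breakpoints; the composition at any time follows by linear interpolation.
# All breakpoint values below are hypothetical.

GRADIENT = [(0.0, 5.0), (10.0, 99.0), (12.0, 99.0), (12.1, 5.0), (14.1, 5.0)]

def percent_b(t: float, program=GRADIENT) -> float:
    """Return %B (organic mobile phase) at time t (min)."""
    if t <= program[0][0]:
        return program[0][1]
    for (t0, b0), (t1, b1) in zip(program, program[1:]):
        if t0 <= t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    return program[-1][1]  # after the last breakpoint, hold final composition

if __name__ == "__main__":
    for t in (0.0, 5.0, 10.0, 12.0, 14.1):
        print(f"t = {t:5.1f} min -> {percent_b(t):5.1f}% B")
```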
Through LC-MS/MS, they analyzed plasma and urine samples in a gradient LC run with a total runtime of 16 min. LC-MS/MS is very widely used within the field of pharmacotoxicology thanks to its versatility, and for this reason commercial kits can often be found on the market. Building on earlier TDM studies, these kits make fast method transfer possible while maintaining the reproducibility of results, together with direct and rapid on-site method validation. Furthermore, using these kits it is possible to reproduce the same method in different laboratories, with the advantage of immediately comparing the results. In toxicology, LC-MS or tandem mass spectrometry (MS/MS) is frequently used, firstly because it can be applied to non-volatile and heat-labile compounds, unlike gas chromatography-mass spectrometry (GC-MS). Another very important advantage of LC-MS/MS over GC-MS is that it avoids derivatization of the analytes, which would otherwise be needed to make them volatile and/or analyzable by GC. This not only reduces analytical variability (fewer pre-analytical steps) but also shortens the analysis time. In addition, biological samples such as blood and urine can be easily analyzed with minimal sample manipulation, reducing the classical drawbacks generally encountered during this phase (errors related to sample treatment and loss of time). The use of high-resolution mass spectrometry (HRMS), by contrast, is not so widespread, mainly because this instrument configuration is especially devoted to proteomic approaches and to qualitative purposes. Urine, in particular, has a vital role in detecting drugs: it is simpler to collect than blood, and patients are more compliant. The use of these types of samples has another advantage related to the possibility of analyzing drug metabolites to perform PK and PD studies. LC-MS/MS is used in toxicology to investigate antidepressants, antipsychotics, and benzodiazepines (BDZ). Benzodiazepines are widely used, and indeed the number of prescriptions has increased in the last few years. Another problem is the ease with which they can be acquired on the online market. LC-MS/MS, in addition to its common role in analyzing lawful and illicit drugs, can also detect metabolites that derive from phase I or II metabolism. This capability distinguishes it from other instruments and represents the main advantage of LC-MS/MS in PK and PD studies and for pharmacotoxicological purposes. In their study, Merone et al. used LC-MS/MS to develop a rapid screening method to assess more than 739 licit and illicit substances. This vast number is possible thanks to the fast gradient LC run, the polarity switching mode available on all recent MS instrumentation (to detect positive and negative ions), and the availability of different molecular ion to daughter ion transitions for the multiple reaction monitoring (MRM) acquisition mode. The study started from the popularity of these substances, especially benzodiazepines, which are often used not as prescribed drugs but as recreational substances. Methling and his group examined hair to reveal drugs, including antidepressants, antipsychotics, and benzodiazepines. The selected matrix extends the detection “window” for possible drug intake, owing to bioaccumulation in the hair keratin matrix.
This point is further supported by the “time window” that can be monitored in biological matrix analyses, which depends on the type of sample being analyzed. For example, blood tests cover a few hours, while urine tests cover a few days. It should be noted that, for hair analysis, depending on the length, it is possible to monitor a few months or even years (the same holds for other keratin matrices). In 442 post mortem samples analyzed, 49 of 52 analytes were found. Antidepressants and antipsychotics are commonly found in post mortem toxicology due to their high prescription rates and relatively high toxicities in overdose cases. This method uses a fast run of 18 min in gradient elution mode. Thanks to the type of sample chosen (hair), the authors were able to show that a decreasing drug concentration along the different hair segments could indicate a tapering of the drug during the last four months, whereas a build-up of the drug could coincide with the start of therapy or of consumption. Campelo and coworkers observed that, in recent years, people have increasingly resorted to antidepressants. For this reason, they developed an LC-MS/MS method with a QuEChERS (Quick, Easy, Cheap, Effective, Rugged, and Safe) extraction procedure. The instrumental conditions involve a gradient run from 10 to 95% of mobile phase B, a methanolic ammonium formate solution (2 mmol/L) with 0.1% formic acid; mobile phase A was the corresponding aqueous solution. In merely 8 min of analysis, it was possible to quantify the twenty most common antidepressants. The method was validated on post mortem blood, and when it was applied to real samples, the absence of antidepressants was confirmed. The limits of quantification (LOQ) were 10 ng/mL for all the analytes, highlighting the great sensitivity (in this matrix) of the hyphenated LC-MS/MS instrument configuration. In the PK and PD field, drug concentration is affected by absorption, distribution, metabolism, and excretion (ADME). With metabolism and its enzymes, such as the cytochromes, playing an important role, the concentration and the related effects can differ for each person. For these reasons, it is important to follow therapeutic drug monitoring to personalize therapy. For example, when a drug is present in low quantity or at low concentration, the better instrument is LC-MS/MS, thanks to its sensitivity. One example is the study conducted by Liu et al. in 2016. Because of the low absorption of naloxone and its lower quantity compared to the other drug used in the formulation, the better method of analysis is LC-MS/MS, allowing naloxone to be determined down to 3 pg/mL. Da et al. in 2018 used dried blood spots (DBS) as samples to detect the concentration of fluoxetine by LC-MS. This drug is common in patients suffering from depression but, at the same time, it needs therapeutic drug monitoring because each patient responds in a different way due to cytochrome-mediated metabolism. Moreover, these patients are not hospitalized, so this approach offered a good compromise between analysis and patient compliance. Additionally, Linder et al. emphasized the ease of analysis of dried blood spots using LC-MS. In this study, the group compared drug concentrations from dried blood spots and plasma, analyzing the former by LC-MS/MS and the latter by immunochemical methods.
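As an aside on the hair “time window” described above, the following minimal Python sketch estimates which months each proximal hair segment covers, assuming an average scalp-hair growth rate of about 1 cm/month; this rate is a common approximation and varies between individuals, so the output is only indicative.

```python
# Minimal sketch: map proximal-to-distal hair segments to approximate time
# windows before collection, assuming ~1 cm/month scalp-hair growth.

GROWTH_CM_PER_MONTH = 1.0  # assumed average; individual rates vary

def segment_windows(segment_lengths_cm):
    """Return (start_month, end_month) windows for consecutive segments."""
    windows, start = [], 0.0
    for length in segment_lengths_cm:
        end = start + length / GROWTH_CM_PER_MONTH
        windows.append((round(start, 1), round(end, 1)))
        start = end
    return windows

if __name__ == "__main__":
    # Four 1-cm segments cover roughly the last four months, as in the
    # segmental interpretation described above.
    for i, (a, b) in enumerate(segment_windows([1.0, 1.0, 1.0, 1.0]), start=1):
        print(f"segment {i}: ~{a}-{b} months before collection")
```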
Starting from this study, they aimed to convert between plasma and dried blood spot (DBS) concentrations, because these preliminary results showed excellent correlations. The accompanying table reports the LC-MS/MS characteristics of the most recent applications in the field of pharmacotoxicology and the general instrument configurations applied.
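The plasma-to-DBS conversion mentioned above amounts to fitting a regression line to paired concentrations. The sketch below illustrates the idea with made-up numbers; the coefficients are not those of Linder et al.

```python
# Minimal sketch: convert DBS concentrations to plasma equivalents via an
# ordinary least-squares fit of paired measurements (hypothetical data).

import numpy as np

dbs = np.array([12.0, 25.0, 40.0, 61.0, 88.0])      # hypothetical DBS conc., ng/mL
plasma = np.array([15.0, 33.0, 51.0, 80.0, 115.0])  # paired plasma conc., ng/mL

slope, intercept = np.polyfit(dbs, plasma, 1)       # plasma = slope*DBS + intercept
r = np.corrcoef(dbs, plasma)[0, 1]                  # Pearson correlation

print(f"plasma = {slope:.2f} * DBS + {intercept:.2f} (r = {r:.3f})")
print(f"predicted plasma for DBS = 50 ng/mL: {slope * 50 + intercept:.1f} ng/mL")
```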
Nowadays, illicit drugs are a real problem. In particular, these compounds can cause accidents and deaths on the road. Additionally, their effects on the central nervous system (CNS) can be psycholeptic, psychoanaleptic, or psychodysleptic, and those who consume them can create problems related to public and social order owing to pharmacological disorders such as tolerance, addiction, or dependence. For these reasons, to help law enforcement, oral fluid (OF) is considered on par with plasma because the illicit drug concentrations in the two matrices are similar (or a correlation between the concentrations in the different matrices is known). Law enforcement can collect saliva samples non-invasively without any medical supervision, thanks to standardized procedures and specific devices developed to assure the accuracy and reproducibility of the analysis. The use of oral fluid is mainly related to military and law enforcement investigations. Over the years, tools and devices have been developed that allow even personnel untrained in sampling to collect oral biological fluids. These samples are of primary importance as, by means of these standardized devices, they allow presumptive and preliminary screening. This type of analysis can be used, for example, in conjunction with field sobriety tests to help confirm or dispel suspicions of abuse (of both alcohol and illicit substances). Other applications may include, but are not limited to, test period monitoring, post-accident evaluation, surface testing of unknown substances, etc. In particular, the use of oral fluid offers a series of advantages that can hardly be obtained with other procedures. This sampling is rapid (samples can be taken at or near the time of the incident and results are provided on site), easy (oral screening does not require observed same-sex collection), reliable (if the procedure and screening test are properly validated, it is difficult to adulterate the sample), non-invasive (no need for medical professionals to take samples), and hygienic (if handled properly, administrators will not encounter a donor’s oral fluid). Bassotti et al. reported an example of the application of this concept in 2020. This study developed a rapid method to determine 17 different illicit drugs in OF, performing the analysis with a simple sample pre-treatment, named “dilute and shoot”, a run time of 12 min, and only water with 0.1% formic acid and 50:50 acetonitrile (ACN)/methanol (MeOH) with 0.1% formic acid as mobile phases in gradient elution. Following their in-depth studies, in 2022 the same group published a new method for the determination of up to 739 compounds. This considerable number includes both licit and illicit substances, from antiepileptic drugs to cannabis, from benzodiazepines to hallucinogens. This study started from an awareness of the increased use of illicit substances and the corresponding decrease in licit ones, such as tobacco or alcohol. Thanks to this method, it is possible to perform a rapid (qualitative) screening test followed by a confirmation analysis that requires more time. A method like this is essential, especially in legal cases, if there is a possibility of intoxication or overdose by unknown substance(s). For this method, the research group used blood and urine from routine clinical TDM and post mortem blood from autopsies.
In this study, particular attention was paid to green chemistry; to this end, they used MeOH for protein precipitation of blood and a glucuronidase solution for the hydrolysis of urine. The LC-MS/MS method involves a run time of 18 min, during which flow changes were applied for gradient elution. A group of drugs, called new psychoactive substances (NPS), has been developed in recent years. NPSs are characterized by chemical modifications of classic drugs and pharmaceuticals. These compounds have no medical use and are consumed purely recreationally, without PK studies, toxicity information, or mortality rates. In 2020, the European Monitoring Centre for Drugs and Drug Addiction (EMCDDA) described 46 new NPSs. In Italy, the most used are synthetic cathinones (mephedrone, α-PHP, 3-MMC, eutylone), synthetic cannabinoids (JWH-122 and JWH-210), and opioids (ocfentanil, 2-methyl-AP-237 and carfentanil). Vaiano and his team developed an LC-MS/MS method to quantify drugs and illicit drugs in blood. For 120 NPSs and 43 prescription drugs, the developed method required 37 min of runtime, somewhat long compared with Merone et al. However, they achieved good sensitivity and linearity and, compared with other studies, their sample pre-treatment is easier because it is just a protein precipitation using cold acetonitrile. It is vital to counter the use of illicit drugs because substances such as benzodiazepines (BDZ) and opioids affect perception and driving skills. Another source of confusion originates from package leaflets that give consumers the wrong impression. Lau and his group used post mortem blood, especially femoral and heart blood, to quantify up to 30 different synthetic cathinones using LC-MS/MS. They were motivated by the disproportionate use of these illicit drugs, which are very popular in America and induce significant amphetamine-like symptoms. Sample pre-treatment consists of solid phase extraction (SPE), and thanks to a 16 min gradient LC run (mobile phases water and acetonitrile, both with 0.1% formic acid), the analytes of interest are easily found. The LOQ was 1 ng/mL for every analyte. Ferrari Junior and Caldas also used the same mobile phases in their research on the determination of 79 substances, which include 23 prescription drugs, 13 synthetic cathinones, 11 phenethylamines, 8 synthetic cannabinoids, 7 amphetamines, and 17 other psychoactive substances. The researchers used biological samples, such as blood and urine, because the former is the most used in intoxication cases, while the latter retains traces of these compounds well. The main difference between the two studies is Ferrari’s attention to the QuEChERS method. The group tried a pre-treatment protocol with different samples for the extraction, using different amounts of water and ACN, followed by several procedures ending with reconstitution in 200 µL of ACN with 0.1% formic acid. The mobile phases follow a fast gradient elution starting at 1% mobile phase B, rising to 99% B at 10–12 min, and returning to 1% B at the end of the 14.1 min LC run. The flow rate is 0.5 mL/min, and the temperature is 40 °C. In addition, the injection volume was optimized at 1 µL to achieve the highest ionization efficiency in the MS source (reflecting the highest sensitivity). Using urine as the sample for analysis, Kahl and coworkers quantified drugs of abuse.
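Quantification in the methods reviewed here generally rests on a linear calibration curve with a validated LOQ. The sketch below shows this workflow with illustrative numbers only; the 1 ng/mL LOQ echoes the cathinone method above, while the calibrators and responses are invented.

```python
# Minimal sketch: linear calibration (response vs. concentration) and
# reporting of results below the validated LOQ. All values are illustrative.

import numpy as np

conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0])         # calibrators, ng/mL
response = np.array([0.021, 0.10, 0.21, 1.02, 2.05])   # hypothetical area ratios

slope, intercept = np.polyfit(conc, response, 1)
LOQ = 1.0  # ng/mL, assumed validated limit of quantification

def quantify(sample_response: float) -> str:
    c = (sample_response - intercept) / slope
    return f"{c:.2f} ng/mL" if c >= LOQ else f"< LOQ ({LOQ:g} ng/mL)"

print(quantify(0.55))    # mid-curve sample
print(quantify(0.015))   # below the LOQ -> reported as "< LOQ"
```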
It is interesting to pay attention to the methods used, because they compared LC-MS/MS to an enzyme-linked immunosorbent assay (ELISA), concluding that the limit of detection was lower with the former. Furthermore, LC-MS/MS showed better “flexibility” than the immunoassay, especially for the newest drugs; they concluded that although the LC-MS/MS method needs more time than the immunoassay, the latter can serve only as a screening test. Broecker et al. chose hair as a specimen because it carries a clear drug usage history over time, even for years. This choice is used in toxicology to follow the use of illicit drugs. Hair samples were collected during autopsies and from laboratory staff. The pre-treatment consists of adding methanol/ACN/2 mM ammonium formate (25/25/50, v/v/v) and incubating for 18 h at 37 °C to extract the analytes. The temperature was chosen to limit possible analyte decomposition. The choice of post mortem hair was made in collaboration with police, because law enforcement contributed significantly to reconstructing the history of each person’s death. They monitored 30 illicit drugs, mainly detecting heroin (thanks to the presence of its metabolites) and cocaine, and observed a decrease in the levels of crack and amphetamine. Both Broecker and Rubicondo and coworkers used hair to check for NPSs and drugs. Specifically, Rubicondo and coworkers built on research performed the previous year, which used blood as the sample, and added seven other substances. They obtained a complete method to quantify up to 120 NPSs, 43 BDZs/antidepressants, and 6 opiates/opioids. Sample pre-treatment was easier than Broecker’s, because it involved only a washing step followed by protein precipitation; the mobile phases were a 5 mM aqueous solution with formic acid and acetonitrile, 99:1, respectively. A fascinating aspect was the application to samples from 100 people who had previously completed a questionnaire about drug consumption. In some cases, the answers did not match the results of the analysis. Trazodone was the drug most frequently detected during the analysis, followed by BDZs such as flunitrazepam and diazepam. In addition, two synthetic cathinones, methylone and mephedrone, were found. Baumgartner, in 2012, like his above-mentioned colleagues, used hair as the sample. He focused on a screening test called VMT-A, an immunochemical method, confirming it by LC-MS/MS. He obtained good results with VMT-A but reported the importance of confirmation by LC-MS/MS. Another matrix that can be used in LC-MS/MS is muscle, because sometimes whole blood is not available for routine analysis in forensic cases. In fact, from April 2019 to January 2021, blood was unavailable for analysis in 108 of 800 cases. For this reason, Hansen et al. developed a UHPLC-MS/MS method correlating blood and muscle analysis results. They considered 29 drugs and metabolites using a C18 column with a gradient analysis of 0.025% v/v aqueous ammonia solution and methanol. The step that differs between the two types of matrices is a homogenization step for muscle; the method was then fully validated. A different approach that can be considered in the absence of blood is skeletal tissue, as Orfanidis did. In their work, they first developed the method using bone, which was difficult because bone is a hard tissue that is complex to pre-treat, unlike blood. After different experiments, they found the right way to extract analytes using MeOH, ammonium hydroxide (NH4OH), an ultrasonic bath, and centrifugation.
After this sample treatment, the sample can be directly injected and resolved in LC by gradient analysis with water and methanol, both with 0.1% formic acid, proving that bone can be a possible (and valuable) alternative matrix for forensic analysis. They detected 27 drugs, from antidepressants to cocaine and opiates, first applying the method in two cases of chronic abusers. They then developed two UHPLC-MS/MS methods for determining 84 drugs, licit and illicit, starting from different QuEChERS protocols. Two methods were developed because the two matrices considered, blood and liver, differed. This review will discuss the liver protocol because it is less common. Due to the large number of analytes of interest, they decided to work with a C18 column with a total length of 150 mm. In both methods, the mobile phases were water with 0.1% formic acid and methanol with the same percentage of acid. The run proceeds with a linear gradient, starting at a high water percentage to lower the retention of hydrophobic compounds. In various forensic cases, it is often necessary to turn to new matrices. One of the latest (and valuable) matrices used is nails. As reported by Mannocchi and coworkers, nails can be useful in toxicological analysis. Nails cannot replace conventional and safer matrices, but they can be used to monitor chronic exposure (nails are a keratin matrix, just like hair). Their study developed an LC-MS/MS method for nails and hair covering 87 NPSs and 32 other illicit drugs, and applied it successfully to real samples. Thanks to this evidence, whenever new unconventional matrices become usable in forensic cases, there are new goals to reach in analytical chemistry, with the overarching aims of monitoring illicit drugs and subsequently decreasing illegal acts. In the pharmacotoxicological field, the last few years have been fundamental for the development of new methods, and the biggest aims are always the reduction of analysis time and lower sample consumption. These goals have also been pursued using MALDI/MS, and the literature often draws comparisons between LC-MS/MS and MALDI/MS. In recent years, MALDI has prevailed for molecules with high molecular weight, such as nucleic acids, proteins, and microorganisms such as bacteria and fungi. For this reason, MALDI coupled with time-of-flight (TOF) is frequently used in TDM, though often to search for genotypes, gene mutations, etc. This represents a good advantage because MALDI does not need method optimization for the type of sample or choice of column for each experiment and, last but not least, it is less time-consuming. Over recent years, new methods have been validated to semi-quantify eight benzodiazepines and four pyrrolidino cathinones, both in human blood. In these studies, the authors have highlighted analogies and differences between MALDI and LC-MS/MS in terms of sensitivity and results. The accompanying table reports the main LC-MS/MS characteristics of the most recent applications.
The last several years have given an important impetus to LC-MS/MS technologies and their study, even when the sample types used are unconventional. The hope, therefore, is that this instrument configuration will become a daily routine, because it demonstrates the required selectivity and sensitivity, performs better than other configurations, is more reliable, and suits a wide range of applications. Especially in the pharmacotoxicological area, it has demonstrated great performance across different laboratories, becoming the first-choice instrument configuration for the analysis and study of deaths from overdose or doubtful cases. Certainly, the applications, the instrumental configurations, and the methods currently in use are very different from those that Professor Gérard Hopfgartner could perhaps have imagined, but the fundamental merit lies in the fact that he contributed to developing a new way (and new instrumentation) to approach sensitive and selective quantitative analysis. In addition, it could be said that he also paved the way for a new way of thinking for analytical chemists involved in this field, in which flexibility (not only of the instrumentation) is an essential requirement in order to respond to the requests and needs of an ever-evolving society.
|
Predicting the eyebrow from the orbit using three-dimensional CT imaging in the application of forensic facial reconstruction and identification | df9b1956-4305-4947-bff1-7d48127dd73c | 10006220 | Forensic Medicine[mh] | Identifying the remains of a missing person, especially those whose faces cannot be recognized due to decomposition or skeletonization is often difficult for law enforcement and investigative agencies . Moreover, this is especially true when evidence cannot be obtained from objective identifiable methods such as DNA, fingerprints, dental records, and non-dental radiographic comparison. Hence, facial reconstruction or facial approximation, a face recreation tool aimed to reproduce the face before death based on interpretation of the skull is employed, with the objective of recognition leading to an identification , . Furthermore, accuracy assessment also assists in the analysis of specific regions of the face, such as eyes, nose, mouth, and ears, which are critical to facial recognition by predicting the location, size, and morphology of facial features . Considerable data has been derived from craniofacial reconstruction studies in forensic identification. Fedosyutkin and Nainys, who summarized and described the relationship of skull morphology to facial features, showed general characteristics of how the skull morphology affects facial features . The most frequently reported guidelines for facial feature properties is the study on the nose, such as nasal profile or projection – , followed by research on the eye, such as eyeball position or protrusion – , the mouth such as mouth width or lip morphology – , and ear shape estimation – . Conversely, Farkas et al. emphasized that facial morphology databases on various ethnic groups are still required . Among these features, eyebrows are the most important facial feature in recognizing emotions under the influence of cognitive load , . Specifically, eyebrow shape is more helpful than color or density in facial recognition . However, to our knowledge, no research has been conducted that estimates the position and morphological territory of the eyebrow from the orbit using 3D craniofacial reconstruction methods. There have been studies of eyebrows using 2D methods and their importance has been described in face restoration with suggestions that additional studies be conducted to the point of the most superior part of the eyebrow . This study establishes the parameters that may help estimate the position and shape of the eyebrow from the orbit using 3D computed tomography (CT) imaging methods. The findings of this study can be applied in the field of forensic facial reconstruction and are expected to increase the possibility of recognition and identification of persons.
In this study, 180 subjects were analyzed for each of the 35 measurements on both eyebrows and orbits. Descriptive statistics results are shown in Supplementary Table . In the intra- and inter-observer reliability analysis (effective N = 995), the Cronbach's alpha coefficients were 0.999 and 0.998, respectively, showing very high reliability. In the analysis of sex differences by t-test, males showed significantly higher values than females in 23 of the 35 measurements. Descriptive statistical analyses represent the difference in the average value of each measurement of the eyebrow and orbit between males and females. For the orbit, males had larger values for both width and height than females. For the eyebrow, lengths and heights on both sides were greater in males than in females. It has been reported that eyebrows are sexually dimorphic in primates. Males had the larger mean value in the majority of measurements in this study. In males (Supplementary Table ), the regression equation predicting measurement 14 from measurement 3 showed the highest power of explanation on both sides (R2 left 42%, right 47%). On the other hand, the regression equation predicting measurement L19 from measurement L10 showed the lowest power of explanation (R2 13%). On the right side, the equation predicting measurement R26 from measurement R9 showed the weakest power of explanation (R2 18%). In females (Supplementary Table ), the equation predicting measurement L18 from measurement L9 showed the strongest power of explanation on the left side (R2 35%). The highest power of explanation on the right side was shown by the equation predicting measurement R20 from measurement R10 (R2 43%). The measurement showing the lowest power of explanation on both sides was measurement 19 predicted from measurement 10 (R2 left 4%, right 5%). The measurements with the highest power of explanation differ between the sexes. Therefore, to reconstruct a face, applying different measurements according to a subject's sex would lead to a better outcome. Further, we observed a discrepancy between sides (Supplementary Tables , ). The measurements exhibiting the largest side difference in power of explanation in males were the equations predicting measurement 19 from measurement 10 and from measurement 8. On the left side, measurement 10 predicted measurement 19 with an R2 of 13%; however, it was 31% on the right side. For the equation predicting measurement 19 from measurement 8, the power of explanation was 16% on the left side and the R2 was 34% on the right side. Discrepancies between sides were also observed in females. The regression equation predicting measurement 18 from measurement 10 showed an R2 of 28% on the left and 39% on the right. For the equation predicting measurement 21 from measurement 9, the power of explanation was 21% on the left and 32% on the right. The R2 of the equation predicting measurement 24 from measurement 9 was 29% on the left side and 18% on the right side. Therefore, the side should be considered when applying the regression equations. The correlation coefficient increased as the height of the orbit and eyebrow moved closer to the center of the orbit (Fig. ). The height of the orbit (measurement codes 8, 9, 10) showed a higher correlation with the height of the upper border of the eyebrow (measurement codes 17, 18, 20, 22, 24, 27) than with the height of its lower border (measurement codes 19, 21, 23, 25, 26, 28).
All regression equations were developed from the measurements (Supplementary Tables , ). Bivariate correlation analysis of pairs of bony and facial soft tissue measurements in the male and female groups revealed 14 pairs for males and females with Pearson's correlation coefficients >|0.4|. Using the regression equations, the most effective predictions of eyebrow morphology were identified (Tables and ). In both sexes, the coefficients of the regression equations were relatively high, except for the medial and lateral heights of the eyebrow from the orbitale (No. 17 and 26) and, in females, the height distances related to the inferior margin of the eyebrow (No. 19 and 25).
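Each published equation above is a simple linear regression of one eyebrow distance on one orbit distance, with R2 as the power of explanation. The Python sketch below reproduces that calculation on randomly generated placeholder data (not the study's measurements).

```python
# Minimal sketch: fit one eyebrow distance from one orbit distance by simple
# linear regression and report R2. Data are random placeholders standing in
# for measurement pairs such as No. 10 (orbit) predicting No. 18 (eyebrow).

import numpy as np

rng = np.random.default_rng(0)
orbit = rng.normal(35.0, 2.0, size=55)                # hypothetical distances, mm
eyebrow = 0.8 * orbit + rng.normal(0.0, 1.5, size=55)

slope, intercept = np.polyfit(orbit, eyebrow, 1)
pred = slope * orbit + intercept
ss_res = np.sum((eyebrow - pred) ** 2)
ss_tot = np.sum((eyebrow - eyebrow.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print(f"eyebrow = {slope:.2f} * orbit + {intercept:.2f}, R2 = {r2:.2f}")
```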
The eyebrow is a significant feature for the recognition of a face. It is distinct from adjacent structures because eyebrows are located on the superciliary ridges that protrude from the frontal bone. Further, the eyebrow is covered by eyebrow hair, which gives shade and texture to the region. The eyebrow functions as a factor for expressing emotions and recognizing a face. The shape of the eyebrows affects the accuracy of facial reconstruction. The eyebrow, however, consists wholly of soft tissue and is prone to changes in shape through plucking, shaving, and makeup in the living. These characteristics make eyebrows difficult to estimate in post-mortem facial estimation. In previous studies, the relationship between eye structures and the eyebrow was investigated. The location of the highest point of the eyebrow was explained in relation to the iris border and the structures surrounding the eye using photographs of subjects with open eyes. In this study, we considered whether measurements of the orbital rim make it possible to estimate the shape of the eyebrow. The general size of the orbit was larger in males; this is expected, since the body size of males tends to be larger than that of females. Shape analyses, such as geometric morphometric analysis (GMM), have been applied to assess sex or ethnicity in other studies. Future research should investigate sexual dimorphism and age-related change in Koreans using GMM, which was not addressed in this study. However, the landmarks used in this study could easily be applied in GMM by adding sliding landmarks between the landmarks from this study. Every measurement except measurement 7 (LO-O), 16 (EBS-LO), and 25 (EB3I-O) was larger in males. The average of measurement 1 (MO-LO), the standard of measurement, was shorter in females. Therefore, EBS, the highest point of the eyebrow, is located more medially in females. The eyebrows of primates have been reported to be sexually dimorphic, but the eyebrow alone as an indicator of sex has rarely been reported in humans. Stephan suggested the average position of the “superciliare,” the highest point of the eyebrow, for eyebrow reconstruction; however, he also noted that its application can be limited. In Stephan's study, the mean horizontal distance between the superciliare and the most lateral point of the iris showed a standard deviation greater than the mean. In this study, the R2 values ranged up to 44%, suggesting a valid guideline that provides narrow-range figures, which can be worthwhile in facial reconstruction cases. However, caution should be taken in applying this method to other ancestries, since this study was performed on Koreans.
This study provided data to estimate the position of the eyebrow using basic width and height measurements of the orbital rim. The findings reveal that the morphology of the orbit had more influence on the position of the superior margin of the eyebrow than on its inferior margin. In addition, the middle part of the eyebrow was relatively predictable when the regression equations were used; however, the medial and lateral ends of the eyebrow were not. Therefore, both ends of the eyebrow are barely affected by the morphology of the orbital margin. Most eyebrow measurements had larger values in males than in females. The highest point of the eyebrow in females was located more medially than in males. However, this difference is not clear enough to demonstrate sexual dimorphism. Through this study, it is expected that the equations for estimating the position of the eyebrow from the shape of the orbit will serve as useful information for facial reconstruction or approximation.
Samples and measurements All methods performed in this study complied with the Declaration of Helsinki and were approved by the Institutional Review Board (IRB) of the National Forensic Service (No. 906-170118-HR-004-01). This retrospective study was approved, and prior informed consent was waived, by the Ethical Committee for the National Forensic Service. The study is in accordance with relevant guidelines and regulations. The subject of all figures included in this study was Dr. Lee (Won-Joon Lee), a co-author and one of the study participants, and written informed consent was obtained from him. We used craniofacial samples from 180 Koreans autopsied between March 2017 and September 2018 at the National Forensic Service Seoul Institute (NFS Seoul Institute). We conducted metric analyses for 180 subjects (125 males and 55 females) between the ages of 19 and 49 (mean, 35.1) years to minimize the influence of changes in eyebrow morphology due to aging (Table ). We divided the subjects into six groups according to sex and age. All subjects arrived at the NFS Seoul Institute within 48 h of death. Subjects with marked changes in the morphology of the head or face due to illness or the cause of death were excluded, as were individuals with congenital malformations or prosthetics in the eyebrow and orbit areas. The subjects were scanned using a SOMATOM Definition AS + (Siemens Healthineers, Erlangen, Germany). A barium sulfate (BaSO4) solution, a contrast agent, was applied to each subject's eyebrows before the CT scans to make the region radiopaque on the CT images. During this process, subjects with severe hair removal traces were excluded from the study. 3D craniofacial data were created using Digital Imaging and Communications in Medicine (DICOM) data acquired from a 128-slice multidetector CT (MDCT) scanner (SOMATOM Definition AS +, Siemens, Germany) under the following settings: 120 kV, 175 mA, and a slice thickness of 0.6 mm. 3D models built from soft and hard tissue images were imported into a biomedical image engineering program (Mimics, version 20.0, Materialise, Leuven, Belgium) to obtain distance measurements for 18 anatomical landmarks of the eyebrows and orbits. The Frankfort horizontal plane, a plane passing through the orbitale and auriculare, together with coronal and sagittal planes perpendicular to each other, was adopted as the set of reference planes for cranio-cephalometric analysis. In total, 18 craniofacial landmarks [12 cephalometric (eyebrow) and 6 craniometric (orbit)] were used to examine the morphometry of the eyebrow and orbit (Table and Fig. ). The shortest distance from each reference plane (i.e., the perpendicular distance) was used as the position value of each landmark. We measured thirty-five pairs of distances between landmarks and reference planes per subject (Fig. ; Supplementary Table ). Statistics We conducted statistical analysis using SPSS (version 21.0, SPSS, Chicago, IL, USA). Independent t-tests and ANOVA were conducted after obtaining descriptive statistics for the samples, to verify significant differences between sex and age groups, respectively. We applied Levene's test for homogeneity of variance and used independent t-tests under the assumption of equal variances (p > 0.05); otherwise, Mann–Whitney U-tests were used to determine sex differences. We also conducted intra-class correlation coefficient analysis to verify the reproducibility of the measurements by assessing intra- and inter-observer errors.
Finally, we performed linear regression analyses in SPSS to predict eyebrow shape from the orbit for every possible combination of variables using command syntax. All statistical results were considered significant if p values were less than 0.05.
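A minimal sketch of the decision rule described above (Levene's test first, then an independent t-test when variances are homogeneous, otherwise a Mann–Whitney U-test) is given below; the sample arrays are placeholders, not the study's measurements.

```python
# Minimal sketch: Levene's test gates the choice between an independent
# t-test (equal variances, p > 0.05) and a Mann-Whitney U-test.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
males = rng.normal(36.0, 2.5, size=125)    # hypothetical measurement, males
females = rng.normal(34.5, 2.5, size=55)   # hypothetical measurement, females

lev = stats.levene(males, females)
if lev.pvalue > 0.05:
    res, name = stats.ttest_ind(males, females), "independent t-test"
else:
    res, name = stats.mannwhitneyu(males, females), "Mann-Whitney U-test"

print(f"Levene p = {lev.pvalue:.3f} -> {name}: "
      f"stat = {res.statistic:.2f}, p = {res.pvalue:.4f}")
```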
Supplementary Information.
|
Root exudate-derived compounds stimulate the phosphorus solubilizing ability of bacteria | f789bebe-962b-4137-9989-bc7eed997cc5 | 10006420 | Microbiology[mh] | Most of the existing phosphorus (P) in soils globally is locked in primary minerals, absorbed on soil particle surfaces, or occurs in organically complexed forms , . Although P fertilizer is readily available for plants, once applied to soils, it faces constraints such as poor diffusion, limited solubility, and fixation on mineral surfaces; thus, increasing the pool of plant unavailable P in soil . Phosphate fertilizer originates from rock phosphate minerals, a non-renewable resource that is predicted to become scarce in the coming decades , . It has been estimated that unlocking residual P pools in soils can play an important role in reducing global P fertilizer demand by up to 50% by 2050 . Current strategies to access unavailable soil P and nutrient management practices to supply P to crops are often inefficient. Excessive applications of phosphate fertilizer to agricultural soils are common to overcome soil P fixation processes, and to maintain P in the soil solution at optimal levels . Overapplication of P often leads to increased pollution and decreased farm profitability. Thus, finding widely applicable and sustainable solutions to the inefficiencies in agricultural P use and its bioavailability offers great promise to support long-term productivity and the sustainability of agricultural systems. The desire to increase P bioavailability in soils has encouraged the study of phytochemicals and beneficial microbes in the plant rhizosphere to enhance P uptake and plant yield . Plant roots can exude a considerable amount of photosynthates within the rhizosphere and this leads to the proliferation of microorganisms within, on the surface, and outside the roots , . The diverse chemical composition of root exudates contributes to multiple functions including the direct solubilization and acquisition of non-soluble nutrients from the soil and regulation of plant–microbe interactions involved in nutrient acquisition . Plants possess the ability to modulate the chemical composition of root exudates, that in turn, influence members of the rhizosphere microbial community by discriminating between mutualist, commensal, and pathogenic root-microbe interactions , . For instance, plants associate with symbiotic and free-living organisms that help mediate plant P uptake; these organisms can be multicellular such as mycorrhizal fungi or single-cell bacteria such as those from the genera Enterobacter spp., Bacillus spp., or Pseudomonas spp. . Plants often initiate these interactions under conditions of soil P limitation , and such interactions are affected by soil type and abiotic factors , . The main mechanisms by which plants deal with P scarcity include changes in root morphology by modifying root branching, increasing root length, forming of root hairs, and generally investing more in belowground allocation to increase the root surface for P uptake , . However, even when plant roots can physically reach the immobile P in soils, this P is often in non-soluble forms that cannot be taken up. The root then switches to complementary strategies to improve solubilization such as the release of selected root exudates to improve P mobilization , . Some of the major chemical groups of P-mobilizing root exudates include organic acids, such as amino acids and fatty acids, with a range of reported biological functions in the plant rhizosphere , . 
P dissolution rates can be greatly accelerated in soil in the presence of organic acids, leading to 10–1000-fold higher P concentrations in the soil solution, depending on soil type and organic acid concentration. Root exudates can induce the growth of microorganisms, act as chemo-attractants to motile microbes, and are a source of carbon (C) for numerous microbes. Some bacteria dominate the rhizosphere of certain plants based on specific metabolites secreted by a plant species. For instance, Burkholderia species that metabolize citrate and oxalate have been shown to be highly abundant in the rhizosphere of the densely packed lateral roots of lupine. The artificial addition of phytochemicals to soils has also been shown to affect the composition and functions of the soil microbiota. Recent studies have shown that coumarins present in root exudates increase the abundance of single microbial strains or whole microbial communities present in the soil. Similarly, the supplementation of soil with organic acids can change phosphatase enzymatic activity and shift the community composition, including beneficial rhizobacteria. In addition, tricarboxylic acids such as malic acid selectively signal and recruit the free-living beneficial bacterium Bacillus subtilis. Testing the potential of root exudate molecules to enhance P solubilizing bacteria (PSB) offers a promising means of increasing the efficiency of the commercial microbial inoculants already in use in farming systems, as well as of improving P use efficiency by unlocking legacy P in soils. In a recent study, Pantigoso et al. found that certain molecules were exuded in high amounts by Arabidopsis thaliana roots grown under P-deficient conditions. Some of those molecules, comprising organic acids, directly solubilized non-soluble P under in vitro conditions. In the same study, a second group of molecules, including galactinol, threonine, and 4-hydroxybutyric acid, was equally enriched but did not increase P solubilization directly. It was hypothesized that these compounds were involved in signaling with PSB. The objective of this study was to determine the role of these previously screened specialized metabolites on the growth and activity of rhizosphere beneficial bacteria. We used corn as a model plant due to its importance as a staple food crop. Here we hypothesize that galactinol, threonine, and 4-hydroxybutyric acid, exuded by plants under conditions of P deficiency, can be used to stimulate the growth and/or activity of specific PSB, thus improving the effectiveness of the bacterial inoculum. Further, we tested the possibility that root exudate-derived and specialized metabolites could positively stimulate the native PSB contained in a natural soil, thus facilitating nutrient acquisition for the plant.
Effects of root exudates on growth rate of phosphorus solubilizing bacteria The effect of the three root exudate-derived compounds was assessed on bacteria growing in organic and inorganic P media. In the calcium phosphate medium, galactinol and 4-hydroxybutyric acid significantly increased the growth rate of B. thuringiensis, but threonine and the combination of compounds did not influence the bacterial growth rate (Fig. C). In contrast, threonine, 4-hydroxybutyric acid, and galactinol significantly decreased the growth rate of P. pseudoalcaligenes and E. cloacae, but applying a mixture of the compounds did not result in a significant change in growth rate (Fig. A,B). Similarly, galactinol and 4-hydroxybutyric acid significantly decreased the growth rate of the bacterial consortium, but no effect was observed for threonine and the combination of the compounds (Fig. D). When examining bacterial growth rate in the organic phytin medium, galactinol significantly increased the growth rate of B. thuringiensis, but threonine, 4-hydroxybutyric acid, and the combination of compounds did not have an effect (Fig. C). Similar to what was observed in the inorganic calcium phosphate medium, threonine and 4-hydroxybutyric acid significantly decreased the growth rate of P. pseudoalcaligenes and E. cloacae in the phytin medium, but the combination of compounds did not cause a significant change (Fig. A,B). Galactinol decreased the growth of P. pseudoalcaligenes but did not affect E. cloacae. Threonine, 4-hydroxybutyric acid, and galactinol significantly decreased the growth rate of the bacterial consortium, but no effect was observed with the combination of compounds (Fig. D). In summary, only galactinol showed a significant increase in the growth rate of B. thuringiensis under both organic and inorganic P conditions. E. cloacae and P. pseudoalcaligenes showed significantly reduced growth rates in both P media with all compounds except for the mix, which had a lower concentration of each compound. Effects of root exudates on enhancing the phosphorus solubilization ability of bacteria The effect of the three root exudate-derived compounds on the enhancement of P solubilization by bacteria was assessed. In the calcium phosphate inorganic medium, threonine, 4-hydroxybutyric acid, galactinol, and the combination of compounds significantly increased dissolved P in the medium for E. cloacae and P. pseudoalcaligenes (Fig. A,B). For B. thuringiensis, only threonine and 4-hydroxybutyric acid increased dissolved P (Fig. C). In contrast, threonine, galactinol, and the combination of compounds significantly increased dissolved P in the bacterial consortium, but 4-hydroxybutyric acid did not (Fig. D). In the uninoculated media, there were no significant differences between the added root exudate compounds (Table ). In the phytin (organic phosphate) medium, the effect of the compound additions on the enhancement of P solubilization was not significant for any of the bacterial strains (data not shown). Effects of root exudate soil amendments on plant biomass The impact of exogenous application of root-exudate compounds on plant biomass was assessed after periodically adding the compounds to corn plants growing in a nutrient-poor soil. Threonine addition significantly increased the fresh root biomass of corn compared to the control treatment (Table ) but did not influence the shoot or total plant biomass (shoots and roots).
The other compounds, galactinol, 4-hydroxybutyric acid, and the combination of compounds, displayed no significant impacts on corn root, shoot, or total fresh biomass (Table ). We note that while no significant differences were detected (other than for threonine), all treatments receiving the compounds tended to have higher root, shoot, and total plant biomass than the control pots (Table ). Effects of root exudates on plant and soil nutrient concentration Bi-weekly applications of threonine and 4-hydroxybutyric acid increased the concentrations of N and P in plant roots relative to the untreated control but did not significantly increase the levels of potassium, sulfur, calcium, or magnesium (Fig. ). Conversely, galactinol and the compound mixture did not affect the concentrations of N, P, S, or Ca in root tissues. Galactinol did significantly increase the magnesium concentration in roots (Table ). Effects on nutrient content were also calculated; however, no significant differences were found (Table ). The same applications of threonine increased soil-available potassium, calcium, and magnesium, but N and P were not significantly altered. The compound 4-hydroxybutyric acid increased calcium and magnesium in soil. Galactinol and the compound combination did not significantly affect K, S, Ca, or Mg levels. Galactinol, 4-hydroxybutyric acid, and the compound combination amended to the soil did not increase N and P content in soils (Table ).
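The growth rates compared in these results are typically derived from optical density readings during exponential growth. The sketch below shows one common way to do this (a log-linear fit of OD600 vs. time); the readings are hypothetical and the procedure is illustrative, not necessarily the exact one used in this study.

```python
# Minimal sketch: estimate a specific growth rate (mu, 1/h) from OD600
# readings in the exponential phase via a log-linear fit. Hypothetical data.

import numpy as np

hours = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
od600 = np.array([0.05, 0.09, 0.17, 0.33, 0.60])   # assumed exponential phase

mu, ln_od0 = np.polyfit(hours, np.log(od600), 1)   # ln(OD) = mu*t + ln(OD0)
doubling_h = np.log(2) / mu

print(f"specific growth rate mu = {mu:.3f} 1/h, doubling time = {doubling_h:.1f} h")
```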
It has been previously reported that certain root exudates from A. thaliana exhibited distinct profiles under different conditions of P availability (sufficient vs. deficient), and that these exudates lead to an increase in dissolved P in a low P environment . In the same study, a second group of compounds was found in high abundance under low P conditions, but no direct enhancement of P solubilization was observed for those compounds. Thus, we hypothesized that those root exudates must act on P solubilization via other means. This study investigates whether certain root-derived compounds, under conditions of P scarcity, modulate bacterial functional traits such as growth and P-solubilizing activity. Recent studies have shown that the manipulation of root exudate composition from root apices enriches certain bacterial communities throughout the root system . Here we found that the application of the amino acid threonine, the sugar galactinol, and the fatty acid 4-hydroxybutyric acid, all exudate compounds shown to increase under low P conditions , modulated the growth and activity of PSB strains under in vitro conditions. In addition, our findings suggest that the periodic exogenous amendment of threonine to a natural soil increased the growth of corn roots and increased the levels of plant-available K, Mg, and Ca in soils. We observed bacterial specificity in the effects of the amended compounds. For instance, galactinol increased the growth rate of B. thuringiensis but decreased the growth rate of E. cloacae and P. pseudoalcaligenes. Galactinol and other raffinose family oligosaccharides (RFOs) are emerging as crucial molecules produced by plants during stress responses that provide relief against pathogen infection, drought, and high salinity stress , . In addition, galactinol has been shown to be used by Agrobacterium as a nutrient source, providing a competitive advantage to colonize the rhizosphere of tomatoes . The same mechanism for the uptake of RFOs is highly conserved in bacterial symbionts and pathogens from the Rhizobiaceae family ; thus, diverse bacteria appear to have the capability to take up and metabolize this group of compounds. It has been reported that high sugar concentrations can inhibit bacterial growth, whereas lower levels of sugars can exhibit the opposite effect, which indicates that there is a concentration threshold determining whether certain sugars (and other compounds) act as growth inhibitors or as nutrient sources that stimulate growth . When assessing the effect of galactinol on PSB activity, we observed that galactinol did not enhance the solubilization of P by B. thuringiensis but did increase P solubilization by E. cloacae, P. pseudoalcaligenes, and the bacterial consortium. Sugar-like compounds such as galactose and galactosides have been reported to support microbial activity and growth of N-fixing Sinorhizobium meliloti before and during nodulation . Zhang et al. reported that free-living microorganisms in the rhizosphere can use root exudates such as sugars, amino acids, and other compounds to promote colonization and functional traits that support plant growth and nutrition . We note that galactinol increased P-solubilizing activity by E. cloacae and P. pseudoalcaligenes, but it reduced the growth rate of both bacteria. In contrast, galactinol increased the growth rate of B. thuringiensis, while maintaining its P-solubilizing activity.
The aforementioned comparisons between bacterial growth rate and P solubilization were made only for the calcium phosphate media, because P solubilization was not significantly affected in the phytin-based media. Galactinol has been shown to act as a signal molecule that can stimulate root colonization by Pseudomonas chlororaphis O6 in cucumber, eliciting an induced systemic resistance against the plant pathogen Corynespora cassiicola . When challenged by abiotic stresses such as drought and salinity, tobacco plants overexpressing galactinol synthase ( CsGolS1 ) demonstrated improved tolerance; however, bacterial mediation of these abiotic stress responses was not reported . In light of these findings, we hypothesize that galactinol could modulate the growth rate and P-solubilizing activity of PSB and that this effect could be concentration specific. Previous studies have demonstrated that adding C compounds such as glucose to the soil can increase microbial P utilization as compared to solubilization , , influencing the enrichment of rhizosphere bacteria . Similar to galactinol, the effect of threonine on PSB growth rate was strain specific. Threonine at a 0.1 mM concentration showed an inhibitory effect on the growth rate of E. cloacae and P. pseudoalcaligenes, but did not affect B. thuringiensis in either the organic or inorganic media. Interestingly, treatments with lower amounts of threonine (0.03 mM) from the compound combination did not decrease the growth rate of any of the bacterial strains studied here. Inhibitory effects of amino acids (e.g., cysteine) on E. coli at higher concentrations have been previously reported . Despite the negative effect on growth rate, threonine consistently enhanced the P solubilization of all the bacterial strains tested, suggesting a broader effect on PSB strain activity, but not growth rate. In support of this, recent findings show that amino acid metabolism is closely linked to plant–microbe interactions, providing signaling molecules, nutrients, and defense compounds . Amino acids such as threonine are constituents and important N, C or energy sources for the growth and activity of a range of bacteria . Further, several bacterial species from the genera Bacillus , Pseudomonas and Enterobacter have been shown to exhibit chemotaxis toward multiple amino acids, including threonine , . Carvalhais et al. showed that exudation of different amino acids in lower amounts, such as asparagine, ornithine, and tryptophan, can increase the abundance of the rhizobacteria Bacillus sp. and Enterobacter sp. In addition, root exudation of amino acids by P-deficient roots can stimulate the growth and activity of organisms involved in nutrient acquisition . However, the effects of amino acids on bacterial growth and activity are highly variable among bacterial species and are influenced by the environment and the physiology of the organism . Furthermore, bacterial growth inhibition, attraction, and repellent responses are caused by certain amino acids, and these effects are often reversed when the concentration decreases, suggesting the inability of some bacterial strains to metabolize higher concentrations of certain amino acids . For instance, Brisson et al. showed that shikimic and quinic acids were secreted by roots under phosphate stress, were preferentially absorbed by microorganisms, and correlated with root growth . Similarly, Harbort et al. showed that coumarins improve plant performance by eliciting microbe-assisted iron nutrition. Lin et al.
demonstrated that succinic acid and malonic acid altered the expression of functional genes of Enterobacter sp. PRd5 by increasing the concentration of pyrene-degrading enzymes. In addition, organic acids triggered the regulation of genes involved in signal transduction, energy metabolism, and carbohydrate and amino acid metabolism . These findings suggest that plants can selectively modulate their root exudation profile to stimulate the proliferation of groups of microorganisms that aid in P acquisition. The effect of 4-hydroxybutyric acid (4-HA) on bacterial growth rate followed the pattern observed for threonine. 4-HA also reduced the growth rate of the bacterial consortium in the calcium phosphate media, but positively impacted P solubilization in all three individual PSB strains, though not in the consortium. Hydroxy fatty acids such as 4-HA function as modulators of many signal transduction pathways in plants in response to different stresses , . Recent studies evidenced that fatty acids from plant root exudates can participate in strong plant–microbe interactions, stimulating N metabolism in rhizosphere bacteria . Lu et al. demonstrated stimulation of bacterial enzyme-mediated denitrification by the fatty acid amides oleamide and erucamide from duckweed root exudates. This evidence supports the hypothesis that compounds such as threonine and 4-HA could be acting as signals rather than simple C sources for certain plant-beneficial bacteria . We also noted that exogenous application of threonine to soils resulted in an increase in fresh corn root weight, while the other compounds applied did not affect plant growth. We hypothesize that the effect of threonine on plant biomass is a response to its ability to trigger activity and chemotaxis in a wide range of microbes, favoring positive nutritional feedback for plants. In support of this hypothesis, a study by Harbort et al. used plant fitness data, coupled with elemental content and transcriptomic analysis, to confirm that the benefits conferred by commensal microbes under iron limitation occur via a coumarin signaling-molecule mechanism relieving iron starvation. It is commonly held that plants and rhizosphere microbes consume and compete for free amino acids in the rhizosphere , . Plant roots are often outcompeted by microbes in the uptake of externally applied amino acids , . These observations have led to the speculation that amino acids may be taken up from the rhizosphere, where they are first scavenged and mineralized by bacteria, and then used as an inorganic N source by plants . In addition, under nutrient-limited conditions, bacterial survival strategies can increase their ability to catabolize amino acids . We found that threonine increased N and P concentrations in plant root tissues, and that available Ca and Mg in soils were higher as well. It was also found that the bacterial growth response was similar under organic and inorganic P, but the P-solubilizing activity varied. The three compounds tested impacted PSB activity under calcium phosphate but did not affect P solubilization under phytin. It has been reported that the ability of microbes to solubilize P is highly dependent on the source of P , . Thus, it appears that threonine, galactinol and 4-hydroxybutyric acid induce mineral-dissolving compounds such as organic acids that help the bacteria solubilize inorganic P. This is in contrast to the mechanism used by bacteria to solubilize/mineralize organic P, such as the secretion of phosphatases and phytases .
Lastly, this research expands on the potential application of specialized root exudate compounds, which could lead to agricultural technologies such as their use as elicitors of indigenous bacteria, fostering beneficial associations with plant roots that positively impact plant health and productivity.
Specialized metabolites, derived from root exudates, act as signals and sources for rhizosphere microorganisms, with implications for P availability and uptake by plants. This study examined the effects of specialized root exudate compounds (threonine, 4-hydroxybutyric acid and galactinol) on the growth and P-solubilizing activity of bacteria, as well as the implications for soil and plant nutrient uptake. The effects of the specialized compounds on bacteria were found to be species and P-source dependent. Under greenhouse conditions, threonine was shown to stimulate root growth and, together with 4-hydroxybutyric acid, to result in significantly higher N and P concentrations in root tissues. Our findings expand on the function of exuded specialized compounds and suggest alternative approaches to effectively recover residual P from soil. Further work should focus on identifying and testing root exudate-derived compounds to efficiently promote biological activity, growth and functional features, leading to improvements in nutrient use efficiency and the reduction of excessive applications of synthetic fertilizer to croplands.
Phosphorus solubilizing bacteria and root exudate-derived compounds

This study used the bacterial strains Enterobacter cloacae, Bacillus thuringiensis, and Pseudomonas pseudoalcaligenes, which were isolated from wild potato ( Solanum bulbocastanum ), previously screened for their ability to solubilize P, and tested in in vitro and in planta experiments , . Similarly, this study employed three root exudate-derived compounds, galactinol, threonine, and 4-hydroxybutyric acid, that were previously identified to occur in high concentrations in the root exudation profile of Arabidopsis thaliana grown under low P conditions .

Effect of root exudates on bacterial growth

The objective of this experiment was to measure the effects of root exudate-derived compounds on PSB growth with different sources of unavailable P. Five bacterial treatments ( E. cloacae, B. thuringiensis, P. pseudoalcaligenes, a consortium of the three strains and a sterile control) and five root exudate treatments (galactinol, threonine, 4-hydroxybutyric acid, a combination of the three, and a control) were grown in two different media with low P availability (calcium phosphate or phytin based). In total, there were 50 treatments with 4 replicates per treatment. A 10 μL diluted (OD₆₀₀ = 1; 1 × 10⁸) aliquot from each pure culture of E. cloacae, B. thuringiensis, and/or P. pseudoalcaligenes and 5 μL of each of the three compounds at 10 mM concentration were combined with 150 μL of calcium phosphate or phytin liquid medium, separately (one bacterial strain per compound) and in combination (one strain combined with the compound mixture), in a 96-well plate. Subsequently, the plate was incubated for 48 h at 25 °C in a spectrophotometer, and growth was monitored by optical density (660 nm). After incubation, the maximum specific growth rate for the culture (μmax) was used to compare the effect of each compound on bacterial growth, based on the calculations of Maier and Pepper . Liquid calcium phosphate/phytin medium without the addition of bacteria was used as a control. Deionized and DNA-free water was used to bring the controls to the same volume as the inoculated treatments.
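As a minimal illustration of the growth-rate calculation referenced above, the sketch below estimates μmax as the steepest log-linear slope of an OD₆₆₀ time series, which is the standard exponential-phase definition; the exact procedure of Maier and Pepper is not reproduced here, and the example readings are fabricated.

```python
import numpy as np

def mu_max(times_h, od_readings, window=4):
    """Estimate the maximum specific growth rate (h^-1) from an OD time series.

    Assumes exponential growth, mu = d(ln OD)/dt, and takes mu_max as the
    steepest least-squares slope of ln(OD) over a sliding window of readings.
    """
    t = np.asarray(times_h, dtype=float)
    ln_od = np.log(np.asarray(od_readings, dtype=float))
    best = float("-inf")
    for i in range(len(t) - window + 1):
        # slope of ln(OD) vs. time over this window of consecutive readings
        slope = np.polyfit(t[i:i + window], ln_od[i:i + window], 1)[0]
        best = max(best, slope)
    return best

# Fabricated 48 h series, one OD660 reading every 6 h:
times = [0, 6, 12, 18, 24, 30, 36, 42, 48]
od660 = [0.05, 0.06, 0.10, 0.19, 0.35, 0.55, 0.68, 0.72, 0.73]
print(f"mu_max ~= {mu_max(times, od660):.3f} per hour")
```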
Root exudate and bacteria effects on P solubilization

Using the same 50 treatments described above, we tested the effect of the root exudate compounds together with PSB on P solubilization. Using a 2.5 mm platinum wire loop, a streak of bacterial culture obtained from pure cultures of each of the three selected isolates was dipped into liquid Luria–Bertani medium and incubated separately in a rotary shaker at 170 rev min⁻¹ at room temperature overnight until reaching the mid-exponential growth phase. A 50 μL diluted (OD₆₀₀ = 1; 1 × 10⁸) aliquot from each pure bacterial culture grown in an Erlenmeyer flask and 50 μL of a given compound (galactinol, threonine, or 4-hydroxybutyric acid) at 10 mM concentration, stored in 15 mL cylindrical tubes, were added to 4.95 mL of liquid NBRIP (National Botanical Research Institute Phosphate) medium, for a final concentration of 0.1 mM, and incubated in a rotary shaker for 72 h . For the compound combination, the three dissolved compounds were combined (one-third part of each compound) and mixed at the same final concentration of 0.1 mM. For the inoculation of the bacterial co-inoculum, the three bacterial strains were prepared and mixed at the same final concentration (OD₆₀₀ = 1; 1 × 10⁸) and incubated for 72 h. Two plant-unavailable sources of P, calcium phosphate and phytin, were used to prepare the NBRIP medium. The NBRIP medium comprises glucose (10.0 g), Ca₃(PO₄)₂ (5.0 g), NaCl (0.2 g), MgSO₄·7H₂O (0.5 g), (NH₄)₂SO₄ (0.5 g), KCl (0.2 g), MnSO₄ (0.03 g) and FeSO₄·7H₂O (0.003 g), with a pH of 7.0–8.0. For phytin media preparation, calcium phosphate was replaced with 10 g of phytin (C₆H₆Ca₆O₂₄P₆). The pH of both initial P media was near neutral (~pH 7). Each bacterium treatment was run in an independent batch; thus, non-bacterial control treatments were included with each batch run. After incubation, the solution was centrifuged at 6000 rpm for 20 min to remove both the suspended bacterial cells and the remaining calcium phosphate/phytate. Sterile, liquid calcium phosphate/phytin medium, with each compound separately and without the addition of bacteria, was used as the control. The concentration of phosphate in the supernatant was analyzed according to the protocol of Soltanpour et al. and measured with an inductively coupled plasma-optical emission spectrometer (ICP-OES; Perkin Elmer 7300DV) at the Soil, Water and Plant Testing Laboratory of Colorado State University.
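The final concentration quoted above follows from simple dilution arithmetic (C1V1 = C2V2). The short check below verifies it and, for comparison, totals the greenhouse dosing described in the next subsection; the helper function is illustrative only.

```python
def diluted_conc_mM(stock_mM, stock_uL, final_mL):
    """Dilution via C1*V1 = C2*V2, with the stock volume converted to mL."""
    return stock_mM * (stock_uL / 1000.0) / final_mL

# In vitro assay: 50 uL of 10 mM compound into ~5.05 mL total volume
# (4.95 mL NBRIP medium + 50 uL culture + 50 uL compound stock)
print(round(diluted_conc_mM(10, 50, 5.05), 3))  # -> 0.099, i.e. ~0.1 mM as stated

# Greenhouse dosing: 1 mL of a 1 mM solution, twice a week for 6 weeks
umol_per_application = 1.0  # 1 mM * 1 mL = 1 umol
print(umol_per_application * 2 * 6)  # -> 12.0 umol of compound per pot overall
```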
Impacts of root exudates on soil nutrient availability and plant growth

Certified organic seeds of the commercial corn ( Zea mays ) cultivar 'Natural Sweet F1' from Johnny's Selected Seeds (Winslow, Maine) were grown under greenhouse conditions at the Horticulture Center of Colorado State University, Fort Collins, CO. The average temperature in the greenhouse was 20 to 25 °C and the experiment lasted six weeks. Seeds were sown in square pots (5 cm × 4 cm × 4 cm) containing 300 g of pine forest soil, collected to a depth of 30 cm (O horizon) from a natural area, Grey Rock Forest, Poudre Canyon, Bellvue, CO (40.69°N, 105.28°W, 1700 m a.s.l.). The climate is semiarid, with an average annual precipitation of 409 mm (usclimatedata.com, accessed 2021). The soil is classified as a sandy clay loam with an organic matter content of 3.3%, a nitrogen (N) content of 0.4 ppm, available P of 26.7 ppm based on AB-DTPA extract, and a pH of 6.8. Pine forest soil with no history of fertilizer amendment was used because of its undisturbed condition relative to highly managed agricultural soils. No fertilization or amendments were applied, and the corn plants were irrigated based on growth and demand, keeping soil moisture relatively constant. Pots with corn plants were assigned to each of five treatments, with 10 replicates per treatment. The treatments consisted of pots receiving one individual compound or the three in combination, as well as the control. The compounds galactinol, threonine, 4-hydroxybutyric acid, and their combination were applied to the base of the corn plants twice a week. A volume of 1 mL at 1 mM concentration was added to the pots each time, except for the control, which received an equivalent amount of pure water. The treatment with the combination of compounds also received additions at a total concentration of 1 mM (0.33 mL of each compound). Plants were harvested 6 weeks after emergence, roots were gently rinsed to remove soil particles, and the fresh weight of roots and shoots was recorded. Plants were oven dried at 90 °C for 72 h, and the dry weight was also recorded. Total P in the plant shoot and root tissues was analyzed separately by digesting the plant tissue in a block digester with HCl and HNO₃ and clearing with H₂O₂. The sample was then brought to a volume of 50 mL, and total P was read on an ICP-OES. Available P in the soil samples was determined using the Olsen P method . Both plant and soil N, P, potassium (K), calcium (Ca), and magnesium (Mg) analyses were performed at Ward Laboratories (Kearney, Nebraska).

Data analysis

The effects of the different root exudate compounds with and without bacterial strains were compared separately for each bacterium and P media treatment combination using one-way ANOVA. The effects of root exudate-derived compounds on bacterial growth rate were also compared separately for each bacterial treatment with one-way ANOVA. One-way ANOVA was also used to examine the effects of compound addition on plant dry biomass, and on P content and other nutrients in soil and plant tissue. Homogeneity of variance and normality were assessed for all analyses. A probability level of p = 0.05 was considered statistically significant. A t-test was used to compare nutrient concentrations between the control and individual compounds (a minimal computational sketch of these tests follows at the end of this section).

Research involving plants statement

The plants used in this study were grown from commercially available certified organic seeds. No special permits were required to obtain these seeds.
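As the sketch promised in the Data analysis subsection, the snippet below runs a one-way ANOVA, a control-versus-compound t-test and the named assumption checks with SciPy; the measurements are placeholders, not the study's data.

```python
from scipy import stats

# Placeholder dissolved-P values (four replicates per treatment) for one
# bacterium/medium combination; the real data live in the study's tables.
control        = [12.1, 11.8, 12.5, 12.0]
threonine      = [15.2, 14.9, 15.8, 15.1]
galactinol     = [14.0, 13.6, 14.4, 13.9]
hydroxybutyric = [14.8, 15.0, 14.5, 15.3]

# One-way ANOVA across treatments (alpha = 0.05, as in the study)
f_stat, p_anova = stats.f_oneway(control, threonine, galactinol, hydroxybutyric)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# t-test comparing an individual compound against the control
t_stat, p_t = stats.ttest_ind(threonine, control)
print(f"t-test threonine vs. control: t = {t_stat:.2f}, p = {p_t:.4f}")

# Assumption checks named in the text: normality and homogeneity of variance
print(stats.shapiro(control).pvalue, stats.levene(control, threonine).pvalue)
```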
Electronic consultation use by advanced practice nurses in older adult care—A descriptive study of service utilization data | 19d887b8-2175-473c-9faa-3b4d0f306fa3 | 10006590 | Internal Medicine[mh] | INTRODUCTION Older adults frequently face challenges when accessing care (World Health Organization, ). In a recent poll, 2 million Canadians aged 55+ identified difficulties seeing a primary care provider (PCP), and long wait times for physician specialist care, surgery and diagnostic tests as challenges encountered in their provincial healthcare systems (Angus Reid Institute, ). Like many other countries, Canada has a short supply of primary care physicians (Maier & Aiken, ) and decreasing numbers of geriatricians (Bloom et al., ; Gordon, ) further exacerbating the situation. These gaps can be bridged by Advanced Practice Nurses (APNs) – an internationally recognized group of healthcare providers improving access to care, reducing physician workload and mitigating physician shortages (Bryant‐Lukosius & Martin‐Misener, ; Martin‐Misener et al., ). Recognized APN roles in Canada are the clinical nurse specialist (CNS) and nurse practitioner (NP) (Canadian Nurses Association, ). APNs focus on the clinical domain in various practice settings, including care coordination and providing clinical expertise through patient‐centred consultation with other healthcare providers.
BACKGROUND

Digital tools, such as electronic consultation (eConsult), could improve the quality of care for older adults and are uniquely positioned to further integrate APNs into the healthcare system. eConsult enables asynchronous, consultative provider-to-provider communication between PCPs and specialists. Its use allows PCPs to access timely specialist advice, decrease wait times and reduce burdens, such as unnecessary face-to-face visits, for patients (Joschko et al., ; Liddy, Drosinis, & Keely, ). The Champlain BASE™ (Building Access to Specialists through eConsultation) eConsult service, launched in Ottawa, Ontario, is one such program and has been available to the region's family physicians (FPs) and APNs alike since its inception in 2010. NPs, who have regulatory authority to diagnose, prescribe and order tests autonomously, can register as PCPs and submit questions to specialists. Additionally, CNSs and NPs are eligible to register as specialists to answer questions submitted to their specialty group via eConsult. APNs currently using the service have expressed high levels of satisfaction with eConsult, citing the tool's ability to reassure patients and facilitate high-quality interactions with specialists (Liddy et al., ). This intersection between APNs and eConsult is encouraging since eConsult services can facilitate timely access to specialist advice for older adults (Liddy, Drosinis, Joschko, & Keely, ) and APNs have been listed as integral to efficient health systems (Canadian Nurses Association, ). While APNs seem well-positioned to adopt eConsult when caring for older adults, the use of eConsult among geriatric APNs is not well understood. As such, we sought to describe APNs' use of and experience with the Champlain BASE™ eConsult service in their delivery of care to older adults.
METHODS

We conducted a retrospective descriptive analysis of eConsults and PCP feedback survey data collected through the Champlain BASE™ eConsult service. Eligible eConsults were completed between January 1 and December 31, 2019, submitted by an NP or responded to by an APN, and concerned a patient aged 65 years or older. This study was reported in line with the STROBE (STrengthening the Reporting of OBservational studies in Epidemiology) guidelines (Appendix ).

3.1 The Champlain BASE eConsult service

The Champlain BASE™ eConsult service operates in the Champlain health region. Located in Eastern Ontario, this health region has a population of 1.3 million, of which 250,000 are aged 65 years or older. All PCPs (which includes FPs and NPs) are eligible to use the service. Once registered, PCPs may submit a non-urgent, patient-specific clinical question to one of 150 specialty and sub-specialty groups. When submitting an eConsult, PCPs can attach additional files deemed relevant to the case (e.g., imaging or test results), which are then assigned to a specialist based on their availability. Specialists are asked to reply within 7 days. When responding, specialists can provide a recommendation, request more information or recommend a face-to-face referral. The exchange continues until the PCP decides to close the case. After each case, PCPs complete a mandatory five-question close-out survey (Table ).

3.2 Data collection

The eConsult service automatically collects the following information: the type of PCP (i.e., FP or NP) submitting the eConsult and the location of their practice (i.e., organization name and postal code), the specialty group referred to, and the specialist's response time and self-reported billing time associated with each eConsult. Data on the type of specialist were determined using unique identifiers assigned to specialists on the platform. The mandatory close-out survey (Table ) consists of five questions asking the referring PCP about the perceived usefulness of the advice received, the referral outcome for the eConsult, its educational value, its relevance for upcoming continuing medical education (CME) activities, and an optional open-text question for any further feedback. Data from Question 5 were not included in the present analysis. The practice settings of referring NPs were identified and categorized as acute care hospitals, NP-led clinics (NPLCs), Community Health Centres (CHCs), Family Health Teams (FHTs), long-term care (LTC) homes, or "Other." Cases from acute care hospitals, CHCs, FHTs, NPLCs and "Other" were identified by linking the name of the primary organization registered with the referring NP (Glazier et al., ; Mattison & Wilson, ) with publicly available information. Cases submitted from LTC settings were determined using a method our group previously developed (Fung et al., ), which allowed us to identify a subset of providers working in LTC homes and their eConsult cases. We also assessed the rurality of each practice setting (urban versus rural). Cases were identified as "rural" if the Rurality Index for Ontario (RIO) score was 40 or greater (Glazier et al., ). We displayed the geographical distribution of eConsults submitted and closed by NPs on a map of Ontario using the practice locations of participating NPs. These were determined by the forward sortation area (the first three characters in a Canadian postal code).
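The two geographic derivations above (the forward sortation area and the RIO rural cut-off) reduce to a few lines of code. The sketch below is illustrative only; the postal codes and RIO scores are fabricated, and the service's actual implementation may differ.

```python
def forward_sortation_area(postal_code: str) -> str:
    """First three characters of a Canadian postal code, e.g. 'K1H 8L6' -> 'K1H'."""
    return postal_code.replace(" ", "").upper()[:3]

def is_rural(rio_score: float) -> bool:
    """Rural if the Rurality Index for Ontario (RIO) score is 40 or greater."""
    return rio_score >= 40

# Fabricated practice records: (postal code, RIO score of the practice location)
practices = [("K1H 8L6", 0.0), ("K0J 1J0", 55.4)]
for code, rio in practices:
    print(forward_sortation_area(code), "rural" if is_rural(rio) else "urban")
```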
3.3 Statistical analysis

We present the total number of cases closed by NPs (the referring provider) or answered by APNs (the responding specialist) for older patients during the investigation period, and the number and types of specialty groups consulted. We computed means and standard deviations, and medians and interquartile ranges (IQR), for the following continuous variables: specialist response time, specialist time billed and cost per case (a minimal computational sketch of these summaries follows at the end of this section). We present frequencies of responses to the survey questions. The frequency and distribution of eConsults across different practice settings and the rurality of these settings are described.

3.4 Research ethics approval

The Ottawa Health Science Network Research Ethics Board provided ethics approval for this study (Protocol 2009848-0).
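As the sketch promised in Section 3.3, the snippet below computes the median, IQR and seven-day response share for a set of response intervals; the values are placeholders, not the study's raw data.

```python
import numpy as np

# Placeholder specialist response intervals in days (not the study's data)
response_days = np.array([0.1, 0.2, 0.4, 0.8, 0.9, 1.5, 2.8, 3.0, 4.2, 6.5])

median = np.median(response_days)
q1, q3 = np.percentile(response_days, [25, 75])  # interquartile range bounds
print(f"median = {median:.1f} d, IQR = {q1:.1f}-{q3:.1f} d")
print(f"answered within 7 days: {np.mean(response_days <= 7):.0%}")
```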
RESULTS

We identified 430 eConsults that involved APNs and related to patients 65 years or older, representing 11.0% of all eConsults (n = 3,909) closed on the service in 2019. Of these, 421 (97.9%) were initiated by NPs, and 23 (5.3%) were submitted to a CNS serving as the specialist. The latter included NP-to-CNS cases (n = 14) and FP-to-CNS cases (n = 9). One hundred and three individual NPs closed between one and 28 eConsults. The top 5 specialties accessed by NPs (n = 421) were dermatology (25%), haematology (9%), cardiology (7%), gastroenterology (6%) and endocrinology (6%), accounting for 53% of all eConsults (Figure ). One CNS answered all 23 cases submitted to a CNS-led specialty, responding on behalf of the wound care (n = 22) and ostomy and peristomal complications (n = 1) specialty groups.

4.1 Response times

Table provides details on APN service utilization. Among NP-submitted cases (n = 421, 98%), the median response interval was 0.9 days (IQR: 0.2–3.0), the median specialist time billed was 15 minutes (IQR: 10.0–20.0 min) and the median cost per case was $50.00 (IQR: $33.30–$66.60). Ninety-two percent of NP-submitted cases were responded to in 7 days or less. Among CNS-answered cases (n = 23), the median response interval was 0.8 days (IQR: 0.2–1.5), the median time billed was 20 minutes (IQR: 15.0–30.0 min) and the median cost per case was $16.70 (IQR: $12.50–$25.00). All CNS-answered cases were responded to in 7 days or less (the remuneration arithmetic implied by these figures is sketched at the end of this section).

4.2 Close-out survey

Nurse practitioners' responses to the first four questions of the close-out survey (Table ) are presented in Figure . Sixty-seven percent of NPs received clear advice for a new or additional course of action that they could implement, and 5% received advice for a new or additional course of action that they could not implement (Figure ). Seventy-three percent of eConsults did not require a face-to-face referral after the consultation; this includes 43% of eConsults where a referral was initially contemplated but could then be avoided after the eConsult interaction (Figure ). Overall, the platform facilitated a change in referral decision-making in 45% of NP-submitted eConsults. NPs rated eConsults to be valuable (20%) or very valuable (70%) in terms of their helpfulness and/or educational value (Figure ). Most responding NPs agreed (36%) or strongly agreed (22%) that clinical topics covered in their eConsult interactions were worthy of consideration for future CME events; 38% of responses were neutral (Figure ).

4.3 Geographical distribution

One hundred sixty-nine eConsults (40.1%) were submitted from a CHC, 100 (23.8%) from an FHT, 61 (14.5%) from an LTC setting, 27 (6.4%) from an NPLC and 15 (3.6%) from an acute care hospital. The "Other" category included 49 (11.6%) eConsults from various organizations and, given the small number of cases in each category, these were combined into one group to maintain the anonymity of the organizations. Overall, we identified 80 eConsults (18.9%) submitted by NPs practicing in a rural setting. Geographically, 80% of eConsults were closed by NPs in the Champlain region and 20% in other regions of Ontario (Figure ).
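As noted in Section 4.1, per-case costs track the self-reported billing time. The arithmetic below backs out the hourly remuneration implied by the reported medians; these rates are inferred from the figures above, not stated in the service data.

```python
def implied_hourly_rate(cost_per_case: float, minutes_billed: float) -> float:
    """Hourly rate implied by a per-case cost and the time billed for it."""
    return cost_per_case / (minutes_billed / 60.0)

print(implied_hourly_rate(50.00, 15))   # NP-submitted cases -> 200.0 ($/h)
print(implied_hourly_rate(16.70, 20))   # CNS-answered cases -> ~50.1 ($/h)
```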
DISCUSSION

Our findings demonstrate that APNs use eConsult in a variety of practice settings to provide timely access to specialist advice for older patients. APNs almost exclusively served as the referring PCP submitting clinical questions for older patients (fulfilled by NPs; 97.9% of eConsults) rather than as the responding specialist (fulfilled by a CNS; 2.0% of eConsults). For NPs submitting eConsults, the service was highly valued, delivered new or confirmatory clinical information and often led to the avoidance of a face-to-face referral. To our knowledge, this is the first study describing characteristics of utilization and uptake of eConsult in advanced practice nursing for Ontario's older population. Achieving timely specialty advice is important for older adults. Disability and co-morbidity overlap with other deficits associated with frailty (Theou et al., ), producing complex, interacting medical and social difficulties that pose challenges for health systems (Prince et al., ). For example, transportation to appointments can be difficult, with frailty and mobility issues being more common in older adults, who also have higher referral rates (Collard et al., ; Davis et al., ). eConsult fosters access to specialist advice while avoiding the burden of face-to-face referrals. As in a previous study examining PCPs' use of the Champlain BASE™ eConsult service for older adults (Liddy, Drosinis, Joschko, & Keely, ), the most common specialties NPs submitted to were dermatology, haematology, cardiology, gastroenterology and endocrinology. While cardiovascular disease is a leading contributor to the burden of disease in people aged 60+ (Prince et al., ), the proportion of eConsults to cardiology (7%) in our sample was roughly one-third of that to dermatology (25%). This implies that contributors to the burden of disease among older adults are not necessarily associated with the drivers of eConsult use among APNs. Further research on clinical topics and the types of questions asked could clarify the reasons behind NPs' eConsult usage when seeking specialty advice for older patients. Only one CNS accounted for all eConsults sent to any APN in our sample. Although APNs can register as specialists on the Champlain BASE™ eConsult service, scope-of-practice limitations and available payment models may present barriers to adoption. This highlights that there is room to grow the adoption of eConsult by APNs providing specialty services, particularly CNSs, and further research should investigate specific barriers and enablers to facilitate further integration of this tool into the APN role. For example, the benefit of having clinical champions for eConsult uptake has already been observed in LTC (Helmer-Smith et al., ) and, with similar support for this tool, could be observed with APNs. An international scoping review found that, using a variety of markers including service utilization and patient satisfaction, NPs providing care for older adults consistently produce equivalent or better outcomes compared with physician care alone or usual care across various geriatric settings (Chavez et al., ). Despite such findings, the APN role, which includes NPs and CNSs, is underused, and its full potential in Canada has yet to be realized (Canadian Nurses Association, ). Perhaps the underrepresentation of APNs being consulted on the eConsult platform for specialty advice regarding older adults is a reflection of this.
Our findings showed that NPs most frequently submitted eConsults from CHCs, which, in Ontario, typically serve disadvantaged populations (Glazier et al., ). Older adults with low income are over-represented in CHCs compared with settings employing other models of care (Glazier et al., ), suggesting that eConsult is well suited to equip NPs to improve access for these patients. eConsults submitted from LTC were less common (14.5% of cases), but LTC still represents a notable setting in which NPs have adopted eConsult. LTC homes are complex healthcare environments that can benefit from the addition of geriatric NPs, who typically have a broader scope of practice in this setting compared with others and who have been shown to positively impact key outcomes, including reduced health service utilization (Chavez et al., ). There is evidence that LTC NPs may enhance their practice by adopting eConsult, which has been shown to be feasible in an LTC setting (Helmer-Smith et al., ). Furthermore, eConsult adoption in LTC would be timely, given that the COVID-19 pandemic experience and other long-standing issues in Canada's LTC homes have spurred calls for increased adoption of advanced technologies in the sector (Gauvin et al., ). There is an opportunity to expand the implementation of eConsult services in new regions globally to advance the well-being of older adults. An environmental scan of eConsult services available worldwide identified 53 eConsult services from 17 different regions in the United States, Canada, Brazil and Spain (Joschko et al., ). A more recent systematic review identified a similar distribution, with the majority of studies on eConsult conducted in the United States and Canada, and some in Brazil, Europe (i.e., Spain, Italy, Austria, the Netherlands) and Australia (Liddy et al., ). Internationally, NPs are already being used extensively in geriatric care (Chavez et al., ). Our findings demonstrate that eConsult can supplement advanced nursing practice in a variety of healthcare settings, supporting the notion that APNs are well positioned to help promote the adoption of this digital health innovation to address the unique needs of older adults across the globe.

5.1 Limitations

Our study has several limitations: (1) the routine utilization data collected automatically by the eConsult service do not permit the exploration of patient outcomes after eConsult case completion; (2) the practice location associated with each PCP registered on the service, which was used to infer the practice settings of NPs in the sample, may not always reflect the setting from which the NP is providing care; (3) since the mandatory close-out survey is not distributed to specialists, no survey response data from the perspective of the CNS were available; and (4) given the small sample size of eConsults answered by CNSs, results may not be generalizable. Other studies on NPs in geriatric care have been similarly limited, with outcomes based on data generated by a small number of NPs (Chavez et al., ). This is a limitation but also reflects the potential for prospective CNSs considering eConsult adoption. By championing the use of eConsult in their practice, the actions of one CNS are simultaneously impactful for their patients and the profession. Future studies should pursue larger sample sizes, perhaps by including patients of all ages.
The extent to which CNSs are providing consultation services in other specialty areas and the frequency of FPs consulting CNSs through eConsult (FP‐to‐CNS) compared with NPs (NP‐to‐CNS) are important topics of future inquiry.
CONCLUSION

Our study describes the use of eConsult as a tool among APNs in various practice settings in Ontario and highlights the importance of advanced practice nursing in the care of older adults. Although APNs participated as senders and receivers of eConsults, APNs as specialists represented a small proportion of the overall utilization, with most participating NPs acting as PCPs. Further research is needed to better understand how to implement such technology in the profession. Advocacy should be considered to increase adoption, particularly for APNs providing care for older patients with complex health needs and barriers to accessing health services. Our results provide baseline data for academics, policymakers, nursing leaders and clinical champions interested in exploring innovative windows of opportunity to integrate APNs into the healthcare system.

Relevance to clinical practice

We propose several opportunities for eConsult adoption in advanced practice nursing. First, further expansion of eConsult is possible from the perspectives of referrers and responders. Referrers – NPs adopting eConsult to submit questions on behalf of patients – may continue to use this tool to facilitate improved access for older adults in key geriatric settings such as LTC (Chavez et al., ). Responders – primarily CNSs answering eConsults submitted to their specialty area – offer high value in specific areas (e.g., wound care) (Canadian Nurses Association, ) and could experience increased uptake once current payment, credentialing and health human resource limitations are addressed. Second, further adoption of eConsult can benefit the integration of the APN role in the health workforce. The literature shows that models including APNs as part of an interprofessional team enable their integration into healthcare systems (Canadian Nurses Association, ; Sangster-Gormley et al., ). For example, in a 2008 survey of Ontario's primary healthcare NPs, a high percentage of respondents agreed that the physician with whom they worked most often understood their role (87%) and supported their full scope of practice (93%) (Koren et al., ). This suggests that fostering interprofessional awareness and an understanding of each profession's role are building blocks for APN role integration, especially since a lack of role clarity has been identified as a barrier to the integration of advanced practice nursing roles (Donald et al., ). In this study, NPs consulted with specialists from 37 different specialty groups, and the CNS responded to consultation requests from FPs and NPs, demonstrating eConsult's ability to promote collaborative interprofessional environments. Future work with larger sample sizes may further explore eConsult-based interprofessional networks and the quality of the interactions arising from them. Lastly, another opportunity for eConsult adoption in advanced practice nursing lies in the potential for this tool to serve CME efforts. eConsult has received recognition from PCPs and specialists for its educational value and for generating learning opportunities aligned with the challenges encountered in practice (Archibald et al., ; Keely et al., ). NPs in this study frequently agreed (in 58% of cases) that the clinical topics covered in their interactions with specialists were highly educational and worthy of consideration in future CME events.
Similarly, a 2016 study of NPs' and FPs' eConsults revealed that NPs considered their conversations with specialists a great learning opportunity (Liddy, Deri Armstrong, McKellips, & Keely, ). The Canadian Nurses Association deems CME a key element for enabling APNs to keep pace with the changing needs of the healthcare system, such as those posed by the growing population of older adults (Canadian Nurses Association, ). Additionally, there is potential for eConsult to serve as an accurate needs assessment for future CME activities by providing real‐time data on the questions most frequently posed by practicing APNs.
Ramtin Hakimjavadi, Sathya Karunananthan, Cheryl Levi, Kimberly LeBlanc, Sheena Guglani, Mary Helmer‐Smith, Erin Keely and Clare Liddy each made substantial contributions to either the conception and design, acquisition of data, or analysis and interpretation of data and agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. Ramtin Hakimjavadi, Sathya Karunananthan and Clare Liddy were involved in drafting the manuscript, and all authors were involved with revising it critically for important intellectual content. Ramtin Hakimjavadi, Sathya Karunananthan, Cheryl Levi, Kimberly LeBlanc, Sheena Guglani, Mary Helmer‐Smith, Erin Keely and Clare Liddy have given final approval for the version to be published. Each author has participated sufficiently in the work to take public responsibility for appropriate portions of the content.
Funding for this project was provided by the Ontario Ministry of Health and INSPIRE‐PHC. The authors affirm their independence from the funders. The funders played no role in the study design, data collection, analysis or interpretation of the findings, or in the preparation of this publication. The opinions, results and conclusions reported in this article are those of the authors and are independent from the funding sources. No endorsement by the Ontario Ministry of Health is intended or should be inferred.
Dr Liddy and Dr Keely are co‐founders of the Champlain BASE eConsult Service, but they have no commercial interest in the service and do not retain any proprietary rights. As Co‐executive Directors of the Ontario eConsult Centre of Excellence, they receive salary support from the Ontario Ministry of Health. Dr Keely answers occasional eConsults (less than 1 per month) as a specialist through the service, for which she is reimbursed. Other authors report none.
Appendix S1: Supporting Information
Integration of Task-Based Exoskeleton with an Assist-as-Needed Algorithm for Patient-Centered Elbow Rehabilitation

As people get older, they may experience age-related neuromuscular and sensorimotor degeneration, resulting in disabilities and limiting the range of motion (ROM) of joints and coordination. With an aging population, healthcare systems will face significant challenges to meet the growing demand. The COVID-19 pandemic has highlighted the need for safe and effective home-based and telerehabilitation settings. Cost-effective automation devices like exoskeletons, equipped with safety and assist-as-needed controls, are critical for the success of these interventions and rehabilitation programs. Home-based and telerehabilitation programs can reach more patients while reducing physician interactions and overall rehabilitation costs . Individuals who have lost their range of motion (ROM) for different reasons are commonly advised to pursue physical therapy as a means of recovering their lost ROM [ , , ]. Physical therapy can be a demanding, lengthy, and costly process, which can cause individuals undergoing therapy to lose interest. One potential solution to motivate individuals with disabilities to participate and continue physical therapy is to reduce recovery time and provide real-time feedback to promote motivation. By implementing these strategies, the cost of treatment can also be reduced. Research has shown that high-intensity repetitive tasks can improve recovery time for patients undergoing physical therapy . Increasing the number of repetitions and intensity of an exercise during a therapy session may impact its quality, as it can lead to physical therapist fatigue, particularly if multiple patients are treated within a short time frame. To mitigate this challenge, incorporating robotic devices into the therapy treatment can provide a more efficient approach . Most of these robots, exoskeletons, have different sensors that can be used to collect data for further analysis, such as for safety and to give feedback to patients regarding the treatment and their progress . Aside from the choice of exoskeleton type (joint-based or task-based ), the control algorithms to drive them are an important factor that can help to improve the outcome of physical rehabilitation treatments. A physical therapist assists the patient depending on the severity of the joint impairment. As the treatment progresses, the patient may need less assistance from the professional caregiver, to the point where the patient does not need assistance at all . Researchers have recently focused on assist-as-needed (AAN) algorithms as a means of enabling exoskeletons to detect when assistance is necessary for a person to perform a given task. Various AAN algorithms have been proposed in the literature, including impedance control. This control strategy aims to establish a connection between the force applied and the trajectories followed, modeled as a mass-spring-damper system. In the field of rehabilitation, this AAN approach has been utilized by estimating the algorithm parameters specific to each individual patient. In the study presented in , a three-degree-of-freedom (DOF) mechanism was utilized, which was constrained to a planar circular motion.
However, it should be noted that this control strategy can only adjust the torque supplied by the exoskeleton, as it assumes accurate knowledge of the torques/forces generated by the human joints. Furthermore, the mass, spring, and damper parameters must be identified for each subject individually. An alternative AAN method documented in the literature is to estimate joint torques by analyzing surface electromyography (sEMG) signals produced by the joint muscles, as demonstrated by George et al. . Hu et al. also employed this technique to design an assist-as-needed (AAN) algorithm for controlling an elastic cable-driven elbow flexion-extension exoskeleton. They estimated joint torques offline, which differs from impedance control in that it estimates the external joint torques/forces provided by the subject. However, this approach relies only on sEMG information, which can vary over time, leading to a decline in controller performance, particularly in patients who are commencing physical therapy. Although these AAN algorithms offer assistance to patients, they require patients to be able to move their limbs partially within the range of motion (ROM) of the given task to undergo training. In this study, we used a task-based exoskeleton designed to replicate the elbow's ROM, capable of providing assistance to patients going through physical therapy treatments. In addition, an AAN scheme that provides support to the patient for as long as it is needed during an elbow flexion and extension exercise is proposed. Instead of building a model to predict the torques generated by the patient's joints, we build a model that uses low-cost FSR sensors to predict when the patient is moving the limb unaided; this allows the exoskeleton to act as a follower without providing support, which is especially relevant for patients starting physical-rehabilitation treatment, for whom physiological signals are not a suitable basis for estimating human-joint torques/forces. Conversely, if the patient is not able to move the elbow joint, the algorithm allows the exoskeleton to provide assistance so that the wearer can finish the task. To assess the efficacy of the mechanism and the AAN algorithm, a human-subject test was conducted on five people with disabilities. The sEMG signals from the main flexor muscle, the bicep, were measured and analyzed to determine when the individual was flexing their arm independently and when the exoskeleton was providing complete support. The remainder of this paper is structured as follows: describes the mechanism synthesis procedure and the resulting mechanism. In , the 3D-printed prototype of the exoskeleton, the coupled human-exoskeleton model, and the hardware used in the study are presented. outlines the proposed AAN algorithm. The real experiments and their findings are presented in . Lastly, the paper concludes with , which includes the study's conclusions and recommendations for future research.
This study uses a Bennet Linkage , a spatial 1-DOF parallel mechanism with four revolute joints, as a task-based exoskeleton to create elbow flexion and extension motion. Most researchers have modeled the elbow joint as a simple hinge joint, assuming the elbow joint axis is fixed in its range of motion [ , , ]. However, research conducted with electromagnetic sensors has shown that the joint's axis moves throughout its range of motion, leading to three-dimensional motion of the forearm . In that study , Bottlang et al. modeled the rotation of the ulna around the humerus using screw displacement axes, and the results showed that these displacements varied from 2.6–5.7 ∘ in rotation and 1.4–2.0 mm in translation. Similarly, comparable results were found through simulation in our previous work , where an assessment of a joint-based exoskeleton was performed in the musculoskeletal software OpenSim. Thus, rather than utilizing a simplified 1-DOF hinge joint (joint-based) to simulate the elbow joint, this study has devised a task-based 1-DOF spatial mechanism to generate elbow flexion-extension motion. The synthesis was performed based on spatial kinematic information from a state-of-the-art motion capture system, recorded while a person executes the task. Based on the trajectories generated from elbow flexion–extension, it was determined that a Bennet Linkage is a suitable solution to replicate the motion. Once the topology of the task-based exoskeleton is chosen, the position and orientation of the trajectory points are transformed into dual quaternions. These are then utilized in an optimization algorithm to determine the link dimensions and joint orientations for the Bennet Linkage while meeting the mechanism constraints. The flowchart of the synthesis procedure is presented in . The results of the approach in our work were used to model and build a prototype in computer-aided design (CAD) software. The resultant mechanism is shown in . The detailed synthesis procedure is discussed in our previous work .
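To make the pose-to-dual-quaternion step concrete, the following minimal Python sketch converts a captured trajectory pose (a rotation quaternion plus a translation vector) into a dual quaternion of the form q = q_r + ε q_d with q_d = ½ t q_r. This is an illustrative helper under standard dual-quaternion conventions, not the authors' synthesis code, and the function names are hypothetical:

import numpy as np

def quat_mul(p, q):
    # Hamilton product of quaternions stored as [w, x, y, z]
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def pose_to_dual_quaternion(q_r, t):
    # Real part: unit rotation quaternion. Dual part: 0.5 * (0, t) * q_r,
    # which encodes the translation t applied after the rotation.
    q_r = np.asarray(q_r, dtype=float)
    q_r = q_r / np.linalg.norm(q_r)
    t_quat = np.array([0.0, t[0], t[1], t[2]])
    q_d = 0.5 * quat_mul(t_quat, q_r)
    return q_r, q_d

# Example: a 90-degree rotation about z with a 10 mm translation along x
q_real, q_dual = pose_to_dual_quaternion(
    [np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)], [10.0, 0.0, 0.0])

Each captured pose along the flexion-extension trajectory would be converted in this manner before being passed to the dimensional-synthesis optimization.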
The exoskeleton prototype obtained through the synthesis procedure discussed in is presented in . The spatial four-bar mechanism has been 3D printed using PLA material, and it is equipped with metal reinforcement. Its active joint is powered by a NEMA 23 stepper motor with a 46:1 planetary gearbox, b. The NEMA 23 stepper motor is driven by a control unit that contains a Teensy 4.1 development board with a Cortex-M7 as the processor unit, a micro-stepper driver, and a 24-volt power supply, see a. Additionally, the task-based exoskeleton is equipped with passive joints that can be adjusted to different subjects' anthropometric measurements; these are highlighted in red in . In addition, the exoskeleton has two attachments to secure the wearer's arm and forearm. One attachment confines the wearer's arm to the ground, while the other constrains the forearm to move along with the exoskeleton across its range of motion. To acquire feedback from the wearer while the exoskeleton is active, two different types of sensors are used: a Force Sensing Resistor (FSR) , see , and surface electromyography (sEMG) sensors, Delsys Research system, see . The FSR is utilized to measure the force exerted by the wearer on the forearm holder during task execution. The data from the FSR are integrated into the proposed AAN control strategy in this study. The FSR sensor is mounted onto the forearm holder as shown in . The application of force on the FSR results in a decrease in its electrical resistance. This change in resistance, combined with a 10 kΩ resistor and a 5-volt power supply, leads to a variation of the voltage drop at the FSR terminals. This variation in voltage is measured by a 10-bit analog–digital converter peripheral on an Arduino MEGA 2560 board, see . On the other hand, the sEMG sensors, placed on the bicep and tricep muscles, provide information on the level of muscle engagement in millivolts. This information can be used to assess the impact of the exoskeleton on the wearer. The Delsys acquisition system possesses 16 channels and is capable of collecting data at a sampling rate of approximately 2000 Hz. The system comes with the EMGworks software, which allows the user to collect and analyze sEMG data by setting up timed experimental tasks. Additionally, the manufacturer provides an SDK (Software Development Kit) and API (Application Programming Interface) that allow the integration of the Delsys acquisition system with third-party software. For our purposes, we used the API in order to synchronize the sEMG data collection process with the FSR sensor and the estimated position of the stepper motor, θ(t). The interface schematic of the human-exoskeleton system with the sensors is presented in . The position of the motor is estimated based on the number of steps it takes, while the FSR measurements are sampled at a frequency of 100 Hz. This information is sent to the main computer, using the RS232 communication protocol, where the high-level control logic is processed. Likewise, the sEMG measurements are sent to the computer through USB communication. On the computer, a Python interface has been developed to communicate with each one of the peripherals and process the information received. This is done to create a mathematical model to infer when the wearer requires assistance while using the task-based exoskeleton.
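As a rough illustration of the sensing chain just described, the sketch below converts a raw 10-bit ADC count from the FSR voltage divider into the divider voltage and the FSR resistance. The 10 kΩ fixed resistor and 5 V supply come from the text; placing the FSR on the high side of the divider is an assumption made for illustration, not a statement about the actual wiring:

import numpy as np

V_SUPPLY = 5.0      # volts, divider supply described in the text
R_FIXED = 10_000.0  # ohms, fixed divider resistor (assumed pull-down)
ADC_MAX = 1023      # full scale of the 10-bit ADC on the Arduino MEGA 2560

def adc_to_fsr(adc_counts):
    # Voltage at the divider midpoint, read by the ADC
    v_out = V_SUPPLY * adc_counts / ADC_MAX
    v_out = float(np.clip(v_out, 1e-3, V_SUPPLY - 1e-3))  # guard divisions
    # With the FSR on the high side: v_out = V * R_FIXED / (R_FSR + R_FIXED)
    r_fsr = R_FIXED * (V_SUPPLY - v_out) / v_out
    return v_out, r_fsr

voltage, resistance = adc_to_fsr(512)  # mid-scale reading: ~2.5 V, ~10 kOhm

Pressing harder on the FSR lowers its resistance and raises the measured voltage, which is the behavior the AAN feature described in the next section relies on.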
We propose an Assist-as-Needed (AAN) algorithm to identify instances where the wearer cannot execute the elbow-flexion task, using FSR measurements and feedback on the input angles, θ(t), of the exoskeleton. The information obtained is used to develop a mathematical model to infer the period of inactivity of the person by employing a supervised machine learning approach, least-squares regression . 4.1. Model Development A preliminary analysis was conducted to study the FSR measurements while a person rested their forearm, as shown in , on the exoskeleton during 10 repetitions of changes in input angle from 0∘ to 60∘ and vice versa. To reduce external noise affecting the FSR measurements, a 2nd-order Butterworth filter at a sample rate of 100 Hz was implemented. The point-cloud distribution of the FSR changes is shown as a scatter plot in . As can be observed, the FSR measurements at different angle values are not constant during the whole experiment. A weighted $L_2$-norm, Equation (1), was used to create another representation of the distribution of the FSR point cloud. The norm takes into account both the exoskeleton input angle, θ(t), and the FSR measurement values. The resulting plot is shown in .

(1) $z = \sqrt{\theta^{2}(t) + (w \times FSR(t))^{2}}$

The FSR measurements range from 0 to 5 V, while the angle information ranges from 0 to 60∘. In this study, the measurements of both the FSR and the angle are important. Therefore, a distinctive feature is computed based on the $L_2$-norm of these two measurements. Additionally, considering the different ranges and relative importance of both values in the model, a weighting factor, w, is applied to the FSR measurements to increase their significance and impact in the predictive model. In our experiment, this value was set to 15, i.e., w = 15. A mathematical model establishing each individual's resting region based on the distribution shown in can be developed to detect when subjects flex their elbow unassisted. Subjects who flex their arm by themselves will apply less pressure onto the FSR sensor, leading to low measurements from the sensor and resulting in a decrease in the weighted $L_2$-norm calculated using Equation (1). The opposite can be detected as well, since a higher FSR measurement indicates that the wearer is applying more force to the FSR sensor. These different scenarios yield three different regions, indicating whether the subject is flexing, extending, or resting their arm. Our goal in this section is to determine a mathematical model that can infer the intention of the wearer using these 3 different regions and drive the exoskeleton accordingly. For this purpose, least-squares regression methods are used. To describe the behavior of the resting region, , a curve representing the mean of the distribution across different input angles must be determined. Then, lower and upper boundary lines enclosing the resting region are defined at 2 times the standard deviation (std), σ, away from the mean curve. This process is repeated across K different segments of the distribution, see . The mean curve of the distribution is determined by approximating each segment with a line, $\hat{z}_i = a_i \times \theta(t) + b_i$. The parameters of the line are determined by minimizing the residual sum of squares (RSS), Equation (2), between the training data and the predicted value (least-squares linear regression).
(2) $RSS(a_i, b_i) = \sum_{k=1}^{n} \big( z_k - (a_i \times \theta_k + b_i) \big)^{2}$

where $i$ represents the segment, $n$ is the number of data points in segment $i$, and $a_i$ and $b_i$ are the coefficients representing each $\hat{z}_i$ line. This problem can be solved by grouping the lines in matrices, as presented in Equation (3).

(3) $\underbrace{\begin{bmatrix} z_1 \\ z_2 \\ \vdots \\ z_n \end{bmatrix}}_{Z_i} = \underbrace{\begin{bmatrix} \theta_1 & 1 \\ \theta_2 & 1 \\ \vdots & \vdots \\ \theta_n & 1 \end{bmatrix}}_{A_i} \underbrace{\begin{bmatrix} a_i \\ b_i \end{bmatrix}}_{X_i}$

Then, the least-square solution to Equation (3) can be obtained from Equation (4).

(4) $X_i = (A_i^{T} A_i)^{-1} A_i^{T} Z_i$

Once the coefficients of each of the lines are determined, the standard deviation of each region is found, $\sigma_i$, to bind the data points to $\pm 2 \times \sigma_i$. The boundaries of the segment of the distribution must enclose 95% of the data, since each segment distribution resembles a normal distribution . The output of this procedure is presented in , where the red curve with square markers represents the mean line of each segment, $\hat{z}_i$, the blue-dot points represent the point-cloud distribution of the data, and the green lines with dot markers represent the linear boundaries found for each segment. The information obtained from the boundary lines of is used to determine the upper and lower boundary curves that surround the point-cloud distribution representing the resting region. Each boundary line is approximated by a 2nd-order polynomial, Equation (5):

(5) $m\theta^{2}(t) + n\theta(t) + o$

where $m$, $n$, and $o$ are the coefficients to be found for each one of the curves. This problem is similar to the one shown in Equation (3), modifying matrix $A$ to include the coefficient of $\theta^{2}(t)$, as shown in Equation (6):

(6) $\underbrace{\begin{bmatrix} \hat{z}_1 \\ \hat{z}_2 \\ \vdots \\ \hat{z}_n \end{bmatrix}}_{Z} = \underbrace{\begin{bmatrix} \theta_1^{2} & \theta_1 & 1 \\ \theta_2^{2} & \theta_2 & 1 \\ \vdots & \vdots & \vdots \\ \theta_n^{2} & \theta_n & 1 \end{bmatrix}}_{A} \underbrace{\begin{bmatrix} m \\ n \\ o \end{bmatrix}}_{X}$

Now, this problem can be solved as expressed in Equation (4), obtaining the coefficients for the upper boundary and lower boundary curves, respectively. An example of the resultant boundary curves is presented in , where the green curve with square markers is the lower boundary, the magenta curve with dot markers is the upper boundary, and the blue-dot points represent the data used for training this model, the resting region. Then, the information extracted from the FSR measurement and the exoskeleton input angles, θ(t), can be classified into 3 different regions. Any point that lies in the first region, region 1, can be interpreted as the user trying to flex the elbow. Similarly, any point in the second region, region 2, is classified as the person resting the arm on the exoskeleton without applying any effort/force. Lastly, any point above the upper boundary curve is classified as the person exerting force against the exoskeleton, region 3. This can be interpreted as the person attempting to extend the forearm. The above procedure can be executed for each individual to obtain a unique model for them during the rehabilitation session. The algorithm summarizing the training process, Algorithm 1, is presented below:

Algorithm 1: Mathematical model training.
Require: Training data set FSR and θ
Ensure: Trained model, L_UC and L_LC
1: w ← w ▹ w is the weight for FSR measurements
2: K ← K ▹ K is the number of segments to split the data
3: z = √(θ² + (w × FSR)²)
4: Split z into K segments
5: for i = 1 : K do
6:   A_i = [θ_i, 1_{n×1}]
7:   X_i = (A_i^T A_i)^{-1} A_i^T Z_i
8:   Ẑ_i = θ_i X_i(1) + X_i(2)
9:   e = Ẑ_i − Z_i
10:  σ_i = std(e) ▹ Get standard deviation
11:  LU_i = Ẑ_i + 2 × σ_i ▹ Upper line
12:  LL_i = Ẑ_i − 2 × σ_i ▹ Lower line
13: end for
14: Y = [LU_1, ⋯, LU_K]
15: A = [θ_i², θ_i, 1_{n×1}]
16: Upper boundary curve, L_UC = (A^T A)^{-1} A^T Y
17: Y = [LL_1, ⋯, LL_K]
18: Lower boundary curve, L_LC = (A^T A)^{-1} A^T Y
19: Save curve model: L_LC and L_UC

4.2. Classification and Assist-as-Needed Strategy To detect the intended wearer's action, the obtained model defined by the curves L_UC and L_LC is used to determine in which region of the trained model the current weighted L₂-norm value, Equation (1), lies with respect to the current input angle θ. For this purpose, a classification algorithm is presented in Algorithm 2. In this algorithm, the weighted L₂-norm value, z, is compared with two predicted values, ẑ_u and ẑ_l, which correspond to values on the upper boundary curve L_UC and lower boundary curve L_LC, respectively. If the difference between z and ẑ_u is greater than 0, then the point is above L_UC, Region 3. On the other hand, if the difference between z and ẑ_l is less than 0, then the point lies in Region 1. However, if neither of the previous conditions holds, then the point is in Region 2. Detecting the wearer's action allows the exoskeleton to be commanded to perform different actions depending on its state. If the intention of the wearer is to perform an elbow flexion (Region 1), then the exoskeleton should follow the wearer through the range of motion. The same can be done when the wearer is extending the forearm (Region 3). However, if the wearer is not capable of performing the action by themselves, then the exoskeleton should assist them through the rest of the range of motion. To determine when the wearer needs assistance, the exoskeleton is programmed to monitor the time spent in the resting region (Region 2). Once the exoskeleton detects that the user has spent too much time in the resting zone, the exoskeleton takes over and finishes the exercise for them. The novelty of this approach lies in its adaptability to the regions of interest for each individual using the exoskeleton and its low-cost implementation.

Algorithm 2: Classification algorithm.
Require: Trained model curves L_UC and L_LC, and current FSR and θ values
Ensure: Detected action
1: w ← w ▹ w is the weight for FSR measurements
2: z = √(θ² + (w × FSR)²)
3: ẑ_u = L_UC(1)θ² + L_UC(2)θ + L_UC(3)
4: ẑ_l = L_LC(1)θ² + L_LC(2)θ + L_LC(3)
5: if z − ẑ_u > 0 then
6:   Return: Action = Extension ▹ Region 3
7: else if z − ẑ_l < 0 then
8:   Return: Action = Flexion ▹ Region 1
9: else
10:  Return: Action = No action ▹ Region 2
11: end if
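For concreteness, Algorithms 1 and 2 can be condensed into a short Python sketch using numpy and scipy. This is a minimal illustration under stated assumptions — the segment count, the 5 Hz filter cutoff, and all helper names are choices made here, not taken from the authors' implementation:

import numpy as np
from scipy.signal import butter, filtfilt

W = 15   # FSR weighting factor used in the experiments
K = 10   # number of segments (illustrative choice)

def feature(theta, fsr):
    # Weighted L2-norm of Equation (1)
    return np.sqrt(theta**2 + (W * fsr)**2)

def train_model(theta, fsr):
    # Algorithm 1: fit quadratic boundary curves around the resting region.
    b, a = butter(2, 5, fs=100, btype="low")   # 2nd-order Butterworth, 100 Hz data
    z = feature(theta, filtfilt(b, a, fsr))
    order = np.argsort(theta)                  # segments cover contiguous angles
    theta, z = theta[order], z[order]

    th_parts, upper, lower = [], [], []
    for th_seg, z_seg in zip(np.array_split(theta, K), np.array_split(z, K)):
        A = np.column_stack([th_seg, np.ones_like(th_seg)])
        x, *_ = np.linalg.lstsq(A, z_seg, rcond=None)  # line fit, Equation (4)
        z_hat = A @ x
        sigma = np.std(z_seg - z_hat)
        th_parts.append(th_seg)
        upper.append(z_hat + 2 * sigma)        # upper line, +2 std
        lower.append(z_hat - 2 * sigma)        # lower line, -2 std

    th_all = np.concatenate(th_parts)
    A2 = np.column_stack([th_all**2, th_all, np.ones_like(th_all)])
    luc, *_ = np.linalg.lstsq(A2, np.concatenate(upper), rcond=None)
    llc, *_ = np.linalg.lstsq(A2, np.concatenate(lower), rcond=None)
    return luc, llc    # [m, n, o] coefficients of each boundary curve

def classify(theta, fsr, luc, llc):
    # Algorithm 2: map the current sample to a detected action.
    z = feature(theta, fsr)
    if z > np.polyval(luc, theta):
        return "extension"   # Region 3
    if z < np.polyval(llc, theta):
        return "flexion"     # Region 1
    return "rest"            # Region 2

In the real controller, classify would run at the 100 Hz FSR rate, and a timer counting consecutive "rest" outputs would trigger the exoskeleton to finish the repetition on the wearer's behalf.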
5.1. Participant Description In order to showcase the effectiveness of the proposed AAN algorithm presented in , an experiment with different subjects wearing the exoskeleton was performed. The selected task involved performing elbow flexion from 0∘ to 60∘ 20 times. The first 10 trials were used to train the proposed AAN. Then, the remaining trials were performed by either the person or the exoskeleton, depending on whether the subject could finish the task. The human-subject testing was performed in collaboration with the Cerebral Palsy Research Foundation (CPRF), where five subjects volunteered to be part of the experiment. The testing group was composed of one female and four males, with ages ranging from 23 to 63 years. Four out of the five subjects had different levels of spinal cord injury (SCI) , and one had Duchenne muscular dystrophy (DMD) . The protocol of the study as well as the results of the experiments are presented in the following subsections. The AAN algorithm would only be activated to help the participants complete the elbow-flexion task when it detected that the user could no longer flex the forearm. In addition, sEMG sensors were used to measure bicep activity during the elbow-flexion task, to give some feedback on the effort applied at the end of the session. Approval to proceed with the experiment was obtained from the Institutional Review Board (IRB) Committee at Wichita State University. 5.2. Protocol Prior to commencing the experiment, the experimental procedure was comprehensively explained to all participants, and each individual signed the requisite IRB forms. The following step consisted of placing two sEMG sensors on the participant's arm after cleansing the skin using alcohol pads. These sensors were placed specifically on the bicep and tricep muscles, the main flexor and extensor of the elbow, respectively. Then, the subjects were asked to sit next to the exoskeleton in order to adjust it to the participant's anthropometric measurements. The CPRF personnel were responsible for ensuring that the exoskeleton was appropriately aligned and fitted. All necessary adjustments and alignments were accomplished by manipulating the passive joints shown in . Once the adjustments were made, the participant's arm was attached with Velcro to the arm and forearm holders, respectively ( ). The compliant straps were used to accommodate different-sized patients and to constrain unwanted movements at the attachment points. After the patient was attached to the mechanism, they were physically constrained to follow the spatial displacement of the Bennet linkage without being influenced or propelled by the straps. In this stage, the subject was instructed to relax and rest their arm while the exoskeleton executed ten elbow flexion and extension motions at an angular velocity of 20 degrees/s from 0 to 60∘. This range of motion was set between 0–60∘ to ensure safety and maintain consistency in the experimental setup across the subjects, since they were wheelchair bound. Upon completion, the AAN algorithm model was trained, and the participant was asked to attempt the remaining number of flexion repetitions. In the event that the participant was incapable of concluding one or more of the repetitions, the exoskeleton would take over and accomplish the task on their behalf.
Finally, upon completion of the task, the wearer's arm and forearm were released from the exoskeleton by carefully removing the Velcro from both places. In addition, the sEMG sensors were also removed. During the experiment, the exoskeleton was controlled by a Python-based Graphical User Interface (GUI), see . In this GUI, the range of motion (in degrees), the speed (in degrees/second), and the number of trials to be performed could be specified. In addition, whether sEMG sensors were used and whether the AAN was activated could also be selected. Furthermore, the exoskeleton could be instantaneously stopped in the event of an emergency by pressing the Stop button. 5.3. Results The experimental results obtained from each one of the subjects are presented in this section. The accuracy of each of the models is presented in . The accuracy of a model was computed as the number of predictor values z_i that were classified correctly, TP, divided by the total number of samples utilized in the training session, N, multiplied by 100%, as depicted in Equation (7). The average model prediction accuracy across participants is 91.225%. These models were fed to Algorithm 2 to determine each subject's intention through the rest of the task. During the sessions described in the protocol, the sEMG values of the bicep muscle were recorded to analyze its Root Mean Square (RMS) and provide feedback on the amount of effort applied by the patient at the conclusion of the session.

(7) $\text{model accuracy} = \frac{TP}{N} \times 100\%$

In , the model to predict the action of subject one is presented along with the feature points extracted from the FSR sensor and the input angles of the stepper motor for the 10 trials outside the training session. This subject has SCI and has been diagnosed with general atrophy and a limited active range of motion and strength for all joints. The figure shows that the subject was capable of performing the requested task; however, as the exercise progressed, the subject started to experience some difficulties. These moments are highlighted by a black circle on top of the Elbow Curl Angle graph, where the exoskeleton stops due to lack of activity from the subject to analyze whether it needs to finish the task for them. Moreover, to demonstrate the correlation of the FSR measurement to the algorithm output, the FSR measurements were added for the subject, see . Similar results were observed from subject 2, , who has a greater limitation in their range of motion. This was noticeable in their results, since in eight out of the ten trials the subject was unable to complete the task, and the exoskeleton had to complete it. Subject 3, , presented some difficulties in the first two trials. However, the patient was able to finish the rest of the trials by themselves. Unlike the previous two subjects, this patient has been diagnosed with an incomplete SCI. Therefore, the difficulties observed at the beginning may be attributed to the subject getting familiar with the exoskeleton. On the other hand, subject 4, , was unable to complete any of the trials. The feature point never left region 2 of the predicted model. This subject, in contrast to the previous ones, has Duchenne muscular dystrophy and has been diagnosed with greatly reduced strength and active range of motion in all joints. As can be observed from the figure, the exoskeleton waited an appropriate amount of time to detect any effort from the subject. Lastly, subject 5, , presented similar results to subject 3.
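Equation (7) amounts to a two-line helper; the counts below are hypothetical and only illustrate the computation:

def model_accuracy(true_positives: int, n_samples: int) -> float:
    # Equation (7): share of training feature points classified correctly
    return true_positives / n_samples * 100.0

print(model_accuracy(912, 1000))  # hypothetical counts -> 91.2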
This patient has also been diagnosed with an incomplete SCI. Across all subjects, it can be observed that the proposed Algorithm 1 segmented the subject's intention during the elbow-flexion task into three different regions. This was achieved by creating a subject-based predictive model for each individual. The AAN control strategy presented was capable of adapting to each individual with a minimal number of trials, as demonstrated by the FSR sensor measurements. The time required to train the model for each individual was 1 min. Unlike previous works, the proposed AAN does not rely on the joint forces/torques from the subject as feedback for the control algorithm, such as the impedance AAN control scheme presented in . In addition, the time and complexity required to calculate the impedance parameters to adjust the AAN to each individual would make it unfeasible for clinical trials. On the other hand, an AAN algorithm that depends only on sEMG information to estimate the joint torques using a neural network with 97% accuracy has been presented in . However, the authors concluded that, due to the number of parameters needed (137), it is very hard to deploy this strategy in a high-frequency real-time experiment. In addition, that AAN strategy was only implemented in subjects with no disabilities. In our case, the proposed approach has been implemented in people with disabilities. 5.4. sEMG Analysis Results The sEMG values of the bicep muscle of each individual subject were recorded during the experiments. The objective of this signal was to offer patients feedback on the level of effort they exerted during the exercise. In the literature, it is very common to use the normalized windowed Root Mean Square (RMS) value to estimate the effort provided by a muscle . For normalizing the sEMG, the Maximum Voluntary Contraction (MVC) of each muscle being studied is usually used. MVC values are obtained using static loads to elicit a maximum contraction of the muscle. However, it is not advisable to obtain MVCs when working with patients undergoing physical therapy, since they may not be able to perform them, and it may not be safe for them to do so. In addition, due to muscle impairment, the subject may suffer from sensitive tissues, which would cause them pain. In this case, another clinical term can be used to normalize the RMS values of the sEMG signals: Acceptable Muscle Contraction (AMC). The AMC in this study was taken as the maximum sEMG value observed during the rehabilitation session. The effort of the bicep muscle for each one of the subjects is presented in , , , and . In these figures, a bar plot is used to show the RMS value of each trial's sEMG, and a red line indicates the mean RMS value of the sEMG signals during the training trials. Subject 1 presented a mean effort of around 34% during the training session; then, the bicep effort started increasing as the subject applied some torque at the joint to perform the requested elbow-flexion task. The same can be said for the rest of the patients; the only difference among them was in the mean sEMG values computed during the training session. One peculiar case that we found in our study is that subject 4 could not perform the task by themselves; however, the patient was able to engage the bicep muscle to produce effort, . Therefore, this information could be used to inform the patient of the endurance performance of the bicep during the exercise.
This information will keep them motivated to continue the physical-therapy treatment regardless of whether they can finish the exercise or not.
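A minimal sketch of the windowed-RMS effort metric normalized by AMC, as described above — the window length and helper names are illustrative assumptions, not the study's processing pipeline:

import numpy as np

def windowed_rms(emg, window=200):
    # Non-overlapping RMS windows; 200 samples ≈ 0.1 s at ~2000 Hz
    n = len(emg) // window
    segments = emg[:n * window].reshape(n, window)
    return np.sqrt(np.mean(segments**2, axis=1))

def effort_percent(emg_trial, amc):
    # Per-trial effort as a percentage of the Acceptable Muscle Contraction,
    # with AMC taken as the maximum windowed RMS seen anywhere in the session
    return 100.0 * windowed_rms(emg_trial).mean() / amc

# Example usage with a list of per-trial bicep sEMG arrays:
# amc = max(windowed_rms(trial).max() for trial in trials)
# efforts = [effort_percent(trial, amc) for trial in trials]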
In the presented work, an AAN strategy alongside a task-based exoskeleton was presented to help people going through physical therapy. The proposed strategy was tested with five individuals, all of whom have a disability. The algorithm was able to adapt to each individual by creating model profiles that were used to detect when they were not capable of performing the elbow-flexion task. The average model accuracy was 91.22%, achieved with an acceptable number of trials, which could double as the warm-up session for the task, since the AAN algorithm does not depend on bio-electrical signals as other algorithms do. In addition, sEMG signals of the bicep were used to obtain feedback from the therapy session, providing valuable information that could be used by the physical therapist to give feedback to the patients. Keeping records of the progress that patients are making is an important part of physical therapy since, psychologically, it helps them remain motivated to keep attending the sessions. In summary, the contributions of this paper are the following: first, a subject-centered AAN algorithm was developed for rehabilitative treatments, and second, visual feedback in the form of effort computed from sEMG is given to patients to keep track of their progress. In future work, we plan to expand the current work by adding performance factors based on the collected sEMG signals. Moreover, this information could be used to detect when the patient is slacking off and has become technologically dependent on the exoskeleton. Additionally, a virtual reality environment will be explored as a means to keep patients motivated during the therapy session, by adding games that simulate the task being performed with the exoskeleton.
What can autopsy say about COVID-19? A case series of 60 autopsies

Introduction
In COVID-19 patients, respiratory failure, septic shock and multiple organ failure are the main causes of death. The COVID-19 average case-fatality ratio (CFR, the number of deaths per 100 confirmed cases) is about 2–3% . Age is the main predictor of death, followed by comorbidities such as diabetes and hypertension . Mortality from COVID-19 increases with age and varies among countries: in the 80–89 age group, the annual mortality rate is 1,000 deaths per 100,000 people in the United States . Recent evidence suggests that clinical and forensic autopsies of infected cadavers can be safely performed in adequate facilities with proper protective equipment , . The main aim of these autopsies is to distinguish patients who died with SARS-CoV-2 infection from those who died of COVID-19. However, in these cases, the causal inference process is extremely complex, as neither gross examination nor microscopic analysis can unambiguously differentiate the findings , , , , , . On the other side, several macroscopic and histopathological structural abnormalities have been frequently reported in COVID-19 patients. Collecting data on patients who died of COVID-19 can help to predict clinical risks in affected patients and thus to improve the quality of care: for example, An et al. used machine learning systems to develop a prognosis prediction tool based on socio-economic and clinical data , , . In the current study, a retrospective analysis was performed on the first 60 patients with an ante-mortem diagnosis of COVID-19 who underwent full autopsy. The aim of this investigation is to evaluate the most frequent autopsy findings in patients who died of COVID-19 and to assess a possible association with clinical information in health records.
Materials and methods
Sixty COVID-19 patients who died between April 2020 and March 2021 (i.e., during the so-called "first wave" and "second wave") at Fondazione Policlinico Universitario Agostino Gemelli IRCCS in Rome (Italy) underwent a full autopsy. All patients were immunologically naïve for SARS-CoV-2 infection (i.e., neither previous infection nor vaccination). All autopsies were performed according to national guidelines for the investigation of patients who died with SARS-CoV-2 infection . Post-mortem nasopharyngeal and tracheobronchial swabs were collected to confirm the presence of SARS-CoV-2 RNA. All swabs were preserved in universal transport medium (UTM, Copan S.p.A., Italy) and stored at 2–8 °C until testing by real-time reverse transcriptase–polymerase chain reaction (rRT-PCR) assay for total SARS-CoV-2 RNA detection. They were processed through Seegene NIMBUS Automated Liquid Handling Workstations, from nucleic acid extraction (using the STARMag Universal Cartridge kit) to PCR setup, according to the manufacturer's directions (Arrow Diagnostics, Genova, Italy). Procedures to prevent specimen contamination and PCR carryover were in accordance with standard laboratory practices. All procedures were in accordance with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. Data were processed in compliance with the European Union General Data Protection Regulation.
Results
The results of our retrospective analysis, i.e., the clinical characteristics of the study population and the main gross and histopathological findings at autopsy, are reported below.

3.1 Clinical characteristics
The cohort consisted of 60 patients (21 women and 39 men; average age of 80 +/− 12 years) who died between April 2020 and March 2021 after SARS-CoV-2 infection. 55 (92%) individuals had at least one comorbidity; hence, only 5 patients (8%) had no known comorbidities. 4 patients had one comorbidity (7%), 8 patients had two comorbidities (14%), 16 patients had three comorbidities (27%) and 27 patients had at least four comorbidities (44%) ( ). The most common comorbidities were hypertension (n = 21; 38%), type-2 diabetes mellitus (n = 15; 27%) and chronic kidney disease (n = 10; 18%). Patients affected by chronic kidney disease (CKD) were classified by estimated glomerular filtration rate (eGFR) according to the KDIGO grading system for CKD (a sketch of this classification is given at the end of this section). At hospital admission, 1 of these patients was at CKD-G3a (10%), 4 were at CKD-G3b (40%), 4 were at CKD-G4 (40%) and 1 was at CKD-G5 (10%). The average hospital stay was about 11 +/− 8 days. During the hospitalization, 32 (53%) patients received anticoagulant therapy (i.e., heparin or enoxaparin). 70% of those who did not receive anticoagulant therapy (20 out of 28) were hospitalized during the so-called "first wave" and early "second wave" of the pandemic (from March 2020 to November 2020), namely when the use of anticoagulant therapy for COVID-19 patients was not yet widespread in clinical practice. Cardiopulmonary failure (n = 38; 64%) and multiorgan failure (n = 9; 15%) were the most frequently reported causes of death.

3.2 Cardiac findings
At autopsy, signs of myocardial ischemia such as cardiac dilation were quite common (n = 23; 38%). Left ventricular hypertrophy was also a frequent finding in this cohort (n = 19; 31%) ( ), and it was in some cases associated with cardiac dilation (n = 3; 5%). The histopathological examination showed myocardiosclerosis in 44 (73%) cadavers ( ). In about a quarter of patients (n = 16; 27%), evidence of hypertrophy of cardiac muscle fibers was observed ( ). Moreover, the microscopic analysis of the heart revealed no cardiac microangiopathy or signs of myocarditis.

3.3 Lung findings
At the examination of the pleura, 5 (9%) patients had pleural adhesions, 20 (34%) had pleural effusions, and 13 (22%) had both. Gross examination of the lungs revealed a massive inflammatory pattern with edematous and congested lungs in 59% of cases (n = 35), associated with mucopus in 29% of cases (n = 17). Pulmonary thromboembolism (n = 6; 10%) was the only finding in 2 (3%) patients and was associated with inflammation in 4 (7%) cases. At microscopic examination, 40 (67%) patients had pulmonary intravascular coagulation associated with an inflammatory pattern (e.g., hyaline membrane formation, alveolar epithelial exfoliation, bronchiolitis, edema and fibrosis) ( ), while the inflammatory pattern alone was observed in 20% of patients (n = 12). Pulmonary microangiopathy was also a rare finding (n = 8; 13%) ( ).

3.4 Kidney findings
In 29 (48%) cases, kidney hypotrophy, cortical pallor and edema were found at gross examination of the kidneys. Histologically, exfoliation of renal tubular epithelial cells (n = 12; 20%) and intravascular coagulation (n = 4; 7%) were common findings. Nephroarteriosclerosis was found in 4 (7%) patients ( ).
3.5 Other findings
22 (36%) patients showed adrenal colliquation, while liver involvement with hepatic congestion and hypotrophy was found in 33 (55%) cadavers. No other findings were reported.
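As referenced in Sec. 3.1, the KDIGO GFR categories used to classify the CKD patients map directly onto eGFR thresholds (in ml/min/1.73 m²). The following minimal Python sketch encodes the standard cut-offs and is illustrative only; it is not part of the study's analysis pipeline.

```python
def ckd_gfr_category(egfr):
    """KDIGO GFR category from eGFR (ml/min/1.73 m^2)."""
    if egfr >= 90: return "G1"   # normal or high
    if egfr >= 60: return "G2"   # mildly decreased
    if egfr >= 45: return "G3a"  # mildly to moderately decreased
    if egfr >= 30: return "G3b"  # moderately to severely decreased
    if egfr >= 15: return "G4"   # severely decreased
    return "G5"                  # kidney failure

print(ckd_gfr_category(38))  # -> "G3b"
```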
Discussion

4.1 Heart
In our series, signs of myocardial ischemia such as cardiac dilation were found in 23 cases. Left ventricular hypertrophy was found in 19 cases and was sometimes associated with cardiac dilation. Furthermore, the histopathological examination showed myocardiosclerosis in 44 cadavers. We observed no direct damage to the heart caused by SARS-CoV-2, but our findings suggest heart failure in the context of general cardiopulmonary collapse. In accordance with Hessami et al., we found that pre-existing cardiovascular comorbidities were associated with higher mortality and intensive care unit (ICU) admission . The most common cardiac clinical manifestations of COVID-19 are myocarditis, acute myocardial infarction, acute heart failure and arrhythmias. These cardiac disorders are often associated with high levels of troponin, IL-6, ferritin and D-dimer . Epicarditis, pericarditis and endocarditis are also occasionally observed , . However, in our cases signs of myocarditis, epicarditis, pericarditis and endocarditis were not found. At the gross examination, myocardial ventricular hypertrophy and dilatation, mainly of the right cavity, are frequently reported , , , , . Moreover, we observed hypertrophy of cardiac muscle fibers in about a quarter of patients. This is a common finding detected also in other cohorts of patients, as it can be assumed to be a pre-existing condition. Maccio et al. in their study reported a specific cardiac small-vessel vasculitis with an inflammatory infiltrate composed of macrophages and CD4+ T lymphocytes. In contrast, the microscopic analysis of the heart revealed no cardiac microangiopathy in our cases. A distinctive pattern of myofibrillar fragmentation into individual sarcomeric units and a loss of nuclear DNA staining from intact cell bodies has also been described, along with general signs of myopathy such as edema, occasional mononuclear infiltrate, and mild hypertrophy . Myocardial injury caused by SARS-CoV-2 infection is often thought to be due to a direct mechanism mediated by the ACE-2 receptor on myocytes and to an indirect mechanism determined by the inflammatory response (known as "cytokine storm") of the host . The cardiac damage is demonstrated by cardiac biomarker elevations that are significantly associated with an increased mortality risk in patients with COVID-19 .

4.2 Lungs
In our cohort, the lungs were the organ most affected by COVID-19, causing downstream collapse of the heart and kidneys. Gross examination of the lungs revealed a massive inflammatory pattern with edematous and congested lungs in 59% of cases. Compared to other COVID-19 autopsy case series, pulmonary thromboembolism was a rare finding in our study population , which could be due to anticoagulant therapy administered from hospital admission. At microscopic examination, however, pulmonary intravascular coagulation associated with an inflammatory pattern was a common finding, which proves that COVID-19 is also an endothelial disease. At the gross examination, lungs have been frequently reported as heavy and boggy, while at microscopic analysis diffuse alveolar damage (DAD), interstitial and intra-alveolar edema, small-vessel congestion, squamous metaplasia with atypia and platelet-fibrin thrombi, focal pneumocyte hyperplasia, pneumocyte necrosis, chronic inflammatory infiltrate, multinuclear giant cells, and hyaline membranes have been described as suggestive of COVID-19 , .
Other common findings reported in the scientific literature are pulmonary vascular endothelialitis with thrombosis and angiogenesis , and loss of pericytes combined with preserved endothelial cells in alveolar capillaries , . Molecular data on COVID-19 lung tissue identified a significant upregulation of vasoconstrictive mediators such as prostaglandins (phospholipase A, leukotrienes), as well as an increase in nitric oxide synthase (NOS) . As a result, the combination of inflammation and hypoxemia caused by SARS-CoV-2 infection in the lungs has been demonstrated to cause intussusceptive angiogenesis . Therefore, the severe pathological scenario in the lungs of patients who died of COVID-19 is well explained by the coexistence of direct lung damage together with the endothelial disease of the pulmonary vessels. Moreover, Torres-Castro et al. reported the existence of different stages of alveolar destruction and interstitial fibrosis. They showed that the alveolar destruction found in several autopsies reflected the alteration of the diffusing capacity for carbon monoxide (DLCO) during hospitalization, which was the most affected respiratory function parameter in COVID-19 patients .

4.3 Liver
In our cases, we found no major macroscopic or microscopic abnormalities in the livers examined. However, liver findings such as mild lobular lymphocytic infiltration, moderate micro-vesicular steatosis, minimal periportal lymphoplasmacellular infiltration and signs of fibrosis have been frequently reported in COVID-19 patients . In particular, preexisting liver abnormalities have frequently been observed in patients who died of COVID-19 . From a clinical point of view, some authors have also observed abnormal liver function in COVID-19 patients, which is associated with a longer hospitalization . Despite this evidence, in our cases we found no major macroscopic or microscopic abnormalities in the livers examined, nor liver abnormalities at the clinical record examination.

4.4 Kidneys
In our cases, we observed exfoliation of renal tubular epithelial cells, signs of intravascular coagulation and nephroarteriosclerosis. In the kidneys of COVID-19 patients, the main reported findings are acute tubular injury , , and fibrin microthrombi in the glomeruli . Flattened epithelium and lumens containing sloughed epithelial lining cells, granular casts, Tamm-Horsfall protein, and intraluminal accumulation of cellular debris in focal areas are also described . Some authors have also detected the virions in proximal convoluted tubules and endothelial cells . Impaired glomerular filtration occurs with elevation of blood creatinine and urea nitrogen levels. Proteinuria is also common in these patients . By examining laboratory data (especially the serum creatinine level), we were able to assess the severity of kidney injury during COVID-19. During hospitalization, 31% of patients (n = 19) developed acute kidney injury (AKI). Using the KDIGO grading system for AKI (sketched below), we found that 33% of them had AKI stage I (S1), 50% had AKI stage II (S2) and 17% had AKI stage III (S3). As reported in the literature, the AKI documented in our patients is most likely of prerenal origin, resulting from the progressive worsening of cardiopulmonary function caused by COVID-19. Therefore, our kidney findings are in accordance with previous scientific studies, confirming the central role of this organ in COVID-19.
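The creatinine-based part of the KDIGO AKI staging referenced above can be encoded compactly. The following Python sketch omits the urine-output and 48-hour timing criteria of the full definition and is purely illustrative of the serum creatinine thresholds; it is not the tool used in the study.

```python
def aki_stage(scr, baseline, on_rrt=False):
    """KDIGO AKI stage from serum creatinine (mg/dL) vs. baseline.
    Creatinine criteria only; urine-output and timing criteria omitted."""
    ratio = scr / baseline
    if on_rrt or ratio >= 3.0 or scr >= 4.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5 or (scr - baseline) >= 0.3:
        return 1
    return 0  # no AKI by creatinine criteria

print(aki_stage(scr=2.4, baseline=1.0))  # -> 2
```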
4.5 Adrenal glands
In our cases, adrenal gland colliquation was likely due to the corticosteroid therapy administered to COVID-19 patients from hospital admission. This evidence is consistent with the fact that corticosteroids may cause adrenal failure, especially when used for more than two to four weeks .
Conclusions
In conclusion, the results of our study confirmed that the lung is the most affected organ in COVID-19. However, we did not find a high rate of pulmonary thromboembolism, probably because of anticoagulant therapy administered from hospital admission. As stated by other authors, and likewise in our cohort, the cardiac findings did not show direct cytopathic damage caused by SARS-CoV-2, and pre-existing cardiovascular comorbidities were associated with higher mortality and intensive care unit (ICU) admission. According to our analysis of the clinical data, the kidney has also been demonstrated to have a pivotal role in COVID-19. The gradual worsening of renal function and AKI could be seen as the result of the progressive collapse of the cardiopulmonary system. Since there is a clear relationship between SARS-CoV-2 and direct renal tubular damage responsible for AKI, interventions such as blocking SARS-CoV-2/ACE2 binding, immune regulation and continuous renal replacement therapy (CRRT) to protect renal function in COVID-19 patients (especially in cases of AKI) could have an impact on preventing death.
AO, VA, GG developed the project. AO, VA, MDA, ES performed the autopsies. All the authors participated in writing the paper.
This work has been supported by Fondi di Ateneo, Linea D.1, Università Cattolica del Sacro Cuore.
The autopsies were approved by the competent public authorities. Consent for scientific research was waived because of the disposition of the European Union GDPR regarding scientific publication of lawfully processed, fully anonymized data.
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Potential use of telemedicine in paediatrics: a single-centre retrospective review

During the COVID-19 pandemic, telemedicine saw over a 600% increase in usage as the healthcare industry transitioned away from in-person care. Outside of a pandemic setting, telemedicine, particularly in the paediatric space, also has the potential to provide broader reach and access in areas without these resources. Additionally, telemedicine has shown equal efficacy compared with in-person visits in certain cases, as demonstrated by a recent systematic review. It is important to note, however, that this systematic review examined conditions that do not rely heavily on objective data collection, such as mental health evaluations or chronic condition management where an in-person visit would have been part of the management plan. For the acute conditions included in the analysis, data collection tools such as cameras or cellular device otoscope attachments were part of the study design. Telemedicine also comes with several drawbacks, not least of which is that healthcare systems are largely based on in-person interactions. Additionally, quality telemedicine encounters depend on access to costly and not widely distributed technologies, which tends to favour those with privilege. Our hypothesis is that telemedicine is likely ineffective in terms of definitive diagnosis and treatment in the acute and urgent care setting, and that in the absence of data collection tools, most of the visits presenting to our clinic could not be definitively managed via telemedicine alone. In order to evaluate the effectiveness of telemedicine in the acute and urgent care setting, we retrospectively reviewed 2019 visits to our paediatric primary and urgent care clinic in Portland, Oregon, before the SARS-CoV-2 pandemic, from 2019 to early 2020. Our clinic provides routine and urgent (non-emergent) care to paediatric patients aged 0–21 years. As the majority of our practice is acute care, telemedicine at our practice is almost entirely used to address new acute concerns or injuries. A retrospective chart review was conducted for all in-person visits at our paediatric primary and urgent care office located in Portland, Oregon. The current study includes the 4 months leading up to the start of the COVID-19 pandemic (November 2019 to February 2020). Each patient visit (n=2019) was first categorised into groups deemed automatically incompatible with telemedicine, such as need for a procedure or additional workup. Due to the hours of operation and lack of a pre-existing relationship with our patients, sending them to an outside centre for blood work or imaging is not practical. The remainder of the visits were reviewed by an experienced paediatric clinician and divided, based on the available documentation in the note, into whether definitive treatment via telemedicine was likely of value, potentially of value, or not of value. Information on data validation can be found in . Our study included 1567 distinct patients representing 2019 visits. The majority of patients (63.15%) were under 5 years of age. Telemedicine would not have led to definitive diagnosis and treatment for 1350 visits (66.86%). Telemedicine would have been potentially useful for definitive diagnosis and treatment for 578 visits (28.62%) and would likely have been useful for definitive diagnosis and treatment in 91 visits (4.51%).
The diagnoses most likely to have been definitively treated with telemedicine are rashes and head/eye/ear/nose/throat/mouth concerns . Our findings support our hypothesis that the majority of paediatric patients who visited our practice during the study period could not have been treated definitively via telemedicine. Rather, telemedicine is of most utility as an augmented triage tool. Given the recent surge in telemedicine services and utilisation around the COVID-19 pandemic, it is clear that a significant portion of the energy devoted to proliferating telemedicine solutions could be better used perfecting remote data-gathering tools geared specifically towards paediatric patients. We would expect that a substantially higher portion of our visits could have been definitively diagnosed and treated with telemedicine if some basic exam and laboratory data could be obtained at home. There are several limitations of our study. Selection bias from retrospectively analysing in-person visits is one such limitation. In addition, despite analysis by expert paediatric clinicians, there is some subjectivity in judging whether a patient might or might not have been definitively diagnosed and treated via telemedicine. Another limitation is that our clinic almost entirely caters to acute and urgent illnesses and concerns. It is likely that, due to the nature of the presenting complaints, telemedicine alone would not be sufficient for diagnosis and treatment without additional patient data. Finally, due to its retrospective nature, our study cannot comment on whether the patients who were deemed to need in-person treatment would have been fine without a visit, and vice versa.
Color-resolved Cherenkov imaging allows for differential signal detection in blood and melanin content

Introduction
Radiotherapy is used in ∼50% of cancer patients' therapy. , Few measurement devices, such as thermoluminescent dosimeters or optically stimulated luminescent dosimeters, exist to verify dose to tissues, and they are usually limited to point dose determination. At present, no available technique can quickly and readily provide information regarding the dose distribution as well as the beam position and strength in real time; however, prior work has pioneered the implementation of Cherenkov emission imaging as a quality assurance tool in photon and electron beam radiation therapy. Cherenkov radiation is the visible light emitted when charged particles, such as electrons, pass through a dielectric medium traveling at a speed greater than the phase velocity of light. This electromagnetic emission is commonly perceived as blue light in clear media such as water. The emission is in fact broadband and covers the entire UV/visible/IR spectrum, but is of maximal intensity in the blue. The Cherenkov signal is directly proportional to the deposited dose because it results from the soft collisions of the electrons with the medium. The practical application of this observation is that the signal might be used as a means of quantitatively mapping radiation dose in humans. The exploration of this physical phenomenon within a therapeutic setting is a major aspect of this study. Cherenkov imaging is a useful tool to visualize the beam shape on tissue, but when used as a tool for dosimetry, it requires additional study at a fundamental level to potentially overcome quantitative limitations. Generation of Cherenkov light is directly related to dose, the radiation energy spectrum, and the local refractive index. Transportation of this light is affected by attenuation from the intrinsic tissue optical properties. Monte Carlo simulations have predicted that tissue absorption and scattering events may contribute up to 45% variation in the detected light, and skin color change could alter the signal level by 90%. , Experimental results have shown that in tissue phantoms, varying the content of blood and Intralipid can result in a difference of up to 20% in surface Cherenkov emissions. The study here was focused on determining the differences between individual tissue types in the context of Cherenkov emission intensity, to further the potential for dosimetric information. In this study, color Cherenkov imaging was accomplished with a custom three-channel camera that had time-gated image intensifiers on each color channel [red, green, and blue (RGB)]. Time-gated imaging allowed for the detection of the low-intensity Cherenkov emission signal above background ambient light levels by acquiring image frames only when linear accelerator beam pulses were present. , Cumulative images of color Cherenkov emission were captured from tissue phantom samples irradiated with megavoltage (MV) x-rays from a clinical linear accelerator (LINAC). This imaging setup was used to explore the hypothesis that multi-spectral color Cherenkov imaging could allow for differential signal detection in tissues with variations in blood and melanin content.
The results could provide practical insight into better point-of-care treatment with color Cherenkov imaging, as a tool for quantitative dose delivery, independent of the tissue being irradiated.
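As context for the "maximal intensity in the blue" statement above, the spectral shape of Cherenkov emission follows the Frank–Tamm 1/λ² dependence. The following minimal Python sketch uses illustrative values for the particle speed ratio β and refractive index n (not taken from this study) to show the relative photon yield across the RGB bands.

```python
import numpy as np

ALPHA = 1.0 / 137.036  # fine-structure constant

def cherenkov_photon_yield(wavelength_nm, beta=0.99, n=1.33):
    """Frank-Tamm photon yield per unit path length and wavelength,
    d2N/(dx dlambda) = (2*pi*alpha/lambda^2) * (1 - 1/(beta^2 * n^2)).
    Returns a relative (unnormalized) value."""
    lam = wavelength_nm * 1e-9  # convert nm to meters
    return 2 * np.pi * ALPHA / lam**2 * (1 - 1 / (beta**2 * n**2))

wl = np.array([450.0, 550.0, 650.0])  # blue, green, red (nm)
y = cherenkov_photon_yield(wl)
print(y / y[0])  # ~[1.00, 0.67, 0.48]: intensity falls off as 1/lambda^2
```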
Materials and Methods

2.1 Tissue Phantoms
A previously described color Cherenkov camera that captures RGB wavelength channels separately was used in this study. The acquisition was time-gated to the LINAC, capturing Cherenkov emission only during radiation pulses, thereby removing background ambient light. This setup is shown in . Image acquisition and processing were based on prior work, using the C-Dose research software, which samples images from the camera sensors at fixed numbers of radiation pulses, with the image intensifiers pulsing on with every x-ray pulse from the LINAC and the total image data accumulating on each chip to provide a higher intensity prior to readout to the computer. Background image data were sampled as well and saved onto the FPGA of each camera, such that background subtraction could be achieved on each channel of the camera. Mean intensities for each individual channel were obtained from consistent regions of interest (ROIs) on frame-averaged image stacks. To simulate tissue, synthetic epidermal layers 100 μm (±5 μm) thick were fabricated, with varying biological concentrations of synthetic melanin (M8631, Sigma-Aldrich, St. Louis, Missouri) at 0.0018, 0.0038, 0.0114, 0.019, 0.027, 0.045, and 0.072 mg/ml. Thicknesses were calculated based upon the volume of material produced and the area. The epidermal layers were placed on top of 2-cm thick bulk tissue phantoms, made using a process outlined in a previous publication, composed of silicone embedded with flesh-colored pigments that match human soft tissue optical properties (Smooth-On, Macungie, Pennsylvania). Melanin concentrations were selected to match the expected range of human skin color based on several factors. , Primarily, we employed the Fitzpatrick scale, which assesses skin color based on reaction to ultraviolet radiation. Pigmented phantoms representing the following Fitzpatrick scale types were fabricated: Fitzpatrick type I (score 0 through 6), type II (score 7 through 13), type III (score 14 through 20), type IV (score 21 through 27), type V (score 28 through 34), and type VI (score 35+). Additionally, we used visual inspection to confirm that the phantoms were within the apparent visual range of human skin color as expected, as we found that the limitation of just six skin values did not fully represent the range of pigmentation levels existing in human tissue. In particular, at the higher melanin content levels, more delimitation is needed to cover the range of human values. Base concentrations for the pigmented skin phantoms were previously reported. Average optical properties (absorption coefficient, μa, and reduced scattering coefficient, μs′) of the phantoms were determined to confirm tissue-like optical behavior for a range of human skin colors. These measurements were done with spatial frequency domain imaging (SFDI), explained further in Sec. . Intralipid was used as a scatterer at a fixed level of 1%. Bovine whole blood solutions (Lampire Biological Laboratories, Pipersville, Pennsylvania) with varying biological concentrations (0.5%, 1%, 1.5%, 2%, 2.5%, 3%, and 3.5%) were prepared in optically blacked-out petri dishes. These values have been used in many previous studies to match the near-infrared tissue optical property range for blood absorption in soft human tissues.
Whole blood is fully oxygenated in ambient aqueous solution, so the hemoglobin (Hb) can be assumed to be 100% oxygenated hemoglobin for the spectral signature. During LINAC irradiation of the phantoms, images were captured for each color channel, and post-processing extracted average RGB Cherenkov emission intensities from the recorded images, as functions of melanin and blood concentrations, as described below.

2.2 Color Cherenkov Camera
The RGB color Cherenkov camera, as seen in , was composed of three independent intensified complementary metal oxide semiconductor (iCMOS) cameras (C-Dose, DoseOptics LLC, Lebanon, NH) housed in a three-tube color video camera assembly (JVC, Yokohama, Japan). The red channel was outfitted with a red-sensitive intensifier and a red filter, while the blue and green channels used blue-green-sensitive intensifiers, each with the appropriate color filter. Each camera was remotely triggered by leakage x-rays, allowing for synchronization with and gating to the linear accelerator pulses . The camera software supported 16-bit read capability for quantitative acquisition. The beam splitter assembly of the video camera, consisting of RGB dichroic beam splitters and bandpass filters, allowed incoming Cherenkov light to be redirected according to wavelength to the appropriate camera channel, resulting in three raw image stacks for each acquisition, shown in . These RGB image stacks comprised multiple images acquired per imaging time, such that each RGB channel resulted in a temporal image stack. The camera was equipped with a 10 to 100 mm, f/1.6 zoom lens (JVC, Yokohama, Japan).

2.3 Imaged Color Cherenkov Emission
The absorption spectra of melanin, Hb, and HbO2 are shown along with the emission spectrum of Cherenkov light in . The sensitivity spectra of the RGB channel filters of the RGB color Cherenkov camera are shown in . Individual camera detection spectra for the three channels (RGB) were characterized to verify the efficacy and reliability of the filters in tracking and capturing individual channel intensities from experimental tissue samples within the expected bands for RGB wavelengths. A tunable light source (TLS) (Optometrics Manual TLS, Optometrics Manufacturing, Ayer, Massachusetts) was used to characterize the optical system response. Wavelengths from 380 to 720 nm with a step of 20 nm were used for characterization, and the response is shown in . This was done using a manual TLS and the tri-color camera, shown in . The TLS maximized throughput in the visible region of the spectrum using a 20-W tungsten halogen lamp, with spectral energy between 360 and 2000 nm. The light exiting through a small slit was imaged at close range with the tri-color camera using C-Dose software (DoseOptics LLC, Lebanon, New Hampshire), with images processed in MATLAB (v2022a, Natick, Massachusetts) and signal intensity graphed in .

2.4 Blood Phantoms
Whole blood solutions with a total volume of 100 ml were created using stock solutions of bovine blood (Lampire Biological Laboratories, Pipersville, Pennsylvania), phosphate buffered saline solution (Cytiva, Marlborough, Massachusetts), and Intralipid (Sigma-Aldrich, Burlington, Massachusetts). The combined solutions were mixed with a concentration of 1% Intralipid (5 ml of 20% stock emulsion), using variations of blood at 0.5%, 1%, 1.5%, 2%, 2.5%, 3%, and 3.5%, with the remaining volume being phosphate buffered solution. The solutions were poured into 100-mm circular cell culture dishes with 20-mm depth (Corning, Corning, New York), which had been coated with matte black paint to reduce optical edge effects in the images.
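The component volumes for these mixtures follow directly from the C1·V1 = C2·V2 dilution relation. A minimal sketch is given below, assuming the bovine blood stock is used undiluted (treated as 100%) and the Intralipid stock is the 20% emulsion stated above; the function name is illustrative.

```python
def phantom_recipe(blood_pct, total_ml=100.0,
                   intralipid_pct=1.0, intralipid_stock_pct=20.0):
    """Component volumes (ml) for one blood/Intralipid/PBS phantom,
    via C1*V1 = C2*V2 for each stock solution."""
    intralipid_ml = total_ml * intralipid_pct / intralipid_stock_pct
    blood_ml = total_ml * blood_pct / 100.0  # undiluted whole-blood stock
    pbs_ml = total_ml - intralipid_ml - blood_ml
    return {"blood_ml": blood_ml, "intralipid_ml": intralipid_ml,
            "pbs_ml": pbs_ml}

print(phantom_recipe(2.5))
# -> {'blood_ml': 2.5, 'intralipid_ml': 5.0, 'pbs_ml': 92.5}
```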
2.5 Pigmented Melanin Layers for Phantoms
Synthetic epidermal layers of 0.1-mm thickness were fabricated based on a previously described method. This was done by dissolving 1 g of Type A porcine gelatin powder, 300 g Bloom (Sigma-Aldrich, St. Louis, Missouri), and 0.5 g of glycerol (≥98% purity, Sigma-Aldrich, St. Louis, Missouri) in 10 ml of distilled water. Then 0.01% to 0.1% glutaraldehyde (≥99.5% purity, Sigma-Aldrich, St. Louis, Missouri) was added, along with varying concentrations of synthetic melanin (0.0018, 0.0038, 0.0076, 0.019, 0.027, 0.045, and 0.072 mg/ml), which came in the form of small crystals and was manually crushed to a fine powder (M0418-1G Lot# BCCB7179, Sigma-Aldrich, St. Louis, Missouri). The resulting solution was heated evenly with a 1200-W microwave for 5 s to a temperature of 45°C (Etekcity Lasergrip 1080 IR thermometer, Anaheim, California), allowing for an even mixture. The solution was poured onto plastic molds (large Fisher weigh boats) and then placed into a vacuum chamber to remove air bubbles and distortions. The volume of each melanin mixture was measured such that, when poured onto the phantom, the layer thickness could be estimated to be 0.1 mm; this estimate was based upon previous trials in which the dried thickness was measured with a micrometer. These layers were then dried for 48 h at 21°C in a fume hood to yield thin, permanent, pliable layers.

2.6 Verification of Color Values with Spatial Frequency Domain Imaging
The impact of melanin concentration on the optical properties (absorption coefficient, μa, and reduced scattering coefficient, μs′) of the epidermal layers was quantified using a validated reflectance-geometry SFDI system and software (Reflect RS, Modulim, Irvine, California). SFDI separates the effects of scattering and absorption and can be used to estimate the concentrations of chromophores in the tissue. The technique works by projecting different patterns of light onto the tissue, imaging the reemitted light, and demodulating the imaged reflectance; a sketch of the standard demodulation step is given at the end of this section. , Optical properties were quantified at each of eight wavelengths (471, 526, 591, 621, 659, 691, 731, and 851 nm) using five spatial projection frequencies (0.00, 0.05, 0.10, 0.15, and 0.20 mm−1), and all experimental measurements were calibrated to the supplied tissue phantom from Modulim. The fit to the optical properties of the phantoms used all five spatial frequencies in the inversion, with a fit to a lookup table of values within the supplied software. The epidermal layers were placed on top of a 2-cm thick bulk tissue phantom made of silicone with flesh-colored pigment during this procedure.
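As referenced in Sec. 2.6, SFDI recovers the spatially modulated reflectance by three-phase demodulation. The following minimal Python sketch shows the standard formula for a single spatial frequency (three projections phase-shifted by 0, 2π/3, and 4π/3) and reference-phantom calibration. It is illustrative only; the study used the vendor's Reflect RS software, not this code.

```python
import numpy as np

def demodulate_ac(i1, i2, i3):
    """AC modulation amplitude from three phase-shifted images
    (0, 2*pi/3, 4*pi/3), per the standard SFDI demodulation formula."""
    return (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)

def diffuse_reflectance(sample_imgs, ref_imgs, rd_ref):
    """Calibrated diffuse reflectance at one spatial frequency, using a
    reference phantom with known reflectance rd_ref at that frequency."""
    return demodulate_ac(*sample_imgs) / demodulate_ac(*ref_imgs) * rd_ref
```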
Results

3.1 Average Optical Property Characterization by SFDI
Results from the phantom validation measurements by SFDI are summarized in . Property averages were computed based on the distribution of pixels within each wide-field, circular 5-cm ROI for each phantom. Visual inspection of these optical phantoms, combined with the SFDI property measurements, was used to confirm that the melanin layers exhibited optical properties that corresponded to the expected array of human skin colors. ,

3.2 Cherenkov Images
RGB-resolved Cherenkov images for varying melanin and blood concentrations are shown in and are compared to white light images. The attenuation in Cherenkov emission from tissue phantoms due to melanin absorption is shown in . Emission was observed to decrease as melanin concentration increased, without one particular color differentiating itself markedly in intensity from another. This is in line with the spectral properties of melanin, which shows no anomalous preference for a particular wavelength and decreases in absorption with increasing wavelength, as noted in the literature – and as confirmed via the SFDI optical property validation in and . For purposes of visualization, the blue channel needed individualized windowing and leveling, and thus a separate color bar is shown. The zero-concentration melanin sample was a gelatin layer with no melanin present in it. The attenuation in Cherenkov emission from the blood samples is shown in . The observed attenuation increased with increasing blood concentration, with the red channel showing the greatest change in mean signal intensity compared to the green and blue channels. This is in line with the spectral properties of blood, which show a preference for particular wavelengths—Hb bound to oxygen absorbs blue-green light and reflects red-orange light —appearing red and varying non-uniformly in absorption with increasing wavelength, as noted in the literature. – For purposes of visualization, as before, the blue channel needed individualized windowing and leveling, and thus a separate color bar is shown.

3.3 Differential Signal Response
Cherenkov images from each of the individual channels—gathered using C-Dose software and processed in MATLAB—yielded signal intensities across the various concentrations of melanin and whole bovine blood, as shown in . The data show an attenuation response for both melanin and blood. For melanin, with the visual aid of and , the quantitative response graphed in shows all channels exhibiting lower emission intensity with increasing pigment concentration. Comparing this response with the reference chart in , the trend seen in our results follows what is expected for Cherenkov emission and melanin concentration. Conversely, the whole bovine blood quantification , with visual aid from and , shows a differential response among the three channels. The red-channel Cherenkov signal intensity attenuates to a much lesser extent, while the green and blue channels decrease closely together to levels lower than red overall. Repeated measures of the phantoms confirmed these observed trends, with the error bars being smaller than the data points themselves. The increased absorption of blue and green light by Hb results in a relatively increased signal in the red channel. Furthermore, corroborates this observation; oxygenated Hb exhibits two absorption peaks where blue and green light wavelengths are represented.
This phenomenon of different amounts of exiting light in the RGB bands in color Cherenkov imaging provides quantitative and qualitative confirmation that the detected signals vary differently with changes in blood or melanin content. Further color Cherenkov imaging of skin pigmentation phantoms is seen in . About 3 Gy of dose was delivered via 6-MV photon and 6-MeV electron beams to the seven phantoms to show the limits of visual analysis of Cherenkov color imaging. With both beam energies, pigmentation past 0.0076 mg/ml was extremely difficult to distinguish from the background—effectively no observable Cherenkov emission. This observation further illustrates the need to correct Cherenkov signal attenuation due to biological tissue factors so that the emission can be viewed as visually independent of patient-specific attenuating influences. It is noteworthy that the zero-concentration values were removed because reflection effects in the samples placed their values outside the trend expected for human tissues. Given that these concentrations are unrealistic for human tissues, their removal was deemed appropriate and provided a clearer interpretation of the trends seen here.
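The qualitative channel behavior reported above can be reasoned about with a simple Beer-Lambert layer model. The short Python sketch below is illustrative only: the per-channel absorption coefficients are assumed round numbers chosen to mimic the published spectral shapes (melanin absorption falling smoothly with wavelength; oxygenated Hb absorbing blue-green far more strongly than red), not the measured properties of the phantoms in this study.

```python
import numpy as np

# Assumed absorption coefficients (mm^-1) per RGB band; illustrative
# values only, shaped to match the qualitative spectra described above.
mu_a = {
    "melanin": {"R": 0.8, "G": 1.6, "B": 2.4},
    "blood":   {"R": 0.2, "G": 3.0, "B": 2.5},
}

def transmitted_fraction(chromophore, thickness_mm):
    """Beer-Lambert attenuation of Cherenkov light escaping through a
    thin absorbing layer of the given chromophore."""
    return {ch: float(np.exp(-mu * thickness_mm))
            for ch, mu in mu_a[chromophore].items()}

print("melanin layer, 0.5 mm:", transmitted_fraction("melanin", 0.5))
print("blood layer,   0.5 mm:", transmitted_fraction("blood", 0.5))
# With these assumed values, melanin dims all three channels in a smooth,
# monotone way, whereas blood preferentially suppresses green and blue,
# leaving the red channel comparatively strong.
```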
Discussion
This study expanded on prior work on Cherenkov imaging, here with the use of a three-channel RGB Cherenkov emission camera, to illustrate how imaging in color may provide a visual separation of tissue attenuation effects. The rationale for this study was to better understand how tissue optical properties affect the emission colors of Cherenkov light, and to determine whether the spectrum might be used for calibration or correction of tissue attenuation effects. The hypothesis driving this work was that differential RGB color Cherenkov emission levels would result from variations in the most dominant biological tissue absorption features, such as blood concentration within tissue and melanin concentration in the skin. Experimental measurements shown in – visually demonstrate that this is true. Furthermore, these changes in blood or melanin concentration result in distinct visual changes in the RGB output values of the Cherenkov color emission. These findings support the idea that color or spectral imaging of Cherenkov light might provide an experimental methodology for separating biological attenuation of the intensity from the physical generation of Cherenkov light with dose deposition. The goal would ideally be to use the Cherenkov intensity as an indicator of the dose delivered in the tissue, independent of the blood volume within it or the skin color, using color correction. While spectroscopic imaging of Cherenkov light would likely provide a better quantitative measure of the color changes, imaging in three RGB channels is ubiquitous and conventional today. The images provide a visual cue to users of Cherenkov imaging about the biological origins of what is being seen. It is possible that corrected Cherenkov images could be displayed side by side with information about melanin or blood levels. Oxygenation studies were examined in a previous paper; here the focus was maintained on oxygenated blood. The light penetrance and imaged depth will vary with color, as a side effect of the attenuation values: blue and green light will likely have less than a millimeter of penetration, while red light may have up to 1- to 5-mm penetrance. The penetrance is determined by the absorption and scattering coefficients in each wavelength band, and the escape is exponentially attenuated with depth, so the average emission depths quoted here are simply average values for escaping photons, distributed over an exponentially weighted depth of escape from the tissue (an illustrative calculation is sketched below). Major variations in depth will only be a large issue in tissues that have a highly layered variation with depth, and might cause problems in areas of scar, tattoo, or burn that are strongly different from the surrounding normal tissue. Perhaps one of the most central questions from this work is: how much attenuation occurs from skin color when acquiring Cherenkov images? The attenuation depends upon the concentration of melanin in the skin, and examining the data in and the images in and provides the answer. The skin colors imaged in the phantoms correspond to the expected range of human skin colors, with the darkest having an extremely high melanin level; the data show a 90% to 100% reduction in Cherenkov emission in all three color channels, with blue and green being the least emitted. It may not be possible to perform Cherenkov color imaging in people with the darkest skin tone because of the extreme emission attenuation, visually shown in .
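To make the penetration-depth reasoning above concrete, diffusion theory gives an effective penetration depth of δ_eff = 1/√(3 μ_a (μ_a + μ_s′)). The Python sketch below evaluates this per color band using assumed, round-number soft-tissue coefficients chosen for illustration; they are not the measured phantom properties from this study.

```python
import math

def effective_penetration_depth_mm(mu_a, mu_s_prime):
    """Diffusion-theory effective penetration depth (mm):
    delta_eff = 1 / sqrt(3 * mu_a * (mu_a + mu_s')),
    with both coefficients given in mm^-1."""
    return 1.0 / math.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))

# Assumed illustrative (mu_a, mu_s') pairs in mm^-1 per RGB band:
bands = {"blue":  (0.30, 2.0),
         "green": (0.20, 1.8),
         "red":   (0.03, 1.2)}

for name, (mu_a, mu_s_prime) in bands.items():
    depth = effective_penetration_depth_mm(mu_a, mu_s_prime)
    print(f"{name:5s}: delta_eff ~ {depth:.2f} mm")
# Blue and green escape from well under a millimeter, while red escapes
# from a few millimeters, consistent with the ranges quoted above.
```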
It should be noted that, despite the extreme attenuation at the darkest skin tone, imaging at the next lower melanin level was possible, with attenuation at the 75% to 80% level. An increase in camera or image gain might be employed to overcome this. A major advantage of the setup in this work is the capacity to quickly and efficiently gather images showing contrast between changing biological factor concentrations (blood and melanin here) with a portable system. Future studies might examine and confirm whether the separate effects of melanin and blood are indeed independent and separable when combined in the same phantoms. Based upon the monotonic intensity response to melanin concentration (see ) and the independent information seen in the three color channels between melanin and blood volume, it seems possible to provide a linear correction factor for the attenuation due to melanin and blood volume when imaging in vivo, by developing a 2 × 3 matrix in which the three normalized RGB intensities are fitted to values of melanin and blood volume, and an inversion solved to recover a corrected Cherenkov intensity (a minimal sketch of this inversion follows below). Scattering effects were not a focus here, largely because scattering does not vary between tissue types and across individuals as much as blood or melanin does, as shown in previous studies. Hb content can vary by a factor of two between tissue types, and melanin can vary by a factor of 10 across individuals, so these variations were thought to be most dominant. While there are large estimated scattering changes in the melanin phantoms here , these are likely erroneous, owing to the SFDI measurement being very surface-weighted. The light fluence escaping from tissue is dominated by the bulk tissue absorption and scattering coefficients, and the thin layer of melanin will act as a thin attenuation filter for the emitted light rather than as a variation in the bulk tissue scattering coefficient. However, future studies might examine the subtlety of how much normal variations in scattering affect the Cherenkov color intensity measurement, and how layers of varying scatter might affect the signal independently of layers of absorption. Future work would include automation of image processing as well as further investigation of correction factors for variations in blood and melanin concentration in patients, as well as correcting for lighting conditions and camera setup to ensure the highest possible signal-to-noise ratio in fully color-resolved Cherenkov image output. The work could also be expanded to study variances in RGB color Cherenkov imaging, comparing entrance and exit beams as well as MV and MeV energies. This would be interesting since the apparent color may change because the buildup curve is different for these two types of radiation, and hence the spectral attenuations would likely differ. Patient studies could be performed at a large-scale clinical level, with a focus on how the differential Cherenkov signal response contributes to Cherenkov dose readouts and can guide better, more accurate patient surface dosimetry. This would require the manufacturing of an equal or better camera setup if it is to be done in multiple locations, and would involve a careful step-by-step acquisition procedure with consistency in room and camera setup.
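The following minimal numpy sketch illustrates the linear correction idea described above. All numbers are hypothetical placeholders (the calibration intensities, melanin and blood values, and the attenuation-sensitivity constants), not measured data from this study; a real implementation would fit the matrix from phantom calibration measurements.

```python
import numpy as np

# Hypothetical calibration set: normalized RGB Cherenkov intensities from
# phantoms with known melanin (mg/ml) and blood volume fraction.
rgb = np.array([[1.00, 1.00, 1.00],
                [0.85, 0.70, 0.65],
                [0.60, 0.45, 0.40],
                [0.90, 0.60, 0.55]])
melanin = np.array([0.000, 0.002, 0.005, 0.001])   # mg/ml
blood   = np.array([0.01,  0.01,  0.01,  0.03])    # volume fraction

# Fit a linear map from RGB intensities to (melanin, blood) by least
# squares; an intercept column makes the fit affine.
A = np.hstack([rgb, np.ones((rgb.shape[0], 1))])
coeffs, *_ = np.linalg.lstsq(A, np.column_stack([melanin, blood]),
                             rcond=None)            # shape (4, 2)

# Apply to a new measurement: estimate melanin and blood, then undo an
# assumed exponential attenuation to recover a corrected intensity.
new_rgb = np.array([0.75, 0.55, 0.50])
mel_est, blood_est = np.append(new_rgb, 1.0) @ coeffs
k_mel, k_blood = 250.0, 8.0                         # assumed sensitivities
corrected = new_rgb.mean() * np.exp(k_mel * mel_est + k_blood * blood_est)
print(f"melanin ~ {mel_est:.4f} mg/ml, blood ~ {blood_est:.3f}, "
      f"corrected intensity ~ {corrected:.2f}")
```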
Such future work should be done with a greater diversity of patients, as opposed to previous studies conducted mainly in participants with lower melanin pigmentation, and should include varying levels of blood concentration or oxygenation associated with any present afflictions or ailments that perturb the basal state. Further work could examine the context of other biological factors, such as lipids and localized blood in vasculature. A whole-skin model could potentially be used to further this goal, with tunable sections to monitor the effects of changing variables, allowing for a major expansion of this work beyond blood and melanin alone. It is now becoming more widely known that skin color can alter emitted light signals, and so it is possible that quantitative spectral measurements might be better utilized to compensate for these types of attenuation effects and provide better information about the underlying light signals emitted from deeper in the tissue.
Conclusions
This study quantitatively and visually showed how RGB color Cherenkov imaging of biological tissue can provide information on intensity variations arising from pigmented skin or Hb level variations in the material, with this information being independent of the total dose. The differential signal response seen in blood versus melanin shows that it would be possible to differentiate the attenuation effects of the two spectrally. There is a possibility that future studies could use this information to correct the Cherenkov intensity for the attenuation of one of these biological factors. An important observation, however, is that in the very darkest color phantoms there may be insufficient blue Cherenkov light emitted to gain a reliable signal without large improvements to the light capture approach. Still, red wavelengths were sufficient in all skin color phantoms, albeit with a nearly 90% reduction in the darkest skin tones. Further focus on spectral distortion corrections for Cherenkov intensity changes might be used in quantitative patient dosimetric imaging.
Lung Cancer Screening Knowledge in Four Internal Medicine Programs
The 2013 USPSTF recommendation rationale was the source material used to assess the knowledge base of IM residents. Eligible participants were identified through their respective programs' residency leadership and had to be active IM or medicine-pediatrics residents as of March 2019. These residents were training in programs located in Indiana, Michigan, Nebraska, and Illinois. Data collection started in June 2019 and stopped in January 2020. The survey sought primarily to evaluate general knowledge, a composite variable calculated as the total number of correct responses divided by the total number of questions. Additionally, it specifically measured: a) age and smoking history group eligibility, b) cancer-specific and overall mortality benefit, c) populations that benefit the most from screening, d) mortality benefit of lung cancer screening with LDCT compared to mammography and colonoscopy, and e) self-perceived LCS knowledge. Prior to taking the survey, residents were asked not to review the literature on LCS. The survey was distributed using REDCap (Research Electronic Data Capture) via an email containing a public hyperlink leading to an online form. It was sent weekly to all residents and distributed by the authors and their respective programs' coordinators. REDCap is an online software toolset for electronic collection and management of research data. Data were hosted at Indiana University. The study received ethics exemption from the office of research compliance at Indiana University (protocol #1904577492A001) because it involved research that only included interactions involving educational tests, survey procedures, interview procedures, or observation of public behavior. Data were analyzed using STATA 14. Descriptive statistics were used to stratify residents by post-graduate year (PGY). Statistical significance was set at P < .05 and was assessed using Student's t test and the chi-square test, as appropriate.
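As a concrete illustration of the scoring and group comparison described above, the Python sketch below computes the composite knowledge score and a chi-square test across training years. The data frame, field names, and values are hypothetical stand-ins, not the study's actual REDCap export.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical extract: one row per resident, seven knowledge items scored
# 1 (correct) / 0 (incorrect). Field names are invented for illustration.
df = pd.DataFrame({
    "pgy": [1, 1, 2, 2, 3, 3],
    "q1": [1, 1, 0, 1, 0, 0], "q2": [1, 0, 1, 0, 0, 1],
    "q3": [0, 1, 0, 0, 1, 0], "q4": [1, 1, 1, 0, 0, 0],
    "q5": [0, 0, 0, 1, 0, 0], "q6": [1, 0, 0, 0, 1, 0],
    "q7": [0, 1, 0, 0, 0, 1],
})
items = [f"q{i}" for i in range(1, 8)]

# Composite general-knowledge score: correct responses / total questions.
df["score"] = df[items].sum(axis=1) / len(items)
print(df.groupby("pgy")["score"].mean())

# Chi-square test on a PGY x (correct, incorrect) contingency table.
correct = df.groupby("pgy")[items].sum().sum(axis=1)
total = df.groupby("pgy").size() * len(items)
table = pd.DataFrame({"correct": correct, "incorrect": total - correct})
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```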
Forty-six percent (166/360) of residents responded to the survey. The distribution was 42%, 30%, and 28% among PGY-1, PGY-2, and PGY-3 residents, respectively. The distribution per program was 37%, 15%, 28%, and 20%, respectively. The mean general knowledge score among all respondents was 2.9/7 (43.1%). Programs' general knowledge scores ranged between 30% and 55%, with no statistically significant difference among them (P = .56). General knowledge was significantly better among PGY-1 residents (42%), who outperformed PGY-2 and PGY-3 residents (30% and 28%, respectively; P = .022). Approximately one third of residents across all training years and programs correctly identified the target population for LCS. More than 90% of all respondents agreed that LCS improves cancer-specific mortality. Regarding all-cause mortality, 64% of PGY-1 residents thought LDCT improved it, whereas only 55% of PGY-2 and 38% of PGY-3 residents concurred . Eighty-three percent of Program 2 residents correctly answered that LDCT results in an all-cause mortality benefit, although only half of the residents in the other programs answered this correctly. When comparing the reduction in cancer-specific mortality between LDCT, colonoscopy, and mammography, there were statistically significant differences between the programs . Two thirds of residents perceived their knowledge to be equal to or less than 50%. There were no differences in perceived knowledge between PGYs or programs .
According to this study, knowledge of at-risk populations and of the impact of LDCT on mortality was low among IM residents at 4 large training programs in the Midwest US. This result is consistent with the finding that, as of 2017, less than 5% of the population at high risk for lung cancer was being screened with LDCT in the United States. Improvement in screening rates for high-risk populations requires an improved knowledge base among future primary care physicians, who are most likely to recommend screening modalities for their patients. CMS has established the age range to be considered for LCS. This age range varies slightly from the landmark NLST trial (55–77 instead of 55–80). The lung cancer-specific mortality benefits reported by Pastorino et al and Becker et al were in the 20–39% range. Most of our respondents selected a lower lung cancer-specific mortality benefit, which may inform their decision on whether to recommend the intervention. In the NELSON and MILD trials, women benefited significantly more than men. Fewer than 10% of our respondents were aware of this finding. When we consider the trend in smoking behavior among women compared to men, women may represent a population at overall higher risk of developing lung cancer in the future. LDCT carries risks, especially in lower-risk populations. A Veterans Administration study showed a high risk of false positives and an increased rate of false positives when screening low-risk populations. It was suggested that this may decrease the risk-benefit ratio for LCS. A different study found that patients who underwent LCS were more likely to continue smoking, possibly because of a false sense of security given by negative screening exams. Notwithstanding, LDCT remains an internationally recommended method of screening. The national smoking rate in the United States is 16.7%. In the US Midwest, the smoking rate is 18.2%, surpassed only by the US South (18.8%). Interventions for early detection of lung cancer are essential to reduce mortality in these areas. The USPSTF evidence review suggested that LDCT and mammography in women aged 50–59 may have a comparable number needed to screen (NNS) to prevent one death. Based on this metric, LDCT outperformed mammography and underperformed colonoscopy. Our respondents as a group perceived LDCT to require more patients screened to prevent one death compared to the other two interventions. We believe that this assessment is consistent with increased skepticism toward new interventions. This is the first study to evaluate multi-institutional knowledge of lung cancer screening among internal medicine residents. Similar studies in practicing primary care providers or residents echo these findings. The trend for PGY-1 residents to outperform PGY-2 and PGY-3 residents was consistent among all programs, confirming a previously observed trend. This may be partially explained by increased motivation, or by recent medical school curricula or early residency training covering LCS recommendations. There was also a higher, though not statistically significant, proportion of PGY-1 residents in the sample analyzed. There are several limitations to this study. Our response group may be more motivated, increasing their willingness to respond to and engage with the survey. This may skew the results toward better overall knowledge—a concerning hypothesis. All residents must be exposed to primary care settings during their training, as required by the American Board of Internal Medicine.
However, in university-based programs, most residents may decide to pursue subspecialty training. Additionally, the lack of a time limit for survey responses may have allowed for literature review, with no feasible way to control for this. Furthermore, knowledge may not be the only factor preventing LCS recommendations. Many factors derive from patient, provider, system, and insurance characteristics, which may be suboptimal for promoting preventive care. Trainees providing primary care have a fundamental role in preventative health. Lung cancer screening knowledge among all respondents was unacceptably low. In their knowledge self-assessment, most were aware of their deficiencies. Early-year residents performed better than their seniors. Uninformed skepticism and knowledge gaps continue to be significant barriers to recommending lung cancer screening.
Integrating Palliative Care into Oncology Care Worldwide: The Right Care in the Right Place at the Right Time
Primary palliative care
Primary palliative care involves integration of the fundamental aspects of palliative care (basic symptom management and advance care planning) into the primary care delivered by non-specialist clinicians , and may be provided in inpatient, outpatient, and home settings by physicians, nurses, and other interdisciplinary health care professionals. Given the shortage of specialist palliative care clinicians internationally, along with the increasing incidence of cancer in an aging population and the prolonged illness trajectory now typical of most cancers , upscaling primary palliative care is crucial [ , , ••]. Compared with specialist palliative care, strong evidence in support of primary palliative care is lacking. A recent meta-analysis comparing primary and specialist palliative care interventions identified a high risk of bias in all included primary palliative care studies [ ••]. While primary palliative care may improve quality of life, evidence in support of its impact on symptom burden and survival is limited . Barriers to the delivery of high-quality primary palliative care include poor communication between oncology teams and primary care providers, leading to a lack of understanding around illness trajectories, especially at the end of life; time constraints within primary care; lack of reimbursement for the delivery of primary palliative care, which often involves home visits and out-of-hours support of patients; and lack of training in primary palliative care competencies . In the outpatient oncology setting, many patients lose touch with their primary care providers after entering the cancer system. Travel time to the office, a positive perception of care, and a 24-h support service have been associated with outpatients with cancer seeing their family physician for palliative care [ •]. In order to improve primary palliative care delivery, core elements of palliative care may need to be extrapolated from specialist models and integrated into the educational curricula of all primary healthcare providers. Models of care that support mentorship or supervision of primary palliative care providers by specialist clinicians should be considered. One existing model of a primary palliative care educational initiative is Pallium, in Canada . This not-for-profit organization is committed to expanding primary palliative care capacity nationally through its accredited "Learning Essential Approaches to Palliative Care" program, and has trained over 28,000 professionals through 1600 courses from 2015 to 2019.
Secondary palliative care
Secondary palliative care refers to care provided by oncology specialists to inpatients and outpatients in hospital settings. As with primary palliative care, secondary palliative care should be interdisciplinary and include care delivered by medical, radiation, and surgical oncologists and hematologists; oncology nurses; radiation therapists; and allied health professionals such as social workers, physical and occupational therapists, and spiritual care providers . Elements that constitute high-quality secondary palliative care have been defined by a partnership between the American Society of Clinical Oncology (ASCO) and the American Association for Hospice and Palliative Medicine .
These include end-of-life care; communication and shared decision-making; advance care planning; referral to palliative care or hospice when appropriate; symptom assessment and management; caregiver supports; care coordination and continuity; psychosocial assessment and management; spiritual care; and cultural considerations . ASCO has published a statement endorsing individualized care for patients with advanced cancer that includes specific attention to symptom management and quality of life issues . In order to ensure providers are well equipped to incorporate these elements into their clinical care, mandatory rotations with specialist palliative care teams should form part of oncology training programs across all disciplines; it has been shown that oncologists who have completed these rotations are more likely to appropriately refer patients to palliative care services . Despite this, a recent survey of hematology-oncology fellowship programs in the United States revealed that only 68% of respondents offered such rotations, with lectures and seminars making up the majority of palliative care education in most programs . Beyond education, additional barriers to secondary palliative care delivery include time constraints within busy oncology practices, as well as remuneration models that often favour patient volumes over time spent with individual patients. As an incentive, both the European Society for Medical Oncology (ESMO) and the Multinational Association of Supportive Care in Cancer (MASCC) have highlighted designated centres of integrated oncology and palliative or supportive care, respectively, based on criteria related to educational, clinical and research domains . Tertiary palliative care Tertiary palliative care refers to the care provided by clinicians with specialist postgraduate training in palliative care, including physicians, nurses, social workers, spiritual care providers, occupational and physical therapists, and pharmacists, among others . Although tertiary palliative care should ideally be widely available, including in inpatient, outpatient and community settings in rural and urban areas, it is disproportionally represented in tertiary comprehensive cancer settings. In addition to providing palliative care to patients with complex needs, these providers should also be available to provide mentorship and clinical support to primary and secondary palliative care providers and to help support capacity-building for these clinicians. At this time, the strongest evidence around the benefits of palliative care is derived from RCTs and meta-analyses of tertiary palliative care interventions: structured interdisciplinary outpatient palliative care consultations have been shown to improve patient symptom burden, quality of life, mood, and survival, and caregiver satisfaction and quality of life . Early referral to specialist palliative care is now endorsed by ASCO, ESMO, and other international cancer organizations, but worldwide shortages of specialist trained clinicians limit the ability to meet the needs of all patients with advanced cancer . Funding to support expansion of tertiary palliative care within cancer centers appears to be limited internationally, even within tertiary centers .
Outpatient clinics
The outpatient setting is ideally suited for early tertiary palliative care delivery . In this setting, palliative care is typically offered concurrently with disease-modifying cancer therapies, with patients attending clinics longitudinally . While these clinics are for patients with a variety of cancer diagnoses, referrals tend to come more often from medical oncologists who specialize in solid tumors than from malignant hematologists. Several different clinic models have been described, based on available palliative care resources and oncology structures . The two main models are embedded clinics, where palliative care is provided within an existing oncology clinic, and stand-alone clinics, where the palliative care clinic has its own designated clinic space . Both models were traditionally provided in person, although virtual care has become increasingly common during the COVID-19 pandemic, particularly for patients seen in follow up . This new method of communication could potentially overcome some of the factors that limit in-person attendance at stand-alone clinics, particularly distance to the hospital . However, evidence regarding virtual care delivery models is limited , and further trials are needed. Embedded models are ideal for smaller palliative care teams working in centres where oncology clinics are not cancer-site specific. The ability to see the oncologist and palliative care provider in the same clinic on the same day, and to pool resources between teams, may be advantageous, but the ability to expand or grow embedded palliative care clinics is often limited . Stand-alone palliative care clinics independent of the oncology clinic are more commonly offered at comprehensive cancer centres or centres with sufficient clinician support . Referrals from oncologists are triaged based on urgency, with prioritization of highly symptomatic patients for same-day visits, while those with less urgent concerns are booked into a visit that coincides with a future clinic visit to their oncologist . While stand-alone clinics require upfront funding and independent administrative and other resources, they offer greater potential to customize the clinic space, to grow and expand based on demand, and to incorporate interdisciplinary team members in a more comprehensive way than an embedded model typically allows. Because no trials have compared an embedded versus a stand-alone model, the decision to adopt one over the other is pragmatic, based on factors such as cancer center size, palliative care team composition, clinic space availability, and financial considerations .
Inpatient consultation services
Compared with outpatient studies, where RCTs have focused on patient-reported outcomes, much of the research involving care provided by inpatient specialist palliative care consultation services has been retrospective, focusing on administrative outcomes and economic benefits. In a study of five US hospitals with comprehensive palliative care teams, consultations within 2 and 6 days of admission were shown to reduce hospitalization costs by 24% and 14%, respectively . Similar findings were reported in a meta-analysis of economic evaluations of interdisciplinary palliative care consultations for hospitalized patients with advanced illness . Most of these cost savings appear to come from reduced length of stay and reduced intensity of treatment, and tend to be greater for patients with more comorbidities (four or more, compared with two or fewer) .
Clinical benefits of inpatient specialist palliative care have also been demonstrated . A systematic review of the impact of palliative care consultations for inpatients showed improvements in pain, quality of life, satisfaction with care, and advance care planning discussions . In addition, patients seen by inpatient palliative care teams were more likely to receive home care supports after discharge from hospital and less likely to be readmitted to acute care. Although the emergency department is not an ideal location for a first palliative care consultation, it has nevertheless been demonstrated that emergency department-initiated palliative care consultation in advanced cancer improves quality of life .
Palliative care at home
For patients whose performance status has declined, those with limited mobility, or older patients who have difficulty going to the hospital or to clinics, in-home palliative care is most practical . A recently published Cochrane review of home-based palliative care demonstrated an increased likelihood of dying at home and an association with improved satisfaction with care; effects on symptom control were unclear from the limited and heterogeneous data . Key elements of home-based palliative care, as identified by patients and caregivers, include the ability to access care 24 h a day, seven days per week, as well as expertise in communication and symptom management . Although home palliative care is ideally provided by primary care providers, logistical issues related to time and traveling to provide home visits, particularly outside regular office hours, represent prominent barriers . In a survey among family doctors and general practitioners, younger primary care physicians were more amenable to providing home palliative care; this was particularly the case if they were provided sufficient remuneration and resources, and if working in a team-based model with access to advice from specialist palliative care colleagues . In recent years, efforts have been made to integrate home-based palliative care earlier into the cancer trajectory . The advantages of early palliative care delivery in the home setting include the ability to focus on information-sharing; psychosocial elements of care; structured and systematic follow up; and future goal setting, whereas late involvement tends to be characterized by crisis-initiated visits and a need to focus on immediate problem-solving. RCTs investigating the feasibility and acceptability of early palliative care offered at home are ongoing [ •].
Palliative care units and residential hospices
Inpatient hospices and palliative care units provide a specialist setting to support patients with advanced cancer and their families . Some palliative care units within comprehensive cancer centers provide acute symptom management, such as access to bloodwork, diagnostic imaging, intravenous antibiotics, and blood transfusions, and are suitable for patients who may require a brief admission to optimize their symptoms with a goal of returning home. Others focus more on providing symptomatic relief for patients in the last days, weeks, or short months of life, and for whom remaining at home is not feasible or not aligned with their goals of care. Many hospices and palliative care units have admission criteria that include accepting a "do not resuscitate" order, and have limited abilities to support patients who continue to receive active anticancer therapies .
Interdisciplinary care is a key component of the support provided in inpatient hospices or palliative care units, delivered by specialized palliative care nurses, physicians, social workers, spiritual care providers, physiotherapists, occupational therapists, music therapists, art therapists, pharmacists, and others. Inpatient palliative care units within cancer centres may facilitate increased cancer-directed activities and reduced deaths on inpatient oncology units by supporting the timely transfer of patients to a specialized palliative care setting . Palliative care units also have financial benefits, reducing the overall direct costs associated with an acute hospital admission .
Most of the evidence demonstrating the benefits of outpatient palliative care interventions comes from trials of patients with solid tumors attending comprehensive cancer centres; less is known about the impact of early palliative care on patients with hematological malignancies (Table ). This section will summarize the evidence in support of early palliative care for patients with both solid tumors and hematological malignancies, highlighting the differences between the two groups as well as areas where further research may be needed.
Timing of palliative care for patients with solid tumors
Although the right time for palliative care intervention will depend on the setting of care and the resources available, the most compelling evidence has been from RCTs of "early" palliative care. In these trials, "early" was defined as within 8–12 weeks of diagnosis of advanced cancer and/or a clinical prognosis of between 6 and 24 months . All of these trials utilized a specialized palliative care model and, in most, the intervention was interdisciplinary, involving at minimum a palliative care physician and an advanced practice nurse. The mode of delivery was generally in the outpatient setting, although some trials also enrolled inpatients [ ••, , ••], and one utilized telehealth . Studies have often been limited to patients with lung and/or gastrointestinal cancers, with only three studies with positive results also including other solid tumor malignancies . Overall, these studies demonstrated that early involvement of specialized palliative care resulted in improved quality of life, satisfaction with care, and mood, albeit with small effect sizes. These trial results have been corroborated by meta-analyses ; in addition, large-scale retrospective studies in real-world settings have shown that early palliative care is associated with a lower risk of dying in hospital, an increased likelihood of receiving home-based end-of-life care, and reduced healthcare system costs [ ••, ]. The evidence presented above appears to have resulted in earlier referral to palliative care services in cancer centers [ •, ••], but barriers to early referral remain. These include a lack of trained specialists to provide palliative care and persistent stigma associating palliative care with end-of-life care . Ultimately, systematic screening of all patients with advanced cancer, with targeted early referral for patients with particular need, may be a more scalable model than uniform early palliative care for all patients with advanced cancer. A secondary analysis of an RCT showed that the benefit of early palliative care was greatest for patients with higher symptom burden [ •], and a recent phase II trial of symptom screening with targeted early specialized palliative care intervention demonstrated the feasibility of this model . However, this model assumes that oncologists and primary care providers will be able to provide basic palliative care, which necessitates better education than is currently provided . As well, a public health strategy is needed to educate and engage policymakers, stakeholders, and the public about the relevance and importance of early palliative care .
Timing of palliative care for patients with hematologic malignancies
Hematologic malignancies (acute and chronic leukemias, lymphomas, and multiple myeloma) are often considered more heterogeneous and unpredictable in terms of disease course and prognosis than solid tumors .
Patients with hematological malignancies may often experience a high physical symptom burden, as well as increased levels of psychosocial distress . Despite this, referrals to specialized palliative care tend to occur later for patients with hematological malignancies than for those with solid tumors. As a result, there is a relative paucity of evidence in support of early palliative care for patients with hematological malignancies. Unlike trials of early palliative care for patients with solid tumors, most RCTs enrolling patients with hematological malignancies have been conducted in the inpatient setting, and the timing of the intervention in these studies has been based on the timing of admission rather than on prognosis. Two trials of palliative care in outpatient or emergency department settings included patients with hematological malignancies in addition to solid tumors, but the percentage of patients with hematologic malignancies in both was only approximately 5%, and thus the results cannot be extrapolated to hematologic malignancy populations. Other RCTs exclusive to hematologic malignancies have all recruited inpatients shortly after admission for stem cell transplantation or for induction or re-induction chemotherapy , although a nonrandomized pilot study included outpatients awaiting admission for allogeneic or autologous stem cell transplantation . These trials have all demonstrated the feasibility of early palliative care and its effectiveness in improving quality of life, mood, and post-traumatic stress for patients with aggressive hematological malignancies such as acute leukemia who are awaiting or have received intensive treatment regimens for their disease . Thus, the available evidence supports immediate referral to palliative care for patients with aggressive hematologic malignancies such as acute leukemia who are admitted for intensive treatment, or those awaiting hematopoietic stem cell transplantation. Further trials are needed in outpatient populations and for patients with bone marrow failure and indolent hematologic malignancies that are not immediately life-threatening but are nonetheless associated with a high burden of symptoms.
Specialist palliative care services tend to be disproportionately located in large urban academic centers and in high-income countries. With more than half of the world's population residing in rural areas (which make up as much as 80% of most countries' landmasses), there is an urgent need to improve access to palliative care in these areas . In low- and middle-income countries, cancer incidence is rising at an alarming rate: 50% of cancers diagnosed annually occur in low- and middle-income countries, where they are associated with high rates of morbidity and mortality. Policies and strategies tailored to resource-limited settings must be developed to maximize access to palliative care , recognizing that the nature, place, and time of palliative care will often differ from those in high-income and urban settings.

Trials of palliative care interventions in rural settings

Patients with advanced cancer who live in rural settings have been shown to be less likely to access palliative care services than those residing in urban settings . In addition, living further from a palliative care program is associated with a higher likelihood of dying in hospital and higher costs at the end of life . Identified barriers to specialist palliative care provision in rural settings include a lack of cohesive services and communication between clinical settings, demand for services that exceeds the supply of specialist teams where available, and educational gaps for providers and patients alike . Primary care physicians are important providers of palliative care in rural communities; this includes providing palliative care at home as well as through cohorted inpatient beds designated for palliative care on hospital medical wards . Only a few RCTs of early palliative care have actively sought to include patients from rural settings. The ENABLE II and III trials, which utilized predominantly telehealth interventions delivered by a specialist nurse, recruited participants from three rural-serving cancer centers in the USA, and approximately 60% of participants came from rural communities . Apoyo con cariño was a tailored randomized trial conducted in urban and rural communities in the state of Colorado, USA, aiming to enhance access to palliative care services among the Latino population . Culturally tailored resources and lay navigator home visits were offered as part of the intervention, which demonstrated improved rates of advance care planning documentation, but there were no significant differences in pain, hospice utilization, or aggressiveness of care at the end of life. In addition, a lay navigator program to improve access to palliative care in 12 rural-serving cancer services in the USA demonstrated less aggressive end-of-life care . Several elements of successful palliative care provision for patients with advanced cancer in rural communities have been identified . These include developing local partnerships with healthcare, cultural, spiritual, and religious groups to appropriately support the needs of patients within each community; offering telehealth visits to minimize the direct and indirect costs associated with travelling to comprehensive cancer centers; utilizing models of care that foster local expertise with support from academic centers (e.g., virtual case conferences, mentorship programs); and initiatives that incentivize oncologists and palliative care specialists to work in rural areas.
Trials of palliative care interventions in low- and middle-income countries

Palliative care is considered a human right based on two principles: the right to health, and the right to be free from cruel, inhuman, or degrading treatment . Based on these principles, several international cancer organizations and societies have advocated for the integration of palliative care services into routine oncology care . In 2018, ASCO published a resource-stratified guideline to provide guidance on the implementation of palliative care in resource-limited settings . The guideline listed seven recommendations, each subclassified by setting (basic, limited, or enhanced), intended to be used alongside local documents or policies. Globally, only half of countries currently include palliative care within their national noncommunicable disease (NCD) policies, and only 68% have dedicated funding for palliative care, with a gap of 43 percentage points between high-income (91%) and low- and middle-income countries (48%) . The level of palliative care development within countries is strongly associated with each country's World Bank Group ranking, its Human Development Index, and the presence or absence of universal health coverage; countries are classified into six groups according to the level of palliative care integration (Table ). In recent years, trials proposing models to enhance access to early palliative care in resource-limited settings have been published. Here we highlight studies from Latin America, Africa, and India as examples of successfully completed RCTs and public health initiatives from low- and middle-income countries. An RCT conducted in a tertiary hospital in Mexico found that a structured navigation program led to a significant increase in access to specialized palliative care services (74% of patients in the intervention arm, compared with 24% in the usual care group) . Additionally, 48% of patients in the intervention group completed advance directives compared with none in the usual care group, and patients in the intervention group experienced better pain relief. In Ethiopia, an RCT demonstrated that early home-based palliative care delivered by palliative care-trained nurses for patients with newly diagnosed cancer significantly reduced health care costs compared with standard oncology care . In India, feasibility criteria were not met for a trial of early palliative care in patients with advanced lung cancer at a tertiary care center. Only 48% received follow-up at the palliative care clinic, with the remainder not followed up because they were fatigued, busy receiving chemotherapy, or had returned to their hometown; however, quality of life and symptoms tended to improve, especially pain and anxiety . In another RCT of patients with head and neck cancer in India, there was no difference in quality of life, symptom burden, or survival at three months between patients randomized to early specialized palliative care and those receiving systemic therapy alone , although the standard care arm received some elements of palliative care and 18% received a palliative care consultation. Elsewhere in India, the feasibility of home-based palliative care delivered by community health workers was successfully demonstrated, although additional training may be needed to improve pain control and provide psychosocial support .
Finally, Panama represents a good example of a low- and middle-income country in which a sustainable national palliative care program has been successfully developed . Through universal health coverage that includes palliative care, and the integration of health networks across all clinical settings, important milestones have been achieved; these include the accreditation of a specialist palliative medicine program and an amendment to the “Controlled Substances Act” to facilitate access to essential palliative care medicines.
Although the scope of palliative care has expanded over the past decade to support early integration alongside cancer care, most of the evidence in support of this model comes from high-income, resource-rich settings and from patients with solid tumors. The model may not be easily applied in other settings, where challenges related to the patient population, as well as workforce shortages and the lack of public policy in support of palliative care, must be acknowledged. Instead, the “best model” will inevitably vary between settings and must be one that allows maximum impact for the patients with the greatest needs, starting at the end of life and expanding towards full integration only when the basic needs of dying patients are adequately met. Public health strategies aimed at developing local, sustainable policies integrated into national healthcare plans, together with comprehensive training programs for healthcare providers across all clinical settings, are needed to bridge the current gaps in care across national and international settings, and to ensure that patients receive the right care, in the right place, at the right time.
Pharmacogenomic profiling reveals molecular features of chemotherapy resistance in IDH wild-type primary glioblastoma

Isocitrate dehydrogenase wild-type (IDH-wt) glioblastoma (GBM) constitutes the most common and aggressive GBM subtype, with high inter- and intra-tumoral heterogeneity . Temozolomide (TMZ), in addition to radiotherapy and surgical resection, can improve both the progression-free survival (PFS) and overall survival (OS) of newly diagnosed GBM patients . As a chemotherapeutic agent potentially suitable for long-term use owing to its relatively low toxicity, TMZ causes DNA damage by methylating the O6-position of guanine in DNA, initiating cell cycle arrest that leads to cell death . Despite the survival benefit of TMZ, the recurrence rate after standard TMZ therapy exceeds 90% . Identifying TMZ non-responders in advance is therefore especially important in neuro-oncology. To date, the promoter methylation status of O6-methylguanine-DNA-methyltransferase (MGMT), a protein that repairs the damage caused by TMZ, is the most widely used predictor of TMZ response in GBM . However, MGMT promoter methylation alone is not a sufficient predictor. Moreover, relapsed GBM is believed to be driven by invasive GBM stem-like cells (GSCs) . Under conventional treatment, invading GSCs are likely exposed to lower TMZ concentrations than the tumor cells within the contrast-enhancing tumor area highlighted by magnetic resonance imaging . The existence of residual, heterogeneous populations of GSCs therefore explains the temporal variability of the genomic profile during GBM progression . While previous studies utilized a small subset of conventional cancer cell lines to identify TMZ-resistant features , we propose to investigate patient-derived GSCs, which might indicate treatment outcomes and reveal clinically relevant mechanisms of drug resistance. To identify predictive features and establish an integrative method for distinguishing TMZ responders from non-responders before TMZ chemotherapy, we cultured a panel of GSCs derived from newly diagnosed, treatment-naïve IDH-wt GBM patients and analyzed genomic traits and drug screening data of their early passages. Our recent work showed that these patient-derived GSCs better represent the traits of the parental tumors than conventional cell lines . In this study, we aimed to collect molecular profiles of the GSCs and develop a classification model to predict TMZ sensitivity in order to improve patient management.

Patient samples
After written informed consent was obtained, we utilized tumor specimens of patients whose first therapeutic intervention was an open surgical resection at the Samsung Medical Center, in accordance with the Institutional Review Board. Overall, 128 GBM specimens (108 primary, 19 recurrent, 1 unknown) were collected from 92 GBM patients with a median age at diagnosis of 57 years (range, 29-80), including 39 females and 53 males. GBMs were diagnosed based on the World Health Organization (WHO) criteria. The methylation status of the MGMT promoter was assessed by methylation-specific polymerase chain reaction (PCR) after sodium bisulfite DNA modification, and IDH1 mutation was detected by peptide nucleic acid-mediated clamping PCR and immunohistochemistry on the tumor tissues . Follow-up MRI was performed at regular 2-month intervals during treatment, and at 3- or 6-month intervals after treatment, to monitor for disease recurrence.
Among the 128 specimens, 126 samples (z-score cohort, Additional file: Table S1) were subjected to in vitro culture of patient-derived GSCs to study relative TMZ sensitivity (Additional file: Fig. S1). Within these 126 samples, TMZ-treated IDH-wt primary GBMs were selected as the main cohort (n = 69) for downstream analysis. For intratumoral heterogeneity analysis, 18 patients with multi-sector samples (52/128) were included (multi-sector cohort). The longitudinal GBM cohort (n = 40 pairs), used for longitudinal expression analysis, included 4 of the 128 samples (2 pairs) in addition to 64 samples (32 pairs) from Wang et al. and 12 samples (6 pairs) from Zhao et al. . Only pairs that were IDH1-wt in the primary (untreated) tumor and that received TMZ after the first resection were selected. Whole-exome sequencing (WES), targeted sequencing (GliomaSCAN), and/or RNA sequencing (RNA-seq) were performed on the main cohort and the multi-sector cohort when available. Part of the sequencing data was retrieved from our previous publications . Detailed information can be found in Additional file: Table S1.

Isolation and short-term in vitro culture of patient-derived GSCs
Enrolled tumor specimens were enzymatically dissociated into single cells and cultured as described previously . These cells were then grown in Neurobasal-A medium with N2 and B27 supplements (0.5× each; ThermoScientific, Bartlesville, OK, USA), basic fibroblast growth factor, and epidermal growth factor (20 ng/mL each; R&D Systems, Minneapolis, MN, USA). As spheres appeared in the suspension culture, they were dissociated using StemPro® (Life Technologies, Austin, TX, USA) and expanded by reseeding under the same suspension culture conditions. Patient-derived GSCs were negative for mycoplasma contamination, as determined using the Universal Mycoplasma Detection Kit (American Type Culture Collection, Manassas, VA, USA; 30-1012K).

TMZ sensitivity evaluation of GSCs in vitro
Patient-derived GSCs cultured under the defined suspension conditions were seeded in 384-well plates at a density of 500 cells/well, with technical duplicates or triplicates. TMZ was purchased from Selleck Chemicals (Houston, TX, USA) and stored following the manufacturer's instructions. GSCs were treated with TMZ using a fourfold, seven-point serial dilution series ranging from 500 μM to 122 nM, using a Janus Automated Workstation (PerkinElmer, Waltham, MA, USA). After 6 days of incubation at 37°C in a 5% CO2 humidified incubator, cell viability was assessed using an adenosine triphosphate assay system based on firefly luciferase (ATPLite™ 1step; PerkinElmer, Shelton, CT, USA). Cell viability was measured using an EnVision Multilabel Reader (PerkinElmer). Control wells containing only cells and vehicle (dimethyl sulfoxide) were included on each assay plate. The half-maximal growth rate (GR) inhibitory concentration (GR50) and the traditional area under the dose-response curve (AUC) were calculated using an online GR calculator . These GR50 and AUC values were used to compute z-scores across the 126 GSC samples (z-score cohort, Additional file: Table S1) for the determination of TMZ-resistant and TMZ-sensitive samples.
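To make the screening read-out concrete, the GR metric and the z-scoring step described above can be expressed in a few lines of Python. This is a minimal sketch: the authors used the online GR calculator, so the log-linear GR50 interpolation and the cohort z-scoring shown here are illustrative simplifications, not the exact implementation.

```python
import numpy as np

# Seven-point, fourfold serial dilution from 500 uM down to ~122 nM,
# matching the screening design described above.
CONCS_UM = 500.0 / 4.0 ** np.arange(6, -1, -1)  # [0.122, ..., 500] uM

def gr_value(x_c, x_0, x_ctrl):
    """GR inhibition value at one concentration (Hafner et al., 2016):
    GR = 2**(log2(x_c/x_0) / log2(x_ctrl/x_0)) - 1, where x_c is the
    treated signal at day 6, x_0 the signal at treatment start, and
    x_ctrl the vehicle-control signal at day 6. GR = 1 means no effect,
    0 means complete cytostasis, and negative values mean cell death."""
    return 2.0 ** (np.log2(x_c / x_0) / np.log2(x_ctrl / x_0)) - 1.0

def gr50(concs, grs):
    """Concentration at which the GR curve first crosses 0.5, by
    log-linear interpolation over ascending concentrations. Returns
    np.inf when GR never drops below 0.5 -- the 'infinite GR50'
    samples for which the conventional AUC was used instead."""
    concs, grs = np.asarray(concs, float), np.asarray(grs, float)
    below = np.where(grs < 0.5)[0]
    if below.size == 0:
        return np.inf
    i = below[0]
    if i == 0:
        return concs[0]
    frac = (grs[i - 1] - 0.5) / (grs[i - 1] - grs[i])
    logc = np.log10(concs[i - 1]) + frac * (np.log10(concs[i]) - np.log10(concs[i - 1]))
    return 10.0 ** logc

def cohort_z_scores(values):
    """z-scores of per-sample GR50 (or AUC) values across the cohort;
    the cut-off separating TMZ-resistant from TMZ-sensitive samples
    is applied downstream and is not specified in the text."""
    values = np.asarray(values, float)
    finite = np.isfinite(values)
    return (values - values[finite].mean()) / values[finite].std()
```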
DNA sequencing
WES and/or GliomaSCAN were performed on DNA fragments from the tumor and matched blood. For WES, exonic DNA was captured with the Agilent SureSelect kit. GliomaSCAN is a massively parallel targeted sequencing protocol that covers the exons of selected glioma-associated genes. Paired-end sequencing was performed on an Illumina HiSeq 2000 instrument. FASTQ data were mapped to the human reference genome (hg19) using the Burrows-Wheeler Aligner . Duplicates were marked with Picard and alignments were sorted with SAMtools .

Somatic mutation detection
SAVI2 was used to identify somatic mutations from WES and targeted sequencing (GliomaSCAN) . From the SAVI2 report, nonsynonymous somatic mutations with a tumor variant allele frequency (VAF) above 5% and a matched-blood VAF of 0% were selected. Selected GBM driver genes were used in the subsequent analyses. For epidermal growth factor receptor variant III (EGFRvIII), a sample was deemed EGFRvIII-positive if two or more reads in the transcriptomic data skipped exons 2–7.

Copy number alteration by WES and GliomaSCAN
We used the ngCGH python package (version 0.4.4) to estimate copy number alterations (CNAs) in a tumor specimen relative to its matched blood control. Gene-level read counts were calculated in both the tumor and the matched control. The output value from the package, the median-centered log2 ratio of the tumor and normal sample, was used to define copy number status: a gene was annotated as “gain” if the value was above 0.5 and as “amplification” if above 1.58; similarly, values below −0.5 and −1.58 were labeled “loss” and “deletion,” respectively. For EGFR, however, GliomaSCAN's copy number estimate was less accurate, and cut-offs of 0.3 and 1.58 were therefore used to increase compatibility with the WES results. CNA results from WES data had the highest priority, followed by CNAs called from GliomaSCAN and from RNA-seq.

CNA estimation by RNA sequencing
For samples with RNA-seq data but without WES, we estimated CNAs from RNA-seq by adapting the CNAPE method with several modifications. Briefly, we used XGBoost to train our model instead of LASSO (Least Absolute Shrinkage and Selection Operator) regression, and we also took the KEGG (Kyoto Encyclopedia of Genes and Genomes) pathway gene sets into consideration rather than only the STRING (Search Tool for the Retrieval of Interacting Genes/Proteins) protein-protein interactions. We then used 38 samples with matched WES and RNA-seq data from our dataset to calibrate the cut-off values for normal, gain, and amplification (or loss and deletion) by optimizing the F-score. Details can be found in Additional file: Supplementary Methods.
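The variant filter and copy-number thresholds from the preceding subsections reduce to simple rules; the sketch below restates them in Python for clarity (the dictionary keys and function names are illustrative, not SAVI2's or ngCGH's actual interfaces).

```python
def is_reportable_somatic(variant: dict) -> bool:
    """SAVI2 post-filter described above: nonsynonymous somatic calls
    with tumor VAF > 5% and VAF == 0% in the matched blood."""
    return (variant["nonsynonymous"]
            and variant["tumor_vaf"] > 0.05
            and variant["blood_vaf"] == 0.0)

def cna_label(log2_ratio: float, gene: str, assay: str = "WES") -> str:
    """Map a median-centered tumor/normal log2 ratio to a CNA call.
    The +/-1.58 cut-offs correspond to a threefold change (log2 3);
    the relaxed 0.3 gain cut-off applies only to EGFR in GliomaSCAN data."""
    gain_cut = 0.3 if (gene == "EGFR" and assay == "GliomaSCAN") else 0.5
    if log2_ratio > 1.58:
        return "amplification"
    if log2_ratio > gain_cut:
        return "gain"
    if log2_ratio < -1.58:
        return "deletion"
    if log2_ratio < -0.5:
        return "loss"
    return "neutral"
```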
RNA-seq data processing and gene expression marker identification
Sequencing reads were mapped to the human reference genome (hg19) with the STAR (Spliced Transcripts Alignment to a Reference) pipeline . Read counts were then calculated using featureCounts . To identify genes with conserved expression profiles between GSCs and the matched initial tumor tissues, RNA sequencing analyses were carried out on 12 matched GSC-tissue pairs, and the Spearman correlation coefficient was calculated for each gene based on log2-transformed raw read counts. A Gaussian mixture model was then used to separate conserved genes from non-conserved genes (Additional file: Fig. S2). The conserved genes (Spearman correlation coefficient > 0.177) were subsequently investigated by differential gene expression analysis with the DESeq2 R package on RNA-seq data of tumor tissues from 12 TMZ-resistant and 22 TMZ-sensitive samples (evaluated by the in vitro TMZ screening described above). Principal component analysis was performed on these 34 samples to detect potential batch effects (Additional file: Fig. S3a, b). To ensure that the marker genes were reliable, we applied stringent cut-offs (log2 fold change > 2.5, adjusted P < 0.01) for identifying differentially expressed genes, resulting in four TMZ-resistance markers (Additional file: Fig. S3c). To measure gene expression levels, read counts were converted to Reads Per Kilobase per Million mapped reads (RPKM), followed by log2 transformation and quantile normalization.

GBM subtyping
We performed single-sample gene set enrichment analysis (ssGSEA) on the RNA-seq samples using the GBM subtype gene sets defined by Wang et al. . The enrichment scores for each subtype were normalized across samples, and the subtype with the highest normalized enrichment score was selected as the activated subtype for each sample.

TCGA data
Transcriptomic data of the TCGA cohort were downloaded directly from the Broad GDAC Firehose (normalized RNAseqv2 RSEM, https://gdac.broadinstitute.org/ ). Mutation and CNA data were downloaded from cBioPortal. Clinical data were obtained from the original publication by Ceccarelli et al. and from cBioPortal .

Modeling the TMZ efficacy predictor (TMZep)
An XGBoost classifier was trained to separate TMZ responders from non-responders based on genomic and transcriptomic profiles. A total of 25 features, including the methylation status of the MGMT promoter, single-nucleotide variants (SNVs), CNAs, and the expression levels of selected genes, were incorporated to train a machine-learning model on samples in the main cohort (n = 69). To address missing values, we first performed data imputation: for binary features, missing values were replaced by 0.5; for continuous features, missing values were imputed by KNNImputer (K = 5). The imputed data were used to train the XGBoost model (python xgboost v0.90), in which 50 decision trees with a depth of no more than 3 were constructed with a learning rate of 0.74 and a subsampling ratio of 0.35 for each boosting iteration. These hyperparameters were selected by optimizing (1) the AUC score in 5-fold cross-validation, (2) the capability of stratifying patients with different survival outcomes in the training set, and (3) the biological significance of the prioritized features. In the final model, we used 0.6 as the probability cutoff to segregate the two risk groups. Furthermore, we added L2 regularization to the cost function to control overfitting and enhance the generalization of the model to unseen data. Lastly, the area under the receiver operating characteristic curve (AUC) was used to measure the model's performance.

Statistical analysis
The t-test, Wilcoxon rank-sum test, Spearman's rank correlation coefficient test, and Fisher's exact test were used for the different statistical analyses. Survival analyses were performed using the Kaplan-Meier method and the Cox proportional hazards regression method. Patients who were alive at the last known follow-up were considered censored in these analyses. Hazard ratios (HRs) and their 95% confidence intervals (CIs) were calculated. Statistical analyses were conducted using Python (v.3.8) and R (v.3.6.3).
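For the TMZep model described above, the reported hyperparameters map directly onto the xgboost and scikit-learn APIs. The sketch below is illustrative rather than the authors' exact code (which used python xgboost v0.90): the L2 weight (reg_lambda) is a placeholder, since only the presence of L2 regularization is reported, and the feature encoding is assumed.

```python
import pandas as pd
from sklearn.impute import KNNImputer
from xgboost import XGBClassifier

def train_tmzep(X: pd.DataFrame, y, binary_cols):
    """Train a TMZep-style classifier with the reported settings:
    50 trees of depth <= 3, learning rate 0.74, subsample 0.35,
    missing binary features set to 0.5, and KNN (K = 5) imputation
    for missing continuous features. y = 1 marks TMZ-resistant."""
    X = X.copy()
    X[binary_cols] = X[binary_cols].fillna(0.5)
    imputer = KNNImputer(n_neighbors=5)
    X_imputed = imputer.fit_transform(X)
    model = XGBClassifier(
        n_estimators=50,
        max_depth=3,
        learning_rate=0.74,
        subsample=0.35,
        reg_lambda=1.0,  # placeholder: exact L2 weight not reported
    )
    model.fit(X_imputed, y)
    return imputer, model

def predict_resistant(imputer, model, X_new: pd.DataFrame, binary_cols):
    """Call a sample TMZ-resistant when the predicted probability
    exceeds the 0.6 cut-off used in the final model."""
    X_new = X_new.copy()
    X_new[binary_cols] = X_new[binary_cols].fillna(0.5)
    return model.predict_proba(imputer.transform(X_new))[:, 1] >= 0.6
```

Model quality would then be summarized by the area under the ROC curve in 5-fold cross-validation, as in the hyperparameter search described above.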
In vitro screening using patient-derived GSCs reflects personalized TMZ efficacy

To evaluate GBM's response to TMZ, we performed in vitro TMZ cytotoxicity assays in short-term (6-day) cultures of patient-derived GSCs (n = 69, main cohort) obtained from surgically resected IDH-wt primary GBM specimens. Since conventional metrics such as the half-maximal inhibitory concentration (IC50) or the maximum inhibition percentage (Emax) depend heavily on the cell division rate, obscuring accurate sensitivity prediction, we adopted GR inhibition metrics, which are independent of division number and therefore superior to conventional metrics for assessing drug effects in fast-dividing cells . We calculated GR50 values for each sample and, for those with infinite GR50 values, measured conventional AUC values. By calculating z-scores for GR50 and AUC, we divided our samples into TMZ-sensitive and TMZ-resistant groups (Fig. a). As expected, MGMT promoter methylation was related to the z-scores of the GR50 and AUC values (Fig. b; Wilcoxon rank-sum test P = 0.018) . Strikingly, the TMZ-resistant (n = 29) and TMZ-sensitive (n = 40) groups defined from in vitro sensitivity were highly predictive of survival outcomes for patients under a TMZ-based treatment regimen (Fig. c and d; PFS, P = 1.12e−4; OS, P = 3.63e−4; by log-rank test). Notably, the above-defined in vitro sensitivity surpassed the well-known MGMT promoter methylation status in predicting patient prognosis (Additional file: Fig. S4a). Additionally, a multivariate Cox-regression survival analysis considering age, gender, extent of resection, and MGMT promoter methylation revealed that in vitro TMZ sensitivity and the extent of resection were independent factors associated with PFS and OS, whereas MGMT promoter methylation, which was itself related to in vitro sensitivity (Additional file: Fig. S3b; Fisher's exact test P = 0.0156), was only marginally significant (Table ). Collectively, these data reflect the reliability of our preclinical TMZ testing system for assessing clinical response to TMZ in patients newly diagnosed with IDH1-wt GBM.

Genomic analysis reveals the somatic mutational landscape of TMZ-resistant and sensitive groups

To identify genetic factors contributing to TMZ response, we explored somatic genomic alterations in the TMZ-resistant and TMZ-sensitive groups of our main cohort. WES and/or GliomaSCAN on 57 tissue specimens (with matched blood controls) and RNA-seq on 34 tissue specimens were either newly performed or downloaded from previous publications (Additional file: Table S1). Somatic SNVs and short insertions/deletions were identified by SAVI2 (Additional file: Table S2). A sample was labeled hypermutated if the total number of somatic mutations exceeded 350 by WES. CNAs were calculated from WES or GliomaSCAN, or were predicted from RNA-seq by CNAPE (Additional file: Table S2-3, Additional file: Fig. S5-S6, and Additional file: Supplementary Methods). Variants with VAF over 5% and CNAs in previously reported GBM driver genes, together with EGFRvIII status (Additional file: Table S4) and the expression-based GBM subtypes, are shown in Fig. . Overall, no significant genomic difference was observed between the responder and non-responder groups. Yet, the mesenchymal/proneural subtypes and somatic mutations in genes including NF1, NF2, and PTEN were more often observed in TMZ-resistant samples, while PIK3R1 somatic mutations were slightly more frequent in TMZ-sensitive samples (Fig. ).
These observations indicate that GBM's response to TMZ might be determined by a combination of multiple factors rather than by any single one.

Transcriptomic sequencing reveals marker genes of TMZ resistance

To identify marker genes of TMZ response, we first separated conserved from non-conserved gene expression between GSCs and the initial tumor tissue using a Gaussian mixture model (Additional file: Fig. S2). The conserved genes were then used for differential gene expression analysis between tissue RNA-seq data of TMZ-resistant (n = 12) and TMZ-sensitive (n = 22) samples. Principal component analysis of these 34 samples showed slight clustering by GBM subtype (proneural vs. mesenchymal/classical) but by no other factor, including age, gender, and MGMT promoter methylation status (Additional file: Fig. S3a, b). We identified four genes (EGR4, PAPPA, LRRC3, and ANXA3) significantly up-regulated in the TMZ-resistant group (Fig. a, Additional file: Fig. S3c; log2 fold change > 2.5, adjusted P < 0.01). To explore the prognostic value of these TMZ-resistance marker genes, we extracted 96 TMZ-treated IDH-wt primary GBM patients with available RNA-seq data from the TCGA dataset and classified them into high-risk and low-risk groups based on the expression of the four genes. Notably, the high-risk group had significantly worse PFS (P = 1.59e−03 by log-rank test) and OS (P = 3.46e−03 by log-rank test; Fig. b) compared with the low-risk group. To further investigate the expression change of these genes before and after TMZ treatment, we integrated a total of 40 paired RNA-seq datasets of initial and matched TMZ-treated recurrent IDH-wt GBM samples . As shown in Fig. c, the expression of the TMZ-resistance markers increased in the recurrent samples compared with the initial samples, suggesting that the marker-expressing cell population survived TMZ treatment and expanded in the recurrent GBM.

A machine learning (ML) approach for integrating key features to predict TMZ response of IDH1-wt GBM

Figure a presents the overall relevance of the genomic, transcriptomic, and other features to TMZ response. Along with the expression of the four TMZ-resistance markers, MGMT expression, MGMT promoter methylation status, hypermutation status, GBM subtype, somatic mutations, and CNAs identified from the main cohort, we added 5-aminolevulinic acid (5-ALA) fluorescence status as another feature (Additional file: Table S1). To integrate these features for patient evaluation, we constructed an XGBoost classifier to label a patient's TMZ response as resistant or sensitive. Among the 30 features shown in Fig. a, 25 were used to train the machine learning model in the main cohort, excluding NF2 mutation, hypermutation, and 5-ALA positivity/negativity, which were not available in the TCGA testing cohort (Additional file: Table S5). Compared with MGMT promoter status as the only feature, adding the other features provided more information for the recognition of TMZ non-responders (Fig. b). Notably, the top five informative features in the model were the expression levels of ANXA3 and LRRC3, the proneural subtype, and the expression of EGR4 and MGMT (Additional file: Fig. S7a). In addition, incorporating the four expression markers together with the other features achieved stronger discrimination power than any individual marker alone (Additional file: Fig. S7b).
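Before turning to the training results, the four-gene risk grouping used for the TCGA survival analysis above can be sketched as follows. Because the text does not spell out how the high- and low-risk groups were derived from the four genes, the per-gene z-score average with a median split used here is an assumption.

```python
import pandas as pd

TMZ_RESISTANCE_MARKERS = ["EGR4", "PAPPA", "LRRC3", "ANXA3"]

def four_gene_risk_groups(expr: pd.DataFrame) -> pd.Series:
    """Assign each sample to a risk group from a samples-by-genes
    matrix of (log-transformed) expression values: z-score each
    marker across the cohort, average the four z-scores, and split
    at the cohort median (assumed rule; see the note above)."""
    markers = expr[TMZ_RESISTANCE_MARKERS]
    z = (markers - markers.mean()) / markers.std()
    score = z.mean(axis=1)
    return (score > score.median()).map({True: "high-risk", False: "low-risk"})
```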
Within the training cohort (main cohort), the model's predictions matched the in vitro TMZ response for 88.4% (61 of 69) of the samples (Fig. c). We then tested the model in an independent cohort of 262 IDH-wt, TMZ-treated primary GBM patients from TCGA (inclusive of the 96 patients with RNA-seq data from Fig. b). Importantly, patients predicted to be TMZ-resistant by the classifier had significantly worse PFS (Fig. d; P = 4.58e−04 by log-rank test) and OS (Fig. e; P = 3.66e−04 by log-rank test), validating the power of our model to predict prognostic outcome in TMZ-treated patients. Moreover, we investigated the survival differences across the four subtypes (classical, proneural, neural, and mesenchymal) in the TCGA cohort, and no significant separation was observed for PFS (P = 0.531 by multivariate log-rank test) or OS (P = 0.412 by multivariate log-rank test; Additional file: Fig. S8a), which is expected and compatible with the observations previously reported by the TCGA group. We further correlated the four GBM subtypes with the TMZ response predicted by our machine learning model. Notably, the mesenchymal subtype was associated with TMZ resistance (P = 0.039 by Fisher's exact test; Additional file: Fig. S8b). Among the mesenchymal cases, the resistant group demonstrated worse PFS (P = 1.98e−02 by log-rank test; Fig. f) and OS (P = 1.26e−04 by log-rank test; Fig. g) compared with the sensitive group, highlighting the value of our model for unveiling new responders and non-responders within subtypes. In addition, compared with using only MGMT promoter methylation status in the TCGA dataset (n = 203), our model integrating multiple features provided a better way to segregate patients with different PFS and OS outcomes (Additional file: Fig. S9). Within the MGMT-methylated group, our model identified a limited number of high-risk resistant cases with worse PFS and OS (Additional file: Fig. S9c). Furthermore, to facilitate the use of our model, we designed a freely accessible website named TMZep that evaluates the potential TMZ response of GBM patients ( http://www.wang-lab-hkust.com:3838/TMZEP ) . Users can input patient data for some or all of the 25 features, and the website will evaluate the potential TMZ treatment response of the corresponding GBM patient.

Multi-sector TMZ screening underlines intratumoral heterogeneity in drug responsiveness

Intratumoral heterogeneity (ITH) is a key factor driving therapeutic resistance and recurrence in GBM . To reveal the impact of ITH on TMZ treatment, we compiled 52 GBM tumor tissue specimens from 18 patients, with 2 to 4 spatially distinct (multi-sector) samples taken from each patient (multi-sector cohort, Additional file: Table S1). Fifteen patients of the multi-sector cohort, and their tissues, overlapped with the main cohort; the three additional patients had recurrent GBM, IDH-mutant GBM, or IDH-wt primary GBM without TMZ treatment. We performed in vitro TMZ screening of the multi-sector GSCs (Fig. a), followed by WES in 19 samples and RNA-seq in 26 samples. Interestingly, almost half of the patients (8/18) carried both TMZ-resistant and TMZ-sensitive tumor samples (Fig. b and Additional file: Fig. S10). We termed these patients TMZ-ITH, as they harbored GSCs with heterogeneous in vitro TMZ responses within a single tumor. We confirmed several of the TMZ-associated factors identified earlier in this study by comparing the molecular signatures of the multi-sector samples.
In particular, the TMZ-resistance markers were upregulated in the resistant sectors of M13 and M14 (Fig. c, d, Additional file: Fig. S11a-b). Meanwhile, a combination of PTEN loss, EGFR gain, and deeper deletion of CDKN2A/B was observed specifically in the sensitive sectors of these two patients (Fig. c, d, and Additional file: Fig. S11c). Motivated by this observation, we checked the concurrent CNAs in PTEN, EGFR, and CDKN2A/B in our main cohort and found that this combination was significantly more frequent in TMZ-sensitive samples (Fisher's exact P = 0.0102, Fig. e), while no individual factor reached statistical significance. Returning to the ITH analysis, the eight TMZ-ITH patients had survival comparable to patients harboring only TMZ-resistant sectors and significantly worse survival than patients with only TMZ-sensitive sectors (OS, P = 0.027; PFS, P = 0.015; by log-rank test), indicating that although TMZ treatment achieved a partial effect by eliminating the sensitive group of tumor cells, the resistant GSCs might quickly lead to tumor relapse (Fig. f and Additional file: Fig. S12). This observation underscores the importance of careful consideration of ITH via multi-sector evaluation before treatment delivery. Since the number of sectors analyzed from a tumor may influence the possibility of TMZ-ITH detection, we evaluated the optimal number of sectors needed to observe TMZ-ITH in a patient. When two sectors were taken from a tumor, the TMZ-ITH detection rate was around 30% (17/56), rising to 43% (12/28) with three sectors and 50% (3/6) with four sectors (Fig. g). Interestingly, while TMZ-ITH detection increased with the number of sectors, the proportion of purely resistant patients decreased but the proportion of purely sensitive patients did not, underscoring the existence of good responders to TMZ treatment (Fig. h).

To evaluate GBM's response to TMZ, we performed in vitro TMZ cytotoxicity assays in short-term (6 days) cultured patient-derived GSCs (n = 69, main cohort) obtained from surgically resected IDH-wt primary GBM specimens. Since conventional metrics such as the effective concentration at 50% (IC50) or maximum inhibition % (Emax) depend heavily on the cell division rate, obscuring accurate sensitivity prediction, we adopted growth rate (GR) inhibition metrics, which are independent of division number and therefore superior to conventional metrics for assessing the effects of drugs in fast-dividing cells. We calculated GR50 values for each sample and, for those with infinite GR50 values, we measured conventional AUC values. By calculating Z-scores for GR50 and AUC, we divided our samples into TMZ-Sensitive and TMZ-Resistant groups (Fig. a). As expected, MGMT promoter methylation was related to the Z-scores of GR50 and AUC values (Fig. b, Wilcoxon rank sum test P = 0.018). Strikingly, the TMZ-Resistant (n = 29) and TMZ-Sensitive (n = 40) groups defined from in vitro sensitivity were highly predictive of survival outcomes for patients under a TMZ-based treatment regimen (Fig. c and d; PFS, P = 1.12e−4; OS, P = 3.63e−4; by log-rank test). Notably, this in vitro sensitivity surpasses the well-known MGMT promoter methylation status in predicting patient prognosis (Additional file: Fig. S4a).
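As a rough illustration of the GR-based grouping described above, the sketch below implements the GR value from Hafner et al.'s growth-rate inhibition framework together with a z-score split and a log-rank comparison; the input files, column names and the zero z-score cut-point are assumptions for illustration, not the study's exact procedure:

```python
# Sketch of GR-metric computation and z-score-based grouping; file and
# column names are hypothetical.
import numpy as np
import pandas as pd
from scipy.stats import zscore
from lifelines.statistics import logrank_test

def gr_value(x_treated, x_ctrl, x0):
    """Growth-rate inhibition value (Hafner et al.): 1 = no effect,
    0 = complete cytostasis, negative values = cell loss."""
    return 2.0 ** (np.log2(x_treated / x0) / np.log2(x_ctrl / x0)) - 1.0

# Example: treated wells grow 100 -> 150 cells while controls reach 400.
print(gr_value(150.0, 400.0, 100.0))   # ~0.22, substantial inhibition

# Hypothetical per-sample summary: GR50 (may be inf) and conventional AUC.
df = pd.read_csv("tmz_screening_summary.csv")   # columns: sample, gr50, auc
finite = np.isfinite(df["gr50"])
df.loc[finite, "z"] = zscore(np.log10(df.loc[finite, "gr50"]))
df.loc[~finite, "z"] = zscore(df.loc[~finite, "auc"])   # fall back to AUC
df["group"] = np.where(df["z"] > 0, "TMZ-Resistant", "TMZ-Sensitive")

# Compare OS between the two groups, as in the log-rank tests above.
surv = pd.read_csv("survival.csv").merge(df, on="sample")  # os_months, os_event
r = surv["group"] == "TMZ-Resistant"
res = logrank_test(surv.loc[r, "os_months"], surv.loc[~r, "os_months"],
                   event_observed_A=surv.loc[r, "os_event"],
                   event_observed_B=surv.loc[~r, "os_event"])
print(f"log-rank P = {res.p_value:.2e}")
```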
Additionally, a Cox-regression multivariate survival analysis considering age, gender, extent of resection, and MGMT promoter methylation revealed that in vitro TMZ sensitivity and the extent of resection were independent factors associated with PFS and OS, while MGMT promoter methylation, which was itself related to in vitro sensitivity (Additional file: Fig. S3b, Fisher's exact test P = 0.0156), was only marginally significant (Table ). Collectively, these data reflect the reliability of our preclinical TMZ testing system for assessing clinical response to TMZ in patients newly diagnosed with IDH1-wt GBM. To identify genetic factors contributing to TMZ response, we explored somatic genomic alterations in the TMZ-resistant and sensitive groups of our main cohort. WES and/or GliomaSCAN on 57 tissue specimens (with matched blood controls) and RNA-seq on 34 tissue specimens were either newly performed or downloaded from previous publications (Additional file: Table S1). Somatic SNVs and short insertions/deletions were identified by SAVI2 (Additional file: Table S2). A sample was labeled as hypermutated if the total number of somatic mutations by WES was over 350. CNAs were calculated from WES or GliomaSCAN, or predicted from RNA-seq by CNAPE (Additional file: Table S2-3, Additional file: Fig. S5-S6, and Additional file: Supplementary Methods). Variants with VAF over 5% and CNAs in previously reported GBM driver genes, together with EGFRvIII (Additional file: Table S4) and the expression-based GBM subtyping, are shown in Fig. . Overall, no significant genomic difference was observed between the responder and non-responder groups. Yet, the mesenchymal/proneural subtypes and somatic mutations in genes including NF1, NF2, and PTEN were more often observed in TMZ-resistant samples, while PIK3R1 somatic mutations were slightly more frequent in TMZ-sensitive samples (Fig. ). These observations indicate that GBM's response to TMZ might be determined by a combination of multiple factors rather than by any single one.
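A per-gene association test of the kind used for these comparisons can be sketched as follows; the alteration matrix, its column names and the input file are hypothetical:

```python
# Sketch of per-gene association testing: for each driver gene, compare
# alteration frequency between TMZ-resistant and sensitive samples with
# Fisher's exact test. The input format is hypothetical.
import pandas as pd
from scipy.stats import fisher_exact

# Hypothetical binary alteration matrix: rows = samples, columns = genes
# (1 = somatic mutation or CNA present), plus the in vitro response label.
alt = pd.read_csv("driver_alterations.csv", index_col="sample")
resistant = alt.pop("tmz_resistant").astype(bool)

for gene in alt.columns:
    has_alt = alt[gene].astype(bool)
    table = [
        [( has_alt &  resistant).sum(), ( has_alt & ~resistant).sum()],
        [(~has_alt &  resistant).sum(), (~has_alt & ~resistant).sum()],
    ]
    odds, p = fisher_exact(table)
    print(f"{gene:10s} OR={odds:5.2f}  P={p:.3f}")
```

In this framing, the co-occurrence test for PTEN, EGFR and CDKN2A/B reduces to the same 2×2 construction applied to a combined indicator column rather than a single gene.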
To date, TMZ is the major standard chemotherapeutic agent for primary GBM treatment. However, recent studies do not support the indiscriminate use of TMZ because of its side effects. Moreover, treatment outcome differs significantly among patients due to individual genetic background and variable tumor microenvironments. Therefore, precise identification of TMZ responders is urgently needed to optimize TMZ-related treatment and benefit patients. In this study, we demonstrated that in vitro screening of TMZ on patient-derived GSCs, which distinguishes TMZ-resistant and sensitive groups, is related to prognosis, reflecting TMZ efficacy in patients. However, this option has several challenges: culturing GSCs is not always successful, is costly, and is not yet widely available. To develop a more easily accessible tool for TMZ-sensitivity prediction, we performed multi-omic analysis on the TMZ-resistant and sensitive GBM specimens. Transcriptomic comparison between these two groups revealed four TMZ-resistant markers, i.e., EGR4, PAPPA, LRRC3, and ANXA3. Along with these markers, we investigated the association of TMZ sensitivity with other molecular features such as somatic mutations and CNAs. Systematically integrating these features, we constructed a machine learning-based model that was able to classify IDH-wt primary GBM patients into TMZ-resistant and sensitive groups with high prognostic value. In addition, we demonstrated the dramatic impact of ITH by evaluating multi-sector samples from the same patients. Notably, patients with all sectors sensitive to TMZ had the best treatment outcomes. Meanwhile, the multi-sector study validated important features associated with TMZ response. Together, we proposed and summarized several new TMZ response-associated features in addition to the well-known factors (Fig. ). The expression level of the four TMZ-resistant markers predicted poor survival not only in our cohort but also in an independent IDH-wt GBM cohort extracted from the TCGA dataset. In addition, higher expression of these genes was observed in the recurrent GBMs and in the TMZ-resistant sectors of the TMZ-ITH patients, highlighting the role of these genes in contributing to TMZ resistance. Although further studies will be needed to investigate the underlying mechanisms of these genes, it has been reported that the ANXA3 gene drives tumor growth through the c-Jun N-terminal kinase (JNK) pathway. Compelling evidence indicates a role for JNKs in the maintenance of GSCs and in regulating TMZ resistance through MGMT expression. On the other hand, we observed co-occurrence of CDKN2A/B loss/deletion, PTEN loss/deletion, and EGFR gain/amplification more frequently in the TMZ-sensitive samples from the main cohort and the multi-sector cohort, while each single feature was not statistically significant. According to the fifth edition of the WHO classification of tumors of the central nervous system (CNS), EGFR amplification and +7/−10 copy number changes (PTEN is on chromosome 10) are parameters for diagnosing Glioblastoma, IDH-wt, while CDKN2A/B homozygous deletion is a parameter for diagnosing IDH-mutant astrocytoma as WHO CNS grade 4, suggesting that CNAs in the three genes are related to more aggressive CNS tumors. While the prognostic value of EGFR alterations within GBMs is still controversial, some studies have reported their association with better outcomes. Hobbs et al.
reported that highly EGFR-amplified GBMs had a favorable response to TMZ compared to GBMs with no or low amplification. They speculated that EGFR-amplified GBMs may have higher genome fragility, making them more susceptible to the DNA damage induced by TMZ. Yet how concurrent CDKN2A, PTEN, and EGFR CNAs affect TMZ response in GBM is unknown; this investigation therefore remains future work. Considering that single features showed limited power to predict TMZ efficacy, we developed a machine learning model that integrates many features to predict TMZ responders. The model outperforms single-feature models and could assist in improving TMZ treatment for GBM patients. However, the model was validated only in the TCGA dataset, so the chance of over-fitting cannot be fully ruled out. Prediction results in other datasets may vary for reasons such as differing clinical settings and patient treatments. Therefore, our model is still too preliminary to be applied directly in practice, and evaluation in a larger independent cohort will be necessary in future studies. In addition, we identified a new patient group with TMZ resistance within the MGMT-methylated group, but due to the small sample size, additional follow-up is necessary to confirm these results. Although more accessible, our model's prediction using multi-omic features is still less accurate than in vitro screening, partly because the current marker set may be incomplete; more markers, such as non-coding genomic or epigenomic features, remain to be discovered. In addition, the features may not be independent, so more advanced multi-omics integration methods could be applied to reveal interactions between different data layers and further improve the model's robustness. Moreover, utilizing single-cell sequencing or cell-type deconvolution technologies (e.g., CIBERSORT, xCell) to assess the TME composition of resistant and sensitive tumor samples is a promising future direction for demonstrating how cell-type composition can affect treatment outcomes. In summary, we demonstrated that in vitro TMZ screening of patient-derived GSCs can reflect treatment outcomes in IDH-wt GBM patients under the standard Stupp therapy (radiotherapy with concomitant TMZ followed by adjuvant TMZ). Genomic and transcriptomic characterization revealed MGMT promoter methylation status, hypermutation, and the expression of MGMT, EGR4, ANXA3, PAPPA, and LRRC3, together with other features, as relevant molecular predictors of TMZ response for IDH-wt GBMs. The machine learning model TMZep, which predicts TMZ efficacy from pharmacogenomic data integration, provides an easily accessible computational tool to facilitate more selective treatment of the disease.

Additional file 1: Table S1. Clinical annotations of 128 GBM samples used in the study. Table S2. Variants identified by WES and GliomaSCAN. Table S3. Copy number alteration by WES and/or RNA-seq. Table S4. EGFRvIII detection in RNA-seq available samples. Table S5. Input file used for machine learning training.

Additional file 2: Fig. S1. Timeline of sample acquisition, sequencing, in vitro culture and TMZ screening. Fig. S2. Gaussian mixture model used to identify genes with the same expression profile between patient-derived cells (PDCs) and tumor tissues. Fig. S3. Principal component analysis and differentially expressed gene analysis on 34 tissue RNA-seq samples. Fig. S4.
Association of MGMT promoter methylation status to survival and in vitro TMZ screening in the main cohort. Fig. S5. Copy number estimation by GliomaSCAN. Fig. S6. Copy number estimation by RNA-seq. Fig. S7. Machine learning model feature importance. Fig. S8. Correlations between GBM subtypes and TMZ response. Fig. S9. Comparison of survival prediction in TCGA cohort. Fig. S10. Genomic landscape of multi-sector samples. Fig. S11. TMZ-resistant marker expression and CNV comparison in patients M13 and M14. Fig. S12. Progression-free survival difference in patients with multi-sector samples.

Additional file 3: Supplementary methods.
Observations on strategies used by people with dementia to manage being assessed using validated measures: A pilot qualitative video analysis | 32a01ca4-8f35-4dfc-a151-42ed568fcbdf | 10010081 | Patient Education as Topic[mh] | INTRODUCTION Growing evidence shows that people with dementia can report their views and experiences in research. , However, an area that has been little researched is how people with dementia react while being assessed using validated measures or what strategies they use in this situation. Validated measures in health research enable the assessment of the quality of care, the effectiveness of interventions and supporting decision‐making in clinical care and intervention settings. Such measures also enable an understanding of the cause and effect of health conditions and interventions. They have an important role in testing hypotheses to support decision‐making in health and social care. In dementia care and research, the use of validated measures also helps to provide a perspective on the way an individual's dementia is progressing, and therefore to understand how best to support them. Such tools are also an important part of the diagnosis. In the United Kingdom, NICE recommends the use of assessments of cognition, functional ability and mental state when diagnosing dementia. More is known about the experience of receiving a diagnosis of dementia than the impact of participating in validated measures. The communication of a diagnosis of dementia requires sensitivity, indicating that the process can be stressful and overwhelming. In Xanthopoulou and McCabe's study, participants with dementia reported they found the assessment outcome and subsequent diagnosis difficult to hear, and they were scared and upset to receive the diagnosis. Literature on using validated measures tends to report the outcomes and rationale for their validity and use. Literature reviews on using measures with people with dementia, and more widely in using patient outcome measures, have cautioned not to burden participants or cause harm. , , However, there is little discussion about what this means in practice. Ward et al.'s review on evaluating cognitive stimulation highlighted that insufficient information is given about how the assessment of people with dementia is conducted. They also noted that little is reported on how these tests are experienced and what impact there is on the subjective interpretation of the tests by people with dementia, something that has been criticized in cognitive stimulation effect research. , , However, it is important to identify measures that are acceptable for both the research community and for those diagnosed with dementia, including reducing any impact in terms of distress, confusion, anxiety or burden in participating. Heggestad et al. argue that the assessment process can be humiliating, and people with dementia may experience a loss of dignity in taking a test. This can have a negative impact on how they see themselves and can be a reminder of the progression of their dementia. Therefore, research to explore how people with dementia experience an assessment process will provide insight to support them through this process be it for diagnostic or research purposes. This paper provides findings from observed assessments used in a research setting. The authors provide insights into this little‐researched area, and ways to support the person with dementia, researchers and clinicians undertaking assessments. 
This is the second paper to present findings from this research; the first is available through Thoft et al. The first article highlighted the strategies used by the researcher when undertaking assessments with validated measures. The aim of this paper is to provide a new perspective on the assessment process used in research by exploring how people with dementia react, and identifying the strategies they use, when being assessed with validated measures.
MATERIALS AND METHODS

This paper presents findings from a video analysis of conducting validated measures with people with dementia. This was part of a wider feasibility and pilot study conducted on lifelong learning services in Denmark. Lifelong learning is an education-led programme that provides lessons to support cognitive function, decision-making and activities of daily living. It is based on the premise that people living with dementia can learn, develop and grow. The project assessed an intervention group (Lifelong Learning intervention) and a control group (treatment as usual, e.g., services at day-care centres). The study was conducted in six municipalities in Northern Denmark. Participants were tested at the outset of the study and after 5–6 months. Participants were assessed using five validated measures: Mini-Mental State Examination (MMSE); Quality of Life in Alzheimer's Disease Scale (QoL-AD); General Self-Efficacy Scale; Rosenberg Self-Esteem Scale; Hawthorne Friendship Scale. A detailed method, background to the wider study and facilitator strategies are presented in Thoft et al. and Sørensen et al. This paper provides an overview of the methods in relation to the video analysis.

2.1 Public and patient engagement

People with dementia and staff from the lifelong learning intervention took part in a workshop to identify the most appropriate measures to use for the wider study. Their input was gained through discussions about what they felt was important to research about the intervention, and this informed the final choice of validated measures used. These workshops will be the focus of future analyses.

2.2 Video analysis

Fifty-five participants were recruited into the main study (n = 30 intervention group; n = 25 control group). All participants undertook pre- and postassessments, which were recorded using one video recorder. This was positioned to capture the participants' facial features and reactions, while also capturing the table, paperwork and a side/back view of the assessor. Videos were chosen because they captured both verbal and nonverbal reactions and enabled multiple reviews of actions and behaviours that may not be identified in person. The decision to conduct an initial pilot analysis was based on the pragmatics of undertaking the analysis and testing the outcomes of this approach. Video analysis is time-consuming, requiring multiple viewings by several researchers. To ensure that this would elicit valuable and viable data, the team first conducted a pilot stage, with plans to extend the analysis. This paper, therefore, presents the findings from this initial stage. A stratified sample of 10 pre-assessment videos was analysed. This stratification included: equal distribution across the intervention and control groups (n = 5 per group); each locality in which the service was delivered; level of dementia (high and low MMSE scores); and diversity of gender. The pre-assessment videos were chosen to avoid recall of, or familiarity with, the measures, which could have been a risk with the postassessment videos. The demographic profile of the participants from the analysed videos is reported in Table . The videos were analysed using an adapted version of Ridder's video analysis approach. The identified videos were watched in full by four members of the research team to develop an analysis framework, which was tested and adapted using one video and focused on identifying participants' reactions. The resulting framework was used to code all videos.
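As a rough illustration of the stratified selection described above (not the procedure actually used), the following sketch assumes a hypothetical spreadsheet of video metadata and an illustrative MMSE cut-point:

```python
# Illustrative stratified draw of 10 pre-assessment videos; the file,
# columns and MMSE cut-point are hypothetical.
import pandas as pd

videos = pd.read_csv("preassessment_videos.csv")
# Expected columns: video_id, group ("intervention"/"control"),
# locality, mmse_score, gender.
videos["mmse_level"] = pd.cut(videos["mmse_score"], bins=[0, 20, 30],
                              labels=["low", "high"])

# Five videos per study arm; check the other strata afterwards and swap
# videos manually if a locality, MMSE level or gender is missing.
sample = videos.groupby("group").sample(n=5, random_state=1)
print(sample[["video_id", "group", "locality", "mmse_level", "gender"]])
```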
A video graph was developed for all the videos in Excel. This noted, by timeframe, each reaction to the assessment situation, including physical movement, facial expressions and verbal comments, alongside researcher reflections on the action. Viewing the videos in full and reviewing this graph allowed the research team to identify clips for a deeper microanalysis that explored key moments and interactions during the assessments. These were coded alongside the verbal interaction to provide a detailed account of what occurred. Thirteen clips were chosen from the 10 videos for further microanalysis (see Table ). The final analysis stage was to draw themes from across each microanalysis and the video graph (see Thoft et al. for further details of the method, video and participant demographics). As Table demonstrates, the video analysis provides a description of the action alongside the researchers' reflections and observations on this action. Supporting evidence is also provided through transcripts of the dialogue in the videos. The results are presented as observations made by the researchers, with supporting statements provided in the form of descriptions of the action or participants' quotations. The research team consisted of two senior researchers with previous experience of leading dementia-related research and of research with the lifelong learning service in Denmark (the Aalborg Dementia School at The Knowledge Centre for Dementia, Aalborg Municipality). One had expertise in using video analysis methods. Two other researchers completed the team, having backgrounds in nursing and expertise in qualitative research. All members of the team undertook the analysis in Denmark, working in pairs to analyse each video. The researcher with video analysis expertise provided training, with review sessions at intervals during the analysis for the team to discuss the approach and how to correctly log and review the data.

2.3 Ethics

Participants were recruited through their service. Each service attended a meeting with the lead researcher to inform them about the aims and process of the research. Participants were informed about the project through a participant information sheet and were able to discuss this with a member of the research team. This emphasized that their participation was voluntary and was not related to their continued use of their respective services. Participants completed a consent form before participating in both pre- and postdata collection phases. Where required, consent was discussed and gained with support from a family or staff member, although no proxy consent was used. All participants were self-consenting. Danish legislation requires research studies to be based on informed consent rather than on ethical approval from a national or public agency. The video recordings were not allowed to be shown outside the research team due to the requirements of confidentiality and anonymity stated in the Danish ethical requirements. All names used in the article are pseudonyms. In keeping with good research practice, the Regional Committee on Health Research Ethics was also consulted. It was judged that no further application was needed in relation to the LBK nr 1083 of 15/09/2017 definition of a Health Science Research Project and the Committee law § 14, stk. 1, jf. § 2, nr 1-3.
These reference Danish ethical laws and recommendations of the Danish Ministry of Higher Education and Science that ensure participant safety and rights under the Danish Code of Conduct for Research Integrity.
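Before turning to the results, the video-graph structure described in Section 2.2 can be illustrated with a minimal sketch; the column names and example rows are hypothetical reconstructions, not entries from the actual Excel graph:

```python
# Minimal sketch of the video-graph coding structure; columns and example
# rows are hypothetical.
import pandas as pd

video_graph = pd.DataFrame(
    [
        ["86", "00:04:12", "laughter", "verbal/nonverbal",
         "laughs while trying to recall the day of the week",
         "possible coping strategy masking uncertainty"],
        ["34", "00:11:48", "frustration", "nonverbal",
         "leans forward, raises voice, places hands firmly on table",
         "frustration directed at failing language skills"],
    ],
    columns=["video_id", "timecode", "reaction_type", "modality",
             "description", "researcher_reflection"],
)

# Flag candidate clips for the deeper microanalysis, e.g., all entries
# coded as emotionally salient reactions.
salient = video_graph[video_graph["reaction_type"].isin(
    ["laughter", "frustration", "disappointment"])]
print(salient[["video_id", "timecode", "reaction_type"]])
```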
RESULTS

The 13 clips varied in length from 17 s to over 3 min. This reflected the nature of the interactions, which were often short responses to questions asked during the assessment. Two core themes were identified about the way people with dementia react, and the strategies they used, while being assessed using validated measures: 'State of mind' and 'Mental resources'.

3.1 State of mind

State of mind was observed as both positive and negative, with a positive outlook supporting the person with dementia to find the assessment process less stressful.

3.2 A positive state

An individual's state of mind could impact how they experienced and responded to being assessed. State of mind was identified through emotional state/mood, emotional responses and body language. Participants commented on their emotional state; for example, Anni said that she is normally a 'cheerful person'. This was also apparent in the way she presented during the assessment, especially when recalling memories of her family. She smiled and laughed as she shared her thoughts. Even when she responded incorrectly, Anni smiled while responding. For example, when Anni was asked to provide the address where the assessment was taking place (MMSE), she did not know, but smiled and laughed as she recalled it was near a 10-pin bowling alley where her husband was currently playing. Arne also commented on his mood. He asked the researcher for feedback on whether he was responding correctly during the self-efficacy measure. The researcher commented that there was no right or wrong answer, only what Arne was feeling. Arne commented that he was in a 'good mood' and together they reflected that if he had been in a bad mood, it could have impacted his responses:

Arne: Yes, yes, but now I'm in a good mood today. (Smiling and laughing)
Researcher: You are right, because if you are in a bad mood, I think it would look different—don't you think?
Arne: Yes, I think. (video 75)

This exchange suggests that when in a good mood, a person may respond more positively than when in a bad mood, thus having a potential effect on the test situation. The easy relationship observed between Arne and the researcher may also have had an impact on his mood, helping to ease the test situation.

3.3 A negative state

The mood exhibited by participants was not always positive, with some showing signs of disappointment or frustration, characterizing a more negative state of mind. For example, Lone showed disappointment when she could not recall her surname. Her body language and expression changed. She leaned forward, her smile disappeared into a sigh and she looked to the side while saying: 'Suddenly I couldn't remember it…' (video 31). Even though Lone succeeded in answering the question, given time to think, her tone and body language expressed what the research team considered disappointment. It may also have been a moment of recognition of the challenges caused by her dementia. Participants also expressed frustration. This was mostly observed in relation to the participants' loss of ability to answer questions. This was usually directed towards themselves and their dementia. For example, Hans was telling the researcher about his former language skills:

Earlier, I had five languages (showing five fingers). I was good as hell at languages and now I can, I can just speak a little Danish … And Swedish (talks in Swedish) I can't speak that anymore—I can't understand the damn prose. And that sucks when you are on a visit there.
(video 34)

During this dialogue Hans was initially relaxed in his body language, resting one arm on the table and leaning his head in the other hand, while speaking in a soft tone of voice with a slight smile on his face. This changed as he talked about his declining skills. He became increasingly restless, leaning backwards and quickly forward while pointing with his finger, brows furrowed, raising his voice and firmly placing both hands on the table. This was observed as frustration towards his failing abilities and his recall of the skills he used to have. On some occasions, participants showed contradictory verbal and nonverbal expressions. This was observed in Bo, who was asked to repeat the three words in the MMSE. Bo was smiling and laughing without seeming happy. Bo had a tense, forced, almost unnatural smile, and although he was laughing, his body language showed nervousness or discomfort, as he was tapping his finger and moving his legs, looking away and leaning back while answering: 'That is worse! (laughing)' (video 7). This was observed as a reaction to not being able to answer the question. Other verbal and nonverbal signs were observed. Examples include looking down at the table seeming disappointed, changing tone of voice, and body language showing anger and frustration, for example, making strong hand gestures and smiling ruefully to express discomfort when being confronted with difficulties due to dementia.

3.4 Mental resources

Participants were observed to use the mental resources of reflection, humour and bodily movement. All the participants at times were engaged and concentrating, showing different skills to help complete the assessments.

3.5 Reflective skills

Reflective skills were observed in many participants. When Grethe was asked a question about her marital relationship (QoL-AD), she replied that the response category 'excellent' did not fit her usual wording: 'It's probably excellent. No, good. I have difficulties using the word excellent—good means more to me than excellent' (video 82). Grethe was able to reflect upon personal preferences towards the meaning of the categories, showing her language and interpretation skills. The researchers experienced that several participants found it unnatural to use the category 'excellent'. Some of the participants also talked through their reflective process. Bente recognized that an answer she gave in the self-efficacy questionnaire about 'When I am confronted with a problem, I can usually find several solutions' contradicted her earlier answers, where she said she could not manage difficult situations or unexpected events:

Yeah, but well, it contradicts the other things, right but it can be that I have to change … I think about different solutions that is what I am thinking about? (fidgeting with her shirt, looking down at the paper). (video 49)

She explained that she thinks about different solutions, assessed her ability as 'moderately true', and said that she can come up with solutions to her problems. Other participants reflected by comparing their abilities before their dementia diagnosis with their present abilities, and by comparing their skills to those of others. The participants would use words such as 'before' and 'now', showing that their answers were considered in light of their diagnosis. This was particularly noticed during the QoL-AD, as Anni commented: 'Well, normally I would say it is good enough, I think so. I don't think it's bad, my memory' (video 86).
Even though the participants were confronted with their decline, they were observed to identify several solutions for handling a problem when asked in the self-efficacy test, and they were aware of managing dementia in their everyday lives by seeking help from others, as Bente stated: 'I can get help' (video 49).

3.6 Supporting concentration

The participants took the tests seriously: these were completed without breaks (although breaks were offered), and participants engaged by asking questions. Their concentration was particularly noticeable in their use of physical contact with items, such as pencils or test paper. Here the items seemed to work as a physical prompt or sensory stimulus. For example, when Knud responded to the self-efficacy question 'I am able to do things as good as most people' (video 71), he was observed to follow the questions with a pencil and took time to think through his answer. The test paper for all the measures, apart from the MMSE, was placed on the table for the participant to see. Some used this, reading the questions and pointing at or touching the paper as they responded. The visual cues provided by the paper and pencil were observed to support their ability to answer. Participants were also observed to pause and look to the side before answering a question, seemingly to give their response consideration and make sure they gave an accurate account of their experience. However, looking off to the side also led to a loss of focus, as the participants could lose track of the question asked.

3.7 Shared connection

Participants often looked at the researcher for confirmation or support when answering the questions. This sense of shared connection was also evident through their use of humour, observed when some participants made a joke about the question or their answer. This seemed to act as a coping strategy to mask their insecurity or difficulties in undertaking the assessment. Bente joked about her handwriting, commenting 'my writing is not good' (video 49) while apologizing to the camera, leaning back and laughing. Anni used laughter when she was not able to recall what day it was during the MMSE test: 'Thursday? Wednesday … The days have been changed over here. Now I can't remember if its Wednesday or Thursday! (laughing)' (video 86). This confirmation-seeking and shared humour seemed to establish a form of shared connection between the participants and the researchers.

3.8 Nonverbal communication

Nonverbal communication in the form of facial expressions, gestures and bodily movement was observed across all the videos. Gestures were observed as a strategy to support individuals when faced with symptoms of their dementia, for example, challenges with language. During the MMSE, Hans used gestures to explain which region he lived in. He drew a map of Denmark in the air, pointing towards the Northern part of Denmark. Hans was not able to verbalize his answer, so he used nonverbal communication instead. Also during the MMSE, in response to which floor they were on, Arne looked out the window, gesturing to show that the building was built on terraced land. By doing this, he showed awareness of the building's challenging geographical layout, even though he was not able to verbally provide the correct floor level. Participants were also observed to use movement, fidgeting and self-touch, for example, hugging themselves, keeping hands clasped or folded, resting them on the table, leaning backwards and forward in the chair and tapping fingers against the table.
It was noted that these movements were most often used at times of potential stress.
State of mind State of mind was observed as both positive and negative, with a positive outlook supporting the person with dementia to find the assessment process less stressful.
A positive state An individual's state of mind could impact how they experienced and responded to being assessed. State of mind was identified through emotional state/mood, emotional responses and body language. Participants commented on their emotional state, for example, Anni said that she is normally a ‘cheerful person’. This was also apparent in the way she presented during the assessment, especially when recalling memories of her family. She smiled and laughed as she shared her thoughts. Even when she responded incorrectly, Anni smiled while responding. For example, Anni was asked to provide the address where the assessment was taking place (MMSE), she did not know, but smiled and laughed as she recalled it was near a 10‐pin bowling alley where her husband was currently playing. Arne also commented on his mood. He asked the researcher for feedback on whether he was responding correctly during the self‐efficacy measure. The researcher commented that there was no right or wrong answer only what Arne was feeling. Arne commented that he was in a ‘good mood’ and together they reflected that if he had been in a bad mood, it could have impacted his responses: Arne: Yes, yes, but now I'm in a good mood today. (Smiling and laughing) Researcher: You are right, because if you are in a bad mood, I think it would look different—don't you think? Arne: Yes, I think. (video 75) This exchange suggests that when in a good mood, a person may respond more positively than when in a bad mood, thus having a potential effect on the test situation. The easy relationship observed between Arne and the researcher may also have had an impact on his mood, helping to ease the test situation.
A negative state The mood exhibited by participants was not always positive with some showing signs of disappointment or frustration, characterizing a more negative state of mind. For example, Lone showed disappointment when she could not recall her surname. Her body language and expression changed. She leaned forward, her smile disappeared into a sigh and she looked to the side while saying: ‘Suddenly I couldn't remember it…’ (video 31). Even though Lone succeeded in answering the question, given time to think, her tone and body language expressed, what the research team considered disappointment. It may also have been a moment of recognition of the challenges caused by her dementia. Participants also expressed frustration. This was mostly observed in relation to the participants' loss of ability to answer questions. This was usually directed towards themselves and their dementia. For example, Hans was telling the researcher about his former language skills: Earlier, I had five languages (showing five fingers). I was good as hell at languages and now I can, I can just speak a little Danish … And Swedish (talks in Swedish) I can't speak that anymore—I can't understand the damn prose. And that sucks when you are on a visit there. (video 34) During this dialogue Hans was initially relaxed in his body language, resting one arm on the table and leaning his head in the other hand, while speaking in a soft tone of voice with a slight smile on his face. This changed as he talked about his declining skills. He became increasingly restless, leaning backwards and quickly forward while pointing with his finger, brows furrowed, raising his voice and firmly placing both hands on the table. This was observed as frustration towards his failing abilities and recall of the skills he used to have. On some occasions, participants showed contradictory verbal and nonverbal expressions. This was observed in Bo who was asked to repeat the three words in the MMSE. Bo was smiling and laughing without being seemingly happy. Bo had a tense, forced, almost unnatural smile, and although he was laughing, his body language showed nervousness or discomfort, as he was tapping his finger and moving his legs, looking away and leaning back while answering: ‘That is worse! (laughing)’ (video 7). This was observed as a reaction to not being able to answer the question. Other verbal and nonverbal signs were observed. Examples of this include looking down at the table seeming disappointed, changing tone of voice and body language showing anger and frustration, for example, making strong hand gestures and smiling ruefully to express discomfort when being confronted with difficulties due to dementia.
Mental resources Participants were observed to use the mental resources of reflection, humour and bodily movement. All the participants at times were engaged and concentrating, showing different skills to help complete the assessments.
Reflective skills Reflective skills were observed in many participants. When Grethe was asked a question about her marital relationship (QoL‐AD), she replied that the responding category ‘excellent’ did not fit her usual wording; ‘It's probably excellent. No, good. I have difficulties using the word excellent—good means more to me than excellent’ (video 82). Grethe was able to reflect upon personal preferences towards the meaning of the categories showing her language and interpretation skills. The researchers experienced that several participants found it unnatural to use the category excellent. Some of the participants also talked through their reflective process. Bente recognized that an answer she gave in the self‐efficacy questionnaire about ‘When I am confronted with a problem, I can usually find several solutions‘ contradicted her earlier answers where she said could not manage difficult situations or unexpected events: Yeah, but well, it contradicts the other things, right but it can be that I have to change … I think about different solutions that is what I am thinking about? (fidgeting with her shirt, looking down at the paper). (video 49) She explained that she thinks about different solutions, but assesses her abilities as ‘moderately true’ and that she can come up with solutions to her problems. Other participants reflected by comparing their abilities before their dementia diagnosis and their present abilities, and by comparing their skills to those of others. The participants would use words such as ‘before’ and ‘now’, showing that their answers were considered in light of their diagnosis. This was particularly noticed during the QoL‐AD, as Anni commented: ‘Well, normally I would say it is good enough, I think so. I don't think it's bad, my memory’ (video 86). Even though the participants were confronted with their decline, they were observed to identify several solutions on how to handle a problem when asked in the self‐efficacy test and were aware of managing dementia in their everyday lives by seeking help from others, as Bente stated: ‘I can get help’ (video 49).
Supporting concentration The participants took the tests seriously, and these were completed without breaks (although these were offered), and by asking questions. Their concentration was particularly noticeable by their use of physical contact with items, such as pencils or test paper. Here the items seemed to work as a physical prompt or sensory stimulus. For example, when Knud responded to the self‐efficacy question ‘I am able to do things as good as most people’ (video 71), he was observed to follow the questions with a pencil and took time to think through his answer. The test paper for all the measures, apart from the MMSE, was placed on the table for the participant to see. Some used this, reading the questions, and pointing or touching the paper as they responded. The visual cues provided by the paper and pencil were observed to support their ability to answer. Participants were also observed to use pauses, and look to the side before answering a question, seemingly to give their response consideration and make sure they gave an accurate account of their experience. However, looking off to the side also led to a loss of focus as the participants could lose track of the question asked.
Shared connection Participants often looked at the researcher for confirmation or support when answering the questions. This sense of shared connection was also evident through their use of humour, with some participants making a joke about the question or their answer. This seemed to act as a coping strategy to mask their insecurity or difficulties in undertaking the assessment. Bente joked about her handwriting, commenting: ‘my writing is not good’ (video 49) while apologizing to the camera, leaning back and laughing. Anni used laughter when she was not able to recall what day it was during the MMSE test: ‘Thursday? Wednesday … The days have been changed over here. Now I can't remember if its Wednesday or Thursday! (laughing)’ (video 86). This seeking of confirmation and the shared humour seemed to establish a form of shared connection between the participants and the researchers.
Nonverbal communication Nonverbal communication in the form of facial expressions, gestures and bodily movement was observed across all the videos. Gestures were observed as a strategy to support individuals when faced with symptoms of their dementia, for example, challenges with language. During the MMSE, Hans used gestures to explain which region he lived in. He drew a map of Denmark in the air, pointing towards the northern part of Denmark. Hans was not able to verbalize his answer, so he used nonverbal communication instead. Also during the MMSE, in response to a question about which floor they were on, Arne looked out the window, gesturing to show that the building was built on terraced land. By doing this, he showed awareness of the building's challenging geographical layout, even though he was not able to verbally provide the correct floor level. Participants were also observed to use movement, fidgeting and self‐touch, for example, hugging themselves, keeping hands clasped or folded, resting them on the table, leaning backwards and forwards in the chair and tapping fingers against the table. It was noted that these movements were most often used at times of potential stress.
DISCUSSION This paper sheds light on a little‐researched area: what takes place during a formal validated assessment process in research with people with dementia. The rationale for exploring this interaction was twofold: to provide an understanding of the assessment process and people with dementia's reactions to it, and to identify ways of supporting individuals at a time that could be stressful. One of the key findings related to the way personality and mood can influence a person's response; as one participant stated, being and calling oneself a cheerful person can be a way of showing one's personality and may affect reactions towards the assessment. This individual did not seem to react negatively regardless of whether her answers were correct or not. It may be that this participant lacked insight into the progression of her dementia and how it affected her memory. Stress, hope and personality have been reported as having the potential to impact assessment scores, while people with dementia and caregivers have identified that individual traits can influence their choices during research. How these factors can affect a score requires greater investigation, especially when these assessments are used to determine care pathways and the impact of interventions. Another key strategy was the use of touch and movement to support people with dementia, whether through fidgeting, hugging themselves or touching the table and/or the answer sheet. This worked to ground the individual in the moment and acted as a comfort and memory aid. People with dementia have been observed to use touch to connect in the moment, which can support the sharing of memories, while the touch of paperwork or holding a pencil can support attention and concentration in a research context. Such connections may indicate increased physical and cognitive arousal, and fidgeting has been associated with increased motor and sensory activity in the brain. While there is limited research to explain the function of fidgeting, there appear to be links to increased neural activity and arousal that may be a physiological support mechanism for people with dementia in test‐like situations. The participants in this study were observed to fidget by tapping the table, moving their legs and making varied hand gestures, using this nonverbal communication as a way to express their emotions, both positive and negative, and to support their concentration. Stress can support our decision‐making and social interactions; however, too much can negatively impact our behaviour and cognitive function. One way to manage stress is through tactile stimulation. Self‐touch has also been described as a coping mechanism for managing stress, such as hugging oneself or touching one's face or hands. Skovdahl et al. describe touch as a way of supporting communication, particularly nonverbal communication. Therefore, the provision of a pencil or paper as a tactile object for people with dementia to use, and an understanding of body language, may be ways of supporting people with dementia in undertaking an assessment and helping them to answer to the best of their abilities. Humour was observed to work as a coping strategy when responding to the validated measures, seemingly acting to smooth over worries or tensions and to mitigate moments where an individual was unsure of what response to give.
The use of humour to manage stressful situations, as observed in this study, has also been studied in health‐professional and patient interactions. Laughter can also result from a release of tension as a ‘basic biological form’ (p. 4), which helps to reduce stress and helps the individual to relax. Mallett and A'Herne identified that patients, in clinical settings, used humour to deflect conflict, particularly if associated with criticism. This use of humour may be expected, as people with dementia use humour as a form of tension release when under stress. However, the use of humour by people with dementia is also considered a natural part of their communication and a strategy used as an expression of their ‘personhood and autonomy’ (p. 341). Humour has also been shown to make it easier for mistakes to be made, to laugh about these mistakes and to relieve stress when being with other people. While much of this research has been carried out in clinical settings, the effect of humour is similar to what was observed in this present study: it eased tense situations, supported decisions and showed individual personalities. The use of humour was a coping mechanism that could be adopted to provide a more comfortable setting and ease relationships to aid the assessment. What is starting to be evidenced is that many factors can impact how people with dementia respond to validated measures. These factors can aid their responses but may also be detrimental. Differences in personality, mood, ways of interpreting questions or response options, or responding nonverbally can all influence the final assessment score. As an example, in Scandinavian countries there is a cultural law, the Law of Jante, drawn from Sandemose and known in Anglo‐Saxon societies as the ‘tall poppy syndrome’. This sets out certain personality and cultural ways of being, for example, not thinking too highly of oneself or being boastful of one's successes. In an ethnographic study of Jante, it was reported that Danes were often worried about standing out. They downplayed successes and conformed to societal norms, fearing retribution for being too boastful. The use of Likert scales that ask a participant to respond positively about one's abilities, as in this present study, may therefore be affected by this Law of Jante and how a participant responds. This law was noted by the researchers to be particularly relevant to the older generation and may have resulted in ‘good’ rather than ‘excellent’ responses, as one participant exemplified. This raises the question of how researchers take account of this within the way they score and report their findings. Further research is needed to understand how much these factors need to be considered and how they are managed. At present, there is little evidence that these are considered, and a possible starting point would be for researchers to monitor such factors and include this within their write‐ups so that a fuller picture develops. Another factor worth considering is the involvement of people with dementia in determining the core domains that led to the validated measures used in this study. This was viewed as an important aspect of the study as it ensured that the measures were reflective of the needs and experiences of those who used the service. This is not often considered when deciding on validated measures for people with dementia.
Evidence from patient outcome measures research finds that such inclusive practice can lead to greater health and practice benefits, and more reliable evidence associated with the experiences of those being assessed. The production of guidance to ensure a robust and open process is followed would be a valuable resource. An example from the findings of this study also highlights the need for people with dementia to be involved in the use and design of validated measures. The authors acknowledge that while some findings from this study may be expected, the way that validated measures are experienced by people living with dementia is not often considered in the literature. Therefore, it is not known if or how researchers or clinicians take account of mood, personality, and so forth, when conducting assessments. The authors believe that this is an aspect that could be more openly discussed, as it can impact the outcomes for evidence of the impact of an intervention, but more importantly, the care a person with dementia receives. Only with more open conversations and research can we find a way to mitigate these variables or develop more guidance on when an assessment should or should not be used. For example, the research team are taking lessons learned from this pilot forward for a new larger‐scale evaluation of the lifelong learning service across Denmark, Norway and the United Kingdom where it is now being run, and this has influenced the training provided to assessors on how to undertake the assessments. 4.1 Limitations The key limitation is the number of videos analysed in this pilot analysis. The ability to generalize the findings is limited; however, this study has provided novel information on a situation that is not often researched. The identification of factors that could impact how people with dementia react and respond to validated measures warrants further investigation. People with dementia were not part of evaluating the assessment process to share what or how they had experienced the situation. This may be an area for future research so that findings are not based on observation alone but also on personal experience. A further limitation was the potential for the researchers' responses and behaviour to influence participants' responses. Further research or training on how to mitigate this would be a valuable consideration for the future.
CONCLUSIONS What has emerged is the complexity of assessing people with dementia. People with dementia use different strategies to manage their emotional responses to being assessed. These responses may hinder or help their answers, and as such this opens a potential area for further research, as responses to validated measures may not provide an absolute answer. Rather, they need to be considered in relation to how the individual responds physically and verbally during the assessment and to their cultural background. This study provides insight into the assessment process, highlighting that there may be more to consider when interpreting findings from validated measures and that there are approaches that can support the person with dementia to manage what can be a potentially stressful situation.
The authors declare no conflicts of interest.
Accompanying patients in clinical oncology teams: Reported activities and perceived effects
Selected APs were trained and coached to intervene with patients, while being given space to innovate in their own ways of accompanying patients based on their experiential knowledge. Since 2019, healthcare professionals have introduced APs' accompaniment services to patients during medical appointments as an additional resource, and patients were free to accept or refuse such a resource. Research coordinators or clinical staff members monitored all procedures and, in an anonymous and confidential manner, collected essential clinical data on patients who had consented to participate, in order to match them with an AP with a similar profile. Patients then made appointments with their AP according to their needs. To date, the perspective of APs directly involved at a clinical level has been poorly documented. We aim to assess the evolution of APs' perspectives regarding their activities over time when accompanying patients, and the perceived effects of their intervention on themselves, on the patients and on the clinical team.
METHODS Data were collected on two separate occasions: at the beginning of the PAROLE‐Onco programme, when APs started accompanying patients (T1), and 2 years later (T2). 2.1 Settings Table presents the four establishments that were included in this study: the Centre hospitalier de l'Université de Montréal (E1), the Centre Hospitalier Universitaire de Québec‐Université Laval (E2), the Centre intégré universitaire de santé et de services sociaux (CIUSSS) de l'Est‐de‐l'Île‐de‐Montréal (E3) and the CIUSSS de la Mauricie‐et‐du‐Centre‐du‐Québec (E4). Each establishment recruited its own APs (29 in total), and one site (E3) set up monthly meetings including a doctor and a psychologist to better support APs. Some APs did not have the opportunity to accompany patients since they were involved in the preparation phase before the intervention began. Therefore, they were not included in the data collection. The programmes in which APs were implemented include two in breast cancer (E1 and E4), one in breast oncogenetics (E2) and one in breast and gynaecologic cancers (E3). 2.2 Data collection Data were collected via semistructured interviews and focus group discussions. All APs from the four establishments were invited to participate in T1 and T2. Participants were contacted by telephone or email and asked to electronically sign the consent form approved by the Research Ethics Committee. No compensation was offered. All participants consented to partake in the research and to be recorded. Due to the COVID‐19 pandemic, the interviews were conducted either by telephone or videoconference, and the focus group discussions were carried out by videoconference. The questions in T1 (Supporting Information) aimed to identify, among other information, the roles of APs and the effects of their interventions, and were co‐created and pilot‐tested with two patient–researchers (patients included in the research team; M.‐A. C. and M. D.). T1 data collection events took place 4 months after APs were first introduced in the four establishments. Two years later (T2), the data collection aimed to assess the change in the APs' perspective regarding their roles and the effects of their interventions by presenting the T1 results. APs discussed how elements had changed since the new APs joined the team and whether new elements had emerged. Therefore, no interview guide was used in T2. Transcripts of the interviews and focus group discussions were prepared. All data collection events were carried out in French and were subsequently translated into English. 2.3 Participants In total, for the two rounds of data collection (T1/T2), we were able to interview 20 different APs (T1: n = 10, T2: n = 10). A summary of data collection in T1 and T2 is presented in Table , and Table presents a description of the participants. In T1, all 10 APs who were involved in the four establishments and had accompanied patients agreed to participate. One focus group with E4 was held in June 2019 ( n = 3 participants). Another focus group was held in September 2019 with E1, E2 and E3 ( n = 4). The two focus groups were led by the principal researcher (M.‐P. P.) and lasted 58 and 178 min, respectively. Moreover, eight individual interviews were held between April and May 2020 and were conducted by research coordinators (K. B. and M. I.‐N.). They lasted between 30 and 63 min. Out of the 10 APs, 5 participated in two data collection events (individual interviews and focus groups).
In T2, of the 20 APs that were accompanying patients, 16 agreed to participate (4 did not reply to our invitation). Of the 16 participants, 6 had participated in T1. The other four APs that participated in T1 were not reinvited in T2 because they were no longer involved in the PAROLE‐Onco programme due to personal issues. Therefore, 10 new APs were interviewed in T2. An initial focus group with E1 and E3 ( n = 3 participants) was held in September 2021 and lasted 35 min. At that time, the APs had been accompanying patients for 12–22 months. Four other focus groups, one for each establishment ( n = 16 participants in total), were held between March and May 2022 and lasted between 80 and 115 min. The range of months of involvement during this period was between 6 and 32. The events were led by the principal researcher or a research assistant (J. P.). Of the 16 APs, 3 participated in two data collection events. 2.4 Data analysis To analyse the data, we followed the six‐step guideline of Braun and Clarke. First, all interviews were transcribed to familiarize ourselves with the data. Second, several meetings between the authors, including two patient researchers, took place to construct the codebook, which contained four main categories: (1) APs' activities regarding patients and clinical teams, (2) APs' perceived effects of their activities on the patients, (3) on the clinical team and (4) on themselves. Then, we used a thematic analysis approach to better ‘understand a set of experiences, thoughts, or behaviors’ pertaining to these categories. We used an inductive approach to theme identification—or patterned responses that occurred in the data set. Coding was done using the QDA Miner software (version 6.0.2). Steps 4 and 5 consisted of grouping some themes together to define APs' activities. The final step was the writing of this manuscript.
RESULTS The qualitative analysis enabled us to group APs' activities into four categories: emotional support, navigational support, informational and cognitive support and collaborating with the clinical teams (Figure ). Elements of responses pertaining to the effects of APs' activities can be found in Figure . 3.1 Activities 3.1.1 Emotional support In T1, one of the main roles reported by APs was to listen. Since APs have undergone similar experiences, they can better understand what patients are going through, and therefore lead conversations patients could not have with their loved ones: ‘The fact that we have experienced the situation, we are able to be more empathetic towards patients […]. The patients tell me things that they could not say to a partner because they do not want to disturb them’ (E4‐19). In T2, APs reported moving from an unconditional listening role to a more ‘active’ role towards patients. APs now put the emphasis on discussing and validating the patients' emotions to help them understand and accept their journey with cancer while also reducing their anxiety and reassuring them: ‘what I have experienced with several women is validation, validating them in what they feel, in the choices they can make’ (E2‐01). Overall, APs try not only to talk about the disease, treatment and care trajectories, but also about the difficulties experienced at home, in interpersonal relationships and in daily activities, thus participating with the patients in building relationships based on trust and openness. Some mentioned in T2 that they have accompanied the patients' loved ones to bring comfort to the whole family. Some also said they have accompanied patients to their medical appointments, especially patients who may have barriers that limit their ability to interact with their physician. 3.1.2 Navigational support Various resources inside and outside the hospital are offered to the patients, and one of the roles of APs, as mentioned both in T1 and T2, is to act as patient navigators. Not only are they familiar with the range of hospital services offered, but they also ‘know the entire chain of operations for having gone through it’ (E2‐01). APs noticed that ‘often women are not told about this. […] They don't know they have access to this, and they always think that you have to pay too’ (E1‐02). APs therefore ‘encourage them to get the right information’ (E2‐02) and make sure to direct patients to the external services made available to them to complement the support sessions they offer, if needed. They also suggest referring patients to other professionals, be it a psychologist, a nutritionist or a social worker, if they feel that the patients' degree of distress lies beyond their area of expertise and lived experience, so as to effectively meet patients' various needs. In T2, APs realized that patients are often not informed of their rights. This touches on building patients' ability to advocate for their own rights. APs mention their new role of ensuring patients know their rights and become comfortable exercising them. Encouraging patients to ask questions and to assume responsibility for fulfilling their desire to understand and learn about their disease are some of the aspects discussed during the meetings with their patients.
Therefore, APs help patients make their own decisions by encouraging them to think through the situation, ask questions and express their concerns and uncertainties: ‘There are patients who are afraid to ask questions because they don't want to be perceived as annoying patients. You always have to reassure them. We say “no, it's your right.” We have to encourage them’ (E1‐05). 3.1.3 Informational and cognitive support The accompanying sessions with the patients allow APs, in T1, to share their own lived experiences with discernment, without presenting them as an example to be followed. Their role is not to teach, but rather to use their own experience as a way to answer patients' questions. Having experienced the system at hand, APs could serve as resource persons for individuals who are unaware of how to navigate a new healthcare structure: ‘We're able to guide them and encourage them. We are not there to pity them and take care of them. We're really there to support them and say, “Look, I've been there. Here are the steps”’ (E4‐19). In T2, APs explained that they give patients tips they have learned throughout their own journey instead of giving advice, which, according to them, they are not trained to do, nor do they have the expertise to give opinions that include clinical details: ‘I don't like giving advice because I feel it's not part of my mandate… it's really sharing [experiences]’ (E2‐02). Another activity mentioned in T2 is that of helping patients to understand and validate the information received from the clinical team, and thus helping them prepare for their appointments with healthcare professionals. Indeed, as opposed to T1, they can help educate patients by popularizing some of the technical information transmitted by the healthcare professionals and talking to patients using the same language as them: ‘we have the same words because we have often experienced the same emotions, so we will share the same words that the professional will not share’ (E1‐01). Some APs specify that for medical information, patients instinctively know to direct their questions to healthcare providers. With APs, they prefer to ask questions about the establishment and the care pathway: ‘They will ask more questions about their facility: Did you stay in the hospital long? Was it hard? Did you have any pain? That kind of questions’ (E2‐02). By answering patients' questions based on their experiential knowledge, they ‘help patients become partners in their care [by having] a kind of educational role’ (E1‐07). 3.1.4 Collaborating with the clinical team In T1, some APs felt they were not integrated into the clinical team, but thought that integrating them would ultimately bring a value‐added resource to healthcare professionals: ‘it would improve the contact they have with their patients’ (E4‐22). In T2, they specify that they have a role complementary to that of the clinical team, addressing the emotional and experiential aspects of the disease as opposed to the therapeutic aspect provided by the clinical team. Some mentioned that ‘the professionals, they can't know if they haven't lived it … It's just a fact’ (E1‐01). Therefore, APs form a different relationship with patients than healthcare professionals can, and they complement the range of services offered by the establishment. APs felt that they are (or should be) ‘a link in a chain of all the different professionals, that [they] are part of the group’ (E1‐04), although some feel they have not yet fulfilled that role.
Moreover, APs mentioned in T2 that their role also consists of acting as a liaison between the patients and the clinical team. For example, with patients' consent, they can transfer information to the clinical team. This is done by updating the team regularly about their patients' health journey and personal situation, providing information about the treatment and the disease that the team may not otherwise know: ‘we also serve to update the doctor on important facts that can have an impact on the patients' health’ (E3‐07). They can also relay how the patients experience their care and how the healthcare professionals can improve it. Even if information transmission is not homogeneous across establishments, some APs have developed good communication with team members: ‘the pivot nurse was a good ally. I would call her, leave a message, and she would call me back the same day’ (E3‐07). 3.2 Perceived effects of their intervention on patients Both in T1 and T2, APs have the perception that patients are less stressed at the end of a meeting after they were listened to and reassured: ‘I am always told that “It makes me feel good to talk to you”’ (E1‐10). There is no need for a full session to have that effect, ‘even 5 minutes with a patient in a corridor, in the elevator, the person is happy, she has really lowered her anxiety level’ (E1‐03). After talking with a patient, APs could sense that they were leaving them with a smile: ‘we ended … I'm not telling you with bursts of laughter but with a smile. I'm sure the patient on the other end of the phone line smiled’ (E4‐22). In T2, APs added that their accompanying sessions help restore their patients' confidence and hope, and develop the patients' feeling of belonging as they feel understood and supported in their life experiences: ‘I find it positive for patients to be with other people who have had cancer from which they have recovered, that there is long‐term healing that exists. I find it encourages them to continue’ (E3‐15). Nonetheless, while the rapid and positive effects of their support on patients were pointed out in T2, some APs mentioned that a few minutes are not sufficient to delve deep into the patients' concerns and questions, and thus to have a positive long‐term impact on them. Sometimes, several meetings are necessary before a certain progression in the patients' journey is seen. 3.3 Perceived effects of their intervention on the clinical team Both in T1 and in T2, APs shared that they could facilitate the task of healthcare professionals by preparing patients to meet them and to feel comfortable with the information they receive from their physicians. They think that it could be easier for healthcare practitioners to have patients who are calm during a medical appointment: ‘If [the] patient is in a good mood, understands and feels safe because she has been spoken to, it is much easier to care for that patient. She will be a lot more open to treatment. I'm sure of that’ (E4‐22). In addition, since health teams can be understaffed and overwhelmed, APs can help them ‘recover a little from the overload of work’ (E1‐02).
In T2, they put more emphasis on how their role complements the health professionals' therapeutic and curative function, through their emotional support and their backing of patients in the process of adapting to and accepting the disease: ‘we're a bit of a buffer between the two; we come to soothe a lot of things that the work staff doesn't always have time to sort out or that the patient doesn't dare to say’ (E3‐02). Also, the information shared between APs and patients could make the appointments more efficient for the clinical team by ensuring that the tasks are separated. This way, patients can be directed to other resources that offer services that the healthcare team may not be able to provide: ‘The health professionals, to advise massage therapy … They didn't have cancer, so the process of reconciliation with the body, they don't know it that much’ (E4‐22). By relaying how the patients experience their care and how the healthcare professionals can improve it, APs mentioned feeling heard, and receiving openness and appreciation from the clinical team. For example, when patients made suggestions to improve how patients are received at the hospital, APs met with the staff and received positive feedback: ‘they said it changed their whole outlook. As a result, what I understood was that it was to be an integral part of their training’ (E3‐16). In turn, this link that is created with the clinical team encourages the staff members to ask APs more questions, consult with them and ask for their opinion. 3.4 Perceived effects of their intervention on themselves In T1, APs mentioned that being an AP is rewarding and satisfies their need to help others: ‘I'm retired, but still feel the need to do things for other people. So that satisfies my needs well. And that's something rewarding’ (E2‐01). They feel like they are making a difference, and this benefits both parties, as their discussions also serve as a learning experience: ‘It's a plus in both directions. When I talk to someone, it makes me feel just as good to see that I have lightened their mood, as I have helped them. She helps me’ (E1‐04). In T2, the APs discussed how the different patients they encounter represent a learning and experiential opportunity to improve their caregiving abilities and skills. Also, having the opportunity to share allows APs to give meaning to their own experience, and helping someone gives them a sense of purpose. However, some APs can find it emotionally difficult to listen to patients' distress: ‘For sure sometimes it can be hard for us. […] We may have lived with cancer, yes, but we haven't experienced all the distress that people can experience’ (E4‐22). Overall, though, APs in T2 are more capable of distancing themselves from their patients' life stories to prevent their emotions from taking over their role as unbiased listeners. They felt that they had developed ways to maintain control over their emotions and lighten the heaviness of listening sessions, whether through the community of practice meetings organized between APs, which help them share ideas about the more difficult encounters they might have, or by adopting the neutral attitude discussed above. In T2, however, not all APs continue to consider their work gratifying. Some perceive their role only as an opportunity to give back, which does not necessarily bring them anything personally: ‘The word gratifying is not what resonates with me anymore’ (E1‐07).
Activities 3.1.1 Emotional support In T1, one of the main roles reported by APs was to listen. Since APs have undergone similar experiences, they can better understand what patients are going through, and therefore lead conversations patients could not have with their loved ones: ‘The fact that we have experienced the situation, we are able to be more empathetic towards patients […]. The patients tell me things that they could not say to a partner because they do not want to disturb them’ (E4‐19). In T2, APs reported moving from an unconditional listening role to a more ‘active’ role towards patients. APs now put the emphasis on discussing and validating the patients' emotions to help them understand and accept their journey with cancer while also reducing their anxiety and reassuring them: ‘what I have experienced with several women is validation, validating them in what they feel, in the choices they can make’ (E2‐01). Overall, APs try to not only talk about the disease, treatment and care trajectories, but also about the difficulties experienced at home, in interpersonal relationships and in daily activities, thus participating with the patients in building relationships based on trust and openness. Some mentioned in T2 that they have accompanied the patients' loved ones to bring comfort to the whole family. Some also said they have accompanied patients to their medical appointments, especially patients who may have barriers that limit their ability to interact with their physician. 3.1.2 Navigational support Various resources inside and outside the hospital are offered to the patients, and one of the roles of APs, as mentioned both in T1 in T2, is to act as patient navigators. Not only are they familiar with the range of hospital services offered, but they also ‘know the entire chain of operations for having gone through it’ (E2‐01). APs noticed that ‘often women are not told about this. […] They don't know they have access to this, and they always think that you have to pay too’ (E1‐02). APs therefore ‘encourage them to get the right information’ (E2‐02) and make sure to direct patients to the external services made available to them to complete the support sessions they offer, if needed. They also suggest referring patients to other professionals, be it a psychologist, a nutritionist or a social worker, if they feel that the patients' degree of distress lies beyond their area of expertise and experiences undergone to effectively meet the latter's various needs. In T2, APs realized that the patients are often not informed of their rights. This touches on building patients' ability to advocate for their own rights. They mention their new role in ensuring they know their rights and become comfortable using them. Encouraging them to ask questions and to assume responsibility for fulfilling their desire to understand and learn about their disease are some of the aspects discussed during the meetings with their patients. Therefore, APs help patients make their own decisions by encouraging them to think through the situation, ask questions and express their concerns and uncertainties: ‘here are patients who are afraid to ask questions because they don't want to be perceived as annoying patients. You always have to reassure them. We say “no, it's your right.” We have to encourage them’ (E1‐05). 
3.1.3 Informational and cognitive support The accompanying sessions with the patients allow APs, in T1, to share their own lived experiences with discernment without making it an example to be followed. Their role is not to teach, but rather to use their own experience as a way to answer patients' questions. Having experienced the system at hand, APs could serve as resource persons for individuals who are unaware of how to navigate a new healthcare structure: ‘We're able to guide them and encourage them. We are not there to pity them and take care of them. We're really there to support them and say, “Look, I've been there. Here are the steps”’ (E4‐19). In T2, APs explained that they give tips they have learned throughout their own journey with patients instead of giving advice which, according to them, they are not trained to do, nor do they have the expertise to give opinions that include clinical details: ‘I don't like giving advice because I feel it's not part of my mandate… it's really sharing [experiences]’ (E2‐02). Another activity mentioned in T2 is that of helping patients to understand and validate the information received by the clinical team, and thus help them prepare for their appointment with healthcare professionals. Indeed, as opposed to T1, they can help educate patients by popularizing some technical information transmitted by the healthcare professionals and talking to patients using the same language as them: ‘we have the same words because we have often experienced the same emotions, so we will share the same words that the professional will not share’ (E1‐01). Some APs specify that for medical information, patients instinctively know to direct their questions to healthcare providers. With APs, they prefer to ask questions about the establishment and the care pathway: ‘They will ask more questions about their facility: Did you stay in the hospital long? Was it hard? Did you have any pain? That kind of questions’ (E2‐02). By answering patients' questions based on their experiential knowledge, they ‘help patients become partners in their care [by having] a kind of educational role’ (E1‐07). 3.1.4 Collaborating with the clinical team In T1, some APs felt they were not integrated into the clinical team, but that ultimately it could bring a value‐added resource to healthcare professionals: ‘it would improve the contact they have with their patients’ (E4‐22). In T2, they specify that they have a complementary role with the clinical team with respect to the emotional and experiential aspects of the disease versus the therapeutic aspect provided by the clinical team. Some mentioned that ‘the professionals, they can't know if they haven't lived it … It's just a fact’ (E1‐01). Therefore, APs form a different relationship with patients than healthcare professionals can, and they complete the range of services offered to the establishment. APs felt that they are (or should be) ‘a link in a chain of all the different professionals, that [they] are part of the group’ (E1‐04), although some feel they have not yet fulfilled that role. Moreover, APs mentioned in T2 that their role also consists of acting as a liaison between the patients and the clinical team. For example, with patients' consent, they can transfer information to the clinical team. 
It is done by updating them regularly about their patients' health journey and their patients' personal situation through the provision of medical information about treatment and the disease they may not know: ‘we also serve to update the doctor on important facts that can have an impact on the patients' health’ (E3‐07). They can also relay how the patients experience their care and how the healthcare professionals can improve it. Even if information transmission is not homogeneous across establishments, some APs have developed good communication with team members: ‘the pivot nurse was a good ally. I would call her, leave a message, and she would call me back the same day’ (E3‐07).
Emotional support In T1, one of the main roles reported by APs was to listen. Since APs have undergone similar experiences, they can better understand what patients are going through, and therefore lead conversations patients could not have with their loved ones: ‘The fact that we have experienced the situation, we are able to be more empathetic towards patients […]. The patients tell me things that they could not say to a partner because they do not want to disturb them’ (E4‐19). In T2, APs reported moving from an unconditional listening role to a more ‘active’ role towards patients. APs now put the emphasis on discussing and validating the patients' emotions to help them understand and accept their journey with cancer while also reducing their anxiety and reassuring them: ‘what I have experienced with several women is validation, validating them in what they feel, in the choices they can make’ (E2‐01). Overall, APs try to not only talk about the disease, treatment and care trajectories, but also about the difficulties experienced at home, in interpersonal relationships and in daily activities, thus participating with the patients in building relationships based on trust and openness. Some mentioned in T2 that they have accompanied the patients' loved ones to bring comfort to the whole family. Some also said they have accompanied patients to their medical appointments, especially patients who may have barriers that limit their ability to interact with their physician.
Navigational support Various resources inside and outside the hospital are offered to the patients, and one of the roles of APs, as mentioned both in T1 in T2, is to act as patient navigators. Not only are they familiar with the range of hospital services offered, but they also ‘know the entire chain of operations for having gone through it’ (E2‐01). APs noticed that ‘often women are not told about this. […] They don't know they have access to this, and they always think that you have to pay too’ (E1‐02). APs therefore ‘encourage them to get the right information’ (E2‐02) and make sure to direct patients to the external services made available to them to complete the support sessions they offer, if needed. They also suggest referring patients to other professionals, be it a psychologist, a nutritionist or a social worker, if they feel that the patients' degree of distress lies beyond their area of expertise and experiences undergone to effectively meet the latter's various needs. In T2, APs realized that the patients are often not informed of their rights. This touches on building patients' ability to advocate for their own rights. They mention their new role in ensuring they know their rights and become comfortable using them. Encouraging them to ask questions and to assume responsibility for fulfilling their desire to understand and learn about their disease are some of the aspects discussed during the meetings with their patients. Therefore, APs help patients make their own decisions by encouraging them to think through the situation, ask questions and express their concerns and uncertainties: ‘here are patients who are afraid to ask questions because they don't want to be perceived as annoying patients. You always have to reassure them. We say “no, it's your right.” We have to encourage them’ (E1‐05).
Informational and cognitive support The accompanying sessions with the patients allow APs, in T1, to share their own lived experiences with discernment without making it an example to be followed. Their role is not to teach, but rather to use their own experience as a way to answer patients' questions. Having experienced the system at hand, APs could serve as resource persons for individuals who are unaware of how to navigate a new healthcare structure: ‘We're able to guide them and encourage them. We are not there to pity them and take care of them. We're really there to support them and say, “Look, I've been there. Here are the steps”’ (E4‐19). In T2, APs explained that they give tips they have learned throughout their own journey with patients instead of giving advice which, according to them, they are not trained to do, nor do they have the expertise to give opinions that include clinical details: ‘I don't like giving advice because I feel it's not part of my mandate… it's really sharing [experiences]’ (E2‐02). Another activity mentioned in T2 is that of helping patients to understand and validate the information received by the clinical team, and thus help them prepare for their appointment with healthcare professionals. Indeed, as opposed to T1, they can help educate patients by popularizing some technical information transmitted by the healthcare professionals and talking to patients using the same language as them: ‘we have the same words because we have often experienced the same emotions, so we will share the same words that the professional will not share’ (E1‐01). Some APs specify that for medical information, patients instinctively know to direct their questions to healthcare providers. With APs, they prefer to ask questions about the establishment and the care pathway: ‘They will ask more questions about their facility: Did you stay in the hospital long? Was it hard? Did you have any pain? That kind of questions’ (E2‐02). By answering patients' questions based on their experiential knowledge, they ‘help patients become partners in their care [by having] a kind of educational role’ (E1‐07).
Collaborating with the clinical team In T1, some APs felt they were not integrated into the clinical team, but that ultimately it could bring a value‐added resource to healthcare professionals: ‘it would improve the contact they have with their patients’ (E4‐22). In T2, they specify that they have a complementary role with the clinical team with respect to the emotional and experiential aspects of the disease versus the therapeutic aspect provided by the clinical team. Some mentioned that ‘the professionals, they can't know if they haven't lived it … It's just a fact’ (E1‐01). Therefore, APs form a different relationship with patients than healthcare professionals can, and they complete the range of services offered to the establishment. APs felt that they are (or should be) ‘a link in a chain of all the different professionals, that [they] are part of the group’ (E1‐04), although some feel they have not yet fulfilled that role. Moreover, APs mentioned in T2 that their role also consists of acting as a liaison between the patients and the clinical team. For example, with patients' consent, they can transfer information to the clinical team. It is done by updating them regularly about their patients' health journey and their patients' personal situation through the provision of medical information about treatment and the disease they may not know: ‘we also serve to update the doctor on important facts that can have an impact on the patients' health’ (E3‐07). They can also relay how the patients experience their care and how the healthcare professionals can improve it. Even if information transmission is not homogeneous across establishments, some APs have developed good communication with team members: ‘the pivot nurse was a good ally. I would call her, leave a message, and she would call me back the same day’ (E3‐07).
Perceived effects of their intervention on patients Both in T1 and T2, APs have the perception that patients are less stressed at the end of a meeting after they were listened to and were able to be reassured: ‘I am always told that “It makes me feel good to talk to you”’ (E1‐10). There is no need for a full session to have that effect, ‘even 5 minutes with a patient in a corridor, in the elevator, the person is happy, she has really lowered her anxiety level’ (E1‐03). After talking with a patient, APs could sense that they were leaving them with a smile: ‘we ended … I'm not telling you with bursts of laughter but with a smile. I'm sure the patient on the other end of the phone line smiled’ (E4‐22). In T2, APs added the fact that their accompanying sessions help restore their patients' confidence and hope, and develop the patients' feeling of belonging as they feel understood and supported in their life experiences: ‘I find it positive for patients to be with other people who have had cancer from which they have recovered, that there is long‐term healing that exists. I find it encourages them to continue’ (E3‐15). Nonetheless, while in T2, the rapid and positive effects of their support on patients are pointed out, some APs mentioned that a few minutes are not sufficient to delve deep into the patients' concerns and questions, and thus have a positive long‐term impact on them. Sometimes, several meetings are necessary before a certain progression in the patients' journey is seen.
Perceived effects of their intervention on the clinical team Both in T1 and in T2, APs shared that they could facilitate the task of healthcare professionals by preparing patients to meet them and to feel comfortable with the information they receive from their physicians. They think that it could be easier for healthcare practitioners to have patients who are calm during a medical appointment: 'If [the] patient is in a good mood, understands and feels safe because she has been spoken to, it is much easier to care for that patient. She will be a lot more open to treatment. I'm sure of that' (E4‐22). In addition, since health teams can be understaffed and overwhelmed, APs can help them 'recover a little from the overload of work' (E1‐02). In T2, they put more emphasis on how their role complements the health professionals' therapeutic and curative function, through their emotional support and their backing of the patients' process of adaptation to and acceptance of the disease: 'we're a bit of a buffer between the two; we come to soothe a lot of things that the work staff doesn't always have time to sort out or that the patient doesn't dare to say' (E3‐02). Also, the information shared between APs and patients could make appointments more efficient for the clinical team by ensuring that the tasks are separated. This way, patients can be directed to other resources that offer services that the healthcare team may not be able to provide: 'The health professionals, to advise massage therapy … They didn't have cancer, so the process of reconciliation with the body, they don't know it that much' (E4‐22). By relaying how the patients experience their care and how the healthcare professionals can improve it, APs mentioned feeling heard, and receiving openness and appreciation from the clinical team. For example, when APs relayed patients' suggestions to improve how patients are received at the hospital, they met with the staff and received positive feedback: 'they said it changed their whole outlook. As a result, what I understood was that it was to be an integral part of their training' (E3‐16). In turn, this link that is created with the clinical team encourages the staff members to ask APs more questions, consult with them and ask for their opinion.
Perceived effects of their intervention on themselves In T1, APs mentioned that being an AP is rewarding and that it satisfies their need to help others: 'I'm retired, but still feel the need to do things for other people. So that satisfies my needs well. And that's something rewarding' (E2‐01). They feel they are making a difference, and this benefits both parties, as their discussions also serve as a learning experience: 'It's a plus in both directions. When I talk to someone, it makes me feel just as good to see that I have lightened their mood, as I have helped them. She helps me' (E1‐04). In T2, the APs discussed how the different patients they encounter represent a learning and experiential opportunity for them to improve their caregiving abilities and skills. Also, having the opportunity to share allows APs to give meaning to their own experience, and helping someone gives them a sense of purpose. However, some APs can find it emotionally difficult to listen to patients' distress: 'For sure sometimes it can be hard for us. […] We may have lived with cancer, yes, but we haven't experienced all the distress that people can experience' (E4‐22). Overall, though, APs in T2 are more capable of distancing themselves from their patients' life stories to prevent their emotions from taking over their role as unbiased listeners. They felt they had developed ways to maintain control over their emotions and lighten the heaviness of listening sessions, whether through the community of practice meetings they organize among APs, which help them share ideas about the more difficult encounters they might have, or by adopting the neutral attitude discussed above. In T2, however, not all APs continue to consider their work gratifying. Some perceive their role only as an opportunity to give back, which does not necessarily bring them anything personally: 'The word gratifying is not what resonates with me anymore' (E1‐07).
DISCUSSION The objective of this study was to assess the evolution of APs' perspectives regarding their activities as APs, as well as the perceived effects of their intervention on the patients, on the clinical team and on themselves. 4.1 Different activities played Like many studies on peer support interventions for cancer, our study shows that the primary activity of APs is to listen to patients and validate their emotions to facilitate their acceptance process of the disease and increase their ability to fight cancer in a positive way. This is done by sharing their own lived experiential knowledge and the tips they acquired throughout their own journey with illness. They also share information not only about their experiences with the disease and treatments but also about community resources, a role that is also reflected in the work of Fisher et al. and Jacobson et al. They allow patients to visualize the care pathway and thus gain a better understanding of the different steps they will have to go through. In T2, APs' activities shifted from listening and sharing experiences to empowering patients by helping them become partners in their care. It is possible that the 'listening role' is a less threatening first step to finding a place within the care team, but time and experience have given APs the ability to take on a more active role in the clinical team. Other functions, like advocacy support, are potentially more contentious, and it is not surprising that they appear in T2 rather than in T1. Thus, these APs also have a patient navigator role as presented in the literature, and they are all former patients of the establishment who have all been led through the same trajectory. Another capacity emphasized by APs was their ability to help patients better prepare for their medical appointments and better understand their illness, their treatments and the consequences of the decisions made. Patients are often reluctant to ask professionals to clarify the information provided to them, to ask questions, or to take their place in the decision‐making process. By playing this role, APs can provide a safe space in which to ask questions. This educational activity is also found in the literature, although the literature places less emphasis on APs playing a counsellor role. In our context, they help patients to explore coping resources in a nonconfrontational way, using reflective listening rather than persuasion. Finally, they can talk about professionals and introduce them to patients with reference to their own experience of the patient‐professional relationship. Such a role is rarely reported in the literature outside of mental health. Therefore, APs provide meta-literacy support, characterized by support on behavioural (patient behaviour), social‐emotional and cognitive levels, and not only at the educational level. 4.2 Particularities of being a member of the clinical team While there are many studies on the contribution of peer support programmes in cancer care, there are few reports that address peer mentoring in which APs are integrated into the clinical team, except in the area of mental health. Our results show that, in T2, some APs felt more integrated into the clinical team and were able to communicate and collaborate with healthcare professionals, although not all establishments have succeeded in fully integrating APs.
Introducing APs as full members of the clinical team translates into APs having access, with patients' consent, to the relevant medical information on the patients, to better understand the context of their accompaniment. It also means being able to interact with healthcare professionals when APs identify situations that require professional input, and having the possibility of leaving a note in the patient's medical file, with the patient's consent, summarizing the main points of the exchanges that may be relevant for the team. Being former patients of the establishment and thus highly familiar with the professionals, APs become the 'transmission agent' between the professionals and the patients. On the patients' side, they encourage the development of a bond of trust with the professionals. They also embody hope in the team's ability to care for them, as the APs themselves are there to attest to it. For healthcare professionals, the feedback on the patients' health journey and personal life allows them to better understand the patients' reality and thus better respond to their needs, helping patients have a better experience. Also, APs emphasized the distinction of roles within the clinical team, as they did not consider that discussing treatment and clinical details was their responsibility. They were comfortable giving advice based on their own experience and did not seek to provide professional counselling. APs develop a complicity with the patient based on a shared experience. This bond can bring to light important clinical situations that would otherwise not have been reported to the clinical team. By becoming members of the team, they can suggest that other professionals, such as psychologists, would be able to meet patients' different needs. Again, such a role is not very present in the literature available on peer support programmes, except in mental health. 4.3 APs' perception of the effects of their interventions Through this research, we were able to show that the APs perceived a certain number of effects of their accompaniment on the patients. The first effect that stands out is the decrease in anxiety, whether at the time of examinations (genetic, biological, radiological, etc.), the announcement of the diagnosis, the choice of treatments or the end of the treatments. Having a safe place to discuss their fears and anxieties, and being supported by people who have successfully dealt with them and are still alive, allows patients to lower their anxiety levels. By being less anxious, patients are then better able to retain the information given to them, to prepare for their appointments and to dare to ask questions. Such a change in patients' behaviour allows them to be more involved in their care, to regain power over their health, and to develop a partnership with their healthcare professionals. APs foster a bond of trust between the clinical team and the patients by sharing their own relational experiences with the team. This lived experience allows patients to identify with the APs and feel more comfortable communicating with their professionals. As discussed by Fisher et al., one of the key features of peer support revolves around encouraging self‐empowerment, as supporters focus on a person‐centred approach. In T2, APs also emphasized restoring patients' confidence through their accompanying sessions.
The authors considered supporters' role in helping patients cope with negative emotions and insecurities, just as APs mentioned discussing their fears and worries with patients. For professionals, as evidenced by the role of APs within the team, APs make them more aware of the patients' perspective and experience; professionals may therefore realize that they may have to change their behaviour, in particular by improving their communication abilities. This contributes to improving the quality of care, as highlighted by Gates and Akabas, and to humanizing the care process. For APs themselves, Brodar et al. mentioned that peer supporters could become emotionally charged following their encounters with patients, as they can be reminded of their own experience with cancer. It was therefore suggested that there should be more support from clinical staff as well as from other peer supporters to create a sense of community, which could comfort APs during difficult times and help them give meaning to their own experiences. However, in our study, such a need did not emerge. This can perhaps be explained by APs meeting regularly in a community of practice where they can share their accompaniments and find support from the other peers present. For APs, the accompaniments are seen more as a learning opportunity, which helps give meaning to their own journey with their illness while also giving them a sense of accomplishment. Such a result has been mentioned by Solomon: being a peer provider offered personal growth, as it increased peer providers' confidence in their capacity to support others, their ability to cope with the illness, and their self‐esteem. 4.4 Limitations The concept of APs as integral members of a clinical team is quite recent. Our study is exploratory and requires further study over time, as well as quantitative studies to test different models. We also recognize that APs have different perceptions of their integration, and thus the results may not be an exact representation of all APs, nor do all APs practice every activity mentioned above. Through their own experience and with time, they have developed their own way of accompanying patients. Therefore, it would be important to further explore the different accompanying profiles of APs in the future. Similarly, the contexts in the four establishments are different and, accordingly, our results cannot be generalized. Moreover, here we have presented APs' perspective on their roles and their effects on themselves, the patients and the clinical team, but it is also important to assess the challenges and facilitators of their integration into the clinical team. Those results are presented in another manuscript in preparation. Future work could assess how the roles of APs, and their effects on their loved ones, would change if they were paid as opposed to working as volunteers, as is currently the case. In addition, it would be important to assess the patients' as well as the clinical teams' perspectives on APs. Data collection for these two populations is currently underway. Also, of the 29 APs that were included in the clinical teams at the four establishments, 20 participated in the study, because some had changed positions or were unable to respond to our request. However, in our data collection process, both in T1 and T2, we felt that we had reached data saturation.
CONCLUSION This article assesses the evolution of APs' perception of their role and of the effects they can have on people affected by cancer (mostly breast cancer), on healthcare professionals and on themselves. It highlights that APs provide emotional, informational, cognitive and navigational support that allows patients to be more empowered in their care. As they gain experience, APs progressively take on a broader set of roles within the teams. APs also help patients become partners in their care. They are able to mobilize their experiential knowledge to complement professionals' scientific and experiential knowledge. By integrating APs into teams, establishments can also help professionals more effectively take into account patients' lived experiences in the way they respond to their needs. In this way, APs contribute to improving patients' experience of care, but also the professionals' sensitivity to patients' experiences. However, organizational factors may be more or less favourable to APs' ability to respond to patients' needs and fit into teams. In a second article, we therefore propose to focus on the issues identified by APs and to examine how healthcare establishments can further facilitate the integration of APs into their teams.
Marie‐Pascale Pomey, Monica Iliescu‐Nelea, Cécile Vialaron, Karine Bouchard, Louise Normandin, Marie‐Andrée Côté, Mado Desforges, Israël Fortin, Isabelle Ganache, Catherine Régis, Zeev Rosberger, Danielle Charpentier, Lynda Bélanger, Michel Dorva, Djahanchah P. Ghadiri, Mélanie Lavoie‐Tremblay, Antoine Boivin, Jean‐François Pelletier, Nicolas Fernandez, Alain M. Danino and Michèle de Guise conceived and designed the project. Marie‐Pascale Pomey, Jesseca Paquette, Monica Iliescu‐Nelea, Cécile Vialaron, Rim Mourad, Karine Bouchard, Louise Normandin, Marie‐Andrée Côté and Pénélope Pomey‐Carpentier participated in data collection and analysis. All authors have made substantial contributions to this study and have participated in the writing of this paper.
The authors declare no conflict of interest.
This study received ethical approval from the Research Ethics Committee (17.260) of the Research Centre of the University of Montreal Hospital Centre (CRCHUM).
Tumour Suppressor Neuron Navigator 3 and Matrix Metalloproteinase 14 are Co-expressed in Most Melanomas but Downregulated in Thick Tumours

Patient samples and melanoma cell lines The study was approved by the Ethics Committee of Southern Finland (Dnr 492/E5/05 and 208/E5/01). Archival, paraffin-embedded tissue samples of primary melanomas (n = 27) and histologically benign naevi (n = 15), obtained for diagnostic purposes from patients treated at the Tampere University Central Hospital and the Department of Dermatology and Allergology, Helsinki University Central Hospital, Helsinki, Finland, were analysed. In addition, the study examined another 13 melanomas (Breslow thickness ≥ 2.5 mm) for NAV3, MMP14 and MMP16 mRNA and protein expression. Two established melanoma cell lines were also used: the primary WM-793 (VGP, vertical growth phase) cell line and the lymph node metastasis-derived WM-239 cell line, provided by Dr Erkki Hölttä (Biomedicum Helsinki) and originally established by Dr Meenhard Herlyn (The Wistar Institute, Philadelphia, PA, USA; Wistar Institute melanoma cell line collection). Probe labelling Two bacterial artificial chromosome (BAC) clones specific to NAV3 DNA (RP11-36P3 and RP11-136F16; Research Genetics Inc., Huntsville, AL, USA) and the chromosome 12 centromere probe (pA12H8; American Type Culture Collection (ATCC), Manassas, VA, USA) were labelled with Alexa 594-5-dUTP (red; Thermo Fisher Scientific, Waltham, MA, USA) and Alexa 488-5-dUTP (green; Thermo Fisher Scientific), respectively, using nick translation. The probes were mixed with human COT-1 DNA (Invitrogen), precipitated and diluted into hybridization buffer (15% w/v dextran sulphate, 70% formamide in 2× saline-sodium citrate (SSC), pH 7.0). Fluorescence in situ hybridization For fluorescence in situ hybridization (FISH), cell nuclei were extracted, pretreated and hybridized as described previously. FISH signals were analysed both manually (by 2 blinded, independent researchers) and automatically, as described previously. The manual analysis was performed using an Olympus BX61 microscope (Tokyo, Japan) equipped with 60× and 100× oil-immersion objectives and a triple bandpass filter for simultaneous detection of Atto488, Alexa594 and DAPI (Chroma Technology Corp., Brattleboro, VT, USA), according to a standard operating procedure (SOP Q2QC005; Dermagene Oy, Helsinki, Finland) that standardizes the signal scoring practice. The results obtained by the automated system (Metasystems Metafer4 software, Altlussheim, Germany) were checked manually by 2 of the authors experienced in FISH analysis (PM, SV), and false results were deleted. A result was deemed false if the manually detected signals in the image of a single nucleus did not match the results provided by the automated system. A total of 200–1,000 nuclei were analysed from each case, and nuclei were classified as normal if they had 2 signals for the chromosome 12 centromere and 2 for NAV3. Relative NAV3 deletion was defined when the number of NAV3 signals was lower than the number of centromere signals, and relative NAV3 amplification when the number of NAV3 signals was higher than the number of centromere signals. This method also detects possible aneuploidy (polyploid cells with 3 or more centromere signals). A sample was considered NAV3 aberrant if the percentage of nuclei showing amplification/deletion exceeded the 7.8%/6.1% cut-off levels (mean ± 2 × standard deviation (2SD)) determined from the benign naevus samples, respectively.
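The aberration call described above reduces to a simple thresholding rule. Below is a minimal sketch of that rule in Python; it is not the authors' code, and the naevus percentages, sample values and function names are invented placeholders for illustration only.

```python
# Minimal sketch (not the authors' code): deriving per-sample aberration
# cut-offs as mean + 2 x SD of the percentages observed in benign naevi.
import numpy as np

# Hypothetical per-naevus percentages of nuclei with NAV3 amplification/deletion
naevus_amp_pct = np.array([2.1, 3.5, 1.8, 4.0, 2.9])  # placeholder data
naevus_del_pct = np.array([1.2, 2.8, 2.0, 3.1, 1.5])  # placeholder data

amp_cutoff = naevus_amp_pct.mean() + 2 * naevus_amp_pct.std(ddof=1)
del_cutoff = naevus_del_pct.mean() + 2 * naevus_del_pct.std(ddof=1)

def is_nav3_aberrant(amp_pct: float, del_pct: float) -> bool:
    """A tumour is called NAV3-aberrant if either percentage exceeds its cut-off."""
    return amp_pct > amp_cutoff or del_pct > del_cutoff

print(f"amplification cut-off: {amp_cutoff:.1f}%, deletion cut-off: {del_cutoff:.1f}%")
print(is_nav3_aberrant(amp_pct=8.5, del_pct=2.0))  # -> True with these placeholders
```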
Statistical analysis Statistical analyses of the FISH results were performed using negative binomial regression in SPSS. The response variable in the first model was the number of nuclei with deletion, and in the second model the number of nuclei with amplification. The independent variable was the group variable, and the offset variable was the total number of counted nuclei. Multiple comparisons of groups were done with the sequential Sidak adjustment method. A Kaplan–Meier plot was used to evaluate the survival of the melanoma patients; patients were divided into groups with relatively more NAV3 deletions, relatively more NAV3 amplifications, or no change. Statistical analyses for the migration and 3D growth assays were performed with a commercially available statistical software package, IBM SPSS for Mac (IBM, Armonk, NY, USA). For the qPCR results on the correlation of NAV3 with MMP14 and MMP16, Kolmogorov-Smirnov and Shapiro-Wilk tests suggested that both variables were non-normally distributed; the data were therefore log-transformed.
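To make the count-model structure concrete, here is a minimal sketch of the same kind of negative binomial regression in Python with statsmodels rather than SPSS. The group labels and counts are invented placeholders; the log of the total number of counted nuclei enters as the offset, so the model describes the rate of aberrant nuclei per sample.

```python
# Minimal sketch (assumed workflow, not the authors' SPSS analysis):
# negative binomial regression of aberrant-nucleus counts with
# log(total counted nuclei) as the offset term.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical per-sample counts of nuclei with NAV3 deletion (placeholders)
df = pd.DataFrame({
    "group":        ["naevus", "naevus", "melanoma", "melanoma"],
    "deleted":      [3, 5, 120, 480],
    "total_nuclei": [400, 520, 300, 950],
})

# Indicator for melanoma vs naevus as the independent (group) variable
X = sm.add_constant(
    pd.DataFrame({"melanoma": (df["group"] == "melanoma").astype(float)})
)

model = sm.GLM(
    df["deleted"],
    X,
    family=sm.families.NegativeBinomial(),
    offset=np.log(df["total_nuclei"]),  # exposure: total nuclei counted per sample
)
print(model.fit().summary())
```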
Quantification of mRNA expression for NAV3, MMP14 and MMP16 RNA was isolated from 10-µm tissue sections using the High Pure FFPE RNA Micro Kit (Roche, Basel, Switzerland) and from WM-239 cells using the RNeasy Micro Kit (Qiagen, Hilden, Germany). cDNA was synthesized using the SuperScript® VILO™ cDNA Synthesis Kit (Thermo Fisher Scientific). The TaqMan Gene Expression Assay probe was diluted 20-fold, and 0.5 µl was used per reaction. The LightCycler was run for 40 cycles in each experiment. The NAV3, MMP14 and MMP16 genes were amplified with iQ Supermix (control 730001045/730003323, cat. 170-8862, Bio-Rad, Hercules, CA, USA; NAV3, Hs00372108_m1; MMP14, Hs01037006_gH; MMP16, Hs00234676_m1; GAPDH, Hs02786624_g1, Thermo Fisher Scientific). Sequence analysis DNA was isolated from 10-µm tissue sections of 6 paraffin-embedded tumour samples (QIAamp DNA FFPE Tissue Kit, Qiagen), and NAV3 was amplified using gene-specific primers (forward primer 5'-ACTGCCACCAGCTCCTTTGGC-3' and reverse primer 5'-TCTCTCAGTTTCGAGGTGGTG-3'). PCR products were purified (PCR Purification Kit, Qiagen) and sequenced at the Biomedicum Sequencing Unit (Helsinki, Finland). The results were analysed using Gene Composer software (Biomedicum Bioinformatics Unit, Helsinki, Finland). 2-dimensional (2D) wound migration assay Removable, polystyrene medium chambers of 4-well configuration (Lab-Tek™ II Chamber Slide™ System, Thermo Fisher Scientific) were used for incubation and growth of WM-239 cell cultures. When the cell monolayer reached over approximately 90% confluence, the monolayer was wounded with a sterile rubber blade. Wound closure was assessed 9–24 h later under light microscopy. Cells were then fixed with 4% paraformaldehyde (PFA) in phosphate-buffered saline (PBS) for 10 min, washed with PBS and then with 35%, 50% and 70% ethanol (EtOH) in PBS, and immersed in 70% EtOH/PBS for storage. The distance migrated by cells was measured using ImageJ software (National Institutes of Health, Bethesda, MD, USA). RNA interference Small interfering RNAs (siRNA) targeting NAV3 (FlexiTube GeneSolution for NAV3 GS89795, siNAV3-1 (SI04141956) and siNAV3-2 (SI04272646), Qiagen) and non-silencing control siRNA (SI03650325, Qiagen) were transfected using Lipofectamine™ 2000 (Thermo Fisher Scientific). The silencing efficacies were assessed by quantitative polymerase chain reaction (qPCR). Growth in 3D collagen Type I collagen (4.8 mg/ml, rat tail, Sigma-Aldrich, St Louis, MO, USA) was mixed with an equal amount of 2× minimal essential medium (MEM), and the pH was adjusted to 7.4 using 20% sodium hydroxide (NaOH). A total of 5,000 cells were suspended in 40 µl of hydrogel, and the suspension was transferred to a 24-well plate and incubated for 1 h at 37°C to allow complete gelling. After 48 h of incubation in complete growth medium, the cultures were photographed, and the percentage of elongated cells amongst the total was calculated using ImageJ software (a minimal scoring sketch is given after the staining protocols below). Immunostaining Immunohistochemistry for NAV3 and MMP14 proteins was performed on 40 formalin-fixed and paraffin-embedded primary melanoma samples and 3 metastases, as described previously. After standard deparaffinization, the sections were blocked with ready-to-use 2.5% normal horse blocking serum (Universal ImPress, MP-7500, Vector Laboratories, Newark, CA, USA), incubated overnight at 4°C with either rabbit anti-NAV3 (HPA032111, Lot R32215, Sigma Prestige Antibodies, Sigma-Aldrich) diluted 1:100 in 1% BSA or mouse anti-MMP14 (MAB3328, Lot 2450182, Millipore, Burlington, MA, USA) diluted 1:100 in 1% BSA, and further incubated with the secondary antibody Universal ImPress HRP (MP-7500, Vector Laboratories) and VECTOR NovaRED substrate (SK-4800, Vector Laboratories). The slides were counterstained with Mayer's haematoxylin and mounted, after a graded alcohol series, with Neo-Mount (HX934618, Merck, Rahway, NJ, USA). For the scratch wound slides, immunostaining was performed with the primary antibodies rabbit anti-NAV3 (HPA032111, Lot R32215, Sigma Prestige Antibodies, Sigma-Aldrich), diluted 1:300 in 1% BSA, and mouse anti-MMP14 (MAB3328, Lot 2450182, Millipore), diluted 1:100 in 1% BSA. The slides were incubated overnight at 4°C, followed by the Vectastain ImPRESS™ Universal Reagent anti-mouse/rabbit Ig peroxidase kit (MP-7500, Vector Laboratories) with the Vector AEC Peroxidase Substrate Kit as the chromogen (SK-4200, Vector Laboratories) and Mayer's haematoxylin as counterstain. Slides were mounted with Aquatex (VWR International Ltd, Lutterworth, UK). The immunostainings were analysed independently by 2 authors (OB and AR). Double immunofluorescence staining with NAV3 and MMP14 For the double immunofluorescence, 7 thick (Breslow thickness > 2.4 mm) formalin-fixed and paraffin-embedded melanoma samples were stained for MMP14 and NAV3. Anti-MMP14 antibody (MAB3328, Lot 2450182, Millipore) was used at 1:50 dilution in Tris-buffered saline (TBS), with goat anti-mouse Alexa 594, diluted 1:500 in TBS, as the secondary antibody. The second primary antibody was the same anti-NAV3 antibody as above, used at a dilution of 1:100 in TBS; it was incubated overnight at 4°C, and goat anti-rabbit Alexa 488, diluted 1:500 in TBS, was used as the secondary antibody. To visualize the cell nuclei (DNA), the sections were treated with Hoechst 33342 solution (Thermo Fisher Scientific), and the slides were mounted with Immu-mount (Thermo Fisher Scientific). The images were visualized using a Zeiss Axio Imager immunofluorescence microscope (Carl Zeiss AG, Oberkochen, Germany).
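The elongated-cell read-out in the 3D collagen assay is, in essence, a shape classification of segmented cells. The sketch below shows one way to score it in Python with scikit-image; the aspect-ratio threshold, the toy mask and the overall criterion are assumptions for illustration, not the authors' ImageJ procedure.

```python
# Minimal sketch (assumption: simple aspect-ratio criterion, not the authors'
# ImageJ workflow): scoring the percentage of elongated cells from a
# segmented (binary) image, as in the 3D collagen sprouting read-out.
import numpy as np
from skimage.measure import label, regionprops

# Toy binary mask: one roughly round "cell" and one elongated "cell"
mask = np.zeros((40, 40), dtype=bool)
mask[5:12, 5:12] = True      # roughly round object
mask[25:28, 5:35] = True     # elongated object

ASPECT_RATIO_CUTOFF = 2.0    # assumed threshold for calling a cell "elongated"

props = regionprops(label(mask))
elongated = [
    p for p in props
    if p.minor_axis_length > 0
    and p.major_axis_length / p.minor_axis_length > ASPECT_RATIO_CUTOFF
]
pct_elongated = 100 * len(elongated) / len(props)
print(f"{pct_elongated:.0f}% of cells classified as elongated")
```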
NAV3 aberrations were found in most primary melanomas but not in benign naevi To determine the possible role of NAV3 in melanoma progression, this study analysed NAV3 (chromosome 12q21) copy number alterations with the use of FISH in primary melanomas and benign naevi. In the primary melanomas, NAV3 copy number changes were observed in 18/27 (67%) tumours, while no copy number alterations were found in benign naevi. Chromosome 12 polysomy was found in 13/27 (48%) of primary melanomas but not in benign naevi. NAV3 deletions were found in 16/27 (59%) of the primary melanomas and, in 5 of these, the proportion of nuclei with deletions was very high, ranging from 46% to 97%. NAV3 amplification was found in 12/27 (44%) of the primary melanomas, accompanied by chromosome 12 polysomy in 5 cases. NAV3 amplification was seen in at most 8–20% of the tumour cells, and 10 tumours (37%) showed heterogeneity in NAV3 copy number such that both types of aberrated nuclei were present. The proportion of NAV3 deletions in primary melanomas showed a statistically significant difference compared with benign naevi (p = 0.02). The differences in the frequency of NAV3 amplifications did not reach statistical significance. The 2 melanoma cell lines were both heterogeneous: 20% of cells in the primary WM-793 cell culture showed amplifications and 36% showed deletions of NAV3, whereas in the metastasis-derived WM-239 cell culture 9% of cells showed amplifications and 21% showed deletions. Since NAV3 has previously been shown to be mutated in melanoma samples with a nucleotide change c.598C > T (4), the current study analysed the corresponding NAV3 mutations in the melanoma patient samples (6 patients), but could not detect this transition in any of the samples (data not shown). NAV3 protein expression localizes to the leading edge of migrating melanoma cells in vitro To understand the role of NAV3 in melanoma cell migration, a 24-h 2-dimensional (2D) scratch wound assay was performed using the WM-239 melanoma cell line. The assay revealed different intracellular localization and patterns of NAV3 staining in the melanoma cells according to the location of the cells, either on the border of the wound or distant from the wound. A striking unilateral polarization of NAV3 was seen at the leading edge of migrating cells, while no polarization of NAV3 staining was seen in the non-scratched area. Migrating melanoma cells displayed the morphology of migrating malignant cells, such as bi- or tri-polar dendritic melanocytes. Silencing of NAV3 reduces melanoma cell migration in 2D environment and invasive growth in 3D type I collagen To assess the role of NAV3 in melanoma cell motility, the current study silenced NAV3 in WM-239 melanoma cells using specific siRNAs. In a 9-h 2D wound scratch migration assay, NAV3 silencing with the most efficient siRNA, siNAV3-2, reduced the distance migrated by cells by over 50%. While control cells exhibited elongated morphology at the migration front, cells silenced for NAV3 remained rounded, suggesting that NAV3 is also necessary for microtubule stabilization in melanoma cells. When the cells were embedded into dense 3D type I collagen, 50% of WM-239 cells grown in 3D collagen showed elongated cell morphology and sprouting, reflective of invasive activity. Notably, silencing of NAV3 markedly reduced the cell elongation and invasive phenotype, suggesting that NAV3 is required for melanoma cell invasive capability inside dense type I collagen.
NAV3 expression in primary melanoma tumours To further investigate NAV3 functions in melanoma, the current study assessed NAV3 protein expression in human melanoma tumours using immunohistochemistry in 39 primary melanomas. NAV3 expression was observed in 27/39 of the melanoma tumours. Notably, NAV3 was strongly and homogeneously expressed in all cells of every thin tumour (8/8, Breslow thickness < 1 mm). In thicker tumours (1–7 mm), in turn, NAV3 was expressed in 19/31 (61%) tumours. In the mid-thickness samples (1–5 mm), NAV3 was expressed in 16/24 (67%) tumour samples. The expression pattern in mid-thickness tumours was heterogeneous, with only a proportion of the tumour cells expressing NAV3, either at the epidermal edge or at the invasive edge of the tumours. In 7/7 of the thickest tumours (> 5 mm), NAV3 was either not expressed (4/7) or expressed only weakly in the upper dermis and lost from the lower parts of the dermis (3/7). NAV3 and MMP14 co-expression in primary melanoma tumours Since NAV3 was polarized to the leading edge of migrating melanoma cells and its silencing reduced invasive growth inside 3D collagen, the current study explored the possible co-expression of NAV3 with the membrane-tethered proteinase MMP14, a major type I collagenase that is required for invasive growth in 3D collagen and is also expressed at the leading edge of invading melanoma cells. In addition, this study assessed the expression of the close homologue of MMP14, MMP16, which modulates MMP14 function and is associated with poor prognosis in melanoma. mRNA expression of these proteins was analysed in 13 primary tumours, and the expression of NAV3 correlated positively with MMP14 (r = 0.85, p = 0.0022) but not with MMP16 (r = 0.35, p = 0.041). To further elucidate the association of the NAV3 and MMP14 proteins, the current study tested for NAV3 and MMP14 co-expression using immunohistochemistry and immunofluorescence staining of primary melanoma tumours. NAV3 and MMP14 were similarly expressed (i.e. either both proteins were expressed or neither was expressed in the same sample) in 26/37 (70%) of all samples. Both proteins were expressed in all thin melanoma tumours with Breslow thickness < 1 mm (8/8, 100%), while 11/23 (48%) of the mid-thickness tumours (1–5 mm) and only 1/6 of the thickest tumours (5–7 mm) were positive for both NAV3 and MMP14. The 12 tumours that were negative for either NAV3 alone (n = 6) or for both NAV3 and MMP14 (n = 6) were all > 1 mm. Notably, 4/6 of the double-negative tumours were ≥ 6 mm thick. NAV3 and MMP14 co-localized to the same cells in all double-positive samples. Consistently, NAV3 deletion or downregulation, as well as MMP14 downregulation, have been reported in metastatic tumours of many cancer types. Collectively, these results show that NAV3 is often downregulated together with MMP14 in thick melanoma tumours, suggesting that NAV3 and MMP14 downregulation may favour melanoma tumour growth and/or dissemination.
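As stated in the Methods, the expression correlations above were computed on log-transformed values. The following minimal Python sketch reproduces that type of analysis; the expression values are invented placeholders, not the study's data.

```python
# Minimal sketch (assumed analysis, consistent with the Methods): Pearson
# correlation of log-transformed relative expression values, as used for
# the NAV3-MMP14 comparison. Values below are placeholders.
import numpy as np
from scipy import stats

nav3  = np.array([0.8, 1.5, 2.3, 0.4, 3.1, 1.1, 0.6])   # placeholder values
mmp14 = np.array([0.7, 1.8, 2.0, 0.5, 2.7, 1.3, 0.5])   # placeholder values

r, p = stats.pearsonr(np.log(nav3), np.log(mmp14))
print(f"r = {r:.2f}, p = {p:.4f}")
```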
To determine the possible role of NAV3 in melanoma progression, this study analysed NAV3 (chromosome 12q21) copy number alterations with the use of FISH in primary melanomas and benign naevi. In the primary melanomas, NAV3 copy number changes were observed in 18/27 (67%) tumours, while no copy number alterations were found in benign naevi . Chromosome 12 polysomy was found in 13/27 (48%) of primary melanomas but not in benign nevi. NAV3 deletions were found in 16/27 (59%) of the primary melanomas, and, in 5 of these, the proportion of nuclei with deletions was very high, ranging from 46% to 97% . NAV3 amplification was found in 12/27 (44%) of the primary melanomas, accompanied by chromosome 12 polysomy in 5 cases. NAV3 amplification was seen in 8–20% of the tumour cells at most, and 10 tumours (37%) showed heterogeneity in NAV3 copy number so that both types of aberrated nuclei were present. The proportion of NAV3 deletions in primary melanomas showed statistically significant difference compared with benign naevi ( p = 0.02; ). The differences in frequency of NAV3 amplifications did not reach statistical significance. The 2 melanoma cell lines were both heterogenic, since 20% of cells in the primary WM-793 cell culture showed amplifications and 36% of cells showed deletions of NAV3 , whereas in the metastasis-derived WM-239 cell culture 9% of cells showed amplifications and 21% of cells showed deletions . Since NAV3 has previously been shown to be mutated in melanoma samples with a nucleotide change c.598C > T (4), the current study analysed the corresponding NAV3 mutations in the melanoma patient samples (6 patients), but could not detect this transition in any of the samples (data not shown).
To understand the role of NAV3 in melanoma cell migration, a 24-h 2-dimensional (2D) scratch wound assay was generated using the WM-239 melanoma cell line. The assay revealed different intracellular localization and patterns of NAV3 staining in the melanoma cells according to the location of the cells, either on the border of the wound or distant from the wound . A striking unilateral polarization of NAV3 was seen at the leading edge of migrating cells , while no polarization of NAV3 staining was seen in the non-scratched area . Migrating melanoma cells disclosed the morphology of migrating malignant cells, such as bi- or tri-polar dendritic melanocytes .
To assess the role of NAV3 in melanoma cell motility, the current study silenced NAV3 from WM239 melanoma cells using specific siRNAs . In a 9-h 2D wound scratch migration assay, NAV3 silencing with the most efficient siRNA, siNAV3-2, reduced the distance migrated by cells by over 50% . While control cells exhibited elongated morphology at the migration front, cells silenced for NAV3 remained rounded , suggesting that NAV3 is also necessary for microtubule stabilization in melanoma cells. When the cells were embedded into dense 3D type I collagen, 50% of WM239 cells grown in 3D collagen showed elongated cell morphology and sprouting, reflective of invasive activity . Notably, silencing of NAV3 markedly reduced the cell elongation and invasive phenotype , suggesting that NAV3 is required for invasive melanoma cell invasive capability inside dense collagen type I.
To further investigate NAV3 functions in melanoma, the current study assessed NAV3 protein expression in human melanoma tumours using immunohistochemistry in 39 primary melanomas. NAV3 expression was observed in 27/39 of the melanoma tumours . Notably, NAV3 was strongly and homogenously expressed in all cells of every sample of thin tumours (8/8, Breslow thickness < 1 mm, ). In thicker tumours (1–7 mm), in turn, NAV3 was expressed in 19/31 (61%) tumours. In the mid-thickness samples (1–5 mm), NAV3 was expressed in 16/24 (67%) tumour samples. The expression pattern in mid-thickness tumours was heterogeneous, with only a proportion of the tumour cells expressing NAV3, either at the epidermal edge or at the invasive edge of the tumours . In 7/7 of the thickest tumours (> 5 mm) NAV3 was not expressed (4/7) or was expressed only weakly in the upper dermis, but lost from the lower parts of the dermis (3/7, ).
Since NAV3 was polarized to the leading edge of migrating melanoma cells and its silencing reduced invasive growth inside 3D collagen, the current study explored the possible co-expression of NAV3 with the membrane-tethered proteinase MMP14, a major type I collagenase that is required for invasive growth in 3D collagen and is also expressed at the leading edge of invading melanoma cells. In addition, this study assessed the expression of the close homologue of MMP14, MMP16, which modulates MMP14 function and is associated with poor prognosis in melanoma. mRNA expression of these proteins was analysed in 13 primary tumours, and expression of NAV3 correlated positively with MMP14 (r = 0.85, p = 0.0022), but not with MMP16 (r = 0.35, p = 0.041). To further elucidate the association of the NAV3 and MMP14 proteins, the current study tested for NAV3 and MMP14 co-expression using immunohistochemistry and immunofluorescence staining of primary melanoma tumours. NAV3 and MMP14 were similarly expressed (i.e., either both proteins were expressed or neither was expressed in the same sample) in 26/37 (70%) of all samples. Both proteins were expressed in all thin melanoma tumours with Breslow thickness < 1 mm (8/8, 100%), while 11/23 (48%) of the mid-thickness tumours (1–5 mm) and only 1/6 of the thickest tumours (5–7 mm) were positive for both NAV3 and MMP14. The 12 tumours that were negative for either NAV3 alone (n = 6) or for both NAV3 and MMP14 (n = 6) were all > 1 mm. Notably, 4/6 of the double-negative tumours were ≥ 6 mm thick. NAV3 and MMP14 co-localized to the same cells in all double-positive samples. Consistently, NAV3 deletion or downregulation, as well as MMP14 downregulation, has been reported in metastatic tumours of many cancer types. Collectively, these results show that NAV3 is often downregulated together with MMP14 in thick melanoma tumours, suggesting that NAV3 and MMP14 downregulation may favour melanoma tumour growth and/or dissemination.
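The mRNA association above is a standard Pearson correlation over 13 paired expression values. A minimal sketch follows; the expression levels are simulated placeholders, not the study's data, and serve only to show the computation behind an (r, p) pair such as r = 0.85, p = 0.0022.

```python
# Sketch: Pearson correlation of NAV3 vs MMP14 mRNA levels across tumours.
# The values are simulated for illustration; only the method mirrors the text.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
nav3 = rng.normal(1.0, 0.4, size=13)                 # hypothetical NAV3 levels
mmp14 = 0.9 * nav3 + rng.normal(0.0, 0.15, size=13)  # correlated MMP14 levels

r, p = pearsonr(nav3, mmp14)
print(f"NAV3 vs MMP14: r = {r:.2f}, p = {p:.4g}")
```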
Malignant transformation is thought to arise from a series of chromosomal aberrations and somatic mutations, which finally lead to a cell phenotype with the capacity for uncontrolled proliferation and metastasis. In the spread of melanoma, both the motility of cells and their ability to modulate the extracellular matrix are required for cancer cell invasion into the deeper dermis and into blood and lymphatic vessels. Cancer cells use different types of invasion depending on the microenvironment: protease-independent amoeboid invasion, in which cells squeeze through pre-existing spaces between fibres; or mesenchymal invasion, in which cells have invadopodia and protrude by degrading extracellular matrix. NAV3, a microtubule plus-end tracking protein and tumour suppressor, regulates cell migration and invasion by stabilizing microtubules needed for invadopodia formation, and inhibits metastasis in breast cancer and colon cancer. Membrane-type matrix metalloproteinase MMP14 is a major collagenase that confers on cancer cells the ability to degrade extracellular matrix and invade in a mesenchymal manner. The current study shows that, in human melanoma, NAV3 promotes directional melanoma cell migration, elongated morphology, and invasive growth in 3D collagen, and is strongly expressed in all early-stage melanomas together with MMP14. However, NAV3 and MMP14 protein expression is often downregulated or lost with tumour progression from thin to mid-thick and, further, to very thick melanomas, suggesting that their loss favours tumour spreading in melanoma. NAV3 deletion is associated with poorer outcome in cancer patients, and NAV3 silencing enhances metastases in breast cancer cells, probably by reducing apoptosis and reducing directional migration of tumour cells. Notably, NAV3 was recently shown to be a downstream target of SOX9, which, in turn, has been shown to induce melanoma cell invasion and metastasis. The current study found that 59% of primary melanomas had NAV3 deletion. Moreover, NAV3 silencing in melanoma cells reduced their migration in a wound scratch assay and changed the morphology of migrating cells from elongated to rounded, as has also been seen in breast cancer cells. In addition, NAV3 was expressed at the tips of invadopodia of migrating melanoma cells, and its silencing reduced sprouting of melanoma cells in 3D collagen, suggesting that NAV3 promotes melanoma cell motility in both the 2D and the 3D environment. Since MMP14 is a major mediator of protease-driven invasion inside type I collagen and is upregulated in melanoma, its expression was explored for correlation with NAV3. Blocking the protease activity of MMP14 with a monoclonal antibody in a melanoma mouse xenograft model alleviated the tumour metastatic burden, and MMP-based interventions are under investigation for cancer treatment. However, the function of MMP14 is more complex than being merely a tumour promoter. Previously we have shown that limiting functional MMP14 protein on melanoma cell surfaces leads to a more aggressive phenotype and lymphatic invasion by switching the melanoma invasion pattern from infiltrating to expansive, whereby induced cell-cell contacts and reduced collagen degradation by tumour cells lead to expansive growth of melanoma cell nests and lymphatic invasion of the collective cell clusters. Cepeda et al.
also report that lower MMP14 expression is associated with increased migratory capacity and tumourigenicity in a breast cancer model compared with high MMP14-expressing cells. The current study found that NAV3 mRNA expression correlated with MMP14 mRNA in human melanoma samples. NAV3 and MMP14 proteins were either co-expressed or their expression was lost in 26/37 (70%) of melanoma samples, suggesting common regulation of these proteins. In addition, this study found that, at the early stage of melanoma, NAV3 and MMP14 were strongly expressed in all tumours (thin tumours with Breslow thickness under 1 mm). Of note, both NAV3 and MMP14 became weaker or completely undetectable in thicker tumours, which represent later stages of melanoma progression, suggesting that NAV3 and MMP14, although important at the beginning of tumour growth and invasion in the collagen-rich and dense upper dermis, are not necessary, or can even restrict tumour growth, after the tumour has reached a certain size or a permissive immediate microenvironment. Although the potential co-regulation mechanisms and functional collaboration of the MMP14 and NAV3 proteins remain to be explored, there are several mechanisms that may explain the similar expression pattern of these 2 proteins. Being a plus-end protein of the microtubules, NAV3 may affect MMP14 localization to the invadopodia, since microtubules are required for MMP14 trafficking to these structures. Furthermore, NAV3 regulates epidermal growth factor receptor (EGFR) endocytosis, and epidermal growth factor (EGF) is an important inducer of MMP14 expression, at least in ovarian cancer cells. MMP14, in turn, promotes the release of heparin-binding EGF-like growth factor (HB-EGF), which induces NAV3. Taken together, these results show that NAV3 deletion is a common feature in human melanoma, and that, although NAV3 is required for the invasive growth of early melanoma and is strongly expressed at the early stages of melanoma, it is lost in most melanoma tumours at later stages, suggesting that NAV3 may serve as a marker of melanoma progression.
Fetoscopic endoluminal tracheal occlusion with Smart-TO balloon: Study protocol to evaluate effectiveness and safety of non-invasive removal

Background
Congenital diaphragmatic hernia (CDH) is a birth defect characterized by failed closure of the diaphragm. This enables abdominal viscera to herniate into the thoracic cavity, leading to hypoplastic lungs and impaired lung vasculature. Fetoscopic Endoluminal Tracheal Occlusion (FETO) increases fetal lung volume and therefore can improve survival in selected fetuses with CDH. Recently, two parallel randomized controlled trials were concluded in fetuses with isolated left-sided CDH with severe and moderate pulmonary hypoplasia, respectively. In severe hypoplasia, the balloon was inserted early (27+0 to 29+6 weeks' gestation) and FETO improved survival from 15% to 40%. A comparable improvement in survival (20% to 42%) was achieved in fetuses with severe right-sided CDH. In moderate hypoplasia, the balloon was inserted later (30+0 to 31+6 weeks' gestation) in an effort to reduce the risks of very preterm birth. In that study, FETO improved survival from 50% to 63%, but the difference in survival was not statistically significant. Analysis of the pooled data from the two randomized trials demonstrated that FETO increases survival in both severe and moderate disease, but the observed lesser effect in the moderate group is most likely a mere consequence of the delayed insertion of the balloon in moderate hypoplasia. An adverse side-effect of FETO is that it increases the risk of iatrogenic preterm membrane rupture and preterm birth. In the TOTAL trials, that risk was inversely related to the gestational age at insertion of the balloon. Although the trials did not demonstrate any obvious differences between the FETO and control groups in prematurity-related complications, they were not powered to study differences in these secondary outcomes. Long-term outcomes will have to further elucidate that, but it would seem logical to expect a measurable effect of prematurity when large numbers are available. Another disadvantage of the current procedure is the need for an invasive second intervention to reverse the occlusion and re-establish airway patency. Balloon removal is scheduled electively at 34 weeks, or earlier if required. Reversal of the occlusion is preferentially performed at least 24 hours before birth, as that seems to be associated with increased survival. Reversal is at present an invasive procedure that can be performed prenatally by either ultrasound-guided puncture or fetoscopy or, less ideally, after delivery of the baby prior to cord clamping while the fetus is maintained on placental circulation, or after the cord is clamped following vaginal delivery. Airway re-establishment requires a specialist team familiar with the procedure and available 24/7. In a large series, 28% of balloon removals were performed in an emergency setting. The only neonatal deaths occurred when balloon reversal was attempted in centers that lacked experience or were unprepared. Even in experienced centers balloon removal can fail, as observed in the TOTAL trial. Also, patients may be non-compliant and move away from the fetal surgery center. The second procedure inherently adds risks for the mother and fetus. These can be directly procedure-related, but also indirect, by increasing the risk of membrane rupture later on.
In conclusion, the occlusion period is a serious burden on patients, who are requested to stay close to the FETO center until balloon removal, as well as on the fetal surgery centers because of the need for permanently available staff. All these conditions limit the acceptability of FETO as practiced today. The University of Strasbourg, France, in partnership with BS Medical Tech Industry (BS-MTI), Niederroedern, France, developed an alternative occlusion device, referred to as "Smart-TO". Compared with the currently used Goldbal2® (Balt, Montmorency, France) balloon, the Smart-TO balloon has identical dimensions in its inflated state and is made of the same material (latex). Around the balloon neck there is a metallic cylinder with a magnetic ball inside, which together act as a valve. Deflation occurs under the influence of a strong magnetic field, which is present around any clinical MRI machine. For that, it is sufficient for the pregnant woman to walk around the MRI machine. This enables non-invasive, externally controlled balloon deflation. Therefore, the Smart-TO balloon may address the issues related to the unplug procedure, i.e., neonatal deaths from failure of balloon removal, morbidity related to a second fetal surgery procedure, the need for FETO centers with an experienced team available 24/7, and the need for pregnant women to stay close to a FETO center for the whole duration of the occlusion. The Smart-TO balloon has been tested preclinically by BS-MTI (the manufacturer), the University of Strasbourg, Simian Laboratory Europe, and KU Leuven. In-vitro tests including permeability, occlusion, and deflation in a simulated environment were performed by BS-MTI (unpublished data). Deflation tests were performed using a mannequin in a simulated "in-utero" environment, with the fetus and the mother in different positions and at different heights. In that experiment, deflation was successfully achieved using a 1.5T MRI in 100% of cases with the mother standing as well as lying on a stretcher. The only case of failure occurred when the maternal position was 'sitting in a wheelchair', likely because of the distance between the MRI scanner and the patient in this scenario. In-vivo animal tests demonstrated lung growth and short-term tracheal side effects similar to those of the Goldbal2® balloon in fetal lambs. In the latter experiment, fetal lambs expelled the Smart-TO balloon following exposure to the fringe field of a 3T MRI. Finally, the feasibility of balloon insertion, persisting occlusion until reversal, and spontaneous expulsion of the Smart-TO balloon was confirmed in non-human primates. Therefore, this novel medical device should now be evaluated in a first-in-human trial. For that purpose we designed two studies, one at Antoine–Béclère Hospital, Paris–Saclay University, Clamart, France, referred to as "Smart-FETO", and one at the University Hospitals Leuven (UZ Leuven), Belgium, referred to as "Smart-Removal". Conceived in parallel, the protocols were amended by each local Ethics Committee on Clinical Studies or its equivalent, resulting in a limited number of differences.

Objectives and hypotheses
The main objective of these studies is to demonstrate the ability to consistently deflate the balloon prenatally by the magnetic fringe field generated by a clinical MRI scanner, and that the balloon will then be expelled from the airways. The secondary objective is to report on the safety of the balloon.
We hypothesize that there will not be any serious adverse effects directly related to the Smart-TO balloon itself. Other objectives include assessment of prematurity, preterm premature rupture of membranes, lung growth, neonatal survival, and the need for oxygen supplementation at discharge from the hospital.
Design plan

Study type
These clinical trials are single-arm interventional feasibility studies. Eligible consecutive consenting women will have FETO with the Smart-TO balloon.

Setting
These trials are conducted at two centers, i.e., the Antoine–Béclère Hospital, Paris–Saclay University, Clamart, France, and the University Hospitals Leuven, Belgium.

Sampling plan

Existing data
Both trials were registered prior to their inception (ClinicalTrials.gov NCT04931212 and NCT05100693). The first inclusion in France was on August 4th, 2021, and in Belgium on September 10th, 2021.

Recruitment
Recruitment of participants will take place at the latest one day before planned balloon placement. Written informed consent will be obtained from all participants.

Inclusion criteria:
- Patient aged 18 years or more and able to consent;
- Singleton pregnancy with a fetus with an isolated congenital diaphragmatic hernia (i.e., no additional major structural malformation nor genetic abnormality);
- Eligible for FETO, i.e., having severe pulmonary hypoplasia defined, in left-sided cases, as an observed-to-expected lung-to-head ratio (O/E LHR) <25% irrespective of the liver position, or moderate pulmonary hypoplasia defined as O/E LHR 25–34.9% irrespective of the liver position or O/E LHR 35–44.9% with liver herniation, and, in UZ Leuven, fetuses with right-sided CDH with severe hypoplasia (O/E LHR <50%). The O/E LHR measurement can be performed either by trace or by anteroposterior diameters of the contralateral lung;
- At a stage of pregnancy compatible with inserting the balloon, i.e., between 27 and 29+6 weeks of amenorrhea (WA) for severe hernias and between 30 and 31+6 WA for moderate hernias in France.

Exclusion criteria:
- Maternal contraindication to fetoscopy;
- Preterm premature rupture of the membranes (PPROM) or any condition strongly predisposing to PPROM or premature delivery;
- Patient does not consent to stay close to the FETO center during the occlusion period.

Sample size
Independent sample size calculations were performed in both centers. In Paris (France), we hypothesized a 100% deflation and expulsion rate. If this hypothesis is effectively verified in 20 patients, we can then say that the probability of the balloon deflating with the MRI is 100%, with a 95% confidence interval (CI) of 83–100% (calculation performed with the exact method). In Leuven (Belgium), the estimated number is 23 patients, in order to achieve a 95% CI with a lower boundary of 85%. The theoretical possibility of spontaneous balloon deflation, or the impossibility of exposing the patient to MRI at the time of balloon removal (e.g., in an emergency requiring removal on placental circulation), was considered possible (n = 2), so a total of 25 patients are to be recruited.

Variables

Measured variables
These include administrative data, data on the index pregnancy, characteristics of the fetus, data on the FETO procedure, follow-up ultrasound measurements, balloon removal, delivery, and the neonatal follow-up period until discharge from the neonatal intensive care unit (NICU).
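The exact ("Clopper-Pearson") binomial confidence bounds quoted in the sample-size section above (83–100% for 20/20 and a lower boundary of 85% for 23/23) can be reproduced with a few lines of code. A minimal sketch, assuming zero failures as hypothesized:

```python
# Sketch: Clopper-Pearson (exact) 95% CI for a proportion, checking the
# sample-size statements above (20/20 -> lower bound ~83%; 23/23 -> ~85%).
from scipy.stats import beta

def clopper_pearson(x, n, conf=0.95):
    """Exact two-sided CI for x successes in n trials."""
    alpha = 1 - conf
    lo = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lo, hi

for n in (20, 23):
    lo, hi = clopper_pearson(n, n)  # all n deflations succeed
    print(f"{n}/{n} successes: 95% CI = [{lo:.1%}, {hi:.1%}]")
```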
Primary endpoint
In France, the primary endpoint is the successful deflation of the Smart-TO balloon after exposure to the fringe field of the MRI, assessed by ultrasound immediately after MRI exposure, together with the expulsion of the Smart-TO balloon from the airways, as documented by an X-ray of the neonatal chest at birth (the valve of the balloon is radio-opaque). In Leuven, only the successful deflation of the Smart-TO balloon after exposure to the fringe field of the MRI is required for efficacy; this endpoint will then be considered the common primary endpoint. For Belgium, the expulsion of the Smart-TO balloon from the airways is considered a secondary endpoint.

Secondary endpoints
Secondary endpoints are displayed in .

Statistical analysis plan
The percentages of fetuses in whom the balloon deflated at exposure and of fetuses that expelled the balloon from the fetal airways will be calculated with their 95% confidence intervals using the binomial method. Safety will be evaluated by reporting the nature, number, and percentage of serious unexpected or adverse reactions. Other secondary endpoints will be described. Quantitative data will be expressed as median and interquartile range (IQR); qualitative data will be expressed as numbers and percentages. There will be no imputation of missing data for secondary outcomes.

Intervention
The study schedule is displayed in .

FETO
The FETO procedure will be performed as earlier described. Regarding the Smart-TO use: the catheter system is introduced into the sheath of the endoscope and back-loaded with the Smart-TO balloon. The balloon is then tested by inflation with 0.7 mL of sterile saline and deflated with its proper stylet, following which the latter is withdrawn. The balloon is positioned between the carina and the vocal cords, inflated with 0.7 mL sterile saline, and detached by the combination of gentle traction on the delivery system and counter-pressure with the endoscope.

Reestablishment of the fetal airways
The Smart-TO deflation protocol is displayed in .
1. The patient is positioned in front of the MRI, her abdomen facing the front of the tunnel of the machine.
2. The patient walks (or is strolled) around the machine while staying as close as possible to it.
3. When approaching the rear of the tunnel, the patient positions herself in the middle of it, facing the tunnel, and makes a short stop.
4. She then continues to walk (or be strolled) around the MRI while staying as close as possible to the machine. Once she has completed the turn, she can leave the MRI room.
Ultrasound is then performed independently by two experienced sonographers to assess balloon deflation. When inflated, the balloon is easily visible on ultrasound as an anechoic structure. Balloon deflation will be indicated by visualization of the balloon on ultrasound before MRI exposure and its disappearance immediately after MRI exposure. In the case of deflation failure, a second or third MRI exposure will be attempted, again followed by ultrasound confirmation of balloon deflation.

Conventional reestablishment of the fetal airways
In the case of failure to deflate, balloon removal will be done as currently done with the conventional balloon: by ultrasound-guided puncture, by fetoscopy, in an emergency during abdominal delivery while the fetus is on placental circulation, or after birth by puncture above the manubrium sterni.
Ethics statement
In France, approval was provided by the committee for the protection of persons concerned (CPP "Ile de France VIII") in January 2021 (# 21 01 01) and by the French medicines control authority (ANSM) in March (2020-A02834-35-A). In Belgium, approval was given by the Ethics Committee on Clinical Studies of the University Hospitals Leuven in July 2021 (S65423). The study was registered at the Federal Agency for Medicines and Health Products (FAGG/80M0892). Written informed consent will be obtained from all participants.
Discussion
Based on robust clinical evidence, one should consider the option of FETO in selected fetuses with CDH. One of the major concerns about FETO is the potential problems related to balloon removal. The Smart-TO balloon addresses this issue by allowing a noninvasive, easily triggered, and externally controlled reversal of the occlusion. After extensive translational research, the time has come to assess the efficacy of reversal of the occlusion and the safety of this new device in a first-in-woman study. The main objective of this study is to demonstrate the ability to successfully deflate the Smart-TO balloon by the magnetic fringe field generated by an MRI scanner.
The present trial also aims to demonstrate that the Smart-TO balloon is no longer within the airways. Non-visualization of the balloon will provide evidence of airway permeability. At the Belgian site, the Ethics Committee also required positive identification of the localization of the balloon following deflation, either within the amniotic fluid, membranes, or placenta (at delivery), and exclusion of its persistence in the uterus by postpartum ultrasound. Additional objectives of this study include the evaluation of safety, even though no serious adverse effects directly related to the Smart-TO balloon are anticipated. The dimensions and material (latex) of the Smart-TO balloon are the same as those of the Goldbal2® balloon. For this reason, it is anticipated that the Smart-TO will induce lung growth similar to that of the Goldbal2® balloon, as previously demonstrated in preclinical studies. Additional outcome measurements include the occurrence of membrane rupture and preterm delivery, which are consistently reported in all FETO series. We will also report on the consequences of the above. The main limitation of our study is that it is a non-comparative trial. However, including a second arm, in which controls would have FETO by means of the Goldbal2® balloon, appears to be unethical, since it would not provide new data; there are sufficient data on file on outcomes with the standard balloon. In conclusion, this first-in-woman study aims to demonstrate the ability of the Smart-TO balloon to be prenatally deflated by the magnetic fringe field generated by an MRI scanner and to be expelled from the airways, as well as the safety of its use.

Supplementary material: S1 Video (MP4); S1 File, SPIRIT 2013 checklist: Recommended items to address in a clinical trial protocol and related documents (DOC); S1 Text, Clinical trial protocol (DOCX).
Person-centered care approach to prevention and management of falls among adults and aged in a Brazilian hospital: a best practice implementation project

Falls in the hospital environment are a cause for concern among health professionals because this adverse event may result in injuries and higher healthcare costs, compromise the quality of care, and generate ethical and legal implications for health institutions. The worldwide incidence of falls in hospital is around 0.2–1.7% patients/day. In Brazil there are no official statistics on the national incidence of falls in hospitals; nonetheless, studies have shown an incidence of around 1.4–1.7% patients/day. Injury is the main problem related to falls, occurring in 30–50% of cases, and it may be mild, moderate, or severe and possibly lead to death. Globally, over 80% of fatal falls occur in low- and middle-income countries. Consequently, injuries result in higher healthcare costs because of longer hospitalization periods and the need for further assistance, diagnostic tests, and drug or surgical treatments, in addition to the psychological and social impacts. Several intrinsic and extrinsic factors contribute to the rise in the number of falls among hospitalized patients. Related causes include balance and gait disorders, hypotension, anemia, paresis, osteoarthritis, neurological disorders, amputations, cachexia or severe obesity, sensory impairment, fasting, intense pain, dressings that may impair the patient's mobility, and the use of walking assistance devices. In addition to preexisting diseases, the use of medications that alter mobility and balance and polypharmacy are factors that also increase the risk of falling. Extrinsic factors that increase the risk of falling include slippery floors, the absence of bed rails, inappropriate furniture and lighting, and an unfamiliar environment. There are many recommendations for preventing falls, such as training for healthcare professionals and health education for patients and caregivers. The literature reports that this type of education has potential benefits in reducing falls, and many technological strategies have been employed, including the use of digital media resources, equipment, and applications developed for these purposes. Through systematic reviews and quality improvement projects, the best evidence found in the literature recommends the following strategies for preventing falls:
(1) A comprehensive assessment is required to identify individual risk factors based on patients' needs, values, and preferences and to provide targeted strategies to mitigate risks.
(2) Involving patients in fall prevention strategies can be an effective approach in a model based on structure, process, and results. This model allows patients to be involved and to express their preferences regarding the care plan, thus sharing how their treatment will be conducted in the decision-making process. At the same time, the healthcare team moves from the role of expert to that of facilitator in motivating and supporting patients to maintain their health and greater independence.
(3) A fall risk assessment is recommended that contemplates the patients' participation as part of the fall-prevention strategy by analyzing the patients' intention and ability to adopt a well-tolerated behavior. Therefore, patients and caregivers become more aware of their risks and can participate in individualized fall prevention activities.
To implement best practices for patient-centered fall prevention and management, an evidence implementation project was developed in the oncology and medical–surgical inpatient units of the hospital, located in the Municipality of São Paulo, Brazil, an international reference in healthcare that provides services for patients in various specialties. We selected the units presenting the highest fall rates in 2019 and 2020, corresponding to 22 (11.2%) and 8 (4.6%) falls in the medical–surgical unit and 33 (16.8%) and 21 (12.1%) falls in the oncology unit, respectively. The JBI Practical Application of Clinical Evidence System (JBI-PACES) audit and feedback tool was adopted, as well as audit criteria following the recommendations of the JBI Evidence Summary 'Person-centered fall prevention and management in hospital settings': patients actively participating in the fall risk assessment, decision-making, and treatment planning process and receiving information about fall prevention and management. Furthermore, fall prevention and management must be individualized; they must consider the patient's condition, individual risks, and intention to engage in behaviors that reduce the risk of falling. This study set out to implement individualized fall prevention strategies based on the patient's needs and preferences and on the risk assessment made by the nurse, which relied on the participation of the patient and/or their caregivers. In current practice, the patient receives preventive recommendations without participating in care planning.
The aim of this study was to assess compliance with evidence-based criteria regarding a person-centered care approach to the prevention and management of falls among adults and the elderly in oncology and medical–surgical wards. The specific objectives were: (1) To determine current compliance with evidence-based criteria regarding a person-centered care approach to preventing falls among adults and the elderly in oncology and medical–surgical wards. (2) To identify barriers and facilitators to achieving compliance with evidence-based criteria regarding a person-centered care approach to preventing falls among adults and the elderly in oncology and medical–surgical wards.
This evidence implementation project used the JBI evidence implementation framework. The JBI implementation approach is grounded in the audit and feedback process, along with a structured approach to the identification and management of barriers to compliance with recommended clinical practices. It consists of seven stages: identification of practice areas for change, engaging change agents, assessment of context and readiness to change, review of practice against evidence-based audit criteria, implementation of changes to practice, reassessment of practice using a follow-up audit, and consideration of the sustainability of practice changes. The study activities involved three distinct but interrelated phases, described below:
(1) Establishing a team and conducting a baseline audit based on criteria informed by evidence;
(2) Reflecting on the results of the baseline audit and designing and implementing strategies to address the noncompliance found in the baseline audit, informed by Getting Research into Practice (GRiP);
(3) Conducting a follow-up audit to assess the outcomes of the interventions implemented to improve practice and to identify future practice issues to be addressed in subsequent audits.
The project was developed in the oncology and medical–surgical units. The oncology unit has 31 beds and provides care for patients of both sexes over 16 years old who are either diagnosed with cancer or under diagnostic investigation, in the postoperative phase, or in the final stages of their lives. The medical–surgical unit has 19 beds and provides care for adult patients of both sexes receiving clinical and surgical treatment for diseases of various specialties. The oncology team is composed of 28 nurses, a care leader, and a nursing coordinator. The medical–surgical team consists of 12 nurses and a care leader. The weekly workload in these units is 36 h, distributed in 6-h day shifts and 12-h night shifts. This study was conducted from 6 August 2021 to 15 October 2021 at the hospital. The project used the JBI-PACES and GRiP audit and feedback tools, which promote evidence-based healthcare through an audit/re-audit cycle with evidence-based criteria, a team-based analysis of organizational barriers, and the identification of strategies to overcome these barriers (Appendix).
Ethical approval was obtained from Plataforma Brasil (no. 47668821.2.0000.5461) and the Ethics Committee of the hospital (ID number: 2124).

Phase 1: Stakeholder engagement (team establishment) and baseline audit
The project team consisted of the oncology unit coordinator, who is a member of the Fall Prevention Committee (FPC), the oncology unit care leader, and the medical–surgical unit nurse leader. They were responsible for supporting, supervising, and challenging the team during the project's implementation and assumed an essential role in its construction and execution. The audit and training team for the educational program was composed of the nurses who authored this project. To identify audit criteria, we used the JBI-PACES criteria for person-centered approaches to prevent falls to assess compliance with evidence-based fall prevention (Table ).

Sample
The sample size and the method used to measure compliance with the best practice criteria are described in Table . The inclusion criteria considered nurses working in the oncology and medical–surgical units and patients/caregivers hospitalized in the same units where the implementation project was carried out. Exclusion criteria included nurses who were off work during the baseline and follow-up audits and patients/caregivers who refused to participate in the project.

Baseline audit
The baseline audit occurred in both units from 6 August 2021 to 6 September 2021. When inpatients or their caregivers were unable to respond, nurses were interviewed. There were 31 patients and 24 nurses participating in the oncology unit and 18 patients and 11 nurses participating in the medical–surgical unit. All participants agreed to collaborate in the study and signed the Informed Consent Form. Nine evidence-based criteria for fall prevention were audited in these units. Criteria 1, 2, 4, and 8 were audited and analyzed based on data extracted from the patients' records. Criteria 3, 6, 7, and 9 were audited based on interviews with nurses, whereas criterion 5 was audited based on interviews with the patients/caregivers.

Phase 2: Design and implementation of strategies to improve practice (Getting Research into Practice)
We presented the baseline audit to the project team using the JBI GRiP tool and discussed potential barriers and opportunities for improving compliance with the audited criteria. We carried out phase 2 from 10 September 2021 to 2 October 2021. We report the JBI GRiP framework in Table in the Results section, in which we inform key stakeholders, gather opinions, and allocate the available resources to promote implementation changes. On the basis of a person-centered care approach to the prevention and management of falls, the team leaders guided the team members in identifying relevant issues concerning participation by patients/caregivers in the fall risk assessment and the engagement of patients/caregivers in goal setting and treatment planning. The team leaders and the stakeholders formulated strategies to overcome the main barriers and supported effective decision-making.

Phase 3: Follow-up audit postimplementation of the change strategy
Following the same criteria as the baseline audit, a follow-up audit was conducted to assess whether there was more patient/caregiver participation in the fall risk assessment processes and better results in adherence to best practices for fall prevention. The implementation team collected data in the oncology and medical–surgical units between 4 October 2021 and 15 October 2021.
Patients/caregivers and nurses were interviewed. The data collection period had to be adjusted because the number of COVID-19 cases increased considerably at the time. To avoid loss of sample size, the strategy adopted was to increase the number of hours per day dedicated to data collection. Even with the shorter collection period, the samples were similar between the baseline and follow-up audits.

Analysis
Data on changes in compliance were measured using descriptive statistics embedded in JBI-PACES, in the form of percentage changes from baseline.
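In practice, this amounts to computing per-criterion compliance percentages at each audit and their difference in percentage points. A minimal sketch with illustrative placeholder counts (not the project's raw data):

```python
# Sketch: per-criterion compliance and percentage-point change from baseline,
# as reported descriptively by JBI-PACES. Counts are illustrative only.
audits = {
    # criterion: (compliant_baseline, n_baseline, compliant_followup, n_followup)
    "Criterion 1 (audited from patient records)": (31, 31, 30, 30),
    "Criterion 3 (audited via nurse interviews)": (5, 24, 20, 22),
    "Criterion 5 (audited via patient/caregiver interviews)": (23, 31, 28, 30),
}

for criterion, (cb, nb, cf, nf) in audits.items():
    baseline = 100 * cb / nb
    followup = 100 * cf / nf
    print(f"{criterion}: baseline {baseline:.0f}%, follow-up {followup:.0f}%, "
          f"change {followup - baseline:+.0f} pp")
```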
Phase 1: Baseline audit

We performed the baseline audit on the medical–surgical and oncology wards, and Fig. shows the results. Criteria 1, 2, and 4 showed 100% compliance in the oncology unit, whereas the medical–surgical unit showed greater than 60% compliance with the same criteria. The actions carried out by the nursing team to meet these criteria were part of the fall prevention protocol already established in the hospital. Criterion 5, whose compliance was greater than 70% in both units, was also part of the preventive actions for falls in the same hospital protocol. Criteria 3, 6, 7, and 8, whose compliance ranged from 0 to 23% in both units, referred to the person-centered care approach to preventing falls, the intervention this project was intended to implement. No preventive actions focused on person-centered care were available in the institutional protocol (Fig. ). Criterion 9, which refers to the professionals' knowledge about the person-centered care approach to fall prevention, showed compliance of 36% in the medical–surgical unit and 58% in the oncology unit, demonstrating that the new approach to be implemented was already known by some nurses.

Phase 2: Strategies for Getting Research into Practice

After reviewing the baseline audit results, the project team listed the barriers and strategies for the implementation project and built an action plan, which we documented with the GRiP tool (Table ). The main barrier identified was the absence of strategies using person-centered care to prevent and manage falls in the hospital. We developed strategies related to the education of professionals and patients/caregivers to improve their participation in the assessment and planning of goals to prevent falls. We also developed materials in the electronic medical record system to register the preventive actions used by the professionals involved in the process of assessing and preventing falls. We presented the results of the baseline audit to the nurses and coordinators of the medical–surgical and oncology wards. We carried out online training via the Zoom platform, reinforcing the concepts related to assessing the risk of falls and preventive measures, and included patients/caregivers in this training. Furthermore, the following actions were developed for implementation: participation from patients/caregivers in the fall risk assessment, using a printed Johns Hopkins Scale filled out at the bedside with subsequent transfer of the data to the electronic medical record system; engagement of patients/caregivers in goal setting and treatment planning, respecting their opinions and preferences within the premises of patient safety; inclusion of fall prevention strategies within the care plan in the electronic medical record to support the care planning process and the preventive measures to be personally executed by the nurse; review of the Bundle of Falls Prevention in the Institutional Protocol, defining the responsibilities of each professional category and designing flows to prevent falls with personalized actions; and preparation of educational material to be delivered to the patient at hospital discharge, offering written information and contributing to the health education of patients and their caregivers.
Thus, the flow begins when the fall risk assessment is performed by the nurse with the patient engaged in this assessment; based on the results, care planning is established with the patient/caregiver, considering their preferences; the responsibilities of each member of the multidisciplinary team are defined in the care planning; and reassessment is done every 24 h, when the patient's clinical condition changes, or when the patient falls. The flow finishes when the patient is discharged from hospital and educational material about fall prevention at home is given to the patient/caregiver by nurses. The fall prevention strategies were developed from the 'Bundle of Falls Prevention in the Institutional Protocol' already used in the hospital. This bundle was revised to include evidence-based person-centered care and multifactorial interventions such as exercises, medication adjustments, environmental control, and patient and health professional education about fall prevention. The lack of technological resources (a laptop or tablet for bedside use), associated with the increase in time spent carrying out the fall risk assessment on paper and transcribing it into the electronic medical record, was identified as a barrier to the implementation project; however, no financial resources were available to overcome it.

Phase 3: Follow-up audit(s)

The follow-up audit showed that both the oncology and medical–surgical wards presented increased compliance with criteria 3, 4, 6, 7, and 9 (minimum 94% and maximum 100%) compared with the baseline audit. The medical–surgical ward showed 100% compliance with criterion 1, but compliance worsened for criteria 2 (baseline audit 88%; follow-up audit 53%) and 5 (baseline audit 83%). The oncology ward showed increased compliance with criterion 8 (baseline audit 0%; follow-up audit 25%) and worse compliance with criteria 1 (baseline audit 100%; follow-up audit 75%) and 2 (baseline audit 100%; follow-up audit 73%). The postimplementation audit results and the sample size are shown in Fig. . When both units were evaluated together, criteria 1 and 2 showed worse compliance in the follow-up than in the baseline audit. For criterion 1, compliance was 77% in the follow-up audit and 94% in the baseline audit. This result reflected the high demand for care required from the healthcare team during the COVID-19 pandemic, whose members spent more time on direct care activities. It is essential to review the process of performing the risk assessment for the patient within 2 h of admission (a period established prior to the COVID-19 pandemic that proved inadequate during the pandemic). For criterion 2, compliance was 63% in the follow-up audit and 93% in the baseline audit. The teams had trouble providing fall prevention orientation within 24 h of the patient's admission, which affected the results (a period established prior to the COVID-19 pandemic that proved inadequate during the pandemic); the medical–surgical unit contributed the most to this result. When both units were evaluated together, criteria 4 and 5 showed only a slight variation in compliance between the baseline audit (97 and 78%, respectively) and follow-up audit (98 and 77%, respectively). Criterion 4 indicates that fall risk reassessment is already a well-established practice in the institution, especially in the oncology unit. Criterion 5 demonstrates weaknesses in the process of verbal and written orientation on fall prevention.
Therefore, it is necessary to reflect on this criterion and develop action plans to improve performance, especially in the medical–surgical unit. Criteria 3, 6, 7, 8, and 9 showed improvement between the baseline and follow-up audits when both units were evaluated together. Criterion 3 compliance in the follow-up audit (100%) demonstrated the effectiveness of the actions promoting patient participation in the fall risk assessment process. Compliance with criteria 6 and 7 improved significantly between the baseline audit (23 and 3%, respectively) and follow-up audit (97% for both criteria). This is evidence that the implemented actions enabled the participation of the patient/caregiver in establishing fall prevention strategies and care planning in both units. For criterion 6, compliance in the medical–surgical unit increased from 0 to 100%. Criterion 8 compliance improved between the baseline audit (0%) and follow-up audit (18%), with no improvement in the medical–surgical unit (0% compliance in both audits). Although the aggregate result showed low compliance (18%), there are opportunities to improve the discharge orientation process by investing in raising patients' awareness of fall prevention for home care. Criterion 9 compliance improved significantly between the baseline audit (51%) and follow-up audit (100%), demonstrating positive performance following the training prepared and delivered in both units.
This study aimed to assess compliance with evidence-based criteria regarding a person-centered care approach to the prevention and management of falls among adults and the elderly in oncology and medical–surgical wards. The project used the JBI audit and feedback method to implement evidence into practice, using the JBI PACES and GRiP audit tools to promote changes in the two wards. Compliance increased in the follow-up audit for criteria 3, 4, 6, 7, and 9. These changes were accomplished because of the planning in phase 2 of the project, where facilitators were identified and intervention plans were established based on the strategies described in the GRiP. The most noteworthy of these strategies are the following: the fall risk assessment tool printed for bedside completion with the patient's participation; the training of 100% of the oncology and medical–surgical nursing teams; the flowchart for postfall care and the clinical notes in the electronic medical record, allowing information to be recorded and progression data to be securely documented; the review of the Fall Prevention Protocol in the computer system; and the customization of the care planning process for a personalized approach compatible with each patient's particularities, with a focus on patient-centered care. However, we need to improve criteria 1, 2, 5, and 8, whose compliance either decreased or remained low, and to develop action plans to achieve better results soon. The change in the time markers for the patient's fall risk assessment upon admission and transfer (criteria 1 and 2) proved inadequate for the work processes of the teams in both units during the COVID-19 pandemic. Despite the interventions, the slight variation in compliance with criterion 5 demonstrated that the process of verbal and written orientation on fall prevention did not change. Moreover, there was low compliance with criterion 8, providing discharge instructions in an electronic document that can be printed and delivered to patients/caregivers. We identified some challenges during the implementation of this project, such as the lack of technological resources and electronic devices to perform fall risk assessments and bedside care planning. Despite these barriers, some actions initiated in this project are being expanded to other hospital areas because the results demonstrated possibilities for greater engagement between professionals and patients and good prospects for improving fall prevention processes. Including patients/caregivers in the fall risk assessment process and engaging them in goal setting and care planning proved beneficial. Furthermore, we based fall prevention and management strategies on individual risk factors, including multidisciplinary interventions within the care context. The nurses consistently adhered to and participated in the educational programs made available. In addition, they helped provide patient-centered care in fall prevention and management to implement the evidence in practice. The literature demonstrates that person-centered interventions and personalized patient education may have the potential to be effective in reducing falls in hospitals, but the evidence is still limited. Patient and staff education can reduce the rate and risk of hospital falls, and multifactorial interventions tend to produce a positive impact. The results of our study corroborate these findings from the literature.
However, our study advances the field by enabling the assessment of patient participation in processes that involve their safety in the hospital environment, especially in the area of risks, and the establishment of care strategies with preventive measures, reflected in lower fall rates because of patients' engagement throughout the process. Nonetheless, follow-up audits are required to ensure the sustained success of this implementation project.
These findings support that baseline and follow-up audits, allied to a fall training program and changes in the electronic medical records, increase compliance rates related to evidence-based practice regarding a person-centered care approach to the prevention and management of falls. The implications for practice and knowledge sustainability in this implementation project include the following: patients were involved in the fall risk assessment process and encouraged to engage in goal setting and care planning; fall prevention and management strategies were targeted according to individual risk factors, including relevant multidisciplinary interventions; and nurses participated in educational programs and contributed to the implementation of patient-centered care in the practice of preventing and managing falls. In the future, we will implement new strategies to achieve overall success and change according to best practices.
We thank the Brazilian Centre for Evidence-Based Healthcare: a JBI Centre of Excellence (JBI Brazil), the Sírio-Libanês Hospital and the New Knowledge Center team, the nursing coordinator of the Oncology Unit, the nurses and nurse leaders of the Oncology and Medical–Surgical Clinic units, the Fall Prevention Committee, and the Nursing Management. Funding: the authors A.C.d.S.A. and R.P.F. received funding from the HSL to carry out the JBI Evidence Implementation Training Program conducted by JBI Brazil. Conflicts of interest: there are no conflicts of interest.
Network expansion of genetic associations defines a pleiotropy map of human cell biology | 59f793e9-1b61-4141-9382-c04377783076 | 10011132 | Anatomy[mh] | Proteins that interact tend to take part in the same cellular functions and be important for the same organismal traits , . Through a principle of guilt-by-association, it has been shown that molecular networks can be used to predict the function or disease relevance of human genes – . On the basis of this, protein interaction networks can augment genome-wide association studies (GWAS) by using GWAS-linked genes as seeds in a network to identify additional trait-associated genes – . It is well known that GWAS loci are enriched in genes encoding for approved drug targets , and genes linked to a trait by network expansion are similarly enriched, even when excluding genes with direct genetic support . This is an opportune time to revisit the application of network approaches to GWAS interpretation on the basis of recent large improvements in the human molecular networks available, single-nucleotide polymorphism (SNP) approaches to gene mapping and the extent of human traits/diseases mapped by GWAS. In particular, there have been substantial improvements in the identification of likely causal genes within GWAS loci using expression and protein quantitative trait loci analysis , , as well as integrative approaches based on machine learning . The genetic study of large numbers of diverse human traits also opens the door to the study of pleiotropy, which occurs when a single genetic change affects multiple traits. Studying pleiotropy can help in the drug discovery process by either increasing the number of potential indications for a drug or avoiding unwanted side effects. Large-scale investigations of the most pleiotropic cellular processes have relied primarily on gene deletion studies. For example, yeast gene deletion studies have revealed pleiotropic cellular processes that include endocytosis, stress response and protein folding, amino acid biosynthesis and global transcriptional regulation . Identification of these highly pleiotropic cellular systems highlights core conserved processes and the complex interconnections within cell biology. Human GWAS data have been extensively used to quantify pleiotropy at the SNP level – and although this has shed light on the degree of pleiotropy and the relationship between traits, it has not often led to identification of the molecular mechanisms that underlie their common genetic basis. Here, we augmented GWAS data for 1,002 traits by network expansion with the purpose of studying pleiotropic cellular processes at the level of the human organism. This network expansion recovers known disease genes not associated by GWAS, identifies groups of traits under the influence of the same cellular processes and defines a pleiotropy map of human cell biology. Finally, we illustrate the use of network expansion scores to characterize inflammatory bowel disease (IBD) genes at GWAS loci, and implicate IBD-relevant genes with strong functional and genetic support.
Systematic augmentation of GWAS with network propagation

Recent studies have shown that a comprehensive protein interaction network is critical for network propagation efforts. Here, we combined the International Molecular Exchange physical protein interaction dataset from IntAct (protein–protein interactions), Reactome (pathways) and SIGNOR (directed signaling pathways). To facilitate re-use of these data (referred to as the 'OTAR interactome') we have made them available via a Neo4j graph database ( ftp://ftp.ebi.ac.uk/pub/databases/intact/various/ot_graphdb/current ). The physical interactions were combined with functional associations from the STRING database (v.11) to give a final network containing 571,917 edges connecting 18,410 proteins (nodes) (Fig. ). GWAS trait associations were mapped to genes using the locus-to-gene (L2G) score from Open Targets Genetics, a machine learning approach that integrates features such as SNP fine-mapping, gene distance and molecular quantitative trait locus (QTL) information to identify causal genes (Fig. ). Genes with L2G scores higher than 0.5 are expected to be causal for the respective trait association in 50% of cases. For each GWAS, associated genes were used as seeds in the interaction network. Of 7,660 GWAS genes linked to at least one trait, 7,248 correspond to proteins present in the interaction network. We then used the Personalized PageRank (PPR) algorithm to score all other protein-coding genes in the network, where genes connected via short paths to GWAS genes receive higher scores (Fig. ). Genes in the top 25% of network propagation scores were used to identify gene modules, from which we selected those significantly enriched for high network propagation scores (Benjamini–Hochberg (BH)-adjusted P < 0.05, Kolmogorov–Smirnov test) and with at least two GWAS-linked genes.

We applied this approach to 1,002 traits (Supplementary Table ) with GWAS in the Open Targets Genetics portal that had at least two genes mapped to the interactome. These GWAS were spread across 21 therapeutic areas and differed in the number of GWAS-linked genes (median 6, range 2–763) (Fig. ). To measure the capacity of the network expansion to recover trait-associated genes, we defined a 'gold standard' set of disease-associated genes (from https://diseases.jensenlab.org ) that are known drug targets for specific human diseases (from the ChEMBL database). To avoid circularity in benchmarking, we excluded gold standard genes that overlapped with GWAS-linked genes for the respective diseases. The network propagation score predicted disease-associated genes with an average area under the receiver operating characteristic (ROC) curve (AUC) >0.7 for the most stringent definition of disease-associated genes as well as known drug targets (Fig. and example ROC curves in Supplementary Fig. ). The performance was higher than that observed with random permutation of the gold standard gene sets (Fig. and Supplementary Fig. ; true positive permutations), suggesting that it is not strongly biased by the placement of the gold standard genes within the network. We also tested the impact of changing the interaction network, either by using subsets of the network defined here or by using the previously defined composite PCNet network (Supplementary Fig. ). Overall, the combined network performed best, with an accuracy similar to that of the larger PCNet (Supplementary Fig. ). In total, we obtained network propagation scores for 1,002 traits and gene modules for 906 traits (Supplementary Table ).
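To make the propagation step described above concrete, the following is a minimal sketch of how such a Personalized PageRank expansion could be run with networkx. The file name, seed genes and damping factor are illustrative assumptions rather than the exact parameters used in the study.

```python
# Sketch of the network-expansion step, assuming an undirected interactome in
# `edges.tsv` (two gene-symbol columns) and GWAS seed genes with L2G > 0.5.
import networkx as nx

G = nx.read_edgelist("edges.tsv", delimiter="\t")  # e.g., 18,410 nodes

def network_expansion(graph, seed_genes, damping=0.85, top_fraction=0.25):
    """Personalized PageRank from GWAS seed genes; returns per-gene scores
    and the top-scoring genes used downstream for module detection."""
    seeds = [g for g in seed_genes if g in graph]           # keep mappable seeds
    personalization = {g: 1.0 / len(seeds) for g in seeds}  # restart on seeds
    scores = nx.pagerank(graph, alpha=damping, personalization=personalization)
    ranked = sorted(scores, key=scores.get, reverse=True)
    top = set(ranked[: int(top_fraction * len(ranked))])    # top 25% of scores
    return scores, top

scores, top_genes = network_expansion(G, ["NOD2", "IL23R", "TYK2"])
```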
Network propagation identifies related human traits

Identifying groups of traits likely to have a common genetic basis is of value because drugs used to treat one disease may also have effects in related diseases. Genetic sharing between human traits is often determined by correlation of SNP-level statistics from GWAS; however, this approach does not identify how the shared genetics corresponds to shared biological processes. In addition, many GWAS do not report the full summary statistics needed for such comparisons. By contrast, network propagation scores can be calculated from the set of candidate genes available for any GWAS. To benchmark trait–trait associations derived from network propagation, we used the similarity of annotations from the Experimental Factor Ontology (EFO), which include aspects of disease type, anatomy and cell type, among others. For example, pairs of related neurological traits tend to share many annotation terms in the EFO. Using these annotations, we defined 796 pairs of traits that are functionally related and therefore likely to have a common genetic basis. An additional benchmark was obtained from trait-to-trait genetic correlations calculated from SNP-based analyses. Using these benchmarks, we show that similarity in the network propagation scores can identify functionally and genetically related pairs of traits (Supplementary Fig. ).

To explore trait–trait relationships on the basis of the similarity of their perturbed biological processes, we used the pairwise distance of network propagation scores to build a tree by hierarchical clustering (Fig. ) and defined 54 subgroups of traits. The traits tend to group according to functional similarity, with 34 of 54 groups having an EFO term annotated to more than 50% of their traits (Fig. ). In Fig. we show examples of traits that are grouped together according to the network propagation scores. These include known relationships between immune-associated traits such as cellulitis or psoriasis and immunoglobulin G measurements; the relationship between skin neoplasms and skin pigmentation or eye color; and the clustering of cardiovascular diseases (acute coronary syndromes) with lipoprotein measurements and cholesterol. We obtained drug indications from the ChEMBL database for the diseases in each cluster (Fig. ). This allows us to find clusters in which drugs may be considered for repurposing, as well as groups of traits in which drug development is most needed. Eighteen clusters representing 64 traits contain no associated drug and represent less well-explored areas of drug development. All trait clusters, genes and corresponding drugs are available in Supplementary Table .
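A sketch of how the trait tree could be derived from the propagation scores is shown below. The input file, the correlation distance metric and average linkage are assumptions for illustration, with the tree cut into the 54 subgroups reported above.

```python
# Minimal sketch of grouping traits by similarity of network propagation
# scores, assuming `ppr_scores.csv` is a traits x genes matrix of PPR scores.
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

ppr = pd.read_csv("ppr_scores.csv", index_col=0)     # rows: traits, cols: genes

dist = pdist(ppr.values, metric="correlation")       # pairwise trait distances
tree = linkage(dist, method="average")               # hierarchical clustering
groups = fcluster(tree, t=54, criterion="maxclust")  # cut into 54 trait groups

clusters = pd.Series(groups, index=ppr.index, name="cluster")
print(clusters.value_counts().head())                # sizes of largest groups
```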
Pleiotropy of gene modules across human traits

We can study the pleiotropy of human cell biology by identifying which gene modules tend to be associated with many human traits. This allows us to understand how perturbations in specific aspects of cell biology may have broad consequences across multiple traits. In total, we found 2,021 associations between gene modules and traits, of which 886 (43.8%) are gene modules linked to a single trait; the remaining can be collapsed to 73 gene modules linked to two or more traits (Fig. and Supplementary Table ).

The 73 modules associated with more than one trait did not have a significantly larger number of genes (P = 0.72, Kolmogorov–Smirnov test), whereas the traits linked with the 73 pleiotropic gene modules tend to have a higher number of significant initial GWAS seed genes (Supplementary Fig. ). Therefore, traits with a larger number of linked loci are more likely to be associated with pleiotropic gene modules. The six most pleiotropic gene modules were linked to between 56 and 110 traits in our study and were enriched (Gene Ontology Biological Process (GOBP) enrichment, one-sided Fisher's exact test, BH-adjusted P < 0.05) for genes involved in protein ubiquitination, extracellular matrix organization, RNA processing and G protein-coupled receptor (GPCR) signaling (Fig. ). Gene deletion studies in yeast have identified some of the same cellular processes as being highly pleiotropic. Genes within pleiotropic modules linked to ten or more traits are enriched in genes that are ubiquitously expressed (fold enrichment = 1.42, P = 1.71 × 10⁻¹⁶, one-sided Fisher's exact test), have many deletion phenotypes (fold enrichment = 1.56, P = 1.71 × 10⁻³⁰, one-sided Fisher's exact test) and have higher numbers of genetic interactions (one-sided Fisher's exact test, P = 4.155 × 10⁻¹⁰). Targeting pleiotropic processes with drugs could, therefore, have broad application, but may also raise safety concerns. However, despite these enrichments, there is no simple correlation between the number of traits linked to a gene module and the enrichment of ubiquitously expressed genes (Pearson's r = 0.0793) or genes with many deletion phenotypes (Pearson's r = −0.0345).

This analysis allows us to connect gene deletion phenotypes with human traits (Supplementary Fig. ). For example, a pleiotropic module linked to traits such as 'autism spectrum disorder' and 'osteoarthritis' has a high fraction of gene deletion phenotypes impacting protein transport, and a module linked with Alzheimer's disease, balding measurement and bone density has genes with a high fraction of gene deletion phenotypes associated with cellular senescence (Supplementary Fig. ). We then related pleiotropy as defined by the module–trait associations derived here with pleiotropy defined by CRISPR gene deletion studies. For each Gene Ontology (GO) term, we compared the enrichment in genes linked with many traits in our analysis with the enrichment in genes having many gene deletion phenotypes. GO terms specifically enriched in pleiotropic genes under our definition are dominated by terms that relate to multicellularity, such as membrane signaling, cell-to-cell communication and cell migration (Supplementary Fig. ). For pleiotropy that is specifically found with CRISPR screens, we find terms related to essential processes such as cell cycle, ribosome biogenesis and RNA metabolism (Supplementary Fig. ). For each of the 73 pleiotropic gene modules, we highlighted those that are overrepresented in each group of related traits (Fig. ; one-sided Fisher's exact test, BH-adjusted P < 0.05).
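The module–trait enrichment test used above can be illustrated as follows. The 2×2 tables are invented counts, and the specific contrast (traits in a group linked to a module versus all other traits) is our reading of the procedure rather than the authors' exact implementation.

```python
# Sketch of testing whether a gene module is overrepresented in a trait group
# using a one-sided Fisher's exact test with Benjamini-Hochberg correction.
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

def module_enrichment(tables):
    """tables: list of 2x2 contingency tables of the form
    [[traits in group linked to module, traits in group not linked],
     [other traits linked to module,    other traits not linked]]"""
    pvals = [fisher_exact(t, alternative="greater")[1] for t in tables]
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    return list(zip(p_adj, reject))

# Toy example: two modules tested against one trait group.
print(module_enrichment([[[8, 2], [30, 400]], [[1, 9], [60, 370]]]))
```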
We explore a few examples of these modules in the following sections. Shared mechanisms and drug-repurposing opportunities We identified two groups of traits (bone and fasciitis related) that are predicted to have a common determining gene module (Fig. and Supplementary Table ). This module is enriched in Wnt signaling genes, which have been previously linked to bone homeostasis and to different types of fasciitis as well as Dupuytren’s contracture . We collected genes harboring likely pathogenic variants from ClinVar , hereafter referred to as ClinVar variants. This gene module is enriched in genes harboring ClinVar variants from patients with tooth agenesis and bone-related diseases (osteoporosis and osteopenia). Several genes with ClinVar variants, such as LRP6 , SOST , WNT1 , WNT10A and WNT10B , are not linked to bone diseases via GWAS. Genetic manipulation of several genes within this module causes changes in bone density in mouse models . In addition, this module contains the target (SOST) of Romosozumab, a drug proven effective to treat osteoporosis. In a second example (Fig. and Supplementary Table ), we identified a group of ten respiratory (for example, asthma) and cutaneous (for example, eczema) immune-related diseases that share three gene modules: a highly pleiotropic module related to regulation of transcription and proteasome, and two more specific modules related to pattern recognition receptor signaling and cytokine production with Janus kinase/signal transducer and activator of transcription (JAK–STAT) involvement. These modules were significantly enriched (one-sided Fisher’s exact test, P < 0.05) in genes having likely pathogenic variants from patients with asthma. The two most specific gene modules were grouped and are shown in Fig. highlighting several genes with known pathogenic variants not associated with these diseases via GWAS (for example, IRAK3 , TNF , ALOX5 , TBX21 ). IRAK3 , encoding a protein pseudokinase, is an example of a druggable gene not identified by GWAS for asthma, but with protein missense variants linked to this disease , and mice model studies have implicated the regulation of IRAK3 in airway inflammation induced by interleukin-33 (IL-33) . Although no drug for IRAK3 is used in the clinic, this analysis suggests it may serve as a relevant drug target for asthma and other related diseases. We identified a total of 41 targets of 126 drugs targeting the genes in the module shown in Fig. . To identify drugs that could have repurposing potential, we excluded those already targeting therapeutic areas that include the ten diseases linked to this gene module. This resulted in 18 drugs (Supplementary Table ) targeting 5 genes including: 14 drugs targeting PTGS2 , used to treat primarily rheumatic disease and osteoarthritis; interferon alfacon1 or alfa-2B (targeting IFNAR1 and IFNAR2 ), designed to counteract viral infections; galiximab and antibody for CD80 (phase III trials for lymphoma); and the antibody RA-18C3 targeting IL1A for colorectal cancer. These drugs may be suited to repurposing for respiratory or cutaneous autoimmune-related diseases. As an example, RA-18C3 has shown benefit in a small phase II trial for hidradenitis suppurativa (acne inversa) . 
Gene module analysis of related immune-mediated diseases

Traits related to the immune system are well represented in our analysis, falling into three different groups: one cluster containing systemic and organ-specific diseases; one cluster of immune cell measurements; and a third, more heterogeneous, cluster (Fig. and Supplementary Table ). In Fig. we represent the first of these clusters, which can be further subdivided into a subgroup linking IBD, multiple sclerosis and systemic lupus erythematosus, and one linking celiac disease, vitiligo and other diseases. We found six gene modules that are specifically enriched with at least one of these two groups of traits, including gene modules related to GPCR signaling, neutrophil activation and interferon signaling. Genes present in these modules show higher relative expression (Fig. , right) in key immune tissues. The six gene modules are shown in Fig. , with a connection between them when there is a significant gene-level overlap (Fig. ). For representation (Fig. ), we selected genes from modules linked with at least three immune-mediated diseases and kept a subset of high-confidence interactions. We found multiple genes with ClinVar variants from patients with primary immune deficiencies (for example, IRF9, IRF7, STAT1, STAT2) that are not GWAS-linked genes but are in their network vicinity, providing evidence of the importance of this gene module for these diseases. To pinpoint drugs with repurposing potential, we excluded those targeting diseases in the same therapeutic areas as the immune-mediated group of diseases, identifying 49 drugs with 20 targets. These include ulimorelin, an agonist of the ghrelin hormone secretagogue receptor GHSR used to treat gastrointestinal obstruction. Ghrelin hormone signaling has been studied in the context of age-related chronic inflammation, psoriasis and IBD, indicating a potential repurposing opportunity. The 49 drugs with repurposing potential are listed in Supplementary Table with information on target genes and clinical trials.

Network-assisted candidate gene prioritization for IBD

Although the gene modules we have described can highlight biological pathways shared between genetically related traits, identifying causal genes at individual GWAS loci is important for prioritizing therapeutic targets. Existing methods such as GRAIL, DEPICT and MAGMA prioritize genes based on biological pathways but do not fully use genome-wide protein interaction networks, which can provide finer-grained information than GO terms. Here, we use network propagation to prioritize genes at IBD GWAS loci, similar to our previous work on Alzheimer's disease. We used two alternative methods of defining seed genes for the network. First, we manually curated 37 genes with high confidence of being causally related to either Crohn's disease or ulcerative colitis (Supplementary Table ), and second, we used the Open Targets L2G score to automatically select 110 genes with L2G > 0.5 at established IBD loci (Supplementary Table ). To obtain network propagation scores, we compared each gene's score with 1,000 runs using the same number of randomly selected input genes, to give the PPR percentile value. We obtained unbiased network propagation values for each seed gene by excluding them one at a time.
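The percentile normalization and leave-one-out scoring just described could be sketched as follows. The 1,000 random seed sets follow the text, while the damping factor and the exact form of the null comparison are illustrative assumptions.

```python
# Sketch of PPR percentile normalization: each gene's observed propagation
# score is compared with runs seeded by random gene sets of the same size,
# and seed genes are scored with themselves withheld (leave-one-out).
import random
import networkx as nx

def ppr(graph, seeds):
    """Personalized PageRank restarted uniformly on the seed genes."""
    seeds = [g for g in seeds if g in graph]
    return nx.pagerank(graph, alpha=0.85,
                       personalization={g: 1.0 / len(seeds) for g in seeds})

def ppr_percentiles(graph, seeds, n_perm=1000):
    """Percentile of each gene's observed score within a null distribution
    built from n_perm runs with random seed sets of the same size."""
    obs = ppr(graph, seeds)
    genes = list(graph)
    exceed = dict.fromkeys(genes, 0)
    for _ in range(n_perm):
        rand = ppr(graph, random.sample(genes, len(seeds)))
        for g in genes:
            exceed[g] += obs[g] > rand[g]
    return {g: 100.0 * exceed[g] / n_perm for g in genes}

def unbiased_seed_percentile(graph, seeds, seed_gene, n_perm=1000):
    """Leave-one-out: score a seed gene with itself withheld from the seeds."""
    held_out = [g for g in seeds if g != seed_gene]
    return ppr_percentiles(graph, held_out, n_perm)[seed_gene]
```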
The curated seed genes had far higher network scores than other genes within 200 kb (P = 7.4 × 10⁻⁶, one-tailed Wilcoxon rank sum test), indicating that most seed genes have close interactions with other seed genes (Fig. ). The same was true when considering seed genes exclusively in the L2G gene set (Fig. ; P = 3 × 10⁻¹⁰, one-tailed Wilcoxon rank sum test), indicating that many of these are also strong IBD candidate genes. Finally, we examined the enrichment of low SNP P values within 10 kb of genes having high network scores. This revealed a progressive enrichment of low P values near genes with higher network scores (Fig. ), which held for the large number of genes linked to SNPs not reaching the typical genome-wide significance threshold of 5 × 10⁻⁸ for locus discovery.

Curated genes with strong network support include the drug targets TYK2, ICAM1 and ITGA4, as well as NOD2 and IL23R, which have missense variants implicating them as modulators of IBD. A small number of curated genes had lower network support, which could be because these genes affect IBD via pathways distinct from the biological functions best covered by the curated gene set. Across IBD loci without curated genes, our network scores rank 42 candidates as being more highly functionally connected than the remaining genes at the locus (Supplementary Table ). Although many of these were already strong IBD candidate genes, some have found strong support only recently. A clear example is the RIPK2 locus. Although OSGIN2 is nearest to the IBD lead SNP rs7015630 (38 kb distal), it has no apparent functional links with IBD (network score 43%). By contrast, RIPK2 (108 kb distal, network score 99%) encodes a mediator of inflammatory signaling via interaction with the bacterial sensor NOD2. Network information can also provide a comparison point for other evidence sources. At the DLD-SLC26A3 locus, there is moderate evidence of genetic colocalization between IBD and an expression quantitative trait locus (eQTL) for DLD in various tissues (Open Targets Genetics portal). However, DLD has no clear functional links with IBD and receives a low network score (14%). By contrast, SLC26A3 is a chloride anion transporter highly expressed in the human colon, with a high network score (98.4% in the L2G seed gene network), and its expression has recently been associated with clinical outcomes in ulcerative colitis. IBD candidate genes that have high network scores but have not been well characterized in the context of IBD include PTPRC (a phosphatase required for T cell activation) and BTBD8, which is functionally connected to autophagy by the network analysis (via WIPI2 and ATG16L1).

To study the pleiotropy of the curated and candidate genes, we looked at the eight gene modules linked by our analysis to IBD (Supplementary Fig. ). Of the 37 curated and 42 candidate genes, 35 (14 curated and 21 candidate) are found within these modules. Interestingly, we found that most of these genes are in modules that are linked only to IBD; in particular, a module that is enriched for genes related to receptor signaling via the JAK–STAT pathway (Supplementary Fig. ). Conversely, the most pleiotropic modules linked to IBD contain very few IBD candidate genes. As expected, these pleiotropic modules tend to be associated with traits related to the immune system, with the exception of the most pleiotropic module, which is enriched for genes related to protein ubiquitination (Supplementary Fig. ).
This analysis suggests that the JAK–STAT-related module is likely to be the best source of novel candidate disease genes and drug targets that are more likely to be specific to IBD.
Recent studies have shown that a comprehensive protein interaction network is critical for network propagation efforts . Here, we combined the International Molecular Exchange physical protein interaction dataset from IntAct (protein–protein interactions) , Reactome (pathways) and SIGNOR (directed signaling pathways) . To facilitate re-use of these data (referred to as ‘OTAR interactome’) we have made the data available via a Neo4j Graph Database ( ftp://ftp.ebi.ac.uk/pub/databases/intact/various/ot_graphdb/current ). The physical interactions were combined with functional associations from the STRING database (v.11) to give a final network containing 571,917 edges connecting 18,410 proteins (nodes) (Fig. ). GWAS trait associations were mapped to genes using the locus-to-gene (L2G) score from Open Targets Genetics, a machine learning approach that integrates features such as SNP fine-mapping, gene distance and molecular quantitative trait locus (QTL) information to identify causal genes (Fig. ) . Genes with L2G scores higher than 0.5 are expected to be causal for the respective trait association in 50% of cases. For each GWAS, associated genes were used as seeds in the interaction network. Of 7,660 GWAS genes linked to at least one trait, 7,248 correspond to proteins present in the interaction network. We then used the Personalized PageRank (PPR) algorithm to score all other protein coding genes in the network where genes connected via short paths to GWAS genes receive higher scores (Fig. ). Genes in the top 25% of network propagation scores were used to identify gene modules, from which we selected those significantly enriched for high network propagation scores (Benjamini–Hochberg (BH)-adjusted P < 0.05 with Kolmogorov–Smirnov test) and with at least two GWAS-linked genes . We applied this approach to 1,002 traits (Supplementary Table ) with GWAS in the Open Targets Genetics portal that had at least two genes mapped to the interactome. These GWAS were spread across 21 therapeutic areas, and differed in the number of GWAS-linked genes (median 6, range 2–763) (Fig. ). To measure the capacity of the network expansion to recover trait-associated genes, we defined a ‘gold standard’ set of disease-associated genes (from https://diseases.jensenlab.org ) that are known drug targets for specific human diseases (from the ChEMBL database, ). To avoid circularity in benchmarking, we excluded gold standard genes that overlapped with GWAS-linked genes for the respective diseases. The network propagation score predicted disease-associated genes with an average area under the receiver operating characteristic (ROC) curve (AUC) >0.7 for the most stringent definition of disease-associated genes as well as known drug targets (Fig. and example ROC curves in Supplementary Fig. ). The performance was higher than that observed with random permutation of the gold standard gene sets (Fig. and Supplementary Fig. ; true positive permutations), suggesting that it is not strongly biased by the placement of the gold standard genes within the network. We also tested the impact of changing the interaction network, either by using subsets of the network defined here or by using the previously defined composite PCNet network (Supplementary Fig. ). Overall, the combined network performed best with an accuracy similar to that of the larger PCNet (Supplementary Fig. ). In total, we obtained network propagation scores for 1,002 traits and gene modules for 906 traits (Supplementary Table ).
Identifying groups of traits likely to have a common genetic basis is of value because drugs used to treat one disease may also have effects in related diseases. Genetic sharing between human traits is often determined by correlation of SNP-level statistics from GWAS; however, this approach does not identify how the shared genetics corresponds to shared biological processes. In addition, many GWAS do not report the full summary statistics needed for such comparisons. By contrast, network propagation scores can be calculated from the set of candidate genes available for any GWAS. To benchmark trait–trait associations derived from network propagation, we used the similarity of annotations from the Experimental Factor Ontology (EFO), which include aspects of disease type, anatomy and cell type among others. For example, pairs of related neurological traits tend to share many annotation terms in the EFO. Using these annotations, we defined 796 pairs of traits that are functionally related and therefore likely to have a common genetic basis . An additional benchmark was obtained from trait-to-trait genetic correlations calculated from SNP-based analyses – . Using these benchmarks, we show that similarity in the network propagation scores can identify functionally and genetically related pairs of traits (Supplementary Fig. ). To explore trait–trait relationships on the basis of the similarity of their perturbed biological processes, we used the pairwise distance of network propagation scores to build a tree by hierarchical clustering (Fig. ), and defined 54 subgroups of traits. The traits tend to group according to functional similarity with 34 of 54 having an EFO term annotated to more than 50% of the traits in the group (Fig. ). In Fig. we show examples of traits that are grouped together according to the network propagation scores. These include known relationships between immune-associated traits such as cellulitis or psoriasis and immunoglobulin G measurements; the relationship between skin neoplasms and skin pigmentation or eye color; or the clustering of cardiovascular diseases (acute coronary symptoms) with lipoprotein measurements and cholesterol. We obtained drug indications from the ChEMBL database for the diseases in each cluster (Fig. ). This allows us to find clusters in which drugs may be considered for repurposing, as well as groups of traits in which drug development is most needed. Eighteen clusters representing 64 traits contain no associated drug and represent less well-explored areas of drug development. All trait clusters, genes and corresponding drugs are available in Supplementary Table .
We can study the pleiotropy of human cell biology by identifying which gene modules tend to be associated with many human traits. This allows us to understand how perturbations in specific aspects of cell biology may have broad consequences across multiple traits. In total, we found 2,021 associations between gene modules and traits, of which 886 (43.8%) are gene modules linked to a single trait and the remaining can be collapsed to 73 gene modules linked to two or more traits (Fig. , Supplementary Table and ). The 73 modules associated with more than one trait did not have a significantly larger number of genes ( P = 0.72, Kolmogorov–Smirnov test), whereas the traits linked with the 73 pleiotropic gene modules tend to have a higher number of significant initial GWAS seed genes (Supplementary Fig. ). Therefore, traits with a larger number of linked loci are more likely to be associated with pleiotropic gene modules. The six most pleiotropic gene modules were linked to between 56 and 110 traits in our study, and were enriched (Gene Ontology Biological Process (GOBP) enrichment with one-sided Fisher’s exact test, BH-adjusted P < 0.05) for genes involved in protein ubiquitination, extracellular matrix organization, RNA processing and G protein-coupled receptor (GPCR) signaling (Fig. ). Gene deletion studies in yeast have identified some of the same cellular processes as being highly pleiotropic . Genes within pleiotropic modules linked to ten or more traits are enriched in genes that are ubiquitously expressed (fold enrichment = 1.42, P = 1.71 × 10 −16 , Fisher’s exact test, one-sided), have many deletion phenotypes (fold enrichment = 1.56, P = 1.71 × 10 −30 , Fisher’s exact test, one-sided) and higher numbers of genetic interaction (Fisher’s exact test, one-sided P = 4.155 × 10 − 10 ). Targeting pleiotropic processes with drugs could, therefore, have broad application, but may also raise safety concerns. However, despite these enrichments, there is no simple correlation between the number of traits linked to a gene module and the enrichment of ubiquitously expressed genes (Pearson’s r = 0.0793) or genes with many deletion phenotypes (Pearson’s r = −0.0345). This analysis allows us to connect gene deletion phenotypes with human traits (Supplementary Fig. ). For example, a pleiotropic module linked to traits such as ‘autism spectrum disorder’ and ‘osteoarthritis’ has a high fraction of gene deletion phenotypes impacting on protein transport, and a module linked with Alzheimer’s disease, balding measurement and bone density has genes with a high fraction of gene deletion phenotypes associated with cellular senescence (Supplementary Fig. ). We then related pleiotropy as defined by the module–trait associations derived here with pleiotropy defined by CRISPR gene deletion studies. For each Gene Ontology (GO) term, we calculated the enrichment in genes linked with many traits in our analysis with the enrichment in genes having many gene deletion phenotypes. GO terms specifically enriched in pleiotropic genes based on our definition are dominated by terms that relate to multicellularity, such as membrane signaling, cell-to-cell communication and cell migration (Supplementary Fig. ). For pleiotropy that is specifically found with CRISPR screens, we find terms related to essential processes such as cell cycle, ribosome biogenesis and RNA metabolism (Supplementary Fig. ). For each of the 73 pleiotropic gene modules, we highlighted those that are overrepresented in each group of related traits (Fig. 
and , one-sided Fisher’s exact test, BH-adjusted P < 0.05). To facilitate the study of cell biology and drug-repurposing opportunities we annotated (Fig. and Supplementary Table ) the genes found in overlapping modules for each of the clusters with data from: ChEMBL (targets of drugs in at least phase III clinical trials), ClinVar (genes linked to clinical variants) and mouse knockout (KO) phenotypes (phenotypic relevance and possible biological link). We explore a few examples of these modules in the following sections.
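As an illustration of the overrepresentation test used here, the R sketch below applies a one-sided Fisher's exact test to one hypothetical module–subgroup pair; all counts and object names are invented for the example.

```r
# Is gene module M overrepresented among the traits of subgroup G?
# Hypothetical counts: M is linked to 9 of the 12 traits in G and to
# 11 of the 894 traits outside G.
tab <- matrix(c(9, 3,      # traits in G: linked / not linked to M
                11, 883),  # traits outside G: linked / not linked to M
              nrow = 2, byrow = TRUE)

ft <- fisher.test(tab, alternative = "greater")  # one-sided test
ft$p.value

# In the full analysis this is repeated for every module-subgroup pair
# and the P values are corrected with p.adjust(p, method = "BH").
```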
We identified two groups of traits (bone and fasciitis related) that are predicted to have a common determining gene module (Fig. and Supplementary Table ). This module is enriched in Wnt signaling genes, which have been previously linked to bone homeostasis and to different types of fasciitis as well as Dupuytren's contracture. We collected genes harboring likely pathogenic variants from ClinVar, hereafter referred to as ClinVar variants. This gene module is enriched in genes harboring ClinVar variants from patients with tooth agenesis and bone-related diseases (osteoporosis and osteopenia). Several genes with ClinVar variants, such as LRP6, SOST, WNT1, WNT10A and WNT10B, are not linked to bone diseases via GWAS. Genetic manipulation of several genes within this module causes changes in bone density in mouse models. In addition, this module contains the target (SOST) of romosozumab, a drug proven effective for treating osteoporosis. In a second example (Fig. and Supplementary Table ), we identified a group of ten respiratory (for example, asthma) and cutaneous (for example, eczema) immune-related diseases that share three gene modules: a highly pleiotropic module related to regulation of transcription and the proteasome, and two more specific modules related to pattern recognition receptor signaling and cytokine production with Janus kinase/signal transducer and activator of transcription (JAK–STAT) involvement. These modules were significantly enriched (one-sided Fisher's exact test, P < 0.05) in genes having likely pathogenic variants from patients with asthma. The two most specific gene modules were grouped and are shown in Fig. , highlighting several genes with known pathogenic variants not associated with these diseases via GWAS (for example, IRAK3, TNF, ALOX5, TBX21). IRAK3, encoding a protein pseudokinase, is an example of a druggable gene not identified by GWAS for asthma, but with protein missense variants linked to this disease, and mouse model studies have implicated the regulation of IRAK3 in airway inflammation induced by interleukin-33 (IL-33). Although no drug for IRAK3 is used in the clinic, this analysis suggests it may serve as a relevant drug target for asthma and other related diseases. We identified a total of 41 targets of 126 drugs among the genes in the module shown in Fig. . To identify drugs that could have repurposing potential, we excluded those already targeting therapeutic areas that include the ten diseases linked to this gene module. This resulted in 18 drugs (Supplementary Table ) targeting 5 genes, including: 14 drugs targeting PTGS2, used to treat primarily rheumatic disease and osteoarthritis; interferon alfacon-1 or alfa-2b (targeting IFNAR1 and IFNAR2), designed to counteract viral infections; galiximab, an antibody for CD80 (phase III trials for lymphoma); and the antibody RA-18C3 targeting IL1A for colorectal cancer. These drugs may be suited to repurposing for respiratory or cutaneous autoimmune-related diseases. As an example, RA-18C3 has shown benefit in a small phase II trial for hidradenitis suppurativa (acne inversa).
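A minimal sketch of this repurposing filter in R follows; the ChEMBL-style table, its column names and all entries are hypothetical stand-ins for the real data.

```r
library(dplyr)

# Hypothetical ChEMBL-style table: drug, target, therapeutic_area, max_phase.
drug_target_tbl <- tribble(
  ~drug,       ~target, ~therapeutic_area, ~max_phase,
  "celecoxib", "PTGS2", "rheumatology",    4,
  "galiximab", "CD80",  "oncology",        3,
  "drug_x",    "PTGS2", "respiratory",     3
)

module_genes   <- c("PTGS2", "IFNAR1", "IFNAR2", "CD80", "IL1A")
excluded_areas <- c("respiratory", "cutaneous")  # areas already covered
                                                 # by the linked diseases

repurposing <- drug_target_tbl %>%
  filter(target %in% module_genes,
         max_phase >= 3,                        # phase III+ drugs only
         !therapeutic_area %in% excluded_areas) %>%
  distinct(drug, target)
```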
Traits related to the immune system are well represented in our analysis, falling into three different groups: one cluster containing systemic and organ-specific diseases; one cluster of immune cell measurements; and a third, more heterogeneous, cluster (Fig. and Supplementary Table ). In Fig. we represent the first of these clusters, which can be further subdivided into a subgroup linking IBD, multiple sclerosis and systemic lupus erythematosus, and one linking celiac disease, vitiligo and other diseases. We found six gene modules that are specifically enriched with at least one of these two groups of traits, including gene modules related to GPCR signaling, neutrophil activation and interferon signaling. Genes present in these modules show higher relative expression (Fig. , right) in key immune tissues. The six gene modules are shown in Fig. , with a connection between them when there is a significant gene-level overlap (Fig. ). For representation (Fig. ), we selected genes from modules linked with at least three immune-mediated diseases and kept a subset of high-confidence interactions. We found multiple genes with ClinVar variants from patients with primary immune deficiencies (for example, IRF9, IRF7, STAT1, STAT2) that are not GWAS-linked genes but are in their network vicinity, providing evidence of the importance of this gene module for these diseases. To pinpoint drugs with repurposing potential, we excluded those targeting diseases in the same therapeutic areas as the immune-mediated group of diseases, identifying 49 drugs with 20 targets. These include ulimorelin, an agonist of the ghrelin receptor (growth hormone secretagogue receptor, GHSR), used to treat gastrointestinal obstruction. Ghrelin signaling has been studied in the context of age-related chronic inflammation, psoriasis and IBD (reviewed in ref. ), indicating a potential repurposing opportunity. The 49 drugs with repurposing potential are listed in Supplementary Table with information on target genes and clinical trials.
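The "significant gene-level overlap" used to connect modules can be tested with a one-sided Fisher's exact (hypergeometric) test, as in the R sketch below; the module gene sets and the universe size are illustrative.

```r
# Test whether two gene modules overlap more than expected by chance,
# given a universe of all genes in the interactome.
overlap_p <- function(a, b, universe_size) {
  k <- length(intersect(a, b))
  tab <- matrix(c(k,                            # in both modules
                  length(setdiff(a, b)),        # only in a
                  length(setdiff(b, a)),        # only in b
                  universe_size - length(union(a, b))),
                nrow = 2)
  fisher.test(tab, alternative = "greater")$p.value
}

# Toy example with two overlapping gene sets in an 18,410-gene universe.
mod_a <- paste0("g", 1:40)
mod_b <- paste0("g", 25:70)
overlap_p(mod_a, mod_b, universe_size = 18410)
```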
Although the gene modules we have described can highlight biological pathways shared between genetically related traits, identifying causal genes at individual GWAS loci is important for prioritizing therapeutic targets. Existing methods such as GRAIL, DEPICT and MAGMA prioritize genes based on biological pathways but do not fully use genome-wide protein interaction networks, which can provide finer-grained information than GO terms. Here, we use network propagation to prioritize genes at IBD GWAS loci, similar to our previous work on Alzheimer's disease. We used two alternative methods of defining seed genes for the network. First, we manually curated 37 genes with high confidence of being causally related to either Crohn's disease or ulcerative colitis (Supplementary Table ) and, second, we used the Open Targets L2G score to automatically select 110 genes with L2G > 0.5 at established IBD loci (Supplementary Table ). To obtain network propagation scores, we compared each gene's score with 1,000 runs using the same number of randomly selected input genes, to give the PPR percentile value. We obtained unbiased network propagation values for each seed gene by excluding them one at a time. The curated seed genes had far higher network scores than other genes within 200 kb (P = 7.4 × 10 −6, one-tailed Wilcoxon rank sum test), indicating that most seed genes have close interactions with other seed genes (Fig. ). The same was true when considering seed genes exclusively in the L2G gene set (Fig. ; P = 3 × 10 −10, one-tailed Wilcoxon rank sum test), indicating that many of these are also strong IBD candidate genes. Finally, we examined the enrichment of low SNP P values within 10 kb of genes having high network scores. This revealed a progressive enrichment of low P values near genes with higher network scores (Fig. ), which held even for the large number of genes linked to SNPs not reaching the typical genome-wide significance threshold of 5 × 10 −8 for locus discovery. Curated genes with strong network support include the drug targets TYK2, ICAM1 and ITGA4, and NOD2 and IL23R, which have missense variants implicating them as modulators of IBD. A small number of curated genes had lower network support, which could be due to these genes affecting IBD via pathways distinct from the biological functions covered best by the curated gene set. Across IBD loci without curated genes, our network scores rank 42 candidates as being more highly functionally connected than the remaining genes at the locus (Supplementary Table and ). Although many of these were already strong IBD candidate genes, some have found strong support only recently. A clear example is the RIPK2 locus. Although OSGIN2 is nearest to the IBD lead SNP rs7015630 (38 kb distal), it has no apparent functional links with IBD (network score 43%). By contrast, RIPK2 (108 kb distal, network score 99%) encodes a mediator of inflammatory signaling via interaction with the bacterial sensor NOD2 (ref. ). Network information can also provide a comparison point for other evidence sources. At the DLD-SLC26A3 locus, there is moderate evidence of genetic colocalization between IBD and an expression quantitative trait locus (eQTL) for DLD in various tissues (Open Targets Genetics portal). However, DLD has no clear functional links with IBD and receives a low network score (14%).
By contrast, SLC26A3 is a chloride anion transporter highly expressed in the human colon, with a high network score (98.4% in the L2G seed gene network), and its expression has recently been associated with clinical outcomes in ulcerative colitis. IBD candidate genes that have high network scores but have not been well characterized in the context of IBD include PTPRC (a phosphatase required for T cell activation) and BTBD8, which is functionally connected to autophagy by the network analysis (via WIPI2 and ATG16L1). To study the pleiotropy of the curated and candidate genes, we looked at the eight gene modules linked by our analysis to IBD (Supplementary Fig. ). Of the 37 curated and 42 candidate genes, 35 (14 curated and 21 candidate) are found within these modules. Interestingly, we found that most of these genes are in modules that are only linked to IBD; in particular, a module that is enriched for genes related to receptor signaling via the JAK–STAT pathway (Supplementary Fig. ). Conversely, the most pleiotropic modules linked to IBD contain very few IBD candidate genes. As expected, these pleiotropic modules tend to be associated with traits that are related to the immune system, with the exception of the most pleiotropic module, which is enriched for genes related to protein ubiquitination (Supplementary Fig. ). This analysis suggests that the JAK–STAT-related module is likely to be the best source of novel candidate disease genes and drug targets that are more likely to be specific to IBD.
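The leave-one-out scoring of seed genes described above can be sketched in R with igraph's personalized PageRank; the toy graph and gene names are hypothetical, and the percentile step against 1,000 random seed sets is indicated only in a comment.

```r
library(igraph)
set.seed(7)

# Toy interactome and a set of seed genes standing in for the curated
# IBD genes (all names hypothetical).
g <- sample_gnp(300, 0.03)
V(g)$name <- paste0("gene", 1:300)
seeds <- sample(V(g)$name, 8)

# Leave-one-out personalized PageRank: score each seed s from the
# remaining seeds, so s's own membership does not inflate its score.
loo_score <- sapply(seeds, function(s) {
  reset <- as.numeric(V(g)$name %in% setdiff(seeds, s))
  page_rank(g, personalized = reset)$vector[s]
})

# PPR percentiles would then come from comparing each score with scores
# from 1,000 runs seeded with equally many randomly chosen genes.
loo_score
```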
We identified gene modules associated with 906 human traits, taking advantage of the increased coverage of human interactome mapping and novel tools for SNP-to-gene mapping. As seen in other studies, network expansion can retrieve previously known disease genes not identified by GWAS, including those not in GWAS loci but that may modulate the same biological processes. Even when excluding genes with direct genetic support, such interacting genes are enriched for successful drug targets. Genes identified by network expansion carry no information on the direction of effect, and additional work and interpretation are needed to gain insights into the direction of impact of modulating such genes. Although there are several algorithms to perform network propagation, recent studies have shown that they tend to perform similarly and that the network used has a stronger impact on performance. For this reason, improvements in mapping coverage and computational or experimental approaches to deriving tissue- or cell-type-specific networks could have a large impact on the future effectiveness of network expansion. We showed examples of disease-linked gene modules that were also enriched in genes carrying clinical variants for the same or related diseases. In many cases, genes with clinical variants did not overlap with the GWAS-linked genes, which is likely due to the lower frequency of clinical variants. Testing for a burden of loss-of-function variants within selected gene sets is an approach used to study the impact of low-frequency variants, and we suggest that the gene modules identified here could be ideally suited for this purpose. The gene modules identified here relate specific aspects of cell biology to different human traits. Analysis of mouse phenotypes and ClinVar variants provided additional evidence for some of the identified relationships. Additional experimental work, in particular with appropriate models (for example, organoids, mouse models), is needed to follow up on some of the derived associations. Beyond identifying gene modules, our GWAS-based network approach can also be used to prioritize disease genes at individual loci by their role within specific biological processes, as we showed for IBD. The most pleiotropic gene modules share some aspects of cell biology that have been defined as highly pleiotropic in gene deletion studies of yeast. Gene modules linked with different traits could provide opportunities for drug repurposing or cross-disease drug development. However, targeting pleiotropic processes could raise safety concerns. We find that these modules are enriched for genes that are ubiquitously expressed, have many gene deletion phenotypes and have a higher number of genetic interactions. However, we do not find a simple correlation between the number of traits associated with a gene module and these metrics. This may suggest that some highly pleiotropic processes may be safe to target or that metrics such as CRISPR deletion phenotypes and ubiquitous expression may be insufficient to judge drug target safety. Comparing the pleiotropy of cellular processes as defined by module–trait associations with that defined by gene deletion studies suggests that, although there are some similarities, gene deletion studies tend to miss pleiotropy that relates to cell-to-cell communication. This is not surprising given that CRISPR screens in cell lines typically assay for phenotypes measured in single cells.
Conversely, our trait-to-module analysis tends to miss pleiotropy in processes that are essential to cells. We suggest that (some of) these essential cellular processes may be lethal if genetically perturbed, and therefore associated variants are not observed in human populations and not seen in genetic association studies. Interestingly, traits that are linked with highly pleiotropic gene modules tend to have a larger number of starting GWAS seed genes, which usually reflects larger GWAS sample sizes. This suggests that the larger the number of loci linked to a trait (and, likely, the greater the sample size), the higher the chance that this trait will be genetically linked to highly pleiotropic biological processes. Although it has been suggested that the heritability of complex traits is broadly spread along the genome, our analysis indicates that, across a large number of traits, this heritability overlaps in a nonrandom fashion. In summary, network expansion of GWAS is a powerful tool for the identification of genes and cellular processes linked to human traits, and its application in multitrait analysis can reveal the pleiotropy of human biological pathways at the level of the organism, as well as highlight new opportunities for drug development and repurposing.
Human interactome, GWAS traits and linked genes analyzed
We created a comprehensive human interactome, merging an interactome developed for the Open Targets ( www.opentargets.org ) project (version from November 2019) with STRING v.11.0. The Open Targets interactome network was constructed during this project and contains human data only, including physical interaction data from IntAct, causality associations from SIGNOR and binarized pathway reaction relationships from Reactome. More details about the network construction can be found in the Supplementary Information and at https://platform-docs.opentargets.org/target/molecular-interactions . STRING functional interactions were human only and selected to have a STRING edge score ≥0.75. All identifiers were mapped to Ensembl gene identifiers and, after removing duplicated edges and self-loops, the final network contained 18,410 nodes and 571,917 edges.

Network propagation of GWAS-linked genes
From a total of 1,221 traits, we selected 1,002 mapped to EFO terms ( www.ebi.ac.uk/efo/ ) included in the Open Targets Genetics portal, with at least two genes mapped to our interactome with an L2G score of 0.5 or above (defined as seed nodes). The network-based approach was run individually for each trait, with each protein having a weight corresponding to the L2G score (between 0.5 and 1.0). The input was diffused through the interactome using the PPR algorithm included in the R package igraph (v.1.2.4.2). To generate the modules, we selected nodes with a PPR ranking score greater than the third quartile (Q3, 75%) and performed walktrap clustering (igraph v.1.2.4.2). When the number of nodes in one module was >300, we repeated the clustering inside this community until all resulting clusters contained <300 genes. To define gene modules as significantly associated with a trait, we used a Kolmogorov–Smirnov test to determine whether the ranks (based on PPR) of genes in a module were greater than the background ranks of all the nodes considered for the walktrap clustering. We tested only modules with at least ten genes where two or more of them were seed genes (L2G > 0.5), and we corrected the resulting P values for multiple testing using BH adjustment. On this basis, we identified a total of 2,021 associations between a gene module and a trait.

Benchmarking the capacity to predict disease-associated genes from the network expansion
To benchmark the predictive power of the ranking score resulting from the PPR and of the genetic portal data when compared with the GWAS Catalog ( https://www.ebi.ac.uk/gwas/ ; based on gene proximity), we computed ROC curves using as true positives the genes linked to diseases in the Jensen lab DISEASES database ( diseases.jensenlab.org ). This database provides a score measuring this association; benchmarking was done using five different score thresholds (DIS0, all genes; DIS1, score >25%; DIS2, score >50%; DIS3, score >75%; and DIS4, maximum value of the score). We calculated the ROC curves and the area under the ROC curve (AUC) for traits with at least ten true positives. We also randomized both the nodes in the network (keeping the degree distribution) and the true positives, 1,000 times each, and then calculated the AUC values and the subsequent Z-scores. As an extra benchmark, we used the clinical trial data contained in ChEMBL ( https://www.ebi.ac.uk/chembl/ ), considering as true positives drug targets tested for a given disease at clinical phase II or higher.
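A minimal R sketch of the propagation-and-clustering pipeline above, using the igraph functions named in the methods; the toy graph and seed scores are hypothetical, and the one-sided Kolmogorov–Smirnov alternative is set so that larger module ranks count as enrichment.

```r
library(igraph)
set.seed(42)

# Toy stand-ins: a random interactome g and L2G-weighted seed genes.
g <- sample_gnp(500, 0.02)
V(g)$name <- paste0("gene", 1:500)
seed_l2g  <- setNames(runif(10, 0.5, 1.0), sample(V(g)$name, 10))

# Personalized PageRank diffusion from the weighted seeds.
reset <- setNames(rep(0, vcount(g)), V(g)$name)
reset[names(seed_l2g)] <- seed_l2g
ppr <- page_rank(g, personalized = reset)$vector

# Keep the top quartile of scores and cluster them into modules.
top  <- names(ppr)[ppr > quantile(ppr, 0.75)]
mods <- cluster_walktrap(induced_subgraph(g, top))

# Module-trait association: are PPR ranks in module 1 greater than the
# background? In R's ks.test, alternative = "less" states that the CDF
# of x lies below that of y, i.e. x tends to be larger.
r  <- rank(ppr[top])
m1 <- names(membership(mods))[membership(mods) == 1]
ks.test(r[m1], r[setdiff(names(r), m1)], alternative = "less")
```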
Trait–trait relationships defined by the similarity of the network propagation
We calculated the Manhattan distance between the 1,002 traits using the full PPR ranking score, followed by hierarchical clustering, resulting in 54 clusters (height distance = 1). To further characterize the trait clusters, we selected those having at least five traits, obtained their EFO ancestry and calculated their frequency per cluster. The highest frequency per cluster is used to define nine groups color-coded in Fig. . To complement the description of clusters belonging to the most general groups ‘measurement’ and ‘material property’, we extracted EFO ancestry terms using manually assigned terms from the EFO ancestry with a lower frequency (Fig. ). The ChEMBL database ( https://www.ebi.ac.uk/chembl/ ) was used to calculate the counts of both drugs and drug targets for each of the trait clusters, using information for drugs in clinical trial phases III and IV. To further illustrate the validity of this approach, we selected three trait clusters (Fig. ) as examples of valid trait-to-trait relations.

Multitrait gene module analysis
Significant modules identified for each trait (described above) were compared across traits by measuring the overlap in genes using the Jaccard index. Gene modules with a Jaccard index ≥0.70 were considered common across two traits. Of the 2,021 gene module–trait associations, 886 are unique to a single trait and the remainder can be collapsed (that is, considered highly overlapping or the same gene module). This results in 73 gene modules that are enriched in network propagation signals for two or more traits. To identify subgroups of related traits, we clustered those linked to the 73 multitrait modules on the basis of the Manhattan distance of their full PPR ranking score (as above) using hierarchical clustering. Subgroups were defined with a height cutoff of 0.7, and we identified gene modules that were more specific to each subgroup of traits using a one-sided Fisher's exact test and BH multiple testing correction. We retained trait subgroups with at least three traits and a significant presence of at least one group of overlapping modules.

Relating pleiotropy from GWAS modules with gene expression and deletion phenotypes
We used the BioGRID Open Repository of CRISPR Screens (ORCS, v.1.1.11, https://orcs.thebiogrid.org/ ), which contains 1,342 studies measuring the impact of gene deletions on viability and other cellular measurements, including cell-cycle progression, response to different stresses, transport and others. On the basis of these CRISPR screens, we defined as pleiotropic those genes that had a cell-based phenotype in more than half of the screens. We defined genes likely to be expressed in many tissues as those having an expression level above the median for a given tissue in more than half of the tissues in the Human Protein Atlas ( https://www.proteinatlas.org/ ). To compare the enrichment of genes defined as highly pleiotropic in our analysis with those defined by CRISPR studies, we performed an enrichment analysis for each GOBP term using a Gene Set Enrichment Analysis test (clusterProfiler package, v.4.2.2).
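The two pleiotropy-related definitions above can be expressed compactly in R; `phen` (a logical gene × screen matrix of CRISPR phenotypes) and `expr` (a gene × tissue expression matrix) are hypothetical toy inputs.

```r
set.seed(3)

# Toy stand-ins for the real ORCS and Human Protein Atlas matrices.
phen <- matrix(runif(100 * 20) < 0.4, nrow = 100)  # genes x screens
expr <- matrix(rexp(100 * 30), nrow = 100)         # genes x tissues

# Pleiotropic by CRISPR: a cell-based phenotype in >50% of screens.
pleio_crispr <- rowMeans(phen) > 0.5

# Broadly expressed: above the tissue median in >50% of tissues.
above_median <- sweep(expr, 2, apply(expr, 2, median), `>`)
broad_expr   <- rowMeans(above_median) > 0.5
```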
Gene module annotations and enrichment analysis
The gene KO mouse phenotypes were extracted from the International Mouse Phenotyping Consortium ( https://www.mousephenotype.org/ ) and the clinical variants were extracted from the ClinVar database (National Center for Biotechnology Information (NCBI), https://www.ncbi.nlm.nih.gov/clinvar/ ). For the enrichment analyses of genes with clinical variants referred to in Figs. and , we downloaded data from ClinVar (NCBI), filtered out all benign associations and grouped the phenotypes into higher categories as follows: tooth agenesis (tooth agenesis, selective tooth agenesis 4, 7 and 8); bone-related diseases (sclerosteosis 1, osteoarthritis, osteopetrosis, osteoporosis, osteogenesis imperfecta and osteopenia); asthma (asthma and nasal polyps, susceptibility to asthma and asthma-related traits, diminished response to leukotriene treatment in asthma, asthma and aspirin intolerance); autoimmune condition (familial cold autoinflammatory syndromes); immunodeficiency (immunodeficiency due to a defect in MAPBP-interacting protein, hepatic veno-occlusive disease with immunodeficiency, immunodeficiency-centromeric instability-facial anomalies syndrome 1, immunodeficiency 31a, 31C, 32a, 32b, 38, 39, 44 and 45, immunodeficiency X-linked, with magnesium defect, Epstein–Barr virus infection, and neoplasia, combined immunodeficiency, severe T cell immunodeficiency and immunodeficiency 65 with susceptibility to viral infections); lymphocyte syndrome (bare lymphocyte syndrome types 1 and 2); arthritis (rheumatoid arthritis and juvenile arthritis); Kabuki syndrome (Kabuki syndrome 1 and 2); thrombocytopenia (thrombocytopenia, dyserythropoietic anemia with thrombocytopenia, GATA-1-related thrombocytopenia with dyserythropoiesis, X-linked thrombocytopenia without dyserythropoietic anemia, thrombocytopenia with platelet dysfunction, hemolysis, imbalanced globin synthesis, radioulnar synostosis with amegakaryocytic thrombocytopenia 2 and macrothrombocytopenia); anemia (anemia, dyserythropoietic anemia with thrombocytopenia, aplastic anemia, CD59-mediated hemolytic anemia with or without immune-mediated polyneuropathy and Diamond–Blackfan anemia); and Aicardi–Goutieres syndrome (Aicardi–Goutieres syndrome 4, 6 and 7).

IBD network analyses for fine-mapping
To identify robust IBD-associated loci, we extracted loci defined in the Open Targets Genetics portal (genetics.opentargets.org) for two IBD GWAS. Because each GWAS may identify different lead variants, we merged loci defined by lead variants within 200 kb of each other. We extracted the L2G score reported for all genes at each locus and, for merged loci, we took the average L2G score for each gene across the loci. We curated 37 high-confidence IBD genes on the basis of the presence of fine-mapped deleterious coding variants, genes whose protein products are the targets of approved IBD drugs, and the literature. We defined additional seed gene sets by selecting the top gene at each locus that had an L2G score >0.5. We ran network propagation as described in the Results section of the main text. However, to obtain unbiased scores for the seed genes themselves, we left each seed gene out of the input in turn and ran network propagation to obtain a score based on the remaining N − 1 seed genes. To compute the PPR percentile for seed genes, we used the PPR percentile from the single network propagation run in which that seed gene was excluded from the input. For all other genes, we used the median PPR percentile across N seed gene runs. The plots in Fig. are based on PPR percentiles from the curated seed gene network. To assess the enrichment of low P value SNPs near high network genes (Fig.
), we first determined for each gene the minimum P value among SNPs within 10 kb of the gene's footprint, based on IBD GWAS summary statistics from de Lange et al. We used Fisher's exact test to determine the odds ratio for genes with a high network score (in each defined bin) having a low minimum SNP P value, relative to genes with low network scores (PPR percentile <50). PPR percentiles discussed in the text are the average for each gene across the curated and L2G > 0.5 networks. We identified IBD candidate genes that stand out on the basis of their network score (Supplementary Table ) by selecting all locus genes that had an average PPR percentile >90 and L2G > 0.1, and where no other gene at the same locus had a PPR percentile >80 and L2G > 0.1.

Statistics and reproducibility
Data collection and analysis were not blind to the conditions of the experiments. Sample sizes (n) are indicated in the figure or figure caption when appropriate. No statistical method was used to predetermine sample size but, where appropriate, sample size was considered in statistical tests. No data were excluded from the analyses and the experiments were not randomized.

Ethics statement
No ethical approval was required for this work.

Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
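As a final methods illustration, the locus-level candidate selection rule described under 'IBD network analyses for fine-mapping' can be sketched with dplyr; the table, its column names and the score values are hypothetical.

```r
library(dplyr)

# Hypothetical locus table: one row per gene per IBD locus, with the
# average PPR percentile (ppr_pct) and the L2G score for each gene.
loci_tbl <- tribble(
  ~locus,      ~gene,    ~ppr_pct, ~l2g,
  "rs7015630", "RIPK2",  99,       0.3,
  "rs7015630", "OSGIN2", 43,       0.6
)

candidates <- loci_tbl %>%
  group_by(locus) %>%
  filter(ppr_pct > 90, l2g > 0.1,
         # the candidate must be the only gene at its locus that even
         # clears the weaker ppr_pct > 80 bar
         sum(ppr_pct > 80 & l2g > 0.1) == 1) %>%
  ungroup()
```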
Any methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at 10.1038/s41588-023-01327-9.
Supplementary Information
Supplementary Figs. 1–9
Reporting Summary
Supplementary Data 1: List and annotations of the 1,002 traits studied and their clustering by network propagation scores.
Supplementary Data 2: Gene modules linked to each trait and their annotations.
Supplementary Data 3: Detailed gene and gene module information for examples in Figs. 3c,d and 4c.
Supplementary Data 4: IBD candidate gene information.
Randomized trials fit for the 21st century. A joint opinion from the European Society of Cardiology, American Heart Association, American College of Cardiology, and the World Heart Federation

Randomized controlled trials are the cornerstone for reliably evaluating therapeutic strategies. However, during the past 25 years, the rules and regulations governing randomized trials and their interpretation have become increasingly burdensome, and the cost and complexity of trials have become prohibitive. The present model is unsustainable, and the development of potentially effective treatments is often stopped prematurely on financial grounds, while existing drug treatments or non-drug interventions (such as screening strategies or management tools) may not be assessed reliably. The current ‘best regulatory practice’ environment, and a lack of consensus on what that requires, too often makes it unduly difficult to undertake efficient randomized trials able to provide reliable evidence about the safety and efficacy of potentially valuable interventions. Underrepresentation of some population groups and a lack of diversity also remain among the challenges. The widespread availability of large-scale, population-wide, ‘real world data’ is increasingly being promoted as a way of bypassing the challenges of conducting randomized trials. Yet, although analyses of such large datasets can yield estimates of intervention effects with small random errors, non-randomized observational analyses should not be relied on as a substitute, because of their potential for systematic error. That is, the estimated effects may be precise but inaccurate, owing to design and statistical biases that cannot be reliably avoided irrespective of the sophistication of the analysis. With this joint opinion, the European Society of Cardiology (ESC), American Heart Association (AHA), World Heart Federation (WHF), and American College of Cardiology (ACC) call for action at a global scale to reinvent randomized clinical trials to be fit for purpose in the 21st century.
Among all medical specialities, cardiology has historically led the way in evidence-based practice. With ground-breaking randomized trials in the 1980s, such as the International Study of Infarct Survival (ISIS), Gruppo Italiano per lo Studio della Streptochinasi nell'Infarto (GISSI) and Global Utilization of Streptokinase and Tissue Plasminogen Activator for Occluded Coronary Arteries (GUSTO) trials in acute myocardial infarction, cardiovascular ‘mega-trials’ were conceived and rapidly transformed clinical practice. High quality trials have also reliably demonstrated incremental clinical benefits with modification of major cardiovascular risk factors, such as hypertension and dyslipidaemia, saving millions of lives worldwide in recent decades. Despite these advances, cardiovascular disease remains the leading cause of death and disability globally, and there is a need to identify additional effective therapies, to increase upstream prevention and precision medicine efforts, and to determine how best to use the effective treatments that we already have (and, as a corollary, not use those that are not effective or safe). As age-specific rates of mortality and major morbidity decline due to better prevention and treatment, it becomes more difficult to conduct reliable assessments of new or existing interventions. Lower absolute risks of cardiovascular events mean that increasingly large samples are needed to generate the numbers of outcomes of interest, given the typically modest relative benefits of many interventions. Moreover, cardiovascular interventions often require sufficient time before the benefits emerge. As the size of trials increases, the cost rises, and there may be a temptation to limit the duration of follow-up, in order both to control costs and, from an industry perspective, to get new agents to market faster. The proprotein convertase subtilisin–kexin type 9 (PCSK9)-inhibiting monoclonal antibodies (evolocumab and alirocumab) provide a recent example of such a strategy failing patients. These agents have an impressive LDL cholesterol-lowering effect and, in large phase 3 randomized trials, were clearly shown to safely reduce major cardiovascular events. However, with only around 2–3 years of follow-up, it is likely that those trials underestimated the full benefits of prolonged PCSK9 inhibition on cardiovascular mortality and morbidity. So, despite the conduct of large trials that cost billions of dollars, the uptake of these agents has been limited (exacerbated by their high cost), and they have not realized their full potential for population health benefit, even in high-income countries. During the past 25 years, there has been an enormous increase in the rules and related bureaucracy governing clinical trials. First issued in 1996, the International Council for Harmonization (ICH) Good Clinical Practice (GCP) Guidelines describe the responsibilities and expectations of all those involved in the conduct of clinical trials. The intention of the ICH-GCP guideline was to ensure the safety and rights of participants in trials and also to ensure the reliability of trial results so that the safety of future patients would be protected. However, despite these well-intended aims, the guideline is now often over-interpreted and implemented in ways that are unnecessarily obstructive, prohibiting good trials from being done affordably.
These problems are exacerbated by the financial incentive for some parties (in particular contract research organizations) to over-interpret ICH-GCP and profit from additional, often unnecessary, clinical trial procedures (such as frequent on-site monitoring visits when less costly data-driven monitoring approaches can be more informative [ https://ctti-clinicaltrials.org/our-work/quality/quality-by-design/ ]). While the increasing complexities have been obstacles to trials conducted by industry, the regulations have become much larger barriers for conducting trials of interventions that have little or no commercial support. Consequently, trials of important questions relevant to low-income populations (e.g. infections affecting the heart such as rheumatic heart disease, tuberculous pericarditis or Chagas disease) or those that may have the potential for large clinical and population benefits but involve generic drugs (e.g. a polypill) have been hard to conduct.
Streamline the trial processes: reinvent simple trials with global impact
The COVID-19 pandemic has provided clinical trialists with an opportunity to rethink their trade and remember the landmark successes of the cardiovascular mega-trial concept established in the 1980s. Trials such as Randomised Evaluation of COVID-19 Therapy (RECOVERY) and World Health Organization Solidarity have been highly streamlined and designed to be easy to administer in the busy hospitals in which large numbers of COVID patients were being treated. Only essential data were to be collected and, wherever possible, much of the follow-up information was derived from national electronic health records (EHRs). Importantly, they showed that such trials can be conducted in accordance with the principles of GCP, but without over-interpretation or unnecessary complication. By contrast, many of the other COVID-19 trials had complex protocols (e.g. more restrictive eligibility criteria, significant additional data collection beyond that collected for routine care) with a focus on surrogate outcomes (e.g. time to clinical improvement, rather than mortality), such that their relatively small size did not allow them to yield clear evidence on the outcomes that matter most to patients. Indeed, putative benefits observed in many small trials have not translated into mortality benefits when assessed in the larger streamlined trials.

Use routine data to our advantage in trials, not as an inappropriate replacement
Considerable opportunities for streamlined trial conduct are provided by digital healthcare in the 2020s, with high quality EHRs available for both recruitment and follow-up of trial participants. Part of the success of the RECOVERY trial was the nationwide availability of routine health data for comprehensive and complete follow-up. For many years, cardiovascular trials have successfully exploited EHRs for both recruitment and follow-up [as, for example, in the Swedish Web-system for Enhancement and Development of Evidence-based care in Heart disease Evaluated According to Recommended Therapies (SWEDEHEART) series of trials], with important clinical findings. Current initiatives are extending this approach through the development and use of local and national registries that can facilitate low-cost, pragmatic ‘randomized registry trials’. However, data access restrictions and regulatory authority reticence to accept EHR-based outcome data in randomized trials (especially for drug registration) have led to an underuse of this approach to trial streamlining. Instead, inappropriate emphasis is being placed—including by regulators—on using so-called ‘real world’ observational studies, despite the potential biases inherent in such methods.

Collaborative revision of ICH-GCP, making it fit for purpose in the 21st century
Recent experience has shown that important clinical questions can be addressed rapidly in streamlined trials while remaining compliant with existing guidelines. However, the approach taken to the implementation of the ICH-GCP guidelines is typically inflexible and frequently involves over-interpretation that stifles innovation in the clinical trials enterprise, driving up costs through waste, delay and failure.
In consultation with a range of stakeholders—from patients and the public who volunteer for clinical trials, to organizations that provide the skills, funding and infrastructure to conduct research—the Good Clinical Trials Collaborative (GCTC; https://www.goodtrials.org/ ) has been established by Wellcome, the Gates Foundation and the African Academy of Sciences to build on the work of the FDA-funded Clinical Trials Transformation Initiative (CTTI; https://ctti-clinicaltrials.org/ ) by producing comprehensive revised guidelines fit for the purposes of doing randomized trials in the 21st century. The GCTC is reviewing the principles for all types of healthcare interventions, in all settings, to produce guidelines that aim to foster and promote informative, ethical and efficient randomized controlled trials (see ). Draft guidance was published for consultation and review in 2021, and it is anticipated that revised guidelines will be issued in 2022 ( https://www.goodtrials.org/guidance ). We strongly support the adoption of this guidance into regulation, guidance, and practice across the whole clinical trials ecosystem—including by regulators, sponsors, and healthcare and research organizations—to ensure that the principles are embedded across all aspects of clinical trial design, delivery, oversight, quality assurance, analysis, and interpretation. Professional societies and their members have a key role to play in providing training in the fundamental principles of clinical trials, recognizing contribution to clinical trials as a core clinical activity, ensuring diversity and representativeness of included participants, and building community trust in the research enterprise by considering the patient perspective throughout all stages of trial development (see https://nap.nationalacademies.org/catalog/26349/envisioning-a-transformed-clinical-trials-enterprise-for-2030-proceedings-of ).
Cardiology provided the foundation for an era of highly successful clinical trials, and is well-placed to reinvent trials for the 21st century. The ESC, AHA, ACC, and WHF are committed to ensuring that high quality trials continue to provide randomized evidence that improves the clinical care of all patients across different race and gender identities, socio-economic strata, and geographies. Technology has transformed medical practice in recent decades, and clinical trials need to keep pace if modern therapies and treatment strategies are to continue to be robustly evaluated. Digital advances provide streamlined solutions to trial conduct, such as app-based data collection, remote monitoring, and ‘virtual’ trial visits. The COVID-19 pandemic has forced us to think more critically about many elements of daily life with a rapid change in what is now considered ‘normal’. A timely opportunity exists to promote similarly radical changes into the conduct of trials, to enhance efficiencies while maintaining safety. The cardiovascular organizations, societies, and foundations provide a valuable forum to advocate for the appropriate use of routine EHRs (i.e. ‘real world’ data) within randomized trials, recognizing the huge potential of centrally or regionally-held electronic health data for trial recruitment and follow-up, as well as to highlight the severe limitations of using observational analyses when the purpose is to draw causal inference about the risks and benefits of an intervention. With this document, our societies wish to engage in the development and widespread adoption of consensus guidance for clinical trials, supporting a more effective regulatory environment and allowing researchers to conduct the trials that are needed to improve patient care much more efficiently. Finally, the COVID-19 pandemic has re-emphasized the importance of making it feasible for busy clinicians, and their patients, to participate in randomized trials. Without sustained efforts to increase the application of streamlined approaches, and a more supportive regulatory environment for those who do choose to generate randomized evidence (instead of the adversarial approach that is often taken in regulatory audits), patients will suffer from important clinical questions not being addressed reliably, either because trials are too small or, due to excessive financial or bureaucratic obstacles, are never done at all.
Stephan Achenbach, Department of Cardiology, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, Germany; Louise Bowman, Nuffield Department of Population Health, University of Oxford, UK; Barbara Casadei, RDM, Division of Cardiovascular Medicine, NIHR Oxford Biomedical Research Centre, University of Oxford, UK; Rory Collins, Nuffield Department of Population Health, University of Oxford, UK; Philip J. Devereaux, Department of Medicine, McMaster University, Hamilton, Canada; Population Health Research Institute, Hamilton, Canada; Department of Health Research Methods, Evidence, and Impact, Canada; Pamela S. Douglas, Department of Medicine, Duke University School of Medicine, Durham, North Carolina, USA; Ole Frobert, Örebro University, Faculty of Health, Department of Cardiology, Örebro, Sweden; Department of Clinical Medicine, Aarhus University Health, Aarhus, Denmark; Shinya Goto, Department of Medicine (Cardiology), Tokai University School of Medicine, Isehara, Japan; Cindy Grines, Northside Hospital Cardiovascular Institute, Atlanta, Georgia, USA; Robert A. Harrington, Department of Medicine, Division of Cardiovascular Medicine, Stanford University, CA, USA; Richard Haynes, MRC Population Health Research Unit, Nuffield Department of Population Health, University of Oxford, UK; Judith S. Hochman, Leon H. Charney Division of Cardiology, Department of Medicine, New York University Grossman School of Medicine, New York, USA; Stefan James, Uppsala Clinical Research Center and Department of Medical Sciences, Uppsala University, Uppsala, Sweden; Paulus Kirchhof, Department of Cardiology, University Heart and Vascular Center Hamburg, University Medical Center Hamburg Eppendorf, Germany; Atrial Fibrillation Competence NETwork (AFNET), Münster, Germany; Institute of Cardiovascular Sciences, University of Birmingham, UK; Michel Komajda, Department of Cardiology, Groupe Hospitalier Paris Saint Joseph, Sorbonne University, Paris, France; Carolyn S.P. Lam, National Heart Centre Singapore & Duke-National University of Singapore, Singapore; Martin Landray, Nuffield Department of Population Health, University of Oxford, UK; Aldo Maggioni, ANMCO Research Centre, Florence, Italy; John McMurray, British Heart Foundation Cardiovascular Research Centre, Institute of Cardiovascular & Medical Sciences, University of Glasgow, UK; Nick Medhurst, Good Clinical Trials Collaborative https://www.goodtrials.org/ ; Roxana Mehran, Icahn School of Medicine at Mount Sinai, New York, USA; Bruce Neal, The George Institute for Global Health, University of New South Wales, Sydney, Australia; School of Public Health, Imperial College London, London, UK; Lars Rydén, Department of Medicine K2, Karolinska Institutet, Stockholm, Sweden; Holger Thiele, Heart Center Leipzig at University of Leipzig and Leipzig Heart Institute, Department of Internal Medicine/Cardiology, Leipzig, Germany; Isabelle Van Gelder, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Lars Wallentin, Uppsala Clinical Research Center and Department of Medical Sciences, Uppsala University, Uppsala, Sweden; Salim Yusuf, Population Health Research Institute, McMaster University and Hamilton Health Sciences, Hamilton, ON, Canada; Faiez Zannad, Université de Lorraine, Inserm and CHRU, Nancy, France. The ESC Patient Forum https://www.escardio.org/The-ESC/What-we-do/esc-patient-engagement .
Evaluating the use of large language model in identifying top research questions in gastroenterology

The field of gastroenterology (GI) is constantly evolving, with new advances in technology and research offering insights into the diagnosis and treatment of GI conditions . In order to continue pushing the field forward, it is essential to identify the most important research questions that require further investigation. Traditionally, the identification of research priorities in GI has relied on expert opinion and consensus-building among researchers and clinicians. However, this approach may not always capture the full range of potential research questions. In recent years, the use of natural language processing (NLP) techniques has gained popularity as a means of identifying research priorities. In particular, large language models (LLMs), such as chatGPT, that are trained on vast amounts of text data have shown promise in suggesting research questions based on their ability to understand human-like language , . Previous publications evaluating large language models in various other fields of research include, for example, the evaluation of the commonsense ability of GPT, BERT, XLNet, and RoBERTa with promising results , the evaluation of CODEX, GPT-3 and GPT-J for code generation capabilities , the evaluation of three approaches to personalizing a language model , and the evaluation of the text-to-Structured Query Language (SQL) capabilities of CODEX . In this paper, we evaluate the use of the newly released chatGPT in identifying top research questions in the field of GI. We focus on four key areas: inflammatory bowel disease (IBD), the microbiome, AI in GI, and advanced endoscopy in GI. We prompted the model to generate a list of research questions for each topic. These questions were then reviewed and rated by experienced gastroenterologists to assess their relevance and importance. We aimed to evaluate the potential of chatGPT as a tool for identifying important research questions in the field of GI. By utilizing the latest advances in NLP, we hope to shed light on the most pressing and important research questions in the field, and to contribute to the continued advancement of GI research.
The study was conducted using chatGPT (version released on December 15, 2022), a recently introduced LLM (November 2022) trained by OpenAI . ChatGPT was queried on four key topics in GI (inflammatory bowel disease (IBD), microbiome, AI in GI, and advanced endoscopy in GI) and was asked to identify the most relevant research questions in each topic. A total of 5 research questions were generated for each topic, resulting in a total of 20 research questions. These questions were then reviewed and rated separately by a panel of experienced gastroenterologists with expertise in the respective topic areas. The panel consisted of three gastroenterologists, two of them with over 20 years of experience and one with over 30 years of experience. All gastroenterologists work in an academic tertiary medical center, have authored dozens of academic research publications in gastroenterology, and together cover most sub-specialties in gastroenterology: IBD, motility, nutrition, and advanced endoscopy. The key research topics were selected by consensus among all gastroenterologists and two AI experts.

ChatGPT was prompted with the four key topics related to the field of gastrointestinal research. For each topic, a new thread was started in order to eliminate any potential bias from previous conversations and to ensure that the generated responses were directly related to the current prompt. The four topics were framed as research questions and were carefully crafted to elicit relevant information about the most important questions in the four chosen topics of gastrointestinal research. Supplementary Table presents the prompts used to generate the research questions in each topic.

The gastroenterologists were asked to rate each research question on a scale of 1–5, with 5 being the most important and relevant to current research in the field of GI. The mean rating ± SD for each research question was calculated. Each question was graded according to four parameters: relevance, originality, clarity, and specificity. To determine inter-rater reliability, we used the intraclass correlation coefficient (ICC) (see statistical analysis). All data were collected and analyzed using standard statistical methods. The research questions generated by chatGPT were compared to the current research questions being addressed in the field of GI, as identified through a comprehensive review of the literature. This allowed for an assessment of the novelty and relevance of the questions generated by chatGPT.

Statistical analysis

In this study, the mean, standard deviation, and median were used to describe the data. Intraclass correlation analysis (two-way mixed model, absolute agreement) was employed to assess inter-rater agreement. To determine the significance of the difference in grades among the four research topics (each with 20 questions), a Wilcoxon test for non-parametric paired samples was conducted. All calculations were performed using the IBM SPSS Statistical Package version 28. To assess the reliability of the rating process, the ICC was calculated, with the ratings made by each of the three observers treated as separate items. The ICC value was interpreted as follows: a value of 0 indicated no agreement among the ratings, a value of 1 indicated perfect agreement, and values between 0 and 1 indicated some degree of agreement, with higher values indicating greater agreement.
Additionally, the mean ratings and standard deviations of the three observers were compared using the SPSS Explore function, and the correlations among the ratings made by the three observers were examined.
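For readers who want to reproduce this reliability analysis outside of SPSS, the following is a minimal Python sketch of the two-way mixed-effects, absolute-agreement ICC and the paired Wilcoxon test described above; the table layout, column names, and toy ratings are hypothetical and are not the study's data.

```python
# Minimal sketch of the reliability and comparison tests described above.
# Assumes ratings in long format: one row per (question, rater) pair.
import pandas as pd
import pingouin as pg
from scipy.stats import wilcoxon

ratings = pd.DataFrame({
    "question": [1, 1, 1, 2, 2, 2, 3, 3, 3],  # rated items
    "rater":    ["A", "B", "C"] * 3,          # three observers
    "grade":    [5, 5, 4, 4, 4, 5, 5, 4, 4],  # 1-5 scale
})

# pingouin reports all six Shrout & Fleiss ICC forms; pick the row
# matching the two-way, absolute-agreement design used in the study.
icc = pg.intraclass_corr(data=ratings, targets="question",
                         raters="rater", ratings="grade")
print(icc[["Type", "ICC", "pval"]])

# Wilcoxon signed-rank test for a paired, non-parametric comparison of
# grades between two topics (toy vectors, one grade per question).
topic_a = [5, 5, 4, 5, 4]
topic_b = [3, 3, 2, 4, 3]
print(wilcoxon(topic_a, topic_b))
```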
A diverse range of research questions was generated by chatGPT, and a panel of three expert gastroenterologists evaluated the generated questions. All questions suggested by chatGPT on the topics of IBD, microbiome, AI, and advanced endoscopy, and their ratings by the expert gastroenterologists, are shown in Supplementary Table . In order to establish the validity of the expert ratings in this study, we first assessed the inter-rater agreement among the evaluators. To eliminate the potential confounding influence of intraclass variability, we employed a two-way mixed model with random people effects and fixed measures effects to compute the intraclass correlation coefficient (ICC) among the raters. The ICC values obtained in this analysis ranged from 0.8 to 0.98 and were statistically significant ( p < 0.001), indicating a high level of reliability in the expert ratings. This strong agreement among the raters suggests that their assessments can be considered reliable indicators of expert opinion. Agreement among the experts according to topics is shown in Table .

The results of the expert evaluation showed that chatGPT was able to generate research questions that were most relevant to the field of IBD, with the majority of questions receiving a relevance rating of 5 (the highest rating) and a mean grade of 4.9 ± 0.26. In terms of clarity, chatGPT performed very well, with most questions receiving a rating of 4 or 5 and a mean grade of 4.8 ± 0.41. For specificity, chatGPT reached a mean grade of 2.86 ± 0.64, a moderately good result. However, for originality, all grades were very low, with a mean of 1.07 ± 0.26. When assessing microbiome-related topics, results were similar to those achieved for IBD. As in IBD, grades reached almost the maximum for relevance and clarity, and the minimum for originality. Question 1 was identical for both topics. The mean ± SD for relevance, originality, clarity, and specificity were 4.93 ± 0.26, 1.13 ± 0.35, 4.93 ± 0.26, and 3.13 ± 0.64, respectively. Results for AI and advanced endoscopy show the same trend: high relevance and clarity, good specificity, but the lowest originality. The mean results for AI for relevance, clarity, specificity, and originality were 5 ± 0, 4.33 ± 0.89, 3.2 ± 0.67, and 1.87 ± 0.99, respectively. The mean results for advanced endoscopy for relevance, clarity, specificity, and originality were 4. ± 0.89, 4.47 ± 0.74, 3.2 ± 0.77, and 1.73 ± 1.03, respectively. As shown in Table , the same trend continued in the mean and median grades across all topics, with high grades for relevance and clarity, good grades for specificity, and very low grades for originality. Figure illustrates the level of inter-rater agreement and the mean grades in all categories for every topic. When the curves representing the ratings of different evaluators are closer together within the circle, it indicates a higher level of agreement among them. The further the curve is from the outer edge of the diagram, the higher the grades given by the evaluators. The monotonic nature of the curves suggests that the raters are consistent across their assessments. In general, chatGPT demonstrated excellent results in terms of clarity and relevance, satisfactory performance in terms of specificity, but inadequate performance in terms of originality. Figure presents the mean scores for all readers for each category and each research topic.
When evaluating chatGPT for generating important research questions in IBD, microbiome, AI in gastroenterology, and advanced endoscopy in gastroenterology, we found that the model has the potential to be a valuable tool for generating high-quality research questions in these topics. In all the examined topics, chatGPT was able to produce a range of relevant, clear, and specific research questions, as evaluated by a panel of expert gastroenterologists. However, none of the questions was original; in fact, chatGPT showed poor performance in terms of originality. Overall, the results of our evaluation show that chatGPT has the potential to be a valuable resource for researchers. Its ability to generate a diverse range of high-quality research questions can help to advance the field by providing researchers with ideas for investigation. However, further research and development are needed to enhance chatGPT's ability in terms of originality. The results of this work reflect the general ability of chatGPT to produce any type of text. Similar properties of clarity and relevance were part of the reward model of chatGPT's original training, in which humans rated several outputs of the model according to their preferences. Thus, the model is able to produce outputs that are also rated as clear and relevant by other human raters. The limited originality of its outputs is noted first on chatGPT's homepage , and is further emphasized in our current study. One potential area for future research is to explore the use of chatGPT in conjunction with other natural language processing techniques, such as topic modeling , to identify relevant research areas and generate more focused and specific research questions. Additionally, further studies could investigate the use of chatGPT in other subfields of gastroenterology, such as hepatology and gastrointestinal surgery, to assess its potential. Furthermore, we believe chatGPT can be relevant to many other fields of medical research.

Importantly, the originality of the research topics received very low scores. This result highlights a key disadvantage of large language models: NLP models are trained on a vast amount of text data and generate responses based on these data – . While they are able to provide accurate and informative responses to a wide range of questions, these responses are not original or unique in the sense that they are not generated from the models' own experiences or insights. Instead, they are based on preexisting information and language patterns that the NLP models have learned from the data they were trained on. As a result, the responses generated by a language model are often not regarded as original ideas or insights. In this study, we measured the intraclass correlation coefficient (ICC) between three experienced gastroenterologists in order to evaluate the consistency and reliability of the question assessments. A high ICC indicates that the observations made by different observers are highly consistent, which suggests that the results of the study are reliable. Despite the promising results of this study, there are limitations that should be considered when interpreting the findings. First, the expert panel that selected the research topics consisted of only three gastroenterologists and two AI experts, and the panel that evaluated the questions consisted of three gastroenterologists.
Though the panelists were highly experienced, the results may not be representative of the broader community of researchers in these fields. Nevertheless, the results are solidified by the high degree of inter-observer agreement, which underscores the validity of the conclusions reached. Further studies with larger and more diverse panels of experts would be needed to confirm the generalizability of these results. Second, the evaluation of chatGPT's performance was based on subjective ratings by the expert panel, which may be subject to bias and variability. Objective measures, such as the citation frequency or impact factor of current academic papers focusing on the same topics as the research questions generated by chatGPT, would provide a more robust assessment of its performance. However, research questions often involve complex issues that cannot be easily quantified, such as the relevance of a question or the originality of a question relative to the existing literature. Therefore, subjective judgment is an essential component of the evaluation of research questions and helps to ensure that the questions are relevant, clear, feasible, original, evidence-based, and valid, taking into account the complex and context-specific nature of research questions. Furthermore, the quality of a research question can also be influenced by human values, such as ethical considerations, societal impact, and personal beliefs. These values cannot be easily quantified and are best evaluated through subjective judgment. Third, this study focused on the performance of chatGPT in generating research questions in specific subfields of gastroenterology, but did not investigate its potential for generating research questions in other areas of medicine or science. Further research is needed to evaluate chatGPT's performance in a wider range of domains. Fourth, we used a single set of prompts for each of the four research topics to generate the research questions. Given that chatGPT is sensitive to tweaks in the input, more experiments with different prompts would have been valuable in order to fully evaluate the potential of chatGPT to generate diverse research questions. Additionally, we only used one instance of chatGPT, and it is possible that the results could have been different with another instance of the model or a different language model. Further research is needed to determine the generalizability of our results to other models and contexts. It is noteworthy that the text summarization capabilities of GPT-3 were recently evaluated and displayed impressive results on traditional benchmarks . Currently, as the utilization of chatGPT is rapidly increasing, a vast amount of data is accumulating regarding its various capabilities , . In conclusion, our evaluation of chatGPT as a research idea creator for four key topics in gastroenterology (inflammatory bowel disease, microbiome, AI in gastroenterology, and advanced endoscopy) showed promising results. ChatGPT was able to generate high-quality research questions in these fields, demonstrating its potential as a valuable tool for advancing the field of gastroenterology. While further research and development are needed to enhance chatGPT's performance in terms of originality, its ability to generate a diverse range of clear and specific research questions has the potential to significantly contribute to the advancement of gastroenterology.
Overall, chatGPT has the potential to be a valuable tool for researchers in the field of gastroenterology specifically and in other medical fields in general, and we believe it is worth further exploration and development.
Supplementary Information.
Spinach (

Continuous monocropping is a modern agricultural practice in many parts of the world to increase yield on limited land – . To achieve and maintain high yields and economic benefits, high-value crop production is managed intensively year-round by applying high-input chemical fertilizers and agricultural pesticides , . However, these practices cause the deterioration of soil quality year after year through soil acidification and imbalances in soil nutrients and the microbiome , which ultimately affect plant growth and cause continuous cultivation obstacles, raising concerns about the sustainability of agroecosystems , . Similarly, soil nutrient imbalance and soil contamination are serious soil threats in South Korea, for which the government has devised action plans for sustainable soil management . Several measures have been proposed to overcome continuous cultivation obstacles, including crop rotation, chemical fumigation, soil solarization, and organic amendment – . Green manuring is an eco-friendly agricultural practice that improves soil fertility and crop productivity while alleviating impediments to continuing cultivation , . The use of green manures is widely practiced as a sustainable agricultural soil management option because it improves the biological, physical, and chemical properties of soil – . Green manures scavenge nutrients from the soil, prevent nutrient leaching, and slowly release the nutrients that they have absorbed and locked in during decomposition . Incorporating green manures into the soil increases soil organic matter, which improves soil structure and fertility, allowing for better plant growth . In addition, many beneficial microbes that play a major role in soil nutrient cycling, soil health, and crop productivity have been found to be stimulated by incorporating green manures into the soil , , . The modification of soil nutrients, which ultimately alters soil microbial growth and colonization, is partly responsible for the change in soil microbial community structure following the addition of green manure – . For instance, after green manuring, a nutrient-rich environment favors copiotrophs , , , while a similar environment discourages slow-growing oligotrophic bacteria . The strong positive link between the alteration of the soil microbial community and the suppression of soil-borne pathogens suggests that green manure not only improves soil nutrition but also enriches beneficial microbes with biocontrol potential . However, depending on the type of green manure utilized, the efficacy of green manuring might vary greatly . Spinach, a cool-season vegetable that matures quickly, has been suggested as a potential rotation crop for increasing cucumber yield by enhancing beneficial fungal microbes in continuous monocropping . Nevertheless, to our knowledge, there has not been any recorded research on the usage of spinach as a green manure. Furthermore, Brassica species, when used as green manures, contain glucosinolates (GSLs), which are hydrolyzed into isothiocyanates that are toxic to soil-borne pests and weeds. Thus, GSL-containing brassicas, such as mustard cultivars, would be expected to have stronger green-manuring effects, effectively alleviating continuous cultivation obstacles .
Nevertheless, the effect of green manuring with Korean mustard cultivars (green and red mustard) and spinach on the productivity of chili pepper ( Capsicum annuum ) and the taxonomic and functional diversity of the soil bacterial and fungal communities remains unknown. Chili pepper is a highly profitable crop grown in many parts of the world, including South Korea. However, a recent study found that long-term pepper monoculture made the soil more acidic, with a significant effect on soil microbial communities . Given that spinach matures quickly, grows in autumn and spring when pepper cultivation does not overlap, and is nutrient-rich with the potential to increase soil suppressiveness and plant productivity through soil microbiota modification , we hypothesized that spinach would be a suitable alternative to other green manures for tackling soil sickness brought on by long-term monocropping. Thus, we aimed to investigate the effects of spinach and Korean mustard cultivars as green manures on soil chemical properties, weed suppression, pepper productivity, and the soil microbiome.
Effect of green manures on soil chemical properties, weed emergence and pepper performance

The impact of green manures on soil chemical properties is shown in Table . Although the soil in all treatments was initially taken from a single composite soil sample, the addition of green manures significantly ( p ≤ 0.05) increased soil pH, NH 4 + , and K, but not AP, compared to the non-amended control. Spinach also strongly increased EC, SOM, and TN content when compared to control. The highest NO 3 − and K contents were recorded in green mustard- and spinach-amended soils, which were 1.95- and 2.8-times higher than those in control, respectively. Furthermore, green mustard showed the highest C:N ratio, while spinach had the lowest. Overall, the nutritional status of the soil was improved by green manures.

The addition of green manures had a remarkable effect on weed emergence reduction and pepper productivity (Table ). Spinach and green mustard incorporation showed a significant reduction in the emergence of weed populations, particularly monocots, compared with control. In addition, similar to the results for soil nutritional status, control showed the lowest pepper fruit yield, whereas spinach had by far the highest fruit yield, an increment of over 100% relative to control and the mustard cultivars. Similarly, spinach improved pepper growth, including plant height, stem diameter, chlorophyll content, canopy diameter, and primary branch diameter. Green mustard also significantly ( p ≤ 0.05) increased pepper growth compared to control, but showed insignificant ( p > 0.05) differences in terms of fruit yield.

Changes in soil microbial diversity and composition structure after green manuring

Alpha diversity indices of bacterial and fungal communities were estimated for all green manure amendments, as indicated in Fig. a–d and Table . Most diversity indices showed that green manuring had a stronger negative effect on bacterial diversity than on fungal diversity. Red mustard significantly ( p ≤ 0.05) increased fungal diversity compared to control and the other treatments. Spinach, which contains no GSL, nevertheless remarkably reduced microbial diversity, implying that the impact of green manuring on soil microbial diversity depends not only on the GSL content but also on other nutritional aspects of the green manure used. Green manures had a remarkable impact on the taxonomic composition and structure of bacterial and fungal communities (Fig. e,f, Table ). In comparison to control, all green manures significantly ( p ≤ 0.05) enriched the Bacillota population, while Acidobacteriota and Chloroflexi abundances were greatly reduced (Fig. g, Table ). Bacteroidota abundance was also slightly elevated with green manure amendments. At the class level, Clostridia was the dominant class in all green manure-amended soils, whereas in control it was a rare group. Spinach also strongly reduced the relative abundance of Acidobacteriae (Fig. g). Ascomycota was the dominant phylum in the fungal community, accounting for more than 90% of the total across all treatments (Fig. h, Table ). With the exception of spinach, the fungal family Chaetomiaceae dominated the phylum Ascomycota. In contrast, Stachybotryaceae was the most abundant family in spinach-amended soil. Among the other phyla, Basidiomycota, particularly Rhynchogastremataceae, increased in relative abundance with spinach application (Fig. h).
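To illustrate how such alpha diversity indices are computed from a rarefied ASV count table, here is a brief Python sketch using scikit-bio; the counts and sample names are invented for demonstration and do not reproduce the study's data.

```python
# Toy ASV table: rows are samples, columns are ASVs (read counts after
# rarefying to even depth).
import numpy as np
from skbio.diversity import alpha_diversity

asv_counts = np.array([
    [120, 30,  0,  5],   # e.g. a control replicate
    [ 80, 60, 10,  0],   # e.g. a spinach replicate
    [ 50, 50, 25, 25],   # e.g. a red mustard replicate
])
samples = ["control_1", "spinach_1", "red_mustard_1"]

# Shannon and Simpson indices, two of the standard alpha diversity metrics.
print(alpha_diversity("shannon", asv_counts, ids=samples))
print(alpha_diversity("simpson", asv_counts, ids=samples))
```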
Differential abundant taxa after green manuring

Potential microbial biomarkers following green manuring were identified using four differential abundance testing tools, namely metastat, metagenomeSeq, LEfSe analysis, and the random forest model (Fig. , Supplementary file ). We identified 160 bacterial and 35 fungal taxa as the key taxa that were differentially abundant between the treatment groups. A significant number of members of p_Acidobacteriota and p_Chloroflexi, such as c_Ktedonobacteria and o_Gaiellales, were reduced in green manure-amended soil compared to control. On the other hand, members of Bacillota, including Clostridium and Bacillus , were the most highly stimulated genera in green manure-amended soils and were consistently detected by all the microbiome differential abundance methods (Fig. a,c, Supplementary file ). In addition, several members of f__Sphingomonadaceae and o__Xanthomonadales, such as Luteimonas and Sphingomonas , were considerably more abundant in spinach-treated soil when compared to control and the other mustard green manures. On the other hand, Sedimentibacter was found to be especially abundant only in mustard-amended soils. In the fungal community, most tools used in the current study indicated that members of Rhynchogastremataceae, such as Papiliotrema , were the most markedly enriched fungal genera in spinach, whereas Chaetomium and Fusarium were the most reduced by the same treatment (Fig. , Supplementary file ). This implies that the stated genera can be considered key fungal biomarkers for spinach. The relative abundances of f_Aspergillaceae and Emericellopsis were enriched in the red mustard amendment. Furthermore, most biomarker detection tools identified Chaetomium , Fusarium , and c_Leotiomycetes as the differentially abundant taxa in control (Fig. b,d).

Relationships between soil chemical properties and soil microbial communities

The impact of changes in soil chemical properties following green manuring on microbial community structure (bacteria and fungi) was determined using the Mantel test (Table ). Soil pH, K, TN, and TC were significantly ( p ≤ 0.05) correlated with both bacterial and fungal community assemblies (Fig. , Table ). Furthermore, RDA analysis showed that the soil chemical properties explained 46.9 and 79.0% of the total bacterial and fungal variation, respectively (Fig. a,b). The first two RDA components separated the bacterial and fungal communities in the treatments into three clusters. Bacterial and fungal communities of mustard cultivar-treated soils were clustered together and were separated from control and spinach-treated soils. In the case of bacterial alpha diversity, K, pH, and NH 4 + were significantly ( p ≤ 0.05) negatively correlated with almost all indices of bacterial diversity, whereas soil AP showed a significant positive correlation (Fig. c). On the other hand, soil EC was significantly ( p ≤ 0.05) negatively correlated with fungal diversity (Fig. d). Spearman correlation analysis at the phylum level also showed that Acidobacteriota, Nitrospirota, and Armatimonadota had a significant ( p ≤ 0.05) negative correlation with soil pH, NH 4 + , and K but a positive correlation with AP (Fig. e). Basidiomycota and Ascomycota showed an inverse relationship with soil chemical properties, including SOM (Fig. f). The SEM analysis also demonstrated that changes in soil chemical properties had a significant ( p < 0.05) impact on the soil microbiota (Fig. ).
Furthermore, a SEM analysis was performed to determine whether changes in the soil chemical properties affected pepper yield directly or indirectly (through a microbiota shift). The results showed that the model was fit and that soil chemical properties, particularly TN and K alteration, had a greater impact on pepper fruit yield than the soil microbiota (Fig. ).

Functional diversity after green manuring

According to FAPROTAX analysis, a total of 41 predicted bacterial functions were identified across all treatments, with chemoheterotrophy and aerobic chemoheterotrophy being the most functionally redundant predicted functions. Bacterial communities in spinach-amended soil were clustered separately based on their predicted functional profiles (Fig. a). LEfSe analysis was performed to identify bacterial functions highly associated with green manures. Among the treatments, spinach-treated soil was differentially abundant in functions related to hydrocarbon degradation, while the same treatment had the lowest predicted abundance of nitrate respiration, nitrogen respiration, predatory/exoparasitic activity, photoheterotrophy, and phototrophy (Fig. a). Furthermore, spinach had no predicted denitrification function, whereas green mustard-treated soil showed the highest rates of nitrate and nitrogen respiration. Control, however, showed the lowest abundance of functions related to anaerobic chemoheterotrophy (Fig. a,b). Further correlation analysis showed that anaerobic chemoheterotrophy, fermentation, and hydrocarbon degradation were strongly positively associated ( p ≤ 0.05) with soil pH and K. The soil C:N ratio was positively associated with the predicted cellulolysis function. Nitrite and nitrate denitrification showed a significant negative correlation ( p ≤ 0.05) with K. Soil EC increased the processes of functions related to chemoheterotrophy, but not those related to sulfur, fumarate, or manganese respiration (Fig. a). FUNGuild was used to predict changes in the ecological functions of the fungal communities following green manuring. The results showed that the most dominant functional guilds in all treatments were saprotrophs (Fig. a). The functional guilds of symbiotrophs and ectomycorrhizal fungi were positively impacted by spinach green manure; however, soil amendments with mustard cultivars did not enhance such predicted functions (Fig. b,c). These functions are beneficial to plants because symbiotrophs and ectomycorrhizal fungi form beneficial relationships with plants. Furthermore, some of the predicted functions, including symbiotrophy, were significantly ( p ≤ 0.05) associated with soil EC (Fig. b).
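As a side note on how tools such as FAPROTAX arrive at these predictions: taxa in the ASV table are matched against a curated taxon-to-function reference, and the abundances of matching taxa are aggregated per function. The Python sketch below mimics this lookup step; the tiny reference table and abundances are illustrative only and are not FAPROTAX content.

```python
# Hedged sketch of a FAPROTAX-style functional annotation step.
from collections import defaultdict

# Hypothetical taxon-to-function reference (real databases hold
# thousands of curated records).
reference = {
    "Clostridium": ["fermentation", "anaerobic_chemoheterotrophy"],
    "Bacillus":    ["aerobic_chemoheterotrophy"],
    "Nitrospira":  ["nitrification"],
}

# Hypothetical genus-level relative abundances for one sample.
abundances = {"Clostridium": 0.25, "Bacillus": 0.10, "Nitrospira": 0.02}

# Sum the abundance of every taxon assigned to each function.
function_totals = defaultdict(float)
for taxon, rel_abund in abundances.items():
    for function in reference.get(taxon, []):
        function_totals[function] += rel_abund

for function, total in sorted(function_totals.items()):
    print(f"{function}: {total:.2f}")
```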
Soil acidification, soil nutrient depletion, weeds, and soil-borne diseases are major problems in the continuous monocropping of intensive cultivation production systems , . Thus, alternative solutions are required to tackle these problems . Green manuring is an eco-friendly traditional agricultural practice that improves soil fertility and crop productivity while alleviating impediments to continuing cultivation , , . Our study showed that incorporating spinach green manure improved soil nutritional conditions (e.g., pH, SOM, TN, NH 4 + , and K). Previous reports noted that increased ammonification may contribute to the pH increase in green manure-amended soils . Increased mineralization of organic matter, as seen in spinach with the lowest C:N ratio, enhances the release of hydroxyl groups that consume H + , increasing soil pH . The low soil pH condition might cause high soil AP because of the higher solubility of AP under acidic soil conditions . Thus, addressing the problem of soil acidification would help improve the nutrient balance in soil-degraded monocropping agroecosystems . In addition, increased nutrient availability was observed in the green manure-amended soil. Previous studies have revealed that soil nitrogen and exchangeable potassium following green manuring were the determinant factors for yield enhancement, which is consistent with our work . Weeds are a major crop production constraint that increases production costs, necessitating efficient and sustainable weed control. The most effective green manure for suppressing weed populations in our study was spinach, followed by mustard. This is consistent with previous reports on weed suppression by various green manures . Although soil amendment with brassicas is expected to suppress weeds , it was interesting to find that spinach, a non-brassica with no GSL (Table ), had the strongest weed suppression effect. Notably, high fermentation and hydrocarbon degradation (as seen in the FAPROTAX predicted functions, Fig. b), driven by the abundant Clostridium population, may increase the conversion of carbohydrates to organic acids, which may aid in the suppression of weed growth . Nevertheless, further research is needed to determine why spinach is effective in weed control. Furthermore, spinach had by far the highest fruit yield among the green manures, with a yield increase of more than 100% over control. Including spinach in crop rotation considerably increased cucumber yield by increasing beneficial microbes, according to a recent study . The government of South Korea has developed action plans to address challenges with monocropping through sustainable soil management programs . Our study thus raises the prospect of encouraging the use of spinach as a green manure preplant soil treatment in pepper-growing regions. Changes in soil mineral composition following agricultural practices are known to cause a shift in soil microbial community structure and functional diversity , , . This can lead to improved crop yield owing to enhanced soil suppressiveness, nutrient cycling, and nutrient availability , . The shift in soil bacterial and fungal community structure after green manure amendment , has been previously reported, partly because the incorporated substrate modifies soil nutrients available for microbial growth and colonization . Our results support previous studies showing that soil pH, K, and TN are major influencing factors in both bacterial and fungal community assemblies .
Soil pH, which is associated with cation release during decomposition , is an important factor that markedly influences bacterial and fungal community assembly . Furthermore, the reduced microbial diversity following green manuring can be linked to intra- and inter-kingdom competition caused by changes in soil chemical properties , , . In our study, where green manures altered soil nutritional status, bacterial diversity was significantly negatively associated with NH 4 + . Previous research has also found that soil bacterial diversity is highly negatively associated with N application . Green manuring with the low-GSL red mustard cultivar resulted in higher microbial diversity than the high-GSL green mustard cultivar. Although GSL is known to have a negative effect on microbial diversity, more research is required to confirm this . Our results support previous reports that many members of Bacillota, including Clostridium , Bacillus , and Sedimentibacter , were enriched in response to green manure soil amendments. Bacillota are copiotrophs, and substrate amendment provides a nutrient-rich environment for their growth , , . Members of the genus Clostridium are diazotrophs capable of nitrogen fixation and of producing toxic organic acids that could suppress soil-borne pathogens and weeds , , . On the other hand, Chloroflexi and Acidobacteriota were reduced with green manure addition. These findings comply with previous studies showing that such oligotrophic bacteria are adapted to low soil nutrient availability and low pH conditions , , . Chloroflexi are often strongly associated with low crop productivity , , and some studies label them as disease-inducible . In addition, the majority of Chloroflexi members do not fix nitrogen; instead, they compete with other beneficial microbes and the host plant itself for nitrogen resources , . Basidiomycota and Ascomycota had an inverse relationship with soil chemical properties, as reported in a previous study . The positive correlation between Basidiomycota and soil organic matter supports the previous report that Basidiomycota are the primary decomposers of soil debris . Fusarium was differentially more abundant in control as opposed to the spinach-amended soil. Fusarium species are serious soil-borne pathogens that affect a variety of crops, including peppers, and are well adapted to pepper monoculture , . The reduction in Fusarium abundance with organic amendment complies with previous findings , suggesting the potential of spinach as a green manure for the suppression of Fusarium -incited diseases. Furthermore, the enrichment of beneficial soil fungi, such as Papiliotrema , , with biocontrol potential following spinach amendment shows that spinach green manure not only improves soil nutrition but also promotes resident soil microbes with biocontrol potential to flourish. In summary, spinach improved soil nutrition (e.g., pH, SOM, TN, NH 4 + , and K), pepper growth, and pepper fruit yield, and suppressed the weed population. Green mustard also increased soil nutrition and suppressed weed growth but had no significant effect on pepper yield. The major influencing factors in both bacterial and fungal community assemblies were soil pH, TC, TN, and K. All green manures strongly stimulated members of Bacillota, including Clostridium and Bacillus . Spinach also strongly reduced the abundance of members of Acidobacteriota and Chloroflexi while enriching fungal members of Rhynchogastremataceae, such as Papiliotrema .
Overall, spinach outperformed the other treatments in terms of weed control and yield improvement, whereas red mustard had the strongest positive effect on soil fungal diversity. This study contributes significantly to our understanding of how altering the soil microbiome and soil fertility via green manure application as a pre-plant soil treatment might help alleviate continuous cropping obstacles.
Materials, study design and sampling

Seeds of mustard cultivars (green and red mustard) were acquired from the National Institute of Crop Science (NICS), Rural Development Administration, South Korea. Spinach seeds were obtained from Jeilseed Company in Doan-myeon, Chungcheongbuk-do, South Korea. Seeds of these Brassica cultivars and spinach were planted in a polyhouse at Kyungpook National University, South Korea, and plant biomass was collected two months after planting. The soil for this pot experiment was collected in January 2021 from a long-term pepper-monocropped field in Gunwi-gun, Gyeongsangbuk-do province, South Korea (36°10′09′′N, 128°38′24′′E), whose productivity had declined substantially (Fig. ). The soil was sieved through an 8-mm sieve and completely homogenized. The initial soil chemical properties are indicated in Table . The fresh harvested biomass of the green manures, which contained a variable range of total GSL concentrations (Table ), was mixed homogeneously and separately with the soil at 0.5% (w/w) on a dry weight basis. Soil with no green manure amendment served as the non-amended control. The soil from each treatment group was placed in plastic containers with three replicates. Each treatment, including control, was watered (sterile distilled water) to 70% field capacity and covered for 30 days (start date: January 6, 2021; end date: February 5, 2021) with a transparent polythene film in the polyhouse. The polythene film was then removed and the soil was air-drained for 60 days (start date: February 5, 2021; end date: April 9, 2021). Pots (15 cm diameter, 31 cm height, aerated with holes at the bottom) were filled with 2 kg of green manure-amended soil (corresponding to 10 g of green manure pot −1 ). One one-month-old pepper seedling (cultivar Dongmudae) was transplanted into each pot (April 9, 2021). Four different treatments were used in the current study: control, spinach, red mustard, and green mustard. All treatments were replicated three times and laid out in a completely randomized experimental design, with each replicate containing five pots (15 pots per treatment). Pepper plants were grown in a polyhouse for three months (start date: April 2021; end date: July 2021) and watered twice a week. Soil samples for chemical property analysis and DNA extraction were collected after soil treatment, immediately before transplantation. Soil samples were collected at three points within each pot and pooled to yield three pooled samples (replicates). The soil samples were kept at − 80 °C until DNA extraction.

Soil chemical analysis

The soil chemical properties were analyzed from dried soil samples. Using a pH and EC meter (SP2000, Skalar BV, Netherlands), the electrical conductivity (EC) and pH of the soil were determined in a 1:5 (w/v) soil:deionized distilled water suspension. A Titrando automatic titrator (Metrohm 888, Switzerland) was used to analyze soil organic matter (SOM). The BaCl 2 -H 2 SO 4 exchange method was used to determine the soil cation exchange capacity (CEC). The ammonium-nitrogen (NH 4 + ) and nitrate-nitrogen (NO 3 − ) concentrations in the soil were measured colorimetrically by the salicylate method and the cadmium reduction method , respectively, using a BLTEC QuAAtro (BLTEC KK, Japan). The soil total nitrogen (TN) concentration was determined by the method described by Dumas with an S832DR (Leco, USA). Soil exchangeable potassium (K) concentration was analyzed using a PerkinElmer® Optima 8300 ICP-OES (PerkinElmer, Inc., MA, USA).
The soil available P 2 O 5 (AP) concentration was analyzed using a SKALAR San + + system autoanalyzer (Skalar Analytical B.V., Breda, Netherlands).

Weed emergence and pepper performance

The reduction in the emergence counts of monocot and dicot weeds following green manure application was determined before pepper transplanting. Pepper growth parameters, such as plant height, stem diameter, primary branch length and diameter, and chlorophyll content, were measured at the end of the experiment (July 2021), three months after transplanting. Chlorophyll content (SPAD units) was measured using a chlorophyll meter (Konica Minolta, Japan). Fully mature, shiny green fruits over 5 cm in length were collected three times.

DNA extraction, library preparation and sequencing

The DNeasy® PowerSoil® Pro Kit (Qiagen, Hilden, Germany) was used to extract microbial DNA from soil samples (0.5 g) according to the manufacturer's protocol. The extracted DNA quantity and purity were measured using a Qubit® 2.0 Fluorometer (Thermo Fisher Scientific, Waltham, MA, USA) and a NanoDrop™ One C spectrophotometer (Thermo Fisher Scientific). The extracted DNA was stored at − 80 °C until it was used for Illumina MiSeq sequencing. The fungal internal transcribed spacer 2 (ITS2) region and the bacterial V4-V5 hypervariable region of the 16S rRNA gene were PCR-amplified with the primer pairs ITS86F/ITS4R , and 515F/907R , using an Eppendorf Mastercycler® Nexus PCR Cycler (Eppendorf, Hamburg, Germany). The 50 µl PCR reaction mixture included 25 µl EmeraldAmp® PCR Master Mix (Takara, Shiga, Japan), 1 µl DNA template, 1 µl (0.5 µM/µl) of each primer, and 22 µl of double-distilled water. The PCR reaction conditions and primer sequences are shown in Table . The Nextera® XT Index Kit (Illumina, San Diego, CA, USA) was used to ligate the Illumina sequence adapters to the PCR products according to the manufacturer's protocol. The final PCR products were purified using AMPure XP beads (Beckman Coulter Life Sciences, CA, USA) and kept at − 20 °C until use. The size variation in the amplicon products was considered while pooling samples of both 16S rRNA and ITS2 indexed amplicons at equimolar concentration. The libraries were checked for concentration and size using an Agilent Bioanalyzer (Santa Clara, CA, USA), and the pooled library, with a final loading concentration of 20 pM, was sequenced on the Illumina MiSeq platform (Illumina) at Kyungpook National University's NGS Core Facility Center in South Korea.

Bioinformatics analysis

Bacterial and fungal raw sequences were demultiplexed using the QIIME2 pipeline ( https://qiime2.org ), the reads were denoised in QIIME2 using DADA2, and chimeric sequences and singletons were removed . Reads were truncated, and those with quality scores of ≥ 25 were retained. Non-chimeric representative sequences that made up amplicon sequence variants (ASVs) were aligned using MAFFT, and taxonomy was assigned using a classify-sklearn-based QIIME feature classifier trained on the reference SILVA 99% full-length database (version 138.1) and the UNITE database (version 8.3) for bacteria and fungi, respectively. ASVs assigned as mitochondria, chloroplasts, and unclassified taxa at the kingdom level were excluded. The sample reads were rarefied to equal size to enable a similarity comparison between treatments. The normalized data set contained 1786 and 202 ASVs of bacteria and fungi, respectively.
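The even-depth rarefaction mentioned above was performed within the QIIME2 pipeline; conceptually it amounts to subsampling each sample's reads without replacement down to a common depth. A stand-alone Python sketch with toy counts is shown below.

```python
# Illustrative rarefaction of one sample's ASV counts to a fixed depth.
import numpy as np

rng = np.random.default_rng(42)

def rarefy(counts, depth):
    """Subsample an ASV count vector to `depth` reads without replacement."""
    pool = np.repeat(np.arange(counts.size), counts)    # one entry per read
    keep = rng.choice(pool, size=depth, replace=False)  # draw `depth` reads
    return np.bincount(keep, minlength=counts.size)     # re-tally per ASV

sample = np.array([500, 300, 150, 50])  # toy counts for four ASVs
print(rarefy(sample, depth=400))
```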
FAPROTAX (Functional Annotation of Prokaryotic Taxa) was used to predict the ecological functions of the bacterial communities . FUNGuild (fungal functional guild) was used to predict the functional changes in the fungal communities following green manure treatment.

Statistical analysis

All downstream statistical analyses were conducted using the R statistical software (v4.1.3) . Data visualization was performed using the R packages ggplot2 and ComplexHeatmap (v2.1.0) . Homogeneity of variance and multivariate homogeneity of dispersion were checked using Levene’s test and PERMDISP , respectively. The data normality assumption was tested using the Shapiro–Wilk test. ANOVA with Duncan’s multiple range test (dplyr package) was used to compare differences between treatments in soil chemical properties, plant phenotype and alpha diversity indices (at the ASV level). The overall difference in microbial community composition between treatments was determined using permutational multivariate analysis of variance (PERMANOVA) (Adonis; vegan, version 2.5.7) . The association between soil chemical properties and the abundance of soil microbial communities was assessed using distance-based redundancy analysis (dbRDA) in R. LEfSe , metastat , metagenomeSeq , and random forest in R were used to identify potential microbial biomarkers that were differentially abundant between the control and green manure-amended treatments. Using the vegan and sem packages in R (v4.1.3) , structural equation modeling (SEM) was carried out to understand how changes in soil chemical properties and the microbial community following green manure addition affect pepper yield. Additionally, the first PCoA axis values served as a representation of the bacterial and fungal community structures in the SEM analysis . Microbial diversity represented the bacterial and fungal diversities. A low chi-square (χ²)/degrees-of-freedom ratio (< 2), a non-significant χ² test ( p > 0.05), a low root mean square error of approximation (RMSEA < 0.05), a high comparative fit index (CFI > 0.9) and a low standardized root mean square residual (SRMR < 0.05) were used to determine the model’s fit.
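A minimal sketch of the univariate and multivariate tests described above is shown below. The synthetic data and the use of the agricolae package for Duncan’s multiple range test are assumptions for illustration (the text itself credits the dplyr package); the analysis code actually used in the study may differ.

```r
library(vegan)
library(agricolae)
set.seed(7)

# Hypothetical data: 12 samples (4 treatments x 3 replicates)
trt <- factor(rep(c("control", "spinach", "red_mustard", "green_mustard"),
                  each = 3))
dat <- data.frame(treatment = trt,
                  pH = rnorm(12, mean = 6.3, sd = 0.2))   # e.g., soil pH
asv <- matrix(rpois(12 * 50, lambda = 20), nrow = 12)     # toy ASV table

# Normality check, ANOVA, and Duncan's multiple range test
shapiro.test(dat$pH)
fit <- aov(pH ~ treatment, data = dat)
duncan.test(fit, "treatment", console = TRUE)   # letter groupings by treatment

# PERMANOVA (Adonis) on Bray-Curtis dissimilarities
adonis2(asv ~ treatment, data = dat, permutations = 999, method = "bray")

# Multivariate homogeneity of dispersion (PERMDISP)
permutest(betadisper(vegdist(asv, method = "bray"), dat$treatment))
```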
A sequential, multiple assignment randomized trial comparing web-based education to mobile video interpreter access for improving provider interpreter use in primary care clinics: the mVOCAL hybrid type 3 study protocol

Effective communication is essential for safe and equitable care. Twenty-five million people in the United States of America (USA) report speaking English less than “very well” and, as a result, have limited access to safe and high-quality medical care . Language barriers in healthcare are associated with lower patient comprehension, adherence, and satisfaction [ – ]; higher costs, longer hospital stays, and increased odds of readmission [ – ]; less treatment for pain ; increased risk of serious adverse events [ – ]; and increased mortality . Given the importance of effective communication to high-quality medical care, improving communication with patients who use a language other than English for medical care has been named a national priority .

The research-to-practice gap: underuse of interpretation is a persistent problem

Interpretation provided by trained medical interpreters, whether in person or via telephone or video, has repeatedly been shown to mitigate disparities in care for patients with language barriers . However, despite clear evidence of benefit, wide availability, and federal, state, and regulatory mandates requiring professional interpreter use for patients who use a language other than English , underuse remains pervasive [ – ]. Nearly half of US pediatricians report using no professional interpreters with families who use a language other than English . Interpreter use in acute care settings is similarly low, with 17–45% of patients receiving any [ – ]. Providers often use English or untrained ad hoc interpreters (family or friends), a practice associated with clinically important errors up to 77% of the time [ , , – ]. Barriers to interpreter use exist at multiple levels, with evidence that providers weigh barriers against anticipated benefit for every communication . Commonly identified barriers map onto the Theoretical Domains Framework (TDF) , which integrates behavior change theories for application in health services and implementation research. These include provider-level barriers such as conceptual and technical knowledge (uncertainty about need for or how to access interpreters), beliefs about capabilities (lack of confidence in interpreter use, belief their own non-English language skills are adequate), beliefs about consequences (uncertainty of benefit, anticipated frustration), and environmental context (time pressure); team-level barriers including social influences (a culture of “getting by” without an interpreter); and system-level barriers including environmental context and resources (difficulty identifying patients with language barriers and lengthy or difficult processes to access interpreters) [ , – ]. While in-person interpreters are preferred by providers [ , – ], remote methods have benefits, such as being widely accessible, immediately available, and the only option for uncommon languages. Among remote methods, video costs more than telephone but is often preferred by providers [ – ].
Previously studied strategies lack attribution, scalability, and data on costs and mechanisms

Strategies to improve interpreter use generally fall into three categories: provider education (focused on provider-level barriers), systems improvements (focused on system-level barriers and the provider-system interaction), and multifaceted, multilevel interventions. Provider education is most common, typically delivered via in-person workshops [ – ]. Though such trainings typically improve knowledge and confidence, it is unknown whether such improvements lead to improved interpreter use [ , , ]. Systems interventions aim to make access easier or offer access to preferred interpreter types, such as installing dedicated bedside interpreter phones with 1-touch dialing or enabling access to shared video interpreter units . Systems interventions have generally yielded only modest improvement [ – ], likely because important barriers have remained: improving access to telephone interpretation did not address provider dislike for it, and current models for video interpreter use involve shared devices (e.g., clinic laptop), which introduce barriers around finding and using it. Multifaceted, multilevel interventions, combining education, systems interventions, and facilitation, have been most successful [ , – ], yet such approaches are time and resource intensive and lack data on which aspects were most effective . No studies have yet considered mechanisms of action, few have measured cost, and many interventions are not scalable.

Preliminary studies

Our previous work showed the effectiveness of video interpretation for improving communication with families with a language barrier . In a randomized clinical trial enrolling 249 Spanish-speaking families in an emergency department, we found that assignment to video interpretation, compared to telephone, was associated with significantly higher interpreter use , parent understanding , and provider satisfaction . However, half of video-recorded interactions still did not use a professional interpreter, and 43% of providers reported trouble accessing an interpreter. These findings support video interpretation as an effective evidence-based practice for communicating across language barriers, but without an optimized platform or strategy for engaging with it. We therefore explored the feasibility and acceptability of mobile video interpreting on personal devices, as a novel strategy to deliver the evidence-based practice of video interpretation. Mobile video interpreting overcomes barriers associated with conventional access via shared devices [ , , ]. To determine its feasibility and acceptability in primary care, we conducted 6 simulated patient sessions with mobile video interpreting and then interviewed the provider. Providers were universally positive about it, with scores on the acceptability of intervention measure and feasibility of intervention measure of 4.7 and 4.9 out of 5 . We also surveyed a panel of 67 primary care providers (PCPs) in our region to assess mobile video interpreting acceptability in practice. Most (71%) said they would be “very likely” or “somewhat likely” to use mobile video interpreting if offered. These results support mobile video interpreting as an acceptable and potentially feasible strategy for accessing the evidence-based practice of professional video interpreter use.
Study aims

To address current knowledge gaps, we will test two implementation strategies for improving interpreter use in primary care and examine implementation and effectiveness outcomes, cost-effectiveness, and mechanisms of action. Providers will be enrolled and randomized to one of two strategies, alone or in sequence, using a Sequential Multiple Assignment Randomized Trial (SMART) design [ – ]. One strategy, web-based educational modules, targets known deficits in provider knowledge, confidence, and motivation around interpreter use. The second strategy, mobile video interpreting, provides quick access to video interpretation. Not only are providers more likely to use video interpretation than telephone, but mobile video interpreting also overcomes system-level barriers to shared-device use, as providers will access professional video interpreters on their own smartphone or tablet. Data will be collected from enrolled providers and their patients/families who use a language other than English, via administrative data, surveys, qualitative interviews, and video-recorded clinic visits. Our specific aims are as follows:

Aim 1: To compare the effectiveness of two implementation strategies, alone and in combination, to improve use of interpretation and comprehension for patients/parents who use a language other than English, seen in adult and pediatric primary care settings

Aim 2: To explore mobile video interpreting and education implementation strategies’ ability to activate putative provider-level mechanisms

Aim 3: To determine the incremental cost-effectiveness from a healthcare organization perspective of each implementation strategy (mobile video interpreting, education, both)
Conceptual model

The TDF, mapped to the Behavior Change Wheel’s COM-B (capability, opportunity, motivation—behavior) system, informed our conceptual model (Fig. ) . The TDF is an integrative theoretical framework that has been used across healthcare settings to inform implementation strategies, especially those requiring behavior change . It underwent rigorous refinement using discriminant content validation and fuzzy cluster analysis and was then mapped to the COM-B system to provide theory-based relationships between the barriers laid out in the TDF. In our conceptual model, we identify relevant TDF domains for each of the COM-B’s major categories as contributing to the target behavior, interpreter use. These COM-B categories are capability, divided into psychological, which includes knowledge and decision-making, and physical, which includes skills for interpreter access; motivation, divided into reflective, including provider beliefs about their abilities and the consequences of their decisions, and automatic, which includes professional identity and positive reinforcement; and opportunity, divided into social, which includes clinic interpreter use norms, and physical, which includes environmental context and resources, such as current interpreter access (see Table for detailed list). We expect both of our study strategies to influence provider capability and motivation to use interpreters, but we expect education to influence provider capability more markedly and mobile video interpreting to have a strong and unique influence on opportunity.

Study design and randomization

This type 3 hybrid implementation-effectiveness study will test two discrete implementation strategies for improving professional interpreter use (primary implementation outcome) and patient comprehension (secondary effectiveness outcome) in primary care. The implementation strategies—interactive web-based educational modules and access to mobile video interpreting—target different sets of barriers to professional interpreter use, an evidence-based practice [ , , , , ]. Our results will therefore provide insights into how best to promote implementation of a well-studied, well-established practice known to improve outcomes but inconsistently used. As the potency of barriers may vary by provider and clinic, we will test the strategies alone and in combination, using a SMART design, with provider-level randomization. A total of 55 providers from 3 to 5 primary care clinical organizations will be randomized 1:1 to either education or mobile video interpreting access, stratified by baseline interpreter use and clinic (phase 1; Fig. ). Randomization will occur within REDCap, using a sequence generated by the study biostatistician and implemented by a research coordinator. After 9 months, providers with interpreter use in the top tertile (within strategy) will remain with the original strategy; those in the bottom two tertiles will be randomized 1:1 again, to continue the original strategy or to add the second strategy to the first (phase 2). After another 9 months of data collection, we will provide free access to both mobile video interpreting and educational modules to all enrolled providers and then track voluntary uptake by those not previously exposed for another 9 months (phase 3).
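As an illustration of this allocation scheme, the sketch below generates a stratified, permuted-block 1:1 sequence in base R. The clinic labels, baseline-use levels, and block size are hypothetical; the study’s actual sequence is generated by the biostatistician and executed in REDCap.

```r
set.seed(20211029)   # fix the seed so the sequence is reproducible

# Hypothetical roster of 55 enrolled providers with stratification factors
providers <- data.frame(
  id     = sprintf("P%02d", 1:55),
  clinic = sample(paste0("clinic", 1:4), 55, replace = TRUE),
  use    = sample(c("high", "low"), 55, replace = TRUE)  # baseline interpreter use
)

# Permuted blocks of 2 within each clinic-by-baseline-use stratum
assign_stratum <- function(n) {
  blocks <- replicate(ceiling(n / 2), sample(c("education", "mVI")),
                      simplify = FALSE)
  unlist(blocks)[seq_len(n)]     # truncate the last block if the stratum is odd
}

strata <- split(seq_len(nrow(providers)),
                list(providers$clinic, providers$use), drop = TRUE)
providers$arm <- NA_character_
for (idx in strata) providers$arm[idx] <- assign_stratum(length(idx))

table(providers$clinic, providers$arm)   # balance check across strata
```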
Data collection will include administrative data to track interpreter use (primary outcome); patient surveys and qualitative interviews to determine diagnosis comprehension (secondary outcome) and communication quality; provider surveys and qualitative interviews to assess contextual and intrapersonal barriers and moderators; and visit video recording to capture additional barriers and determine fidelity of strategy implementation. We will assess each strategy’s effectiveness, alone and in combination, for improving professional interpreter use and patient comprehension. We will explore mechanisms by which these strategies work and evaluate the relative strategy-specific costs.

Implementation strategies

Our selected implementation strategies target primarily intrapersonal barriers to interpreter use, although mobile video interpreting does so by altering the environment and resources (i.e., opportunities) available to that provider . Strategy assignment will thus happen by individual provider. However, knowing the importance of team, clinic, and patient-level factors for influencing provider behavior, we will also capture data at these levels. Detailed strategy specification, following Proctor’s recommendations , is presented in Table .

Web-based educational modules

The education implementation strategy will consist of six 10- to 15-min web-based modules, a tip sheet with clinic-specific interpreter access and use information, and four 5-min booster modules, all delivered online, along with quarterly reports on interpreter use to the enrolled provider. Education aims to improve provider motivation and capability related to interpreter use, by increasing conceptual and technical knowledge, enhancing interpreter access skills, shifting beliefs about their own capabilities and the consequences of use or nonuse, and increasing the intention to use an interpreter. The educational module content is based on Seattle Children’s Hospital’s rigorously developed in-person workshop series, CONNECTing Through Interpreters [ – ]. In partnership with the interactive Medical Training Resources (iMTR) group at the University of Washington (UW; depts.washington.edu/imtr/) and content experts including experienced interpreters and providers, we transformed the workshops into interactive web-based modules. Modules were pilot tested with 15 PCPs and revised based on feedback. Module-assigned providers will view them at a time and place they choose. We will track when participants access and complete modules as a marker of engagement. The online modules cover 5 topics: (1) importance and fundamentals of good communication (delivered in 2 modules), (2) importance of professional interpreter use and disparities for populations with language barriers, (3) how to use an interpreter effectively, (4) what to do when the interpreted encounter is not going well, and (5) remote interpreter use and systems challenges. Each module is 10–15 min long with audio, visual, and video content, developed using best practices from adult learning theory. Providers will be prompted to view a new module each week until all have been viewed. During months 3–6 post-randomization, 4 brief (5 min) booster modules will be released, reviewing crucial points from the initial modules. Boosters have been found to support behavior change in other settings . Weekly reminders will be sent until all assigned modules are complete.
Providers who complete all modules will be eligible for points for continuing medical education (CME) and/or Maintenance of Certification (MOC); these points must be earned to maintain medical licensure and board certification and thus provide incentive for completion. The clinic-specific interpreter access and use information will be distributed via email. This sheet will include instructions for accessing interpreters in their clinic via the normal process, including the vendor phone number, tips for using the clinic telephones (e.g., how to adjust the speakerphone volume), ideas for streamlining the process, where shared equipment is stored, and how to report problems. Feedback to enrolled providers will be provided quarterly with both strategies, as a report of the percent of visits with patients who use a language other than English for which the provider used professional interpretation.

Mobile video interpreting access

The mobile video interpreting access strategy will provide access to mobile video interpreting, technical support, a tip sheet for mobile video interpreting use, and an extra charger, shock-resistant case, disposable antimicrobial sleeves, and a positioning stand to support clinical use of the provider’s own device, along with quarterly reports on the enrolled provider’s interpreter use. Mobile video interpreting-assigned providers can use a study-issued smartphone instead of their own. The mobile video interpreting strategy aims to improve provider motivation, capability, and opportunity related to interpreter use, by decreasing cognitive overload, enhancing interpreter access skills, shifting provider beliefs about capabilities and the consequences of interpreter use, reinforcing use via satisfaction, and altering the environmental context and resources to make access easier and use more rewarding (Table ). Access to mobile video interpreting is achieved by downloading the application (app) online and then entering an access code linked to a billing account; after being entered, the code is no longer visible. Access can thus be controlled by study staff. Study staff will download and orient providers to the app, demonstrate use, and answer questions. Technical support will be offered on demand. A tip sheet will be emailed that includes mobile video interpreting instructions and best practices. Several interpretation vendors have similar apps that can be downloaded onto personal devices but are rarely used in this way. These apps are HIPAA compliant, use end-to-end encryption, and are accessed with one touch (i.e., no additional log in or passwords); no data is downloaded to the device. Feedback to enrolled providers will be provided quarterly with both strategies, as a report of the percent of visits with patients who use a language other than English for which the provider used professional interpretation.

Study populations and setting

Providers

We will enroll 55 PCPs from 3 to 5 primary care organizations in Washington state. These organizations will include both academically affiliated and nonacademic sites and vary in terms of leadership and governance structures. Clinics will enroll based on provider interest, but each provider will choose whether to enroll. Eligible providers will practice at the enrolled clinic at least 40% time and see at least 7 patients requiring interpretation per month, on average.
If the provider is proficient in a non-English language, they will see at least 7 patients per month who use a different language (in which they are not proficient). We will enroll and initially randomize 55 providers, to retain 47 through the second interview and 40 through the third (73% retention; see next section for sample size considerations).

Patients

We will enroll 3 populations of adult patients or parents of pediatric patients (henceforth “patients”) who use a language other than English, all being seen by enrolled providers. For our administrative population, we will include administrative data from all patients who were recorded as using a language other than English in the medical record and were seen by enrolled providers, for the interpreter use outcome. For our survey population, we will enroll patients who prefer medical care in the four most common non-English languages across clinics, who are in clinic for an acute concern (e.g., sore throat, new ankle pain). These individuals will be invited to complete a survey ( n = 648), and a subset will be invited to complete a 20–30 min qualitative interview ( n = 75). We will also recruit patients for our video-recording population ( n = 100). Patients with any visit type who use a language other than English and who consent will be eligible for video recording.

Data collection, study measures, and sample size

Outcome measures include our primary implementation outcome of interpreter use and our secondary effectiveness outcome of patient/parent comprehension. Additional measures related to organizational context, provider-reported barriers and facilitators of interpreter use, and intervention fidelity are laid out in Table .

Interpreter use

Interpreter vendor invoices will be collected from companies that clinics currently contract with; mobile video interpreting invoices will be managed by the study team. All professional interpreter invoices (not just mobile video interpreting) will be matched to clinic visits for patients who use a language other than English (all languages) for enrolled providers. We will calculate baseline interpreter use for enrolled providers for the six months pre-randomization and then randomize 1:1 to education or mobile video interpreting, stratified by baseline use and clinic. We will calculate interpreter use, both overall and strategy consistent, continuously throughout phases 1–3; other data collection will end after phase 2. For analysis, interpreter use will be defined as a dichotomous variable at the level of the clinic visit. Visits with patients who use a language other than English with any billed professional interpreter use will be coded as “yes,” and the remainder will be coded as “no.” Sample size calculations consider aim 1 group comparisons (mobile video interpreting, education, combination) at the end of phase 2. We assume loss of up to 9 providers (e.g., to job change; 16%) over the study; we expect attrition (up to 27%) in provider interviews and surveys, but that will not impact aim 1 power. With 5796 total encounters with patients who use a language other than English (7 visits/provider/month), we expect 1932 non-English visits per group, which will provide > 80% power to detect a 5% difference in proportion of professionally interpreted visits by groups . This will be readily feasible with administrative data.
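These targets, and the comprehension calculation reported in the next section, can be roughly approximated with base R’s power.prop.test, ignoring clustering within providers; the 50% baseline proportions assumed here are illustrative only, and the protocol’s own calculations may differ.

```r
# Aim 1 primary outcome: power to detect a 5% absolute difference in the
# proportion of professionally interpreted visits with 1932 visits/group
power.prop.test(n = 1932, p1 = 0.50, p2 = 0.55, sig.level = 0.05)
# -> power comfortably above 0.80 under these assumed baselines

# Secondary outcome: sample size per group for a 14% absolute difference
# in diagnosis comprehension at 80% power
power.prop.test(p1 = 0.50, p2 = 0.64, sig.level = 0.05, power = 0.80)
# -> roughly 200 patients per group, consistent with 216 surveys per group
```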
Patient/parent comprehension

Patient comprehension will be determined by asking surveyed patients ( n = 648) to report the diagnosis they received during their visit with an enrolled provider. The patient-reported diagnosis will then be compared to the provider-documented diagnosis, which trained abstractors will have abstracted from the EMR. Two coders blinded to study assignment will compare the documented diagnosis to the patient-reported diagnosis to determine comprehension, coded as yes, concordant; no, not concordant; or unclear, based on the standard of whether a different follow-up provider would likely know the diagnosis based on the information provided by the patient. For analysis, comprehension will be coded as yes or no/unclear. We have successfully used these procedures previously . In addition to measuring comprehension, the survey will use validated measures to collect demographics and satisfaction with communication and interpretation. The tablet-based survey will have an audio feature to allow patients to read or hear the questions in 4 non-English languages. The survey will be completed in the clinic whenever possible; otherwise, the patient will complete it within 7 days, independently online or over the telephone with a bilingual research coordinator or professional interpreter. Based on aim 1 analyses, with 216 completed patient surveys per group (648 total), we will have ≥ 80% power to detect a 14% difference in diagnosis comprehension by group . This will also be feasible, achieved by surveying 7–12 patients per clinic per month for 18 months.

Provider attributes and organizational context

These data will be collected via 2 surveys and 3 interviews over the course of the study. Providers will complete a web-based survey at baseline, before initial randomization, to assess demographics and barriers to interpreter use via the TDF Questionnaire, Organizational Readiness for Implementing Change (ORIC) questionnaire , and the Implementation Leadership Scale (ILS) . We will repeat the survey at the end of phase 2, to capture changes over time and provider time and costs associated with the implementation strategies. Enrolled providers will also complete qualitative interviews (1) before initial randomization, (2) during phase 1, and (3) during phase 2. Interviews will explore contextual and personal factors that serve as barriers, moderators, mechanisms, and proximal outcomes of interpreter use (see Figs. & for preliminary causal pathway diagrams). We will use qualitative interviews given the lack of survey measures for many factors, and concern for social desirability bias, as providers may not endorse interpreter nonuse on surveys but may be more likely to in the context of a conversation. Provider qualitative and quantitative data will be analyzed together (see “ ”).

Patient communication experiences

A subset of patients completing the survey will be invited to complete a 30-min qualitative interview . Survey respondents who endorse having a concern about how their provider communicated with them will be invited to interview , as will a random sample of others (total n = 75). Our goal is to understand how communication occurred during the visit, how effective the patient found that communication to be and why, and the details of any concerns the patient had.
The interview will be completed in the clinic prior to departure whenever possible; otherwise, the patient will have 7 days to complete it, over the telephone with a bilingual research coordinator or via professional interpreter, in one of our 4 eligible non-English languages. We estimated initial qualitative sample size based on the heterogeneity of our target group, the number of research sites, and the complexity of the areas of inquiry. The initial sample estimates will be adjusted as needed to achieve data sufficiency .

Video recording

Video-recorded visits with patients who use a language other than English ( n = 100) will provide granular, objective data regarding interpreter use, technical difficulties, communication delays, and provider use of best-practice techniques for communicating with an interpreter, to supplement provider- and patient-reported data. Trained coders will code videos for specific behaviors, based on the coding scheme developed previously , to provide data on barriers, mechanisms, proximal outcomes of interpreter use, and strategy fidelity (Table ). The video recording sample size is based on our previous work and logistical considerations, with 100 recordings both feasible and likely to achieve data sufficiency.

Cost data

Administrative cost data collected from clinics will include costs associated with interpreter vendor invoices and contracts; interpreter-specific clinic hardware (e.g., dedicated speakerphones); wireless Internet; and educational module development, following recommendations for economic analysis in implementation science . Provider-incurred time and costs will be collected via the final survey, including time spent on each strategy, excess data charges associated with mobile video interpreting use (if any), and wear or damage to personal devices. Study team time related to implementing each strategy (e.g., installing mobile video interpreting, reminder emails) will be tracked in real time, as these tasks would be performed by clinic staff with real-world implementation. We do not expect changes in clinic visit length, based on time-motion studies of interpreted patient visits .

Data analysis

Primary quantitative analyses will be conducted using an intention-to-treat approach. Provider and patient characteristics will be summarized overall and by strategy. Missing data will be minimized through communication with participants regarding the importance of completing surveys and interviews, participant incentives, offering multiple languages and modalities for survey and interview completion, and completing surveys and interviews on-site when possible. For our primary outcome, we expect interpreter invoice data to be complete, given our previous experience [ , , ]. We will track interpreter use for all enrolled providers for the entire study, even if they do not complete interviews or surveys. For our secondary outcome, diagnosis comprehension, patterns of data missingness will be examined. We expect randomization will help protect against imbalance in unobserved confounders, so our main concern will be with missing data. We will conduct sensitivity analyses based on multiple imputation to assess the impact of missing data, in which we will generate multiple imputed datasets with missing values imputed by pooling information from observed data, and then combine statistical inferences across the multiply-imputed datasets [ – ].
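A minimal sketch of this multiple-imputation sensitivity analysis is shown below, assuming the mice package; the data frame and variable names are hypothetical placeholders.

```r
library(mice)
set.seed(123)

# Hypothetical survey data with missing comprehension outcomes
survey_data <- data.frame(
  comprehension = sample(c(0, 1, NA), 200, replace = TRUE,
                         prob = c(0.35, 0.55, 0.10)),
  arm      = factor(sample(c("education", "mVI", "both"), 200, replace = TRUE)),
  age      = round(rnorm(200, 38, 10)),
  language = factor(sample(c("Spanish", "Somali", "Vietnamese", "Russian"),
                           200, replace = TRUE))
)

imp  <- mice(survey_data, m = 20, seed = 123)      # 20 imputed datasets
fits <- with(imp, glm(comprehension ~ arm + age + language,
                      family = binomial))          # model fit per dataset
summary(pool(fits))                                # pool with Rubin's rules
```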
Aim 1: Compare the effectiveness of two implementation strategies, alone and in combination, to improve use of interpretation and comprehension for patients/parents with language barriers seen in adult/pediatric primary care settings

We hypothesize that, compared to educational modules, provider access to mobile video interpreting will lead to ( H1 ) greater odds of interpreter use for visits with patients/parents with language barriers (primary outcome) and ( H2 ) better comprehension among patients/parents with language barriers. We also hypothesize ( H3 ) that mobile video interpreting and educational modules together will yield greater odds of interpreter use than either strategy alone. To test H1 and H3 , we will use assigned strategy and data collected during phases 1 and 2. Under the SMART design, comparisons of first-stage interventions, comparisons of second-stage interventions, and comparisons of the adaptive intervention with both stages can be conducted simultaneously using standard software with a technique called a “weighted and replicated” regression approach, using weighted generalized estimating equations (GEE) . Weighted GEE allows us to work with binary outcomes and weights and adjust for clustering within providers. Within-clinic correlations will be assessed by including clinic-specific random effects in our regression models and estimating the intra-cluster correlation coefficients. Significance of the intra-cluster correlation coefficients will be examined by comparing models with and without clinic-specific random effects using likelihood ratio tests. If no strong within-clinic correlation is detected, we will use fixed-effects regression models for their better power; otherwise, estimates and inference based on random-effects regression models will be reported. H1 and H3 will be tested using the Wald test and robust standard error estimates . Model-based estimates of odds ratios comparing education to mobile video interpreting or both will be reported, along with 95% confidence intervals . To test H2 , our analytic sample will include only patients who completed a post-visit survey ( n = 648). A weighted GEE logistic regression model predicting patient/parent comprehension at the visit level will be estimated. Baseline covariates will include the clinic, patient demographics (age, sex, language), and patient comorbid conditions [ – ], pooled at the provider level. Model-based estimates of the odds ratio comparing education to mobile video interpreting or both will be reported, along with 95% confidence intervals computed via parametric bootstrapping .

Aim 2: Explore mobile video interpreting and education implementation strategies’ ability to activate putative provider-level mechanisms

We predict that implementation via mobile video interpreting will activate mechanisms that are more directly and strongly linked to provider behavior, while education’s mechanism activation will more often affect intrapersonal barriers without changing behavior. We will use a quantitative plus qualitative approach to explore putative mechanisms, where both are analyzed together to understand data in context . Interviews will be audio-recorded, transcribed, translated as appropriate, and reviewed for accuracy. Using an iteratively developed codebook, we will code all data stratified by interpreter use and TDF attributes, upload data into Dedoose Version 9.0.17 for thematic analysis [ – ], and use the 6 analysis steps outlined by Braun and Clarke .
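Stepping back to the Aim 1 models specified above, the sketch below illustrates how such a weighted GEE logistic regression could be fit with the geepack package. The data frame, variable names, and the simple SMART weights are hypothetical simplifications of the “weighted and replicated” approach, shown for orientation only.

```r
library(geepack)
set.seed(1)

# Hypothetical visit-level data: 55 providers x 35 visits each, with
# SMART weights 'w' (replicated/weighted rows would be built upstream)
visits <- data.frame(
  provider_id = rep(1:55, each = 35),
  clinic      = factor(rep(sample(paste0("clinic", 1:4), 55, TRUE), each = 35)),
  strategy    = factor(rep(sample(c("education", "mVI", "both"), 55, TRUE),
                           each = 35)),
  w           = rep(sample(c(1, 2), 55, TRUE), each = 35)
)
visits$interpreted <- rbinom(nrow(visits), 1, 0.5)   # binary outcome

gee_fit <- geeglm(interpreted ~ strategy + clinic,
                  id      = provider_id,       # cluster visits within provider
                  weights = w,                 # SMART design weights
                  family  = binomial("logit"),
                  corstr  = "exchangeable",    # working correlation structure
                  data    = visits)
summary(gee_fit)                               # robust (sandwich) SEs for Wald tests
exp(coef(gee_fit))                             # odds ratios comparing strategies
```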
Returning to the qualitative analysis: data synthesis will be conducted from code reports utilizing an annotation and tabular system. We will analyze provider and patient data separately. Video-recording analysis will be based on our previously developed coding scheme , with modifications based on coding the first 5 videos. We expect coding to include communication/interpretation method, duration, interpretation technical difficulties (e.g., dropped calls), interpreter or device positioning in the room, provider use of jargon and acronyms, and clarifications between provider and interpreter. Initial videos will be double coded, until kappa statistics for interrater reliability are greater than 0.75. Subsequent videos will be single coded, with a random 10% double coded. Fidelity to assigned strategy will be defined as use of mobile video interpreting for assigned providers and use of best practices for communicating through an interpreter for education-assigned providers. Qualitative analysis of interviews and video recordings will occur with reference to provider quantitative data, for example, by interpreter use (high vs low) and survey-reported TDF attributes, following NIH guidelines for mixed-methods best practices . Provider interviews and videos will be considered as a set, to assess for changes over time, by assigned strategy. The relationships we investigate will be guided by preliminary causal pathway models (Figs. & ). These models, developed with the best available evidence, lay out the putative mechanisms of each implementation strategy, including organizational and intrapersonal moderators, specific barriers, and proximal and distal outcomes. In this approach, we will explore hypothesized relationships and invite emergent mechanisms we had not previously considered, given this work’s exploratory nature. Little is known about the mechanisms by which particular strategies influence interpreter use, or even whether factors such as acquiring facts serve as mediators on the pathway from strategy to outcome . Per Kazdin, identifying mediators and mechanisms of change allows greater reason and parsimony in selecting implementation strategies and should allow attainment of greater improvements over time as we understand exactly how improvement occurs . We will refine our causal pathway diagrams and generate new ones reflecting the evidence gathered through this study.

Aim 3: Determine the incremental cost-effectiveness from a healthcare organization perspective of each implementation strategy (mobile video interpreting, education, and both)

We hypothesize that, relative to educational modules, mobile video interpreting will be more cost-effective ( H4a ) per additional interpreted clinic visit and ( H4b ) per additional instance of patient comprehension. The estimated incremental cost-effectiveness ratios (ICER) will provide evidence of the resources required to increase interpreted clinic visits and improve patient comprehension . Our goal is to support decision-making about which strategy healthcare organization leaders may choose to implement, and thus, we will estimate ICERs from the organization perspective. Effectiveness measures will be based on Aim 1 analyses; cost data will come from two sources. The first source is administrative, including vendor invoices and budgets for payroll.
Costs that cannot be determined will be estimated with a micro-costing approach in which unit cost multipliers are applied to the quantity of each type of service or resource utilized; examples include the use of shared resources (space, office equipment) and opportunity costs experienced by clinic staff. All cost data are summed to obtain total costs , using an approach we have used previously . While mobile video interpreting-assigned providers may also have used other professional interpretation, we will assign mobile video interpreting-related costs to the mobile video interpreting and combination groups and non-mobile video interpreting interpreter costs (which would not be necessary if a clinic used mobile video interpreting only) to the education group. Interpreter costs will be based on actual usage from vendor invoices, attributed to the assigned group. Education module development will be annuitized over the study period. Time costs for providers (time on modules, learning to use mobile video interpreting) and study staff (reminder emails, mobile video interpreting support) will be estimated using the mean hourly wage from the National Compensation Survey, plus fringe rates from the Bureau of Labor Statistics Employer Costs for Employee Compensation. Provider costs due to own-device use for mobile video interpreting will be estimated with hardware depreciation allowances per the US Internal Revenue Code. Costs will be inflation-adjusted to common-year dollars using the Personal Health Care Expenditure Deflator or Personal Consumption Expenditure price index . We will calculate total costs associated with each implementation strategy by summing the above costs. To test H4a , we will calculate the ICER for each additional interpreted clinic visit, by calculating the difference in total costs for (i) mobile video interpreting vs education and (ii) mobile video interpreting plus education vs education, and then dividing by the difference in the number of professionally interpreted visits for providers assigned to (i) mobile video interpreting vs education and (ii) mobile video interpreting plus education vs education. To test H4b , we will calculate the ICER for each additional instance of patient comprehension. To do so, we will calculate the difference in total costs for (i) mobile video interpreting vs education and (ii) mobile video interpreting plus education vs education and then divide by the difference in the proportion of patients who correctly reported their diagnosis for providers assigned to (i) mobile video interpreting vs education and (ii) mobile video interpreting plus education vs education.

Regulatory approvals

The mVOCAL Trial was registered on ClinicalTrials.gov on September 22, 2022 (NCT05591586). The Seattle Children’s Hospital institutional review board (IRB) serves as the single IRB (sIRB). The study was initially approved on October 29, 2021 (no. 00003332). All providers and patients will provide informed consent for their participation, with the exception of those participating only through the inclusion of their administrative data, for whom a waiver of informed consent has been obtained.
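Finally, as a concrete illustration of the Aim 3 ICER formulas above, a minimal sketch follows; all cost and effect inputs are hypothetical placeholders, not study data.

```r
# Incremental cost-effectiveness ratio: incremental cost per unit of effect
icer <- function(cost_a, cost_b, effect_a, effect_b) {
  (cost_a - cost_b) / (effect_a - effect_b)
}

# H4a: cost per additional professionally interpreted visit
# (strategy totals and interpreted-visit counts would come from invoices
# and interpreter-use tracking; numbers below are placeholders)
icer(cost_a = 52000, cost_b = 48000, effect_a = 1450, effect_b = 1300)

# H4b: cost per additional instance of patient comprehension
# (effects here are assumed proportions of surveyed patients reporting
# the correct diagnosis, scaled by 216 surveys per group)
icer(cost_a = 52000, cost_b = 48000,
     effect_a = 0.70 * 216, effect_b = 0.62 * 216)
```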
The TDF, mapped to the Behavior Change Wheel’s COM-B (capability, opportunity, motivation—behavior) system, informed our conceptual model (Fig. ) . The TDF is an integrative theoretical framework that has been used across healthcare settings to inform implementation strategies, especially those requiring behavior change . It underwent rigorous refinement using discriminant content validation and fuzzy cluster analysis and was then mapped to the COM-B system to provide theory-based relationships between the barriers laid out in the TDF. In our conceptual model, we identify relevant TDF domains for each of the COM-B’s major categories as contributing to the target behavior, interpreter use. These COM-B categories are capability, divided into psychological, which includes knowledge and decision-making, and physical, which includes skills for interpreter access; motivation, divided into reflective, including provider beliefs about their abilities and the consequences of their decisions, and automatic, which includes professional identity and positive reinforcement; and opportunity, divided into social, which includes clinic interpreter use norms, and physical, which includes environmental context and resources, such as current interpreter access (see Table for detailed list). We expect both of our study strategies to influence provider capability and motivation to use interpreters, but we expect education to influence provider capability more markedly and mobile video interpreting to have a strong and unique influence on opportunity.
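For a compact view, the COM-B-to-TDF mapping described above can be written out as a simple nested data structure. This is an illustrative Python sketch paraphrasing the prose; the domain labels are shorthand drawn from the text, not the full TDF enumeration in the table.

```python
# Illustrative encoding of the conceptual model: COM-B categories and
# subcategories mapped to example TDF domains for the target behavior,
# professional interpreter use. Labels paraphrase the prose above.
COMB_TO_TDF = {
    "capability": {
        "psychological": ["knowledge", "decision-making"],
        "physical": ["skills for interpreter access"],
    },
    "motivation": {
        "reflective": ["beliefs about capabilities", "beliefs about consequences"],
        "automatic": ["professional identity", "positive reinforcement"],
    },
    "opportunity": {
        "social": ["clinic interpreter-use norms"],
        "physical": ["environmental context and resources (current interpreter access)"],
    },
}
```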
This type 3 hybrid implementation-effectiveness study will test two discrete implementation strategies for improving professional interpreter use (primary implementation outcome) and patient comprehension (secondary effectiveness outcome) in primary care. The implementation strategies—interactive web-based educational modules and access to mobile video interpreting—target different sets of barriers to professional interpreter use, an evidence-based practice [ , , , , ]. Our results will therefore provide insights into how best to promote implementation of a well-studied, well-established practice known to improve outcomes but inconsistently used. As the potency of barriers may vary by provider and clinic, we will test the strategies alone and in combination, using a SMART design, with provider-level randomization. A total of 55 providers from 3 to 5 primary care clinical organizations will be randomized 1:1 to either education or mobile video interpreting access, stratified by baseline interpreter use and clinic (phase 1; Fig. ). Randomization will occur within REDCap, using a sequence generated by the study biostatistician and implemented by a research coordinator. After 9 months, providers with interpreter use in the top tertile (within strategy) will remain with the original strategy; those in the bottom two tertiles will be randomized 1:1 again, to continue the original strategy or to add the second strategy to the first (phase 2). After another 9 months of data collection, we will provide free access to both mobile video interpreting and educational modules to all enrolled providers and then track voluntary uptake by those not previously exposed for another 9 months (phase 3). Data collection will include administrative data to track interpreter use (primary outcome); patient surveys and qualitative interviews to determine diagnosis comprehension (secondary outcome) and communication quality; provider surveys and qualitative interviews to assess contextual and intrapersonal barriers and moderators; and visit video recording to capture additional barriers and determine fidelity of strategy implementation. We will assess each strategy’s effectiveness, alone and in combination, for improving professional interpreter use and patient comprehension. We will explore mechanisms by which these strategies work and evaluate the relative strategy-specific costs.
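The phase 2 re-randomization logic lends itself to a short sketch. The following Python fragment illustrates the within-strategy tertile split and second-stage assignment under stated assumptions: simple 1:1 randomization is shown, whereas the trial additionally stratifies by baseline use and clinic, and all variable names are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2022)  # seed chosen only to make the sketch reproducible

# Hypothetical provider-level data at the end of phase 1: assigned strategy and
# observed proportion of eligible visits with professional interpretation.
providers = pd.DataFrame({
    "provider_id": range(55),
    "phase1_strategy": rng.permutation(["education"] * 28 + ["mvi"] * 27),
    "interp_rate": rng.uniform(0, 1, 55),
})

def phase2_assignment(group: pd.DataFrame) -> pd.Series:
    """Top tertile (within strategy) keeps its phase 1 strategy; the bottom two
    tertiles are re-randomized 1:1 to continue alone or add the second strategy."""
    cutoff = group["interp_rate"].quantile(2 / 3)
    responder = group["interp_rate"] >= cutoff
    add_second = pd.Series(rng.integers(0, 2, len(group)).astype(bool),
                           index=group.index)
    # Responders and re-randomized "continuers" keep the original strategy;
    # everyone else moves to the combination arm.
    return group["phase1_strategy"].where(responder | ~add_second, "combination")

providers["phase2_strategy"] = (
    providers.groupby("phase1_strategy", group_keys=False).apply(phase2_assignment)
)
print(providers["phase2_strategy"].value_counts())
```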
Our selected implementation strategies target primarily intrapersonal barriers to interpreter use, although mobile video interpreting does so by altering the environment and resources (i.e., opportunities) available to the provider . Strategy assignment will thus happen at the individual provider level. However, knowing the importance of team-, clinic-, and patient-level factors for influencing provider behavior, we will also capture data at these levels. Detailed strategy specification, following Proctor's recommendations , is presented in Table .

Web-based educational modules

The education implementation strategy will consist of six 10- to 15-min web-based modules, a tip sheet with clinic-specific interpreter access and use information, and four 5-min booster modules, all delivered online, along with quarterly reports on interpreter use to the enrolled provider. Education aims to improve provider motivation and capability related to interpreter use by increasing conceptual and technical knowledge, enhancing interpreter access skills, shifting beliefs about their own capabilities and the consequences of use or nonuse, and increasing the intention to use an interpreter. The educational module content is based on Seattle Children's Hospital's rigorously developed in-person workshop series, CONNECTing Through Interpreters [ – ]. In partnership with the interactive Medical Training Resources (iMTR) group at the University of Washington (UW; depts.washington.edu/imtr/) and content experts, including experienced interpreters and providers, we transformed the workshops into interactive web-based modules. Modules were pilot tested with 15 primary care providers (PCPs) and revised based on feedback. Module-assigned providers will view them at a time and place of their choosing. We will track when participants access and complete modules as a marker of engagement. The online modules cover 5 topics: (1) importance and fundamentals of good communication (delivered in 2 modules), (2) importance of professional interpreter use and disparities for populations with language barriers, (3) how to use an interpreter effectively, (4) what to do when the interpreted encounter is not going well, and (5) remote interpreter use and systems challenges. Each module is 10–15 min long with audio, visual, and video content, developed using best practices from adult learning theory. Providers will be prompted to view a new module each week until all have been viewed. During months 3–6 post-randomization, 4 brief (5 min) booster modules will be released, reviewing crucial points from the initial modules. Boosters have been found to support behavior change in other settings . Weekly reminders will be sent until all boosters are complete. Providers who complete all modules will be eligible for points for continuing medical education (CME) and/or Maintenance of Certification (MOC); these points must be earned to maintain medical licensure and board certification and thus provide an incentive for completion. The clinic-specific interpreter access and use information will be distributed via email. This tip sheet will include instructions for accessing interpreters in the clinic via the normal process, including the vendor phone number, tips for using the clinic telephones (e.g., how to adjust the speakerphone volume), ideas for streamlining the process, where shared equipment is stored, and how to report problems.
Mobile video interpreting access

The mobile video interpreting access strategy will provide access to mobile video interpreting, technical support, a tip sheet for mobile video interpreting use, and an extra charger, shock-resistant case, disposable antimicrobial sleeves, and a positioning stand to support clinical use of the provider's own device, along with quarterly reports on the enrolled provider's interpreter use. Mobile video interpreting-assigned providers can use a study-issued smartphone instead of their own. The mobile video interpreting strategy aims to improve provider motivation, capability, and opportunity related to interpreter use by decreasing cognitive overload, enhancing interpreter access skills, shifting provider beliefs about capabilities and the consequences of interpreter use, reinforcing use via satisfaction, and altering the environmental context and resources to make access easier and use more rewarding (Table ). Access to mobile video interpreting is achieved by downloading the application (app) online and then entering an access code linked to a billing account; once entered, the code is no longer visible, so access can be controlled by study staff. Study staff will download the app, orient providers to it, demonstrate its use, and answer questions. Technical support will be offered on demand. A tip sheet with mobile video interpreting instructions and best practices will be emailed. Several interpretation vendors have similar apps that can be downloaded onto personal devices but are rarely used in this way. These apps are HIPAA compliant, use end-to-end encryption, and are accessed with one touch (i.e., no additional log-in or passwords); no data are downloaded to the device. With both strategies, feedback will be provided to enrolled providers quarterly, as a report of the percent of visits with patients who use a language other than English for which the provider used professional interpretation.
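The quarterly feedback metric is straightforward to compute once interpreter invoices have been matched to visits. A minimal pandas sketch with hypothetical data and column names:

```python
import pandas as pd

# Hypothetical visit-level data: one row per clinic visit with a patient who
# uses a language other than English, flagged if a professional interpreter
# invoice was matched to that visit.
visits = pd.DataFrame({
    "provider_id": [1, 1, 1, 2, 2],
    "visit_date": pd.to_datetime(
        ["2023-01-10", "2023-02-03", "2023-03-21", "2023-01-15", "2023-02-20"]),
    "interpreted": [True, False, True, True, True],
})

# Percent of such visits with professional interpretation, per provider per
# quarter, i.e., the quantity reported back to enrolled providers.
report = (
    visits.set_index("visit_date")
    .groupby(["provider_id", pd.Grouper(freq="QS")])["interpreted"]
    .mean()
    .mul(100)
    .rename("pct_interpreted")
)
print(report)
```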
Providers

We will enroll 55 PCPs from 3 to 5 primary care organizations in Washington state. These organizations will include both academically affiliated and nonacademic sites and vary in terms of leadership and governance structures. Clinics will enroll based on provider interest, but each provider will choose whether to enroll. Eligible providers will practice at the enrolled clinic at least 40% time and see, on average, at least 7 patients requiring interpretation per month. If a provider is proficient in a non-English language, they must see at least 7 patients per month who use a different language (one in which the provider is not proficient). We will enroll and initially randomize 55 providers, aiming to retain 47 through the second interview and 40 through the third (73% retention; see the next section for sample size considerations).

Patients

We will enroll 3 populations of adult patients or parents of pediatric patients (henceforth "patients") who use a language other than English, all being seen by enrolled providers. For our administrative population , we will include administrative data from all patients who were recorded as using a language other than English in the medical record and were seen by enrolled providers, for the interpreter use outcome. For our survey population , we will enroll patients who prefer medical care in the four most common non-English languages across clinics and who are in clinic for an acute concern (e.g., sore throat, new ankle pain). These individuals will be invited to complete a survey ( n = 648), and a subset will be invited to complete a 20–30 min qualitative interview ( n = 75). We will also recruit patients for our video-recording population ( n = 100): consenting patients who use a language other than English, with any visit type, will be eligible for video recording.
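To make the inclusion rule concrete, including the exception for providers proficient in a non-English language, here is a minimal sketch; the function and argument names are hypothetical.

```python
def provider_is_eligible(fte_fraction: float,
                         provider_languages: set[str],
                         monthly_visits_by_language: dict[str, float]) -> bool:
    """Eligibility per the criteria above: >= 40% time at the enrolled clinic
    and >= 7 visits/month, on average, with patients who use a language in
    which the provider is not proficient."""
    if fte_fraction < 0.40:
        return False
    visits_needing_interpretation = sum(
        n for lang, n in monthly_visits_by_language.items()
        if lang not in provider_languages
    )
    return visits_needing_interpretation >= 7

# A Spanish-proficient provider: Spanish visits do not count toward the 7.
provider_is_eligible(0.60, {"english", "spanish"},
                     {"spanish": 6, "somali": 4, "vietnamese": 4})  # True (8 >= 7)
```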
Outcome measures include our primary implementation outcome of interpreter use and our secondary effectiveness outcome of patient/parent comprehension. Additional measures related to organizational context, provider-reported barriers and facilitators of interpreter use, and intervention fidelity are laid out in Table .

Interpreter use

Interpreter vendor invoices will be collected from the companies that clinics currently contract with; mobile video interpreting invoices will be managed by the study team. All professional interpreter invoices (not just mobile video interpreting) will be matched to clinic visits for patients who use a language other than English (all languages) for enrolled providers. We will calculate baseline interpreter use for enrolled providers for the six months pre-randomization and then randomize 1:1 to education or mobile video interpreting, stratified by baseline use and clinic. We will calculate interpreter use, both overall and strategy consistent, continuously throughout phases 1–3; other data collection will end after phase 2. For analysis, interpreter use will be defined as a dichotomous variable at the level of the clinic visit. Visits with patients who use a language other than English with any billed professional interpreter use will be coded as "yes," and the remainder will be coded as "no." Sample size calculations consider aim 1 group comparisons (mobile video interpreting, education, combination) at the end of phase 2. We assume loss of up to 9 providers (e.g., to job change; 16%) over the study; we expect attrition (up to 27%) in provider interviews and surveys, but that will not impact aim 1 power. With 5796 total encounters with patients who use a language other than English (7 visits/provider/month), we expect 1932 non-English visits per group, which will provide > 80% power to detect a 5% difference in proportion of professionally interpreted visits by group . This will be readily feasible with administrative data.

Patient/parent comprehension

Patient comprehension will be determined by asking surveyed patients ( n = 648) to report the diagnosis they received during their visit with an enrolled provider. The patient-reported diagnosis will then be compared to the provider-documented diagnosis, which trained abstractors will have abstracted from the electronic medical record (EMR). Two coders blinded to study assignment will compare the documented diagnosis to the patient-reported diagnosis to determine comprehension, coded as yes, concordant; no, not concordant; or unclear, based on the standard of whether a different follow-up provider would likely know the diagnosis based on the information provided by the patient. For analysis, comprehension will be coded as yes or no/unclear. We have successfully used these procedures previously . In addition to measuring comprehension, the survey will use validated measures to collect demographics and satisfaction with communication and interpretation. The tablet-based survey will have an audio feature to allow patients to read or hear the questions in 4 non-English languages. The survey will be completed in the clinic whenever possible; otherwise, the patient will complete it within 7 days, independently online or over the telephone with a bilingual research coordinator or professional interpreter. Based on aim 1 analyses, with 216 completed patient surveys per group (648 total), we will have ≥ 80% power to detect a 14% difference in diagnosis comprehension by group . This will also be feasible, achieved by surveying 7–12 patients per clinic per month for 18 months.
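As a rough cross-check of the stated power figures, a two-proportion power calculation can be run with statsmodels. The baseline proportions below are illustrative assumptions (the text specifies only the detectable differences), and the calculation ignores clustering within providers, so it simplifies the trial's actual design.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

analysis = NormalIndPower()

# Interpreter use: 1932 visits per group, 5-point difference,
# baseline assumed at 50% for illustration.
h_visits = proportion_effectsize(0.55, 0.50)
print(analysis.power(effect_size=h_visits, nobs1=1932, alpha=0.05))  # ~0.87

# Diagnosis comprehension: 216 surveys per group, 14-point difference,
# baseline again assumed at 50%.
h_comp = proportion_effectsize(0.64, 0.50)
print(analysis.power(effect_size=h_comp, nobs1=216, alpha=0.05))  # ~0.84
```

Both values exceed 0.80, consistent with the power statements above under these assumed baselines.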
Provider attributes and organizational context

These data will be collected via 2 surveys and 3 interviews over the course of the study. Providers will complete a web-based survey at baseline, before initial randomization, to assess demographics and barriers to interpreter use via the TDF Questionnaire, the Organizational Readiness for Implementing Change (ORIC) questionnaire , and the Implementation Leadership Scale (ILS) . We will repeat the survey at the end of phase 2 to capture changes over time and the provider time and costs associated with the implementation strategies. Enrolled providers will also complete qualitative interviews (1) before initial randomization, (2) during phase 1, and (3) during phase 2. Interviews will explore contextual and personal factors that serve as barriers, moderators, mechanisms, and proximal outcomes of interpreter use (see Figs. & for preliminary causal pathway diagrams). We will use qualitative interviews given the lack of survey measures for many factors and concern for social desirability bias, as providers may not endorse interpreter nonuse on surveys but may be more likely to do so in the context of a conversation. Provider qualitative and quantitative data will be analyzed together (see the analysis plan below).

Patient communication experiences

A subset of patients completing the survey will be invited to complete a 30-min qualitative interview . Survey respondents who endorse having a concern about how their provider communicated with them will be invited to an interview , as will a random sample of others (total n = 75). Our goal is to understand how communication occurred during the visit, how effective the patient found that communication to be and why, and the details of any concerns the patient had. The interview will be completed in the clinic prior to departure whenever possible; otherwise, the patient will have 7 days to complete it over the telephone with a bilingual research coordinator or via a professional interpreter, in one of our 4 eligible non-English languages. We estimated the initial qualitative sample size based on the heterogeneity of our target group, the number of research sites, and the complexity of the areas of inquiry. The initial sample estimates will be adjusted as needed to achieve data sufficiency .

Video recording

Video-recorded visits with patients who use a language other than English ( n = 100) will provide granular, objective data regarding interpreter use, technical difficulties, communication delays, and provider use of best-practice techniques for communicating with an interpreter, to supplement provider- and patient-reported data. Trained coders will code videos for specific behaviors, based on the coding scheme developed previously , to provide data on barriers, mechanisms, proximal outcomes of interpreter use, and strategy fidelity (Table ). The video recording sample size is based on our previous work and logistical considerations, with 100 recordings both feasible and likely to achieve data sufficiency.
Cost data

Administrative cost data collected from clinics will include costs associated with interpreter vendor invoices and contracts; interpreter-specific clinic hardware (e.g., dedicated speakerphones); wireless Internet; and educational module development, following recommendations for economic analysis in implementation science . Provider-incurred time and costs will be collected via the final survey, including time spent on each strategy, excess data charges associated with mobile video interpreting use (if any), and wear or damage to personal devices. Study team time related to implementing each strategy (e.g., installing mobile video interpreting, reminder emails) will be tracked in real time, as these tasks would be performed by clinic staff in real-world implementation. We do not expect changes in clinic visit length, based on time-motion studies of interpreted patient visits .
Primary quantitative analyses will be conducted using an intention-to-treat approach. Provider and patient characteristics will be summarized overall and by strategy. Missing data will be minimized through communication with participants regarding the importance of completing surveys and interviews, participant incentives, offering multiple languages and modalities for survey and interview completion, and completing surveys and interviews on-site when possible. For our primary outcome, we expect interpreter invoice data to be complete, given our previous experience [ , , ]. We will track interpreter use for all enrolled providers for the entire study, even if they do not complete interviews or surveys. For our secondary outcome, diagnosis comprehension, patterns of data missingness will be examined. We expect randomization will help protect against imbalance in unobserved confounders, so our main concern will be with missing data. We will conduct sensitivity analyses based on multiple imputation to assess the impact of missing data, in which we will generate multiple imputed datasets with missing values imputed by pooling information from observed data and then combine statistical inferences across the multiply imputed datasets [ – ].

Aim 1: Compare the effectiveness of two implementation strategies, alone and in combination, to improve use of interpretation and comprehension for patients/parents with language barriers seen in adult/pediatric primary care settings

We hypothesize that, compared to educational modules, provider access to mobile video interpreting will lead to ( H1 ) greater odds of interpreter use for visits with patients/parents with language barriers (primary outcome) and ( H2 ) better comprehension among patients/parents with language barriers. We also hypothesize ( H3 ) that mobile video interpreting and educational modules together will yield greater odds of interpreter use than either strategy alone. To test H1 and H3 , we will use assigned strategy and data collected during phases 1 and 2. Under the SMART design, comparisons of first-stage interventions, comparisons of second-stage interventions, and comparisons of the adaptive interventions spanning both stages can be conducted simultaneously using standard software with a "weighted and replicated" regression approach based on weighted generalized estimating equations (GEE) . Weighted GEE allows us to work with binary outcomes and weights and to adjust for clustering within providers. Within-clinic correlations will be assessed by including clinic-specific random effects in our regression models and estimating the intra-cluster correlation coefficients. Significance of the intra-cluster correlation coefficients will be examined by comparing models with and without clinic-specific random effects using likelihood ratio tests. If no strong within-clinic correlation is detected, we will use fixed-effects regression models for their better power; otherwise, estimates and inference based on random-effects regression models will be reported. H1 and H3 will be tested using the Wald test and robust standard error estimates . Model-based estimates of the odds ratio comparing education to mobile video interpreting or both will be reported, along with 95% confidence intervals . To test H2 , our analytic sample will include only patients who completed a post-visit survey ( n = 648). A weighted GEE logistic regression model predicting patient/parent comprehension at the visit level will be estimated. Baseline covariates will include the clinic, patient demographics (age, sex, language), and patient comorbid conditions [ – ], pooled at the provider level. Model-based estimates of the odds ratio comparing education to mobile video interpreting or both will be reported, along with 95% confidence intervals computed via parametric bootstrapping .
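To illustrate the weighted GEE comparison for H1 and H3, here is a minimal statsmodels sketch. The weights shown are the standard SMART inverse-probability-of-assignment weights (responders weight 2, re-randomized non-responders weight 4); the data file, column names, and the omission of the replication step are simplifying assumptions, not the trial's final analysis code.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical visit-level data: one row per visit with a patient who uses a
# language other than English; 'interpreted' is the 0/1 outcome.
df = pd.read_csv("visits.csv")  # hypothetical file

# Standard SMART weights: providers randomized once (responders, P = 1/2)
# get weight 2; re-randomized non-responders (P = 1/4) get weight 4.
# (In the full weighted-and-replicated approach, responders' rows are also
# replicated across the consistent second-stage options; omitted here.)
df["smart_weight"] = df["responder"].map({True: 2.0, False: 4.0})

model = smf.gee(
    "interpreted ~ C(strategy, Treatment(reference='education'))",
    groups="provider_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
    weights=df["smart_weight"],
)
result = model.fit()  # robust (sandwich) standard errors support the Wald tests
print(result.summary())
```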
Aim 2: Explore mobile video interpreting and education implementation strategies' ability to activate putative provider-level mechanisms

We predict that implementation via mobile video interpreting will activate mechanisms that are more directly and strongly linked to provider behavior, while education's mechanism activation will more often affect intrapersonal barriers without changing behavior. We will use a quantitative plus qualitative approach to explore putative mechanisms, in which both are analyzed together to understand data in context . Interviews will be audio-recorded, transcribed, translated as appropriate, and reviewed for accuracy. Using an iteratively developed codebook, we will code all data stratified by interpreter use and TDF attributes, upload data into Dedoose Version 9.0.17 for thematic analysis [ – ], and use the 6 analysis steps outlined by Braun and Clarke . Data synthesis will be conducted from code reports using an annotation and tabular system. We will analyze provider and patient data separately. Video-recording analysis will be based on our previously developed coding scheme , with modifications based on coding the first 5 videos. We expect coding to include communication/interpretation method, duration, interpretation technical difficulties (e.g., dropped calls), interpreter or device positioning in the room, provider use of jargon and acronyms, and clarifications between provider and interpreter. Initial videos will be double coded until kappa statistics for interrater reliability are greater than 0.75. Subsequent videos will be single coded, with a random 10% double coded. Fidelity to the assigned strategy will be defined as use of mobile video interpreting for mobile video interpreting-assigned providers and use of best practices for communicating through an interpreter for education-assigned providers. Qualitative analysis of interviews and video recordings will occur with reference to provider quantitative data, for example, by interpreter use (high vs low) and survey-reported TDF attributes, following NIH guidelines for mixed-methods best practices . Provider interviews and videos will be considered as a set, to assess for changes over time by assigned strategy. The relationships we investigate will be guided by preliminary causal pathway models (Figs. & ). These models, developed with the best available evidence, lay out the putative mechanisms of each implementation strategy, including organizational and intrapersonal moderators, specific barriers, and proximal and distal outcomes. Given this work's exploratory nature, we will examine hypothesized relationships while remaining open to emergent mechanisms we had not previously considered. Little is known about the mechanisms by which particular strategies influence interpreter use, or even whether factors such as knowledge acquisition serve as mediators on the pathway from strategy to outcome . Per Kazdin, identifying mediators and mechanisms of change allows more reasoned and parsimonious selection of implementation strategies and should yield greater improvements over time as we come to understand exactly how improvement occurs . We will refine our causal pathway diagrams and generate new ones reflecting the evidence gathered through this study.
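The interrater-reliability gate for video coding can be checked with Cohen's kappa; a minimal sketch with hypothetical labels:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical double-coded labels from two trained coders for one coding
# dimension (e.g., communication/interpretation method per visit).
coder_1 = ["video", "phone", "none", "video", "phone", "video"]
coder_2 = ["video", "phone", "none", "phone", "phone", "video"]

kappa = cohen_kappa_score(coder_1, coder_2)
print(f"kappa = {kappa:.2f}")  # ~0.74 here, so double coding would continue
# Per the protocol, double coding continues until kappa > 0.75; thereafter
# videos are single coded, with a random 10% double coded.
```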
Aim 3: Determine the incremental cost-effectiveness from a healthcare organization perspective of each implementation strategy (mobile video interpreting, education, and both)

We hypothesize that, relative to educational modules, mobile video interpreting will be more cost-effective ( H4a ) per additional interpreted clinic visit and ( H4b ) per additional instance of patient comprehension. The estimated incremental cost-effectiveness ratios (ICERs) will provide evidence of the resources required to increase interpreted clinic visits and improve patient comprehension . Our goal is to support decision-making about which strategy healthcare organization leaders may choose to implement, and thus we will estimate ICERs from the organization perspective. Effectiveness measures will be based on Aim 1 analyses; cost data will come from two sources. The first source is administrative, including vendor invoices and budgets for payroll. Costs that cannot be determined will be estimated with a micro-costing approach in which unit cost multipliers are applied to the quantity of each type of service or resource utilized; examples include the use of shared resources (space, office equipment) and opportunity costs experienced by clinic staff. All cost data are summed to obtain total costs, using an approach we have used previously . While mobile video interpreting-assigned providers may also have used other professional interpretation, we will assign mobile video interpreting-related costs to the mobile video interpreting and combination groups and non-mobile video interpreting interpreter costs (which would not be necessary if a clinic used mobile video interpreting only) to the education group. Interpreter costs will be based on actual usage from vendor invoices, attributed to assigned group. Education module development costs will be annuitized over the study period. Time costs for providers (time on modules, learning to use mobile video interpreting) and study staff (reminder emails, mobile video interpreting support) will be estimated using the mean hourly wage from the National Compensation Survey, plus fringe rates from the Bureau of Labor Statistics Employer Costs for Employee Compensation. Provider costs due to own-device use for mobile video interpreting will be estimated with hardware depreciation allowances per the US Internal Revenue Code. Costs will be inflation-adjusted to common-year dollars using the Personal Health Care Expenditure Deflator or the Personal Consumption Expenditure price index . We will calculate the total costs associated with each implementation strategy by summing the above costs.
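Two of the costing steps above, annuitizing the one-time module development cost and valuing time at wage plus fringe, reduce to short formulas. A sketch with purely illustrative figures (the 3% discount rate, wage, and fringe rate are assumptions, not study parameters):

```python
def annuitized_cost(present_value: float, annual_rate: float, years: float) -> float:
    """Equivalent annual cost via the standard capital-recovery formula."""
    if annual_rate == 0:
        return present_value / years
    return present_value * annual_rate / (1 - (1 + annual_rate) ** -years)

# One-time educational module development cost spread over the study period.
module_annual_cost = annuitized_cost(60_000, 0.03, 3)  # ~21,200 per year

# Provider time cost: hours on a strategy valued at mean hourly wage + fringe.
hourly_wage, fringe_rate = 55.0, 0.31  # illustrative, BLS-style inputs
provider_time_cost = 2.5 * hourly_wage * (1 + fringe_rate)  # 2.5 h on modules
```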
To test H4a , we will calculate the ICER per additional interpreted clinic visit. For each comparison, (i) mobile video interpreting vs education and (ii) mobile video interpreting plus education vs education, we will divide the difference in total costs by the difference in the number of professionally interpreted visits for providers in the compared groups. To test H4b , we will calculate the ICER per additional instance of patient comprehension in the same way, dividing the difference in total costs by the difference in the proportion of patients who correctly reported their diagnosis, for the same two comparisons.
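The ICER calculations for H4a and H4b share one formula; a minimal sketch with hypothetical totals:

```python
def icer(cost_a: float, cost_b: float, effect_a: float, effect_b: float) -> float:
    """Incremental cost-effectiveness ratio of strategy A vs strategy B:
    additional cost per additional unit of effect."""
    return (cost_a - cost_b) / (effect_a - effect_b)

# H4a-style comparison (illustrative numbers): mobile video interpreting vs
# education, effect = number of professionally interpreted visits.
icer_per_visit = icer(120_000, 95_000, 1500, 1200)  # ~83 per added visit

# H4b-style comparison: effect = proportion of patients who correctly
# reported their diagnosis (the comprehension outcome).
icer_per_comprehension = icer(120_000, 95_000, 0.72, 0.58)
```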
Regulatory approvals

The mVOCAL Trial was registered on ClinicalTrials.gov on September 22, 2022 (NCT05591586). The Seattle Children's Hospital institutional review board (IRB) serves as the single IRB (sIRB). The study was initially approved on October 29, 2021 (no. 00003332). All providers and patients will provide informed consent for their participation, with the exception of those participating only through the inclusion of their administrative data, for whom a waiver of informed consent has been obtained.
Discussion

In this type 3 hybrid implementation-effectiveness study, we will test two discrete implementation strategies for improving professional interpreter use and patient comprehension in primary care. Using a SMART design will allow us to study the effect of the strategies alone and together, mirroring the way a practice might implement a staged strategy, with additional intervention for providers with worse performance [ – ]. Given the different barriers targeted by the two strategies, we expect a greater response to the combination, although a single strategy may suffice for many providers. Our SMART design, mixed methods, and inquiry into mechanisms will illuminate which providers and clinics are most likely to benefit from each strategy, focusing on how, when, where, and why each is effective, rather than simply whether it is effective . As both strategies are inherently scalable but not currently in widespread use, our study will provide actionable data to inform where and how to most effectively implement these strategies to improve safety and equity for patients who use a language other than English for medical care. We will study these two implementation strategies without additional facilitation, in order to isolate the effect of each, as either could represent the minimum intervention needed to produce change (MINC) . The MINC concept addresses the issue that many effective strategies are not widely adopted due to time and resource limitations in non-research settings. We will therefore test strategies that are relatively simple, with fewer barriers to real-world implementation, as they may lead to greater population impact through wide uptake, even if their individual effect is not as large as might be found for a complex intervention. With provider-level randomization, contamination between groups is a concern; however, we do not believe it will undermine our ability to test our hypotheses, for several reasons. First, we do not expect contamination with the mobile video interpreting strategy, as app access will be controlled by the study team, and we will request that providers not share mobile video interpreting-enabled devices with others. Second, we will measure mobile video interpreting contamination, as every mobile video interpreting use will be linked to a visit via billing invoices, and each mobile video interpreting account will be associated with a specific provider. Mobile video interpreting use at visits with providers not assigned to mobile video interpreting will prompt an inquiry and remediating measures. Third, we will ask providers who are not assigned to the modules not to view them. It is possible that each strategy's tip sheets may be printed and visible in shared clinic space. However, provider behavior is difficult to change, so we would not expect a minor exposure to meaningfully impact behavior . Finally, we will explore possible contamination in provider qualitative interviews. Evidence of contamination would suggest we should interpret results with caution, but also that the implementation strategy could be widely adopted in practice. The planned study will generate novel data regarding how effective each strategy is, under what circumstances, through which mechanisms, and at what cost. With these new data, healthcare organizations will be able to make informed decisions to best address the persistent communication-mediated inequities experienced by their patients with language barriers.
Impacts of coniferous bark-derived organic soil amendments on microbial communities in arable soil – a microcosm study | 2fc9ac59-40ce-473c-8a8a-326604ecb3d2 | 10013654 | Microbiology[mh] | Deforestation and conversion of land to agriculture depletes one third of the soil carbon (C) pool during the first 10 years and thereafter continues, for example, in Finland at a rate of 0.4% yr −1 (Heikkinen et al. ), which is in line with estimations of LUCAS surveys conducted from European arable soils (Panagos et al. ). Long-term soil C loss also leads to lower crop yields that can be converted if soil C sequestration is increased (Lal ). Soil microbes, particularly fungi, are connected to increased C sequestration potential during the restoration of agricultural soils (Morriën et al. , Yang et al. ). Indeed, high fungal abundance is reported to result in microbial-derived soil organic matter (SOM) accumulation (Godbold et al. , Kallenbach et al. ). Also, organic interactions between the necromass of Gram-negative bacteria and soil yeasts contributed to retention of necromass-C and N in soils (Buckeridge et al. ). Furthermore, stable soil organic C formation is linked to soil microbe activity according to the Microbial Carbon Pump hypothesis (Liang et al. ), which counteracts the traditional plant litter-centered perspective. Most arable soils were once forests forming a "wood-wide-web" with fungi as the key soil organisms bridging trees together (Helgason et al. 1998). Fungi affect major ecological processes such as nutrient cycles, good soil structure and disease control, likely preventing further degradation and C loss of arable soils. Thus, management practices restoring fungi are important (Frąc et al. , Hannula and Morriën ). There are a few indications of potential positive effects after the application of forestry- or wood-based amendments on soil conditions and soil organisms. For instance, processed pulp mill and fiber sludges from paper mills, used as organic amendments, protected soil from erosion and had a promising positive impact on soil microbial communities (Rasa et al. ). Furthermore, wood-derived organic amendments favored saprotrophs over potential plant pathogenic fungi (Clocchiatti et al. ), and forest litter amendments decreased Fusarium infections of wheat, probably via forest soil microbes (Ridout and Newcombe ). Wood sawdust stimulated the activity of various soil fungi but also the abundance of potentially beneficial rhizosphere bacteria (Clocchiatti et al. ). Another recent study showed that composted pulp mill sludge and fiber sludge induced a very different soil microbial community compared with addition of chopped clover roots (Heikkinen et al. ). The study by Heikkinen et al. ( ) showed that wood-derived material has the potential to trigger the fungal community to diversify, which can be reflected in soil C acquisition in agricultural soil, indicating the power of soil amendments to induce community shifts lasting for several years. Forestry-derived side-streams with potential as amendments to promote soil health can be obtained from wood chips and bark from sawmills, residuals of wood and bark from bioethanol and biogas plants and side-streams from paper pulping. The Finnish Forest industry generated almost 7 million m 3 of bark as a by-product in 2019, which is usually combusted to generate energy in the form of steam, heat and electricity (Rasi et al. ). 
Forest industry side-streams are an important asset for a sustainable and circular bioeconomy as, instead of being burned for energy, they can be used for improving soil structure and health. Although there is substantial potential to use forestry-derived bark side-streams, their suitability as soil amendments is as yet largely undetermined. Fresh bark is rich in polyphenolic compounds such as stilbenes and tannins, which have antimicrobial properties preventing microbial growth and functions (Jyske et al. ) and might therefore need processing before being applied as a soil amendment. These naturally bioactive compounds can be isolated from the bark biomass by methods of green chemistry, such as hot water extraction, and processed further into added-value use as functional bioproducts and chemicals (Raitanen et al. , Välimaa et al. , Pap et al. , Granato et al. ). The extracted residual bark can be further utilized in follow-up bioprocessing, such as anaerobic digestion. Thus, by cascade processing combining different unit operations, such as hot water extraction and further anaerobic digestion (Rasi et al. ), the bark biomass can be fully utilized. Digestates from conventional biogas processes are commonly used as soil amendments and have been shown to improve soil properties and to increase soil carbon content (Jurgutis et al. 2021). However, the effect of anaerobically treated bark-derived amendments is not well studied. To estimate the impact of bark-derived organic amendments on the abundance and composition of soil microbes (bacteria and fungi), we established a laboratory-scale microcosm experiment to simulate the possible effects throughout an entire growing season, from sowing to harvest. We used barley, a common cereal cash crop typically grown in boreal fields. Bark-derived organic amendment materials of two boreal coniferous tree species, Scots pine ( Pinus sylvestris L.) and Norway spruce ( Picea abies (L.) Karst.), were used. Furthermore, we investigated the effect of digestates resulting from anaerobic biogas processes with rarely used bark-derived organic fractions. Based on previous findings (Heikkinen et al. ), we expected that the addition of bark-derived organic amendments to agricultural soils might increase soil fungal biomass and change the community structure to harbor species connected to soil C sequestration. Thus, we investigated the impact of different coniferous bark-derived organic amendments on (i) microbial abundance (gene copy amounts) and (ii) community composition (amplicon sequencing) in silt and clay arable soils. Our hypotheses were that (1) unextracted industrial bark and processed bark-derived organic amendments affect bacterial and fungal communities differently, and (2) soil type (silt vs. clay) determines the microbial response to the amendments.
Microcosm laboratory experiment

For the microcosms, about 20 liters of arable soil were taken with a clean shovel from the tilled (10 cm) surface of two separate agricultural field sites without plant cover in May 2018. These two sites represented a silt soil (pH 6.9, 7% organic and 72.7% dry matter) from Mikkeli (southeastern Finland, 61.68°N, 27.22°E; Pakarinen et al. ) and a clay soil (pH 6.5, 2% organic and 76.8% dry matter) from Jokioinen (southwestern Finland, 60.80°N, 23.46°E; Rasa et al. ). The soils were kept outdoors for up to 48 h before being taken to the laboratory, where they were stored at 4°C for a few days prior to starting the microcosm experiment. The microcosm soil pH was determined in distilled water (1 : 3.5, vol/vol), and total C and N were measured from sieved and air-dried samples using a CN analyzer (Leco-TruMac, Leco Corp., MI, USA) (Table ). The microcosms comprised four replicate aerated plastic containers (107 × 94 × 65 mm, Sterivent low container, Duchefa Biochemie, Haarlem, The Netherlands) per treatment (n = 72). Controls had soil only and, in the amended treatments, soil was mixed with, altogether, eight different types of bark-derived materials of both pine and spruce tree species: (1) industrial conifer bark without extraction treatment (B), (2) industrial conifer bark after hot water extraction treatment (BH), (3) digestate containing industrial, unextracted bark after an anaerobic digestion process (BA) and (4) digestate containing industrial, hot water extracted bark after an anaerobic digestion process (BHA). Industrial bark of Norway spruce ( Picea abies (L.) Karst.) and Scots pine ( Pinus sylvestris L.) trees was obtained from a sawmill (Veljekset Vaara Oy) in Tervola, north-western Finland. Trees were felled in late 2017 and peeled in the sawmill in early January 2018. The fresh bark was collected, transported to the laboratory and stored in the dark at −20°C until further processing. Prior to any processing, bark was milled into 5 cm chips with a cutting mill shredder (Fritsch, Pulverizette, Germany). The hot water extraction was done at 75°C for 1 h using 3 L flow-through extraction equipment (see Väisänen et al. ). Anaerobic digestion processing of the unextracted and hot water extracted bark was done as a biochemical methane potential (BMP) experiment in mesophilic conditions (37°C). Inoculum for the BMP process was from a farm-scale biogas plant treating cattle slurry, and a volatile solids ratio of 0.5 for substrate/inoculum (VS:VS) was used (Rasi et al. ). Dry matter contents of the materials were 42% for pine B, 44% for spruce B, 35% for BH and 6% for all BA and BHA amendments; 21 g fresh weight (fw) of B, 28 g (fw) of BH and 50 g (fw) of BA and BHA amendments were added to the microcosms filled with 380 g of fresh soil. The C additions to the microcosms with the amendments corresponded to 11 500 kg ha−1 (3.8 g C per microcosm) for the B and BH treatments and 3000 kg ha−1 (1 g C per microcosm) for the BA and BHA treatments, if performed at field level. The amounts of organic materials in the microcosms were adjusted to correspond approximately to the levels of C in fields applied with organic amendments (see, for example, Rasa et al. ). We estimated that, in the field, the amendments are mixed into the uppermost 15 cm ploughing depth. As the thickness of the soil layer in the microcosms was only 5 cm, we adjusted the amounts of amendments to corresponding proportions by dividing by three.
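To make this dosing arithmetic easy to verify, the following short sketch in R (the language used for the statistical analyses below) reproduces the per-microcosm C additions from the figures given in the text; the object names are ours, and the calculation is an illustration rather than the authors' script.

```r
# Scale the field-level C application rates (kg C ha-1, from the text)
# down to a single microcosm container (107 mm x 94 mm footprint).
field_rate_kg_C_ha <- c(B = 11500, BH = 11500, BA = 3000, BHA = 3000)

container_area_m2 <- 0.107 * 0.094   # ~0.0101 m^2
m2_per_ha <- 10000

# Grams of C per container at the field rate, then the depth
# correction: amendments reach ~15 cm in a ploughed field but the
# microcosm soil layer is only 5 cm, so amounts are divided by three.
g_C_field_rate <- field_rate_kg_C_ha * 1000 * container_area_m2 / m2_per_ha
g_C_per_microcosm <- g_C_field_rate / 3

round(g_C_per_microcosm, 2)
#    B    BH    BA   BHA
# 3.86  3.86  1.01  1.01   # consistent with the ~3.8 g and 1 g stated above
```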
The microcosm study was conducted in climate chambers (Binder KBW 720/400, WTB Binder Labortechnik GmbH, Tuttlingen, Germany) simulating the soil temperature conditions occurring over 1.5 years in a boreal agricultural field with barley as a crop plant in southern Finland. Barley was chosen because it is the most common crop grown in Finland. The temperature cycling was adjusted so that the change in soil temperature was two times faster than in the field, and thereby we achieved a temperature cycle of over 1.5 years and two growing seasons in 10 months. Simulation of the annual changes in soil temperature was important because fluctuating temperature is relevant to the activation of environmental microbes (Korkama-Rajala et al. ). Moisture in the microcosms was followed by weighing, and water was added when needed. After 7 months of incubation, barley seeds were sown in the microcosms, which were transferred to 18 h light/6 h darkness at 20°C for 1.5 months ( ). After 2 weeks the lids were opened because the barley grew taller than the containers. Full daylight tubes were used and the photosynthetically active radiation in the chambers, measured with an Apogee MQ-200 quantum meter (Apogee Instruments, Inc., Logan, Utah, USA), was 170 µmol m−2 s−1. Barley shoots were harvested by cutting the green parts after 1.5 months of growth, and biomass was determined by weighing the mass after drying at 50°C for 48 h. Dried shoots were returned to the respective microcosms to let them decompose there. The experiment was sampled four times: the first sampling was performed during the simulated early winter (4-month incubation); the second during the simulated spring before sowing (6-month incubation); the third during the post-harvest of barley in simulated autumn (8-month incubation); and the fourth from the bare field before simulated winter (10-month incubation) ( ). Sampling was performed by combining small portions of soil, taken by spoon from over the microcosm, into one bulk sample per microcosm.

DNA extraction and sequencing

At each of the four sampling times, soil sub-samples (n = 288 samples in total; n = 72 for each sampling) were taken from each microcosm with a sterile spatula and frozen at −20°C for DNA extraction. DNA was extracted using a NucleoSpin Soil kit (Macherey Nagel, Düren, Germany), according to the protocol of the manufacturer. DNA concentration and purity were determined with a NanoDrop Lite spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). Quantitative PCR (qPCR) for the bacterial 16S rRNA genes and the partial fungal ITS2 region from the DNA obtained from all four samplings was conducted as described by Peltoniemi et al. (2015). DNA from the second and third samplings, respectively, originating from three out of four replicate microcosms (n = 54 for each sampling), was used for sequencing at the Institute of Genomics of Tartu University, Estonia. For bacteria, the targeted V4 region of the 16S SSU rRNA, and for fungi the ITS2 region, were amplified in a two-step PCR. Bacterial and fungal PCR were performed using the 16S rRNA primers 515F and 806R (Caporaso et al. , ) and the ITS primers ITS4 (White et al. ) and gITS7 (Ihrmark et al. ), respectively, with 8 bp dual indexes for 24 cycles. The final PCR fragments were run as paired-end 2 × 300 bp on the MiSeq platform (Illumina, San Diego, CA, USA) using a MiSeq v3 kit producing ∼20–25 M reads per flow cell.
Processing of raw sequence data and bioinformatics

Sequence assembly, quality filtering, removal of chimeras, primer-dimers and primers from the raw 16S and ITS2 sequence reads, along with clustering and taxonomical annotations, were conducted with the PipeCraft 1.0 pipeline (Anslan et al. ) as described by Soinne et al. ( ) with slight modifications. Briefly, in the second quality filtering, OTUs with similarity below 90% (16S rRNA data) or 75% (fungal ITS2 data) or query coverage below 70%, reads that were observed fewer than 10 times, and OTUs with associations other than bacteria or fungi were removed from the data. In addition, all mitochondrial and chloroplast matches were removed. Fungal ITS2-derived OTUs were combined according to species hypothesis (Kõljalg et al. ; Nilsson et al. ). The bacterial 16S rRNA raw data for the second and third samplings consisted of 1 981 774 and 2 074 812 reads clustering into 17 975 and 18 145 OTUs, respectively, and the respective values after trimming were 1 909 593 and 1 997 696 reads clustering into 7144 and 7213 OTUs. The fungal ITS2 raw data for the second and third samplings consisted of 1 483 987 and 1 530 156 reads clustering into 3699 and 3023 OTUs, respectively, and the respective values after trimming were 1 317 334 and 1 321 392 reads clustering into 901 and 784 OTUs. Raw sequence data are deposited in the sequence read archive of the NCBI database under BioProject PRJNA607883 with the accession numbers SAMN30932759–SAMN30932812 for the 16S rRNA and ITS2 data.

Statistics

The total sample count for the experiment was 288 (comprising four different organic amendments from spruce and pine bark at four samplings in two soil types, silt and clay, plus four replicated control treatments for both silt and clay). All statistical analyses were conducted using RStudio version 1.2.5001 and R version 3.6.0 or 3.6.1 (R Core Team 2021). To study the effects of organic amendments on the 16S rRNA gene and ITS-region copy numbers, a linear mixed-effects model was fitted by maximum likelihood (lme function from the nlme package), with fungal or bacterial qPCR copy numbers as the response variable, organic amendment as the explanatory factor, and tree species (pine vs. spruce) and sample id (microcosm number) nested within the sampling time point as random factors. The response variables were square-root transformed prior to analysis to normalize the distributions. Dunnett's post-hoc comparisons between the control and the four amendments were done with glht from the multcomp package, with Bonferroni correction of P values to adjust for multiple comparisons. OTU data from the amplicon sequencing of the second and third samplings (n = 54 for each sampling) were combined and normalized using the geometric mean of pairwise ratios method (Chen et al. ). PERMANOVA was used separately for the two soil types (clay vs. silt) to test the effects of sampling time (second and third sampling) and the type of organic amendment (B, BH, BA, BHA) on fungal and bacterial community composition, with sample id (microcosm number) as the strata. Bray-Curtis distance matrices were used in the adonis function from vegan 2.5–5 (Oksanen et al. ) with 999 permutations. Pairwise comparisons between organic amendments were calculated with the pairwise.adonis2 function using the same explanatory and random factors as for adonis (Martinez Arbizu ). Homogeneity of variances was studied with the betadisper function for soil type, organic amendment type and sampling time separately.
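As a minimal sketch of these analyses in R, the code below follows the description above; the data objects (`qpcr`, `otu`, `meta`) and the exact random-effects formula are our assumptions, not the authors' published script.

```r
library(nlme)       # linear mixed-effects models
library(multcomp)   # Dunnett-type post-hoc comparisons
library(vegan)      # PERMANOVA and dispersion tests

## qPCR copy numbers ---------------------------------------------------
# qpcr: one row per measurement, with columns copies (16S or ITS gene
# copy number), amendment (C, B, BH, BA, BHA), species (pine/spruce),
# sampling (time point) and microcosm (container id).
qpcr$sqrt_copies <- sqrt(qpcr$copies)   # normalize the distribution

m <- lme(sqrt_copies ~ amendment,
         random = ~ 1 | sampling/species/microcosm,
         data = qpcr, method = "ML")

# Compare each amendment against the control, Bonferroni-adjusted
summary(glht(m, linfct = mcp(amendment = "Dunnett")),
        test = adjusted("bonferroni"))

## Community composition -----------------------------------------------
# otu: normalized sample x OTU matrix for one soil type; meta: matching
# sample metadata with sampling, amendment and microcosm columns.
d <- vegdist(otu, method = "bray")

adonis(d ~ sampling * amendment, data = meta,
       strata = meta$microcosm, permutations = 999)

# Homogeneity of multivariate dispersions, one factor at a time
permutest(betadisper(d, meta$amendment))
```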
We conducted three-dimensional NMDS, with a stable solution from random starts and axis scaling and species scores, using the metaMDS function from vegan with the Bray-Curtis dissimilarity index to visualize fungal and bacterial community composition. Differentially abundant bacterial and fungal OTUs for microcosms from the second and third samplings were obtained from the phyloseq object by differential abundance analysis (DESeq2), which identified significantly abundant groups for the control versus the four different organic amendments, for silt and clay soil separately (>|2.0| log2 fold change with adjusted P < 0.001) (Love et al. , McMurdie and Holmes ).
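The ordination and differential abundance steps could look roughly as follows; again the object names (`otu`, `ps`) are placeholders for the study data, and the subsetting shown (clay soil, control vs. B) is just one of the pairwise contrasts described above.

```r
library(vegan)
library(phyloseq)
library(DESeq2)

# Three-dimensional NMDS on Bray-Curtis dissimilarities; metaMDS
# restarts from random configurations until a stable solution is found
nmds <- metaMDS(otu, distance = "bray", k = 3, trymax = 100)
plot(nmds, display = "sites")

# Differential abundance for one contrast: control (C) vs. unextracted
# bark (B) in clay soil, from a phyloseq object ps holding raw counts
ps_sub <- subset_samples(ps, soil == "clay" & amendment %in% c("C", "B"))
dds <- phyloseq_to_deseq2(ps_sub, ~ amendment)
dds <- DESeq(dds)
res <- results(dds, contrast = c("amendment", "B", "C"))

# Apply the thresholds used in the text:
# |log2 fold change| > 2 and adjusted P < 0.001
sig <- subset(as.data.frame(res),
              !is.na(padj) & padj < 0.001 & abs(log2FoldChange) > 2)
```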
Microbial abundances

Pine- and spruce-derived materials induced similar changes (data not shown) in qPCR and therefore the data were combined in the analyses as presented in Fig. . The average bacterial 16S rRNA and fungal ITS region copy numbers were lower in silt soil than in clay soil (Fig. ). Average bacterial 16S rRNA gene copy numbers increased in microcosms with all bark-derived amendments except industrial bark (B), in both silt and clay soils (Fig. and , ). By contrast, average fungal ITS numbers increased in silt soil microcosms with almost all the bark-derived amendments (B, BH, BHA), whereas in clay soil they increased only with the B and BH treatments (Fig. and , ). Shoot biomasses of barley were significantly higher in microcosms with all amendments in clay soil and in microcosms with BA amendment in silt soil compared with controls ( ).

Microbial community composition

The NMDS of the bacterial 16S rRNA data showed clearly that the bacterial communities in the simulated spring and autumn (second and third samplings) separated from each other in both silt and clay soil (Fig. and ). Axes 1 and 2 of the NMDS also showed that the bacterial communities of the B and BH amendments were more similar to each other and separate from those of the microcosms with BA and BHA amendments. By contrast, while the NMDS of the fungal ITS2 data also separated the fungal communities of all amendments, the effect of sampling time was less drastic than for bacteria (Fig. and ). Sampling time explained 35% of the variation in bacterial community composition in both silt and clay soil (PERMANOVA, df = 1, F = 37.1/37.6, R2 = 0.35, P < 0.001) and the organic amendment type 19% and 18%, respectively (df = 4, F = 4.9/4.8, R2 = 0.19/0.18, P < 0.001). In both soil types, sampling time explained 8% of the variation in fungal community composition (df = 4, F = 5.9/6.6, R2 = 0.08, P < 0.001). The type of amendment explained 25% and 29% of the variation in fungal community composition in silt and clay soil, respectively (df = 4, F = 4.5/6.6, R2 = 0.25/0.29, P < 0.001). The interaction term was significant for both soil types, explaining 8% of the variation in fungal community composition (df = 4, F = 1.5, R2 = 0.08, P < 0.017). According to paired comparisons, bacterial communities in silt soil did not differ between B and BH or between BA and BHA ( ), whereas in clay soil both amendment type and sampling time affected the bacterial communities of all amendment pairs. In silt soil, both amendment type and sampling time affected the fungal communities of almost all amendment pairs; BA and BHA differed only by sampling time ( ). In turn, in clay soil the fungal communities of the control were statistically indistinguishable from both B and BH, and B differed from both BH and BHA only by sampling time.

Differentially abundant bacterial OTUs

Taxa from the 10 most differentially abundant bacterial OTUs for each microcosm from paired comparisons are presented in Fig. . The full list of all significantly differentially abundant bacterial OTUs and their closest associations to taxa in databases are shown in . When comparing control soils with microcosms having organic bark-derived amendments, there were fewer differentially abundant OTUs for the B and BH treatments than for the BA and BHA treatments (Fig. , ). There were differentially abundant OTUs for both soil types from all treatments, although they were among the 10 most significant ones only in some treatments (Fig. , ). Those included OTUs representing, for example, genera of Verrucomicrobia ( Chthoniobacter ), Proteobacteria ( Novosphingobium ) and Bacteroidetes ( Mucilaginibacter ), which were more differentially abundant in B and BH treatments. Among the most significant differentially abundant OTUs in both clay and silt soil with B or BH amendments were two genera of Verrucomicrobia and Rhizobiaceae, but OTUs for B amendments represented genera of Bacteroidetes, Cytophaga in clay and Niastella in silt. These were recorded only from samples of the simulated spring (second sampling). Differentially abundant OTUs for BH amendments included representative genera of Proteobacteria ( Serratia ) in clay and of Bacteroidetes ( Filimonas ) and Verrucomicrobia ( Methylacidiphilaceae ) in silt. In general, differentially abundant OTUs for BA and BHA treatments were more diverse than those for B or BH treatments. These included representatives of many phyla, for example, Cloacimonetes, Fibrobacteres, Firmicutes, Planctomycetes and Synergistetes, which were not recorded from B or BH treatments. OTUs in both soil types with both BA and BHA treatments included representative genera of, for example, Actinobacteria ( Fodinicola ), Chloroflexi ( Anaerolineaceae ), Planctomycetes ( Pirellula ), Proteobacteria ( Devosia, Nitrosospira, Luteimonas, Lysobacter, Mesorhizobium, Pseudoxanthomonas ) and Verrucomicrobia ( Opitutus ). Differentially abundant OTUs for both BA and BHA treatments included genera of Firmicutes ( Romboutsia ) and Bacteroidetes ( Adhaeribacter ) in clay soil, and a genus of Proteobacteria ( Polaromonas ) in silt soil.

Differentially abundant fungal OTUs

Taxa from the 10 most significant differentially abundant fungal OTUs for each microcosm from paired comparisons are presented in Fig. . The full list of all significant differentially abundant OTUs and their closest associations to taxa in databases are provided in . When comparing control soils and microcosms with bark-derived amendments, there were more differentially abundant fungal OTUs in microcosms with B and BH treatments than in those with BA and BHA treatments (Fig. , ). A differentially abundant fungal OTU in control soils relative to microcosms with B amendment was represented by the genus Trichoderma ( T. ivoriense ) in silt. Differentially abundant OTUs that were detected in both silt and clay soils with B or BH amendments comprised many representatives, including Hymenoscyphus, Mucor, Oidiodendron, Peterozyma and members of the family Serendipitaceae. Differentially abundant OTUs with B or BH amendments were represented by the genera Clitopilus and Serendipita in clay soil and Eucasphaeria and Sistotrema in silt. Among the most significant differentially abundant OTUs detected in both soil types with B amendment were representatives of the genus Xenopolyscytalum . OTUs that were more abundant in clay soil with B amendment included, for example, the genera Ceratocystiopsis, Chlara and Hydnomerulius . Differentially abundant OTUs representing the two first-mentioned taxa were observed only in samples from the simulated spring (second sampling) and those of the last-mentioned in samples from autumn after harvest. Among the most significant differentially abundant OTUs in silt soil with B amendment were representatives, for example, of the genus Tetracladium of the family Helotiaceae, and of the genera Thanatephorus and Calyptella of the family Tricholomataceae. Differentially abundant OTUs for BH amendment included representatives of, for example, the genus Pesotum in both soil types, and the genus Paecilomyces in clay. Differentially abundant OTUs in control clay soils compared with microcosms with BA and BHA amendment included representatives of the genus Phialocephala ( P. humicola ) and, correspondingly, in control soils compared with microcosms with BHA amendment, representatives of the genera Trichoderma ( T. hamatum ) in silt and Rhizopus in both silt and clay. Among the most significant differentially abundant OTUs in both soil types with BA or BHA amendments were representatives of the genera Cirrenalia, Fusarium ( F. solani ) and Pseudoproboscispora . The first-mentioned genera were detected in simulated spring samples (second sampling) and the latter after the harvest in autumn (third sampling), and differentially abundant OTUs in clay with both BA and BHA included the genera Natantispora and Funneliformis . The two latter-mentioned taxa were observed only in samples from autumn after harvest (third sampling). Differentially abundant OTUs for BA amendment were represented by the family Bionectriaceae, and for BHA amendment by the genus Chrysosporium in clay.
Factors affecting microbial abundance and community composition in microcosms

The purpose of this experiment was to study whether synergies exist between three very timely global issues: recycling organic material from industrial side-streams, promoting the circular economy and security of supply, and counteracting the degradation of agricultural soils. We studied the potential of cascade-process materials as soil amendments, namely forest-industry coniferous bark by-products, untreated or treated in three incremental ways, in supporting microbial activity and diversity in agricultural soils. Our microcosm experiment simulated an over 1.5-year period in boreal arable soil and is one of the first attempts to investigate soil microbial communities after amendment with bark-derived organic materials. The addition of bark-derived organic amendments changed both the size and composition of the soil microbial communities and supported the crop yield. Consistent with our hypothesis, community changes differed between bacteria and fungi and were linked to the soil type, especially for fungi. Industrial unextracted bark (B) and hot water extracted bark (BH) changed fungal community composition in silt soil and fungal abundance in clay soil, whereas all processed bark materials (BH, BA, BHA) had a greater influence on bacterial numbers, and bark materials from anaerobic digestion (BA, BHA) on bacterial community composition. These results show the same trend as previous findings that anaerobic digestates used as biofertilizers increased bacterial gene copy numbers (Coelho et al. ). Increased bacterial abundance in microcosms with treated amendments may be partially explained by the reduction of inhibiting polyphenolic compounds after cascade processing. The applied hot water treatment aimed at extraction of water-soluble polyphenols for further added-value use. At the same time, the extraction treatment may also have broken the structures of the bark, thus facilitating the microbial digestibility/degradability of the bark (Rasi et al. , Jyske et al. ). However, because the B/BH amendments and the two BA/BHA amendments containing processed slurry were not added at the same C ha−1 rate and induced highly different microcosm soil pH and soil C and N contents, the results are not comparable between the two amendment types and must also be compared with the control (C) treatment. This comparison indicated that the BA/BHA treatments induced a change in the bacterial community in both soils and that the B/BH treatments induced a change in the fungal community in silt soil. The BA/BHA addition increased soil pH compared with the C treatment, whereas the B/BH treatment decreased it. It is known that bacterial community changes are correlated with pH, whereas fungal community changes are not to the same extent, owing to a broader pH tolerance (Rousk et al. ). This could imply that the B/BH-induced changes to the fungal community in silt soil were due to reasons other than pH, such as the bark material itself, and that the bark material therefore has the potential to trigger soil C acquisition through a changed fungal presence. Phospholipid fatty acid profiling has shown that Gram-positive bacteria increased C incorporation into temperate beech forest soil, contributing to the C stock of the entire soil profile (Preusser et al. ), and thus bacteria are also important, but our qPCR and amplicon sequencing approach does not quantitatively differentiate between Gram-positive and -negative bacteria.
As expected, soil type was one of the determinants affecting both bacterial and fungal abundance and especially fungal community composition (for previously reported soil microbiota, see Pakarinen et al. , Rasa et al. ). Both fungal and bacterial gene copy numbers were higher in clay than in silt soil. Indeed, clays have unique physical and chemical characteristics, such as water-retention and cation exchange capacities, a high surface-to-volume ratio and the ability to serve as a reservoir of adsorbed organic C; through these, clay minerals are key to the interaction between microorganisms and the lithosphere (Cuadros ). Microbial interactions with clay minerals are a fundamental component of the processes of soil genesis and functioning, because clay can significantly alter microbial growth and biosynthetic activity by providing nutrients and protection against unfavorable physico-chemical conditions (reviewed by Fomina and Skorochod ). Sampling time, spring before sowing or autumn after harvest, affected bacterial community composition more than that of fungi. Therefore, the type of arable soil and the season must always be considered when estimating the effects of agricultural management on microbial communities (Bossio et al. ).

Effect of amendments and soil type on bacterial taxa

The results indicated that there are bacteria that can take advantage of very different qualities of bark-derived amendment, from fresh industrial bark to highly processed bark material (also containing digested slurry) originating from anaerobic digestion. These included, for instance, cellulolytic Mucilaginibacter , which are active in the decomposition of cellulose and hemicellulose (López-Mondéjar et al. ), Chthoniobacter ( C. flavus ), which grows on many of the saccharides found in plant biomass (Sangwan ), and Novosphingobium , which promotes plant growth and can degrade lignin-related and xenobiotic aromatic compounds (Tiirola et al. , Hashimoto et al. , Notomista et al. , Choi et al. ). Also, the denitrifier genus Rhodanobacter was most abundant after all amendments; it contains members capable of complete denitrification of nitrate, nitrite and N2O to N2 (Prakash et al. ). The detected representatives include the same functionally important microorganisms that were also detected earlier in digestates, such as plant-growth-promoting, denitrifying and cellulolytic bacteria (Coelho et al. ). However, the results suggest that there are bacteria that prefer the unextracted bark (B) or hot water extracted bark (BH) amendments. These included Niastella and Cytophaga , which contain soil plant-associated or endophytic bacteria and are involved in the decomposition of plant-derived compounds (cellulose, chitin and pectin) (Reichenbach and Dworkin ). Similarly to the sawdust amendments reported by Clocchiatti et al. (2021), the BH treatment seemed to benefit Rhizobiaceae, common nitrogen-fixers associated with the roots of legumes and other flowering plants. Some members of this group are also able to solubilize phosphorus (Sridevi and Mallaiah ). Other examples of bacteria that may increase after BH treatment were Filimonas , which have been detected from plant roots and may act as putative carbohydrate degraders, and Serratia , which are known to have antifungal properties, to promote nitrogen-fixing symbionts as plant-growth-promoting bacteria and to act as insect pathogens (Kalbe et al. ; Zhang et al. , Grimont and Grimont ).
Another taxon, Methylacidiphilaceae , includes verrucomicrobial methanotrophs that can oxidize methane (Op den Camp et al. ), which makes them important in the greenhouse gas (GHG) balance. These examples of bacterial taxa suggest that adding unprocessed bark, or even bark from which water-soluble carbohydrates have been extracted for bioenergy use, to arable soil has the potential to increase beneficial soil bacteria, promote nitrogen and carbon cycling and benefit plants directly. The results show clearly that the soil bacterial community comprises a diverse group of bacteria that can take advantage of highly processed bark material with high pH and N content. For example, amendments originating from the anaerobic digestion process (BA, BHA) seem to increase some representatives of the Planctomycetes, which are distributed in a variety of habitats; some are known to grow anaerobically and autotrophically via oxidation of ammonium (Fuerst ). In addition, degraders of many toxins ( Devosia ), nitrite-oxidizing Nitrospira and strictly anaerobic Anaerolineaceae were detected. The genus Devosia was reported to be enriched in soils applied with manure containing the antibiotic compound sulfadiazine (Ding et al. ). Interestingly, a syntrophic relationship between hydrogenotrophic methanogens and species of Anaerolineaceae has been reported (Yamada and Sekiguchi ); these are probably essential microbial partners in the anaerobic digestion process. Thus, most likely at least some of the observed anaerobes originate from the anaerobic digestion process. Notably, we detected OTUs representing Cloacimonetes, Firmicutes and Synergistetes, taxa commonly found in biogas reactors (Solli et al. ), only in microcosms with BA or BHA amendments. The community from the highly processed bark material from the anaerobic digestion process dominated the pre-amendment soil community and could be an example of the community coalescence introduced by Rillig et al. ( ). There were several bacterial taxa that were apparently season-specific or season- and soil-specific. For example, Lysobacter species, detected in the B and BHA treatments only in the simulated spring before sowing, can produce a range of extracellular enzymes and metabolites that are active against other soil organisms, and they are more abundant in soils that suppress the fungal root pathogen Rhizoctonia solani (Gómez Expósito et al. ). However, Lysobacter species were detected previously and were indicative of autumn bulk soil in the same field (Pakarinen et al. ) from which the soil for this experiment was collected. In turn, anaerobic Romboutsia seemed to be clay-specific and showed differential abundance in autumn after harvest. Some species have been isolated from the anaerobic digestion process as well as from soil (Dabrowski et al. , Gao et al. , Gerritsen et al. ). Members of the genus Romboutsia seem to have a versatile array of metabolic capabilities with respect to carbohydrate utilization, fermentation of single amino acids, anaerobic respiration and end products (Gerritsen et al. ). Thus, the period of the growing season in which organic amendments are applied to the fields may be very important in terms of which microbes benefit. It has been demonstrated that Gram-positive bacteria have increased C incorporation into subsoil over time (Preusser et al. ). Thus, Gram-positive bacteria are suggested to be better adapted to resource-limited conditions and to feed on previously processed C sources (Kramer et al. 2008, Wang et al.
), whereas Gram-negative bacteria prefer to use labile C sources (Creamer et al. ). We detected only a few OTU representatives of the major Gram-positive bacterial phyla Actinobacteria and Firmicutes, which were more abundant in microcosms with BA and BHA amendments. However, the majority of differentially abundant OTUs detected from microcosms with amendments were Gram-negative bacteria, such as Proteobacteria, Bacteroidetes, Verrucomicrobia and Planctomycetes. BA and BHA digestates may provide more labile C sources for Gram-negative bacteria, which can quickly come to represent a relatively high proportion of the microbial biomass (Elfstrand et al. ). Subsequently, slow-growing Gram-positive bacteria can utilize more recalcitrant substrates and form a stable C stock. However, differential abundance results cannot be treated as truly quantitative data, and thus the true ratio of Gram-positive to Gram-negative bacteria cannot be estimated with this dataset.

Effect of amendments and soil type on fungal taxa

A closer look at the differentially abundant fungal OTUs for the B or BH amendments revealed that many fungal representatives may originate from bark-associated insects. For example, one of these representatives, Peterozyma toletana , is a common yeast found in the great spruce bark beetle ( Dendroctonus micans ) (Menkis et al. ). Other observed yeast-like representative genera, for example, Ceratocystiopsis and Pesotum (anamorph of Ophiostoma ), have also been reported to be associated with the spruce bark beetle Ips typographus (Viiri and Lieutier ). The genus Mucor was detected as a representative taxon in decomposed wood blocks and suggested to contribute to wood decomposition via the breakdown of complex sugars (Fukasawa et al. , Gómez-Brandón et al. ). Other saprotrophic representative genera for the bark material include the cellulose-degrading genus Chlara (Boberg et al. ), the genus Xenopolyscytalum , an indicator of near-mature and mature forests (Zhao et al. ), Hydnomerulius , a brown-rot cellulose and hemicellulose degrader (Kohler et al. ), and Clitopilus (Raj and Manimohan ), which promotes high biomass in spruce monocultures planted on former arable land (Mihál ). Saprotrophic fungi that originated from the bark-derived organic amendment are certainly important in improving the quality and health of arable soil because they can help decrease losses of mineral nutrients (de Vries et al. ) and increase C sequestration (Six et al. ) and water retention (Beare et al. , Helfrich et al. , Liao et al. ). Moreover, saprotrophic fungi contribute to the suppression of root-infecting fungal pathogens (van Beneden et al. , Xiong et al. , Siegel-Hertz et al. ). Thus, the ability of saprotrophic fungi to attack recalcitrant wood-derived polymers like hemicellulose, cellulose and lignin gives them an advantage for survival in the environment. Indeed, incorporation of wood-derived material in arable soil has been associated with increased saprotrophic fungal biomass (van der Wal et al. , Moll et al. , Reardon and Wuest ). Interestingly, a Hymenoscyphus , a genus containing dark-septate endophyte (DSE) fungi, was representative for silt soil microcosms with BH amendment in autumn after harvest. DSE fungi produce dark melanized hyphae and are found inside plant roots or in rhizosphere soil (Berthelot et al. ), and frequently co-occur with mycorrhizal fungi (Mandyam and Jumpponen ).
DSE fungi can be found in a diverse range of ecosystems and host plants and are of wide taxonomic and functional variety, providing nutrients for plants from SOM and protecting plants from pathogens and harsh soil conditions (Berthelot et al. ). Because of their ability to produce highly melanized and recalcitrant hyphae, DSE fungi have an important role in C sequestration into more stable SOM (Siletti et al. ). DSE fungi are suggested to link plant roots to soil crusts, fixing carbon and nitrogen via the hyphal network, and represent the basis of the Fungal Loop Hypothesis (Porras-Alfaro et al. ). Like mycorrhizal fungi, DSE fungi can protect hosts through the production of antibacterial or antifungal metabolites, physical exclusion of other microorganisms, or melanized hyphae (Mandyam and Jumpponen ). In addition, another two representative OTUs for the microcosms with the B and BH amendments in autumn after harvest were serendipitoid fungi (genus Serendipita , family Serendipitaceae), which are known for their mycorrhizal and endophytic associations with a variety of plant species (Craven and Ray ). Serendipitoids enhance the growth and stress resistance of barley by upregulating several proteins involved in photosynthesis and carbohydrate metabolism (Sepehri et al. ). It has been hypothesized that serendipitoids are able to decompose SOM, contributing to efficient organic matter (OM) turnover and preventing unnecessary losses of C and nutrients (Craven and Ray ). Serendipitoids have recently been recorded after fiber sludge amendment in another Finnish agricultural soil, confirming their relevance in forest-derived organic material (Heikkinen et al. ). Interestingly, a common arbuscular mycorrhizal fungal (AMF) genus, Funneliformis , was representative for microcosms with BA and BHA amendments after harvest. Funneliformis is common in neutral and slightly alkaline soils (Mukerji et al. ), and the BA/BHA material was originally very alkaline. On the other hand, the BA/BHA materials introduced excess N (originating from cattle slurry) into the soil, and N addition has been noted to reduce the diversity and richness of AMF and to suppress spore numbers and hyphal length density (Zhang et al. ). Furthermore, increased abundance of Funneliformis has been detected 3 years after pulp-mill sludge amendment of a field (Rasa et al. ), and Clocchiatti et al. (2021) reported an increase in AMF following sawdust amendment of agricultural soil. Overall, the results suggest that unextracted or mildly treated organic amendments may have the potential to introduce fungi that could promote the resilience and sustainability of arable soil. However, unprocessed bark could potentially introduce plant pathogens (Kazartsev et al. ). Increased abundance of Thanatephorus cucumeris , the teleomorph of the important plant pathogen Rhizoctonia solani , was detected in a microcosm with unextracted bark (B). In turn, a Fusarium species ( F. solani ) was representative for microcosms with BA and BHA amendments in autumn after harvest, and another Fusarium species ( F. culmorum ) was representative for control clay soil compared with the microcosms with B and BH amendments. Fusarium culmorum causes head blight, especially on small-grain cereals such as barley (Scherm et al. ). Because species of Fusarium , like Rhizoctonia , are also saprotrophs, they might have benefited from fresh root and shoot residues after harvest in addition to the bark-derived amendment.
Thus, the results suggest that the response of these fungi depends on the quality of the organic material as well as on the type of pathogen. Along with increasing the abundance of beneficial fungi, the bark-derived amendments might also decrease the abundance of some soil fungi. For example, Phialocephala, Rhizopus and Trichoderma were representative for control soils without bark-derived amendments. Species of Phialocephala include common soil fungi and well-known dark-septate endophytes. The genus Trichoderma includes plant-growth promoters that suppress plant diseases and have been widely used as biocontrol agents (Zin and Badaluddin ). Most Rhizopus species are saprotrophs feeding on a variety of dead organic matter, but some species are also parasitic or pathogenic (Petruzzello ). Our results may be explained by increased competition among fungi, that is, bark-derived organic compounds could favor some fungi over others. Another explanation could be that the conditions created by bark-derived amendments induced these fungi to act as endophytes, resulting in their decreased abundance in soil.
The purpose of this experiment was to study whether synergies exist between three very timely global issues: recycling organic material from industrial side-streams to promote the circular economy, securing supply, and counteracting the degradation of agricultural soils. We studied the potential of cascade-process materials, that is, coniferous bark by-products of the forest industry, either untreated or treated in three increasingly intensive ways, as soil amendments supporting microbial activity and diversity in agricultural soils. Our microcosm experiment simulated a period of over 1.5 years in boreal arable soil and is one of the first attempts to investigate soil microbial communities after amendment with bark-derived organic materials. The addition of bark-derived organic amendments changed both the size and the composition of the soil microbial communities and supported crop yield. Consistent with our hypothesis, community changes differed between bacteria and fungi and were linked to soil type, especially for fungi. Industrial unextracted bark (B) and hot water extracted bark (BH) changed fungal community composition in silt soil and fungal abundance in clay soil, whereas all processed bark materials (BH, BA, BHA) had a greater influence on bacterial numbers, and bark materials from anaerobic digestion (BA, BHA) on bacterial community composition. These results show the same trend as previous findings that anaerobic digestates used as biofertilizers increase bacterial gene copy numbers (Coelho et al. ). The increased bacterial abundance in microcosms with treated amendments may be partially explained by the reduction of inhibitory polyphenolic compounds during cascade processing. The applied hot water treatment aimed at the extraction of water-soluble polyphenols for further added-value use. At the same time, the extraction treatment may also have broken up the structures of the bark, thus facilitating its microbial digestibility and degradability (Rasi et al. , Jyske et al. ). However, because the B/BH amendments and the two BA/BHA amendments that contained processed slurry were not added at the same C ha−1 rate and induced highly different microcosm soil pH and soil C and N contents, the results are not directly comparable between the two amendment types and must instead be compared against the control (C) treatment. This comparison indicated that the BA/BHA treatments changed the bacterial community in both soils, whereas the B/BH treatments changed the fungal community in silt soil. The BA/BHA addition increased soil pH compared with the C treatment, whereas the B/BH treatment decreased it. It is known that bacterial community changes are correlated with pH, whereas fungal community changes are less so, owing to a broader pH tolerance of fungi (Rousk et al. ). This could imply that the B/BH-induced changes to the fungal community in silt soil were due to reasons other than pH, such as the bark material itself, which therefore has the potential to trigger soil C acquisition through a changed fungal presence. Phospholipid fatty acid profiling has shown that Gram-positive bacteria increased C incorporation into temperate beech forest soil, contributing to the C stock of the entire soil profile (Preusser et al. ); thus, bacteria are also important in this respect, but our qPCR and amplicon sequencing approach does not quantitatively differentiate between Gram-positive and Gram-negative bacteria.
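The compositional nature of amplicon data noted here can be made concrete with a toy calculation (the numbers below are invented for illustration and are not data from this study):

```python
import numpy as np

# Hypothetical absolute 16S copy numbers (copies per g soil) for two soils
# that differ only in total microbial load, not in community structure.
absolute = {
    "soil_A": np.array([8e8, 2e8]),   # [Gram-positive, Gram-negative]
    "soil_B": np.array([4e9, 1e9]),   # 5x higher total load, same structure
}

for soil, counts in absolute.items():
    relative = counts / counts.sum()  # what amplicon sequencing reports
    print(soil, "relative abundances:", relative)

# Both soils yield the same 0.8/0.2 split: relative sequencing data alone
# cannot reveal whether either group changed in absolute terms, which is
# why qPCR-based totals and differential abundance results must be
# interpreted separately.
```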
As expected, soil type was one of the determinants affecting both bacterial and fungal abundance, and especially fungal community composition (for previously reported soil microbiota, see Pakarinen et al. , Rasa et al. ). Both fungal and bacterial gene copy numbers were higher in clay than in silt soil. Indeed, clay minerals have unique physical and chemical characteristics, such as high water-retention and cation exchange capacities, a high surface-to-volume ratio, and the ability to serve as a reservoir of adsorbed organic C, which make them key to the interaction between microorganisms and the lithosphere (Cuadros ). Microbial interactions with clay minerals are a fundamental component of the processes of soil genesis and functioning because clay can significantly alter microbial growth and biosynthetic activity by supplying nutrients and providing protection against unfavorable physico-chemical conditions (reviewed by Fomina and Skorochod ). Sampling time, spring before sowing or autumn after harvest, affected bacterial community composition more than that of fungi. Therefore, the type of arable soil and the season must always be considered when estimating the effects of agricultural management on microbial communities (Bossio et al. ).
Our microcosm experiment, which simulated the time span of one growing season, shows that bark-derived organic amendments from sawmills have the potential both to increase the biomass and to diversify the communities of agricultural soil microbes. Most of the stimulated microbes represented groups that are known for their beneficial impacts on plants and soil, such as symbiotic AMF and N-fixing bacteria. The treatment effects depended largely on the intensity of the bark processing and on the type of soil. Soil bacterial communities responded mostly to highly processed bark treated anaerobically with cattle slurry, whereas the response of the soil fungal community to fresh and mildly treated, hot water extracted bark depended on soil type. These differences may be due to soil pH and to the more intensive processing, which eliminates the indigenous microbiota of the bark, renders the bark less recalcitrant for decomposers, and introduces compounds originating from cattle slurry that favor bacteria. If fungi are to be favored, bark or hot water extracted bark are the organic amendments to choose. However, our study was a short-term laboratory experiment, and field studies are needed to verify our observations and, in particular, to estimate the long-term effects of the side-streams from sawmills and biogas production. Our study points the way toward new solutions for increasing the sustainability of soils by utilizing by-products more efficiently.
The potential and pitfalls of artificial intelligence in clinical pharmacology | 1355fdf3-d5d0-4484-bcc8-83e8e1eb8138 | 10014043 | Pharmacology[mh] |
For decades, clinical pharmacologists have embraced the mathematical representation of physiology and explored modeling options to derive relationships between a drug and temporal changes in its pharmacokinetics (PK) and pharmacodynamics. Now, as an evolution toward defining better therapies, we strive toward more digitization. Digitized drug interaction databases, for example, consist of curated qualitative and quantitative data on the various extrinsic and intrinsic factors, including comedications, excipients, food products, organ impairment, and genetics, that can affect human systemic drug exposure. In addition, digital biomarkers (measured by digital devices such as portable, wearable, and implantable sensors) provide new and faster data in real time, giving clinicians a better understanding of how medication impacts a disease and how it interacts with an individual's overall health. With recent advancements in collecting electronic health records and processing patient genomics data, digital twins and virtual populations are becoming achievable. With the development of natural language processing techniques, AI models could use physician notes and laboratory books as data for predictive modeling. With the development of the Internet of Things (a network of devices that work together seamlessly, connecting medical devices and databases), it is now possible to collect more electronic data than ever using wearable devices. The availability of curated databases, real-world evidence databases, patient-centric sampling, and emerging wearable data would provide the foundation for AI/clinical pharmacology (CP) to develop and deliver life-changing medicine for patients. Our expectation for AI-augmented CP is that it will enable accurate predictions, support unbiased decision making, and provide efficient CP systems (Figure ) to deliver the core part of the guidance to the prescriber (e.g., labels, summaries of product characteristics). In this perspective, we examine the intersection of AI and clinical pharmacology, focusing on the potential impact of AI on dose recommendations, drug interactions, variability in PK, and patient stratification/selection. A dose-recommender system based on AI/machine learning (ML), which integrates data across domains, including but not limited to multiple safety and efficacy measures, electronic records about current health status, information about the disease and previous treatment history, and patient-reported outcomes, would provide tailored dosing options for patients, enhancing efficacy and minimizing adverse events. Reinforcement learning-based algorithms have shown potential for dose prediction and for dose modification during treatment, enabling precision dosing for oncology patients. The Dose–Response Network uses deep-learning (DL) approaches that can estimate individual patient outcomes at different intervals of the dose–response curve. The ability of AI to recommend doses under counterfactual conditions remains questionable. However, generative adversarial networks, with their ability to learn from current data and extend that learning to the unknown dose–response surface, could revolutionize individualized dose–response curve predictions.
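To make the reinforcement-learning idea mentioned above concrete, the following minimal sketch shows how tabular Q-learning could map a discretized patient state to a recommended dose. Everything here (dose levels, biomarker bands, the simulated patient response, and the reward function) is invented for illustration; this is not a validated clinical algorithm.

```python
import random

DOSES = [0.0, 0.5, 1.0, 1.5, 2.0]   # hypothetical dose levels
STATES = range(5)                    # discretized biomarker bands

def simulate_step(state, dose):
    """Toy patient model: disease drifts the biomarker up, dose pushes it
    down, and the top dose incurs a crude toxicity penalty."""
    drift = random.choice([0, 1])
    new_state = min(max(state + drift - int(dose), 0), 4)
    reward = -abs(new_state - 2)     # band 2 is the therapeutic target
    if dose >= 2.0:
        reward -= 1.0
    return new_state, reward

# Tabular Q-learning over (state, dose) pairs
Q = {(s, d): 0.0 for s in STATES for d in DOSES}
alpha, gamma, eps = 0.1, 0.9, 0.1
state = 2
for _ in range(20000):
    dose = (random.choice(DOSES) if random.random() < eps
            else max(DOSES, key=lambda d: Q[(state, d)]))
    nxt, r = simulate_step(state, dose)
    best_next = max(Q[(nxt, d)] for d in DOSES)
    Q[(state, dose)] += alpha * (r + gamma * best_next - Q[(state, dose)])
    state = nxt

# Learned policy: recommended dose per biomarker band
print({s: max(DOSES, key=lambda d: Q[(s, d)]) for s in STATES})
```

A real dose-recommender would replace the toy simulator with patient data or a mechanistic PK/pharmacodynamic model and would require extensive safety constraints before any clinical use.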
CP-based impact on prescribing information depends predominantly on studies evaluating drug-drug interactions (DDIs), drug-food interactions, bioavailability, and PK changes in special populations. However, this information is limited compared with the range of potential drug interactions encountered in clinical practice and real-world settings. AI/CP can expand beyond the patient population evaluated in clinical studies. Innovative algorithms based on knowledge graphs (KGs) have shown potential to predict unknown adverse drug reactions, DDIs, and drug-food interactions. Bougiatiotis et al. demonstrated the utility of biomedical literature KGs and link prediction models to assess DDIs in Alzheimer's disease and lung cancer. With the help of the KG framework, existing physiologically based PK (PBPK) expertise, and data sources (see Digital and Data Considerations), clinical pharmacologists can project potentially dangerous DDIs arising from the simultaneous administration of multiple drugs. This provides opportunities to include both known and unknown (potential) DDIs in patient information leaflets. In clinical practice, therapeutic drug monitoring offers dose recommendations for drugs with relatively high variability in PK and a narrow therapeutic index. AI/ML has demonstrated better dose recommendations for propofol and remifentanil, with less error in predicting the bispectral index during anesthesia, than traditional modeling methods. AI/ML approaches can now recognize patterns by identifying complex, nonlinear relationships and the influence of intrinsic and extrinsic factors on PK variability in different subpopulations. We envision that integrating ML capabilities with population-based approaches would help further explain PK variability and offer dose modification options for subgroups of patients. US Food and Drug Administration guidance on clinical trial enhancement strategies suggests including patients with a high chance of showing a disease-related end point (prognostic indicators) and patients who are likely to respond to the treatment (predictive indicators). DL methods can handle a wide array of data (liquid biopsy, pathology imaging, computerized tomography scans, and extensive omics data) and recognize patterns with prognostic or predictive potential. Reducing population heterogeneity involves choosing patients whose baseline disease or biomarker measurements fall within a narrow range. Likewise, excluding patients whose disease or symptoms improve spontaneously, or whose measurements are highly variable, helps to increase study power, reduce costs, and bring new medicines to patients faster. Decreasing variability often relies on a process known as electronic phenotyping, which requires mining large databases of electronic health records and accounting for heterogeneity between patient records and data types. Applying AI technologies, especially ML and DL, to electronic phenotyping can accelerate the identification of eligible patients for clinical trials. AI efforts to automate pharmacometric modeling, the development of neural network/neural ordinary differential equation-based predictive models, and novel algorithm-based clinical trial designs (Table ) lay the foundation for the future of model-based drug development (MBDD).
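As a simplified illustration of the KG link prediction idea described above, the sketch below scores unobserved drug pairs by the overlap of their mechanism neighborhoods. The graph, drug names, and mechanism nodes are all invented; real applications would use curated biomedical KGs and learned embeddings rather than a simple Jaccard score.

```python
from itertools import combinations

# Toy bipartite knowledge graph: (drug, mechanism) edges
edges = {
    ("drugA", "cyp3a4_inhibition"), ("drugB", "cyp3a4_inhibition"),
    ("drugB", "qt_prolongation"),   ("drugC", "qt_prolongation"),
    ("drugC", "cyp2d6_substrate"),  ("drugD", "cyp2d6_substrate"),
    ("drugA", "pgp_substrate"),     ("drugD", "pgp_substrate"),
}
drugs = sorted({d for d, _ in edges})
neigh = {d: {m for dd, m in edges if dd == d} for d in drugs}

def jaccard(a, b):
    """Mechanism-neighborhood overlap as a crude DDI-likelihood score."""
    union = neigh[a] | neigh[b]
    return len(neigh[a] & neigh[b]) / len(union) if union else 0.0

# Rank drug pairs; high scores flag candidate interactions to review
for score, a, b in sorted(((jaccard(a, b), a, b)
                           for a, b in combinations(drugs, 2)), reverse=True):
    print(f"{a}-{b}: {score:.2f}")
```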
We postulate that exploiting AI methods that extract information from unstructured data (e.g., imaging and electronic records) would enhance current approaches for MBDD by improving personalized projections and decision making across clinical trials. Hybridizing ML and pharmacological models helps ML perform well in limited-data scenarios; conversely, ML models help correct the misspecification of pharmacological models (Table ). Meta-analyses of clinical and observational studies aggregate meaningful inferences supporting drug development, but these analyses are hugely time-consuming. However, combining AI and human intelligence, Michelson et al. performed a rapid meta-analysis that generated insights indicating ocular toxicity as a side effect of hydroxychloroquine in a much shorter period (<30 min) than a traditional meta-analysis. Similarly, unsupervised ML has assisted with the automated screening and study selection process for meta-analysis. Efficient and rapid ML-based literature analysis could support well-informed comparative analyses in early clinical trials. A predictive modeling ecosystem including nonlinear mixed-effect models, mechanistic models, PBPK, quantitative systems pharmacology, AI/ML algorithms, structured/unstructured data, and ML-assisted meta-analysis would drive advancements in MBDD. Overall, integrating pharmacometrics and ML is expected to yield synergies in both efficiency and the development of accurate predictive models.
Causality and bias
In general, ML approaches use inductive reasoning, and the inferences are correlative, not causal. True causal understanding instead comes from applying deductive reasoning and the scientific method. In health care, it is vital to understand the root cause of any changes (physiological or pharmacological) causally related to an underlying disease; in the absence of such knowledge, we may make poor medical decisions. Although AI in health care has promise, algorithms are trained on large, heterogeneous datasets with high variability and imbalance, resulting in algorithmic bias. Several other avenues of data accumulation can import bias into an algorithm, including, but not limited to, differences in the infrastructure used for data collection (e.g., wearable devices) and in the quality of training provided to patients and practitioners for data collection.
Data privacy and ethical concerns
Regarding data privacy, much progress has been made with the influential "General Data Protection Regulation" compliance efforts. Most healthcare companies now have greater awareness of data privacy and have structural components, in terms of infrastructure and governance boards, for data use. However, applying these tools in a clinical setting requires robust information technology platforms of the kind operated by commercial technology giants. To protect patient privacy and control the use of data by third parties, strong data privacy regulation (across the globe) is required. Implementing AI tools in clinical practice requires further ethical consideration of data privacy and patient consent. Recommendations and guidance for using AI in clinical practice are scarce or nonexistent; from a patient perspective, consent for data usage and awareness that AI informs life and health decisions need well-formulated moral guidance.
Rapid growth in digitization is the foundation for implementing AI in CP. With the transformation in data and digital systems, AI could help improve dose recommendations, the core CP deliverable, and increase the efficiency of pharmacometrics. Overall, patients can benefit from an expanded CP section of the label (covering expected DDIs and potential unknown adverse reactions). However, the lack of causal inference and unresolved ethical data-sharing issues must be addressed for AI in CP to succeed and thrive. No funding was received for this work. Martin Johnson, Alex Phipps, Dave Boulton, and Megan Gibbs are AstraZeneca employees and shareholders. Megan Gibbs is on the editorial board of the journal Clinical Pharmacology and Therapeutics. Mishal Patel previously worked for AstraZeneca and has nothing to disclose. Mihaela van der Schaar has nothing to disclose.
Quantitative systems pharmacology of the eye: Tools and data for ocular QSP | f703078a-7ecd-4f35-86f6-306d72ec9486 | 10014063 | Pharmacology[mh] |
Diseases of the eye
The eye is the most important sensory organ in the human body, and visual impairment places a huge burden on affected patients. Leading causes of irreversible visual impairment are cataracts, age-related macular degeneration (AMD), glaucoma, diabetic retinopathy, and retinitis pigmentosa. Furthermore, the refractive cornea is of crucial importance for good visual acuity. Owing to its transparency and its high refractive power, it is responsible for the optimal focusing of incident light on the retina. Various diseases of the cornea, such as keratoconus, can lead to a severe deterioration of vision. Additionally, trauma, surgical intervention, or wound-healing processes of the cornea, for example after infections such as trachoma, can trigger fibrotic processes and neovascularization, which can subsequently lead to a loss of corneal transparency and progress to complete stromal opacification and blindness. One of the most common irreversible causes of blindness in industrialized countries is glaucoma. The term glaucoma covers various eye diseases in which the optic nerve is damaged over a progressive course of disease, leading initially to a shrinking visual field and, in later stages, to blindness. In 2010, 60.5 million people worldwide were affected, and this number is expected to increase to 111.8 million people in 2040 owing to increasing life expectancy. Ten percent of patients with glaucoma are bilaterally blind.
Challenges in the development of ophthalmic drugs
The complex anatomy of the eye hampers the quantification of drug exposure at pharmacological sites of action. Likewise, the permeation of therapeutic agents within the eye across different layers and barriers is difficult to track. Ocular drug delivery is governed by the highly specialized anatomy of the eye, which can be differentiated into an anterior and a posterior segment. The anterior segment includes the cornea, conjunctiva, iris, ciliary body, and the lens, whereas the posterior one encompasses the vitreous, retina, choroid, and the optic nerve (Figure ). Ocular drug administration includes noninvasive routes, such as topical or oral application, as well as posterior, periocular, and intravitreal (IVT) injections. Topical administration is the main route of drug delivery in ophthalmic pharmacotherapy because it is easily applied. For this reason, topical solutions, ointments, and suspensions comprise 90% of ocular drug administrations.
In addition, many ocular tissues, such as the cornea and lens, are avascular so that any convective transport of drugs in blood plasma is hardly possible. Transcellular permeation through the cornea is generally favored by drug lipophilicity, whereas higher paracellular permeation correlates with smaller molecule sizes. Hence, small hydrophilic drugs are frequently administered intraocularly (e.g., by injection under the conjunctiva). Maintaining sufficiently high drug levels is generally difficult in the eyes. In particular, several physiological barriers, such as the endothelial monolayer between corneal stroma and aqueous humor, hamper drug distribution within the eyes. A potential alternative is the local administration of small molecules with slow release formulations, based on hydrogels, microparticles, or nanoparticles. A quantitative understanding of drug half‐life is of utmost importance because clearance determines the minimum required release rates in order to maintain the required intraocular therapeutic drug level. The eye is the most important sensory organ in the human body and visual impairment places a huge burden on affected patients. Leading causes for irreversible visual impairment are cataracts, age‐related macular degeneration (AMD), glaucoma, diabetic retinopathy, and retinitis pigmentosa. Furthermore, for a good visual acuity of the eye, the refractive cornea is of crucial importance. Due to its transparency and its high refractive power, it is responsible for the optimal focusing of the incident light on the retina. Various diseases of the cornea, such as keratoconus, can lead to a severe deterioration of vision. Additionally, trauma, surgical intervention, or wound healing processes of the cornea, induced by infection such as trachoma, can trigger fibrotic processes and neovascularization, which can subsequently lead to a loss of corneal transparency and progress to complete stromal opacification and blindness. One of the most common irreversible causes of blindness in industrialized countries is glaucoma. The term glaucoma covers various eye diseases in which the optic nerve is damaged by a progressive course of the disease, which can initially lead to a decrease of the visual field, and in later stages, to blindness. In 2010, 60.5 million people worldwide were affected and this number is expected to increase to 111.8 million people in 2040 due to an increasing life expectancy. , Ten percent of patients with glaucoma are bilaterally blind. The complex anatomy of the eye hampers the quantification of drug exposure at pharmacological sites of action. Likewise, the permeation of therapeutic agents within the eye across different layers and barriers is difficult to track. Ocular drug delivery is governed by the highly specialized anatomy of the eye, , which can be differentiated in an anterior and a posterior segment. The anterior segment includes the cornea, conjunctiva, iris, ciliary body, and the lens, whereas the posterior one encompasses the vitreous, retina, choroid, and the optic nerve (Figure ). Ocular drug administration includes noninvasive routes of administration, such as topical or oral applications, as well as posterior, periocular, and intravitreal (IVT) injections. Topical administration is the main route of drug delivery in ophthalmic pharmacotherapy, because it is easily applied. For this reason, topical solutions, ointments, and suspensions comprise 90% of ocular drug administrations. 
However, a considerable drawback of topical as well as periocular drug administration is limited bioavailability due to tear film turnover, which limits topical residence time, often rendering drug levels in the vitreous and retina insufficient. Posterior segments of the eye, such as the vitreous, retina, and the retinal pigment epithelium (RPE) are usually accessible by IVT application. However, most macromolecules have a relatively short half‐life, such that repeated IVT administration is often necessary. Other periocular routes of drug administration, such as subconjunctival, suprachoroidal, subretinal, and trans‐scleral injections may provide alternatives. Oral application is another possibility, yet it leads to systemic drug distribution with potential off‐target adverse drug effects. In addition, many ocular tissues, such as the cornea and lens, are avascular so that any convective transport of drugs in blood plasma is hardly possible. Transcellular permeation through the cornea is generally favored by drug lipophilicity, whereas higher paracellular permeation correlates with smaller molecule sizes. Hence, small hydrophilic drugs are frequently administered intraocularly (e.g., by injection under the conjunctiva). Maintaining sufficiently high drug levels is generally difficult in the eyes. In particular, several physiological barriers, such as the endothelial monolayer between corneal stroma and aqueous humor, hamper drug distribution within the eyes. A potential alternative is the local administration of small molecules with slow release formulations, based on hydrogels, microparticles, or nanoparticles. A quantitative understanding of drug half‐life is of utmost importance because clearance determines the minimum required release rates in order to maintain the required intraocular therapeutic drug level. Animal models The distribution of drugs in the eyes can rarely be measured directly in humans. Our understanding of the various physiological processes governing ocular pharmacokinetics (PKs) in humans is hence incomplete, and animal models are still widely used in ophthalmology. Rabbits, pigs, dogs, cats, mice, rats, and monkeys are standard model species for PK studies of the eyes despite physiological differences with human morphology and physiology. For example, the consistency of the vitreous is different in rabbits, mice, and humans as is the melanin content. There are also several specific differences in the physiology of the eyes between humans and rabbits, the most widely used animal model in ophthalmology. For example, the serum compartment, the retina vascular density, and the vitreous cavity, as well as the conjunctival surface area are comparatively larger in humans than in rabbits and vice versa for the cornea and lens. Moreover, rabbits have a lower blinking rate. As a consequence, drug PKs may differ significantly between rabbits and humans. For example, the half‐life of ranibizumab was 2.18–2.88 days in rabbit eyes but 7.19 days in human eyes. Similarly, the half‐life of bevacizumab was 4.32–7.56 days in rabbits and 9.82 days in humans. Nevertheless, animal models are often the only way to investigate pathological mechanisms. For this reason, animal models have been established for many diseases of the eyes. , Again, the consistency of the tissues is a major issue. The 3R principles (3R: Reduction, Refinement, and Replacement of animal testing) urge researchers to minimize animal suffering. 
Here, physiologically-based pharmacokinetic (PBPK) modeling has the potential to significantly limit the need for animal experiments through cross-species extrapolation. Whole-body PBPK models for rabbits, mice, rats, and humans are available. This, together with a clear understanding of interspecies differences in ocular physiology, as outlined above, and the use of cell systems derived from humans and animals, significantly supports model-based cross-species extrapolation. Omics data (such as gene expression data) can be a further aid in the understanding of molecular mechanisms. In ophthalmology, the Ocular Compartmental Absorption and Transit (OCAT) model has been successfully applied to predict the PKs of levofloxacin, moxifloxacin, and gatifloxacin in humans based on a rabbit PBPK model. The extrapolation of semimechanistic rabbit models for proteins to clinical applications in humans has been outlined as well. However, concepts for model-based support of first-in-human studies are far from being standard in ophthalmology.
Cell systems
Cell lines as well as primary cells of animal and human origin are used in ophthalmology, whereas organoid models of the eye are still in their infancy. Cell systems may include primary human cells of one type or co-cultures in which different cell types of one tissue are cultivated together to account for intercellular (e.g., cytokine-mediated) communication. Using co-culture systems, it is possible to replicate complex ocular structures, such as the retina, for experimental studies in vitro. The use of polymer-based scaffolds in co-culture systems allows colonization by the different cells and leads to the formation of cellular interactions within a spatial 3D structure in vitro, partially reproducing the in vivo situation. The cornea, for example, is a complex tissue containing five well-defined layers: the epithelium, Bowman's layer, the stroma, Descemet's membrane, and the endothelium. The cornea is the main barrier that topically administered drugs must overcome. Corneal models based on human primary cells are now commercially available and can be used to assess drug permeability. Cytokine-mediated intercellular communication among the various corneal cell types mediates corneal homeostasis, but it also initiates a targeted response to challenges, such as injury or drug application. Concerning the conjunctiva, a simplified fibrotic cell culture model was developed based on the cultivation of human primary fibroblasts from Tenon's space, allowing fibrotic processes to be described via gene expression. Comparing this description with the changes in gene expression triggered in cancer cell lines allowed the identification of an antibiotic counteracting the fibrotic processes. The mechanism of action of this antibiotic is currently being investigated in cell systems of several species. The long-term goal of cell systems in ophthalmology is to understand the complex molecular processes in all cells of an affected tissue in order to predict its response to a specific intervention.
Bioinformatics and omics data support
The abundance of genes and proteins involved in absorption, distribution, metabolism, and excretion (ADME) in specific cells or tissues can be identified from omics data. Likewise, omics data may provide insight into drug pharmacodynamics (PDs), for example, by describing the regulation of drug targets such as receptor proteins.
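A minimal sketch of how such ADME-related abundances might be screened in an expression matrix is shown below; the gene panel and all values are invented, and a real analysis would draw on the curated ocular datasets discussed next and use proper statistics rather than raw fold changes.

```python
import numpy as np

genes = ["CYP1B1", "ABCB1", "SLC22A8", "RHO", "VEGFA"]
# Toy log2 expression values (genes x replicate samples) for two tissues
cornea = np.array([[6.1, 5.9, 6.3], [3.0, 3.2, 2.9], [1.1, 0.9, 1.3],
                   [0.2, 0.1, 0.3], [4.0, 4.2, 3.9]])
retina = np.array([[2.0, 2.2, 1.9], [3.1, 2.9, 3.0], [1.0, 1.2, 0.8],
                   [9.5, 9.7, 9.4], [5.1, 4.9, 5.2]])

# Rank genes by mean log2 fold change between the two tissues
lfc = cornea.mean(axis=1) - retina.mean(axis=1)
for g, x in sorted(zip(genes, lfc), key=lambda t: -abs(t[1])):
    print(f"{g}: {x:+.1f} log2 units (cornea vs. retina)")
# ADME genes (CYPs, ABC/SLC transporters) showing strong tissue bias would
# be candidates for inclusion in a tissue-specific PBPK model.
```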
The gene expression (transcriptome and proteome) of cells and tissues can thus be indicative of specific drug effects, and modern single-cell molecular data can provide detailed descriptions of tissues, down to individual cell types and their molecular capabilities, sometimes at spatial resolution. High-throughput omics data are thus becoming an invaluable tool in drug development despite their inherent noisiness due to biological variability and measurement challenges. In particular, omics data hold great promise for the identification of molecular biomarkers that may predict treatment response or toxicity. Aggregating genes into gene sets or pathways and checking their enrichment among the over- or underexpressed genes frequently matches existing knowledge or provides the basis for further analyses. For instance, in toxicology, "adverse outcome pathways" are frequently checked for activation based on gene expression data, and "points of departure" can thus be estimated. In ophthalmology, omics data are still scarce. Tissues or cells from the eye are not usually covered by large databases, such as GTEx, ENCODE, or LINCS, irrespective of species. Most of the work in dedicated studies deals with the cornea and the retina. Human gene expression data are often derived from postmortem tissue because, for many eye tissues, it is difficult to justify biopsies. The aim of describing, as closely as possible, the in vivo situation in humans is thus hard to accomplish, and artifacts caused by cell culture, tissue degradation, or the use of a model organism are often substantial. A long-established dataset is the OTDB database (GSE41102), which comprises microarray data of 10 human eye tissues, that is, retina, optic nerve head, optic nerve, ciliary body, trabecular meshwork, sclera, lens, cornea, choroid/RPE, and iris. More recently, the Eye in a Disk: eyeIntegration collection provides data from cornea, eyelid, lens, retina, retinal epithelium, and RPE (choroid), whereas the Mega Single Cell Transcriptome Ocular Meta-Atlas focuses exclusively on retinal tissues, and the eye-transcriptome.com dataset was derived from 10 healthy tissues (conjunctiva, cornea, eyelid, lacrimal gland, optic nerve, retina periphery, retina center, choroid/RPE, retinal microglia, and hyalocytes) and nine diseased tissues. For more details on these transcriptomics datasets, with a focus on the anterior segments, please see ref. The low prominence of omics data use in ophthalmology may be explained in part by the scarcity of such data and by the difficulty of finding and handling them. The increasing adoption of findable, accessible, interoperable, and reusable (FAIR) data principles, together with the growing precision, sophistication, and utility of data types (cf. single-cell) and of analysis and integration methods (e.g., deep learning and transfer learning) across ever more ocular tissues and species, gives hope that ophthalmology will increasingly profit from omics data efforts, as has research in fields such as cellular senescence.
Ocular PK/PBPK modeling
Ocular PK models describe drug distribution in different regions of the eye. A basic example is a compartmental PK model for small molecules, which describes the distribution of pilocarpine in the precorneal area and the aqueous humor following topical application in rabbits.
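In its simplest form, such a topical model reduces to two coupled linear ODEs for the precorneal and aqueous-humor compartments. The sketch below integrates them with forward Euler; the rate constants are illustrative placeholders, not the fitted values of the cited rabbit studies.

```python
import numpy as np

k_drain = 0.5    # 1/min, precorneal loss (tear turnover, drainage)
k_abs   = 0.01   # 1/min, transcorneal absorption into aqueous humor
k_el    = 0.02   # 1/min, elimination from aqueous humor

dt, t_end = 0.1, 240.0                 # minutes
n = int(t_end / dt)
A_pre, A_aq = np.zeros(n), np.zeros(n)
A_pre[0] = 1.0                         # normalized topical dose

for i in range(1, n):                  # forward Euler integration
    A_pre[i] = A_pre[i-1] + dt * (-(k_drain + k_abs) * A_pre[i-1])
    A_aq[i]  = A_aq[i-1] + dt * (k_abs * A_pre[i-1] - k_el * A_aq[i-1])

t = np.arange(n) * dt
print(f"peak aqueous amount {A_aq.max():.3f} at t = {t[A_aq.argmax()]:.0f} min")
```

With these assumed constants, only about k_abs/(k_abs + k_drain), roughly 2%, of the dose ever reaches the aqueous humor, qualitatively reproducing the low topical bioavailability discussed earlier.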
Similarly, a four-compartment PK model was used to simulate drug disposition in the periocular space, a choroid-containing transfer compartment, the retina, and an additional distribution compartment. Molecular drug structures are important in semi-physiological compartmental PK models for small molecules. For example, vitreal clearance may govern drug PKs and can be estimated from quantitative structure-property relationships (QSPR) or from correlations with physicochemical drug properties, such as lipophilicity, polar surface area, or molecular weight. The contribution of ADME genes to drug metabolism, in particular cytochrome P450 or phase II enzymes in the vitreous, appears to be limited to a few active proteins. Caco-2 cell permeability was used as a surrogate for permeability through the posterior segment tissues. Further ocular PK models were developed specifically addressing drug transport across the cornea and the conjunctiva, including clearance by tear drainage, and the simulation of the diffusion kinetics of transient solute transport through the cornea for periocular drug administration. Distribution across blood-ocular barriers was simulated to describe the dependency of systemic circulation on vitreous drug levels. In this model, vitreal clearance was also estimated from QSPR data and used analogously to describe the distribution between plasma and the vitreous. Simulated vitreal drug concentrations correlated well with experimental measurements in rabbit eyes. For proteins, a two-compartment PK model of the vitreous and the aqueous chamber was used to simulate vascular endothelial growth factor (VEGF) suppression by the antibody ranibizumab. This semi-physiological model was extended by geometrical and biophysical considerations to investigate IVT PKs. It was found that the ocular half-life of large molecules is proportional to the vitreous diffusion time, which in turn can be estimated from the Stokes–Einstein relation for the diffusion coefficient. The proportionality factor in turn follows from the fractional area of the vitreous/aqueous chamber interface. These results were confirmed in an extended model that additionally included the retina. The model predicts the same half-lives for all three compartments, calculated from the hydrodynamic radius of each molecule. The model also estimated the permeabilities of the RPE and the internal limiting membrane as well as the efficacy of clearance pathways between the retina and the choroid. Diffusion from the vitreous into the aqueous humor was found to be the main elimination pathway, whereas a minor part of the drug is transported between the vitreous and retina, and between the retina and choroid. This three-compartment model was then further extended by additionally including permeability coefficients between the retina and the vitreous or choroid, which were identified from rabbit PK data. Ranibizumab–VEGF binding kinetics were next included in the model to simulate ranibizumab treatment of human patients with wet AMD. Notably, the extended model allows IVT PKs to be coupled with VEGF suppression profiles in the retina and aqueous humor as a PD readout. An extension of compartmental PK models for proteins is provided by partial differential equation-based models, pioneered by the work of Missel, who developed an anatomically accurate geometric model to simulate IVT injections in rabbits, monkeys, and humans.
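The scaling argument above can be written out explicitly. With illustrative values (T = 310 K, vitreous viscosity η ≈ 0.7 mPa·s, hydrodynamic radius r_H ≈ 5 nm for an IgG-sized protein, and vitreous radius R_vit ≈ 9 mm; all assumed here for a back-of-envelope estimate):

```latex
D = \frac{k_B T}{6\pi \eta r_H} \approx 6.5 \times 10^{-11}\,\mathrm{m^2\,s^{-1}},
\qquad
\tau_{\mathrm{diff}} \sim \frac{R_{\mathrm{vit}}^2}{D} \approx 1.2 \times 10^{6}\,\mathrm{s} \approx 14\ \mathrm{days},
\qquad
t_{1/2} \propto \tau_{\mathrm{diff}}
```

This lands on the same order of magnitude as the reported IVT half-lives of therapeutic antibodies; the actual proportionality factor depends on the fractional vitreous/aqueous interface area, as noted above.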
Missel's model considers outer surfaces as well as interior structures in specific coordinate systems to account for mass flow, pressure, and concentration. It was validated for rabbits using drugs with molecular weights up to 157 kDa. Extensions of this anatomically accurate geometric model for rabbit eyes describe the diffusion of IgG and Fab in the retina and the RPE/choroid. The model was also used as a spatio-temporal model for drugs against macular degeneration as well as to describe drug delivery from an episcleral implant. Early examples of PBPK modeling in ophthalmology include representations of the cornea, iris, lens, and aqueous humor to describe the intraocular distribution of pilocarpine in rabbits, or of timolol in a physiological ocular model in rats. An extension of these earlier models is the OCAT model, which encompasses the aqueous and vitreous humor, retina, ciliary body, iris, choroid, cornea, lens, and sclera. A PBPK model has also been developed for therapeutic antibodies, considering segmentation into the aqueous humor, retina, and vitreous humor. Physiologically relevant PK models have been used to compare ocular exposure for different topically administered drug forms. Moreover, PBPK models are of particular relevance for the design of treatment strategies to optimize dosing schedules or pharmaceutical formulations. The OCAT model was used to investigate the bioavailability of topical ophthalmic suspension formulations. PBPK models may be further informed by targeted animal model data.
Ocular effect models
The biochemical and physiological effects of a drug on the body can be described through PD modeling. Classically, such PD models describe the dose–response relation at a single molecular target or pathway. Alternatively, associated clinical end points can be considered. With regard to ophthalmology, there are several mechanistic studies on eye biomechanics as well as on eye development. In contrast, only a few descriptions of PD models of the eye are available in the literature. Among these are VEGF-A binding and central macular thickness as PD end points in maximum-effect models. In another study, an indirect-response PD model was used to simulate the decrease in intraocular pressure (IOP) following topical administration of a single dose of timolol maleate. Detailed network models from cellular systems biology, however, are even rarer. Such network models may generally involve stoichiometric models of cellular metabolism, ordinary differential equation-based intracellular signaling cascades, or interaction maps. The few studies using such extended models for the metabolism of the eyes concern the identification of marker metabolites in the aqueous humor of patients with cataract or the role of sphingolipids in retinal pathophysiology. The reason for the limited availability of systems biology network models may largely lie in the lack of adequate large-scale molecular data from different eye tissues, which would otherwise provide a knowledge base for subsequent network analyses. An exception in this regard is the work of the EYE-RISK Consortium, focused on the cornea, which enables the cross-omics investigation of metabolomics, genomics, and disease pathways. Existing signaling models of the eye mainly address the functioning of photoreceptors, for example, a phototransduction model in rod cells.
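The indirect-response structure mentioned above for IOP can be stated generically as a type-I inhibition model (the symbols are generic, not the published timolol estimates):

```latex
\frac{dR}{dt} = k_{\mathrm{in}}\left(1 - \frac{I_{\max}\, C(t)}{IC_{50} + C(t)}\right) - k_{\mathrm{out}}\, R(t),
\qquad
R(0) = \frac{k_{\mathrm{in}}}{k_{\mathrm{out}}}
```

Here R(t) is the IOP, C(t) the drug concentration at the site of action, and inhibition of the production rate k_in (e.g., of aqueous humor inflow) lowers R toward a new steady state R_ss = (k_in/k_out)(1 - I_max C/(IC_50 + C)).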
A single‐cell atlas was compiled of cornea, iris, ciliary body, NR, RPE, and choroid in humans and pigs, which was also used to develop a disease map of genes involved in different eye disorders. Ocular quantitative systems physiology Quantitative systems physiology (QSP) aims for the integration of cellular models from computational systems biology into PBPK models to overcome the focus on isolated drug targets. Considering the whole body, such multiscale models allow to simultaneously describe drug exposure in plasma or ocular tissue as well as the resulting drug‐induced response within the cellular networks. QSP models have among others been used to simulate cellular signaling models within the context of whole‐body PBPK models. Likewise, metabolic network models have been integrated in PBPK models. Finally, gene expression data have been correlated with on‐and‐off target drug exposure. Not unexpectedly, many QSP studies deal with central internal organs, such as the liver, kidneys, or the heart. Applications for the eyes, however, are lacking. This is not surprising given the limited amount of cellular effect models in ophthalmology, as discussed above. However, QSP concepts bear great promise in ophthalmology, because they allow PD analyses in segments of the eye which are experimentally not accessible, at least not in humans. Here, cellular systems play a crucial role because, ideally, they allow a systematic and dense sampling of data, in terms of both time and drug exposure, without the need of animal euthanizations. , This allows to experimentally track the longitudinal emergence of drug responses at different medium concentrations in cell systems. Such data can in turn be contextualized in QSP models to correlate downstream drug responses at the cellular scale with upstream dose administration at the patient level. This includes both markers for drug efficacy as well as toxicity. The latter can be described in detail by adverse outcome pathways. Ocular QSP models can thus be used to screen the therapeutic window of a drug for various treatment regimes. Ocular QSP models should be based on physiologically based descriptions of the eye or at least of its relevant segments to quantify in vivo drug exposure (Figure ). Availability of ocular PBPK models for both small molecules as well as therapeutic proteins is an important prerequisite so that relevant regions of the eye are mechanistically represented. Drug effects can then be described with mechanistic effect (PD) models to support the establishment of dose‐effect models. Given the rather limited number of cellular systems biology models in ophthalmology, molecular gene/protein interaction and regulation networks appear to be a promising approach. With the help of a dense sampling scheme, including different doses and timepoints, it is then possible to establish dose‐effect correlations for marker genes or pathways to perform differential network analyses. The integration of effect models in PBPK models support the contextualization of omics data in a systemic context, allowing detailed dose–response correlations through reverse dosimetry. In the future, ocular QSP models can enable important insights in ophthalmology, including the identification of optimal dosing schedules or the comparison of different routes of administration as well as formulations. Ocular QSP models may also be used to analyze adverse side effects in the eyes by quantifying the effects of off‐target drug exposure, for example, after oral drug administration. 
The distribution of drugs in the eyes can rarely be measured directly in humans. Our understanding of the various physiological processes governing ocular pharmacokinetics (PKs) in humans is hence incomplete, and animal models are still widely used in ophthalmology. Rabbits, pigs, dogs, cats, mice, rats, and monkeys are standard model species for PK studies of the eyes despite differences from human morphology and physiology. For example, the consistency of the vitreous differs between rabbits, mice, and humans, as does the melanin content. There are also several specific differences in the physiology of the eyes between humans and rabbits, the most widely used animal model in ophthalmology. For example, the serum compartment, the retinal vascular density, the vitreous cavity, and the conjunctival surface area are comparatively larger in humans than in rabbits, and vice versa for the cornea and lens. Moreover, rabbits have a lower blinking rate. As a consequence, drug PKs may differ significantly between rabbits and humans. For example, the half-life of ranibizumab was 2.18-2.88 days in rabbit eyes but 7.19 days in human eyes. Similarly, the half-life of bevacizumab was 4.32-7.56 days in rabbits and 9.82 days in humans. Nevertheless, animal models are often the only way to investigate pathological mechanisms. For this reason, animal models have been established for many diseases of the eyes. Again, the consistency of the tissues is a major issue. The 3R principles (Reduction, Refinement, and Replacement of animal testing) urge researchers to minimize animal suffering. Here, physiologically-based pharmacokinetic (PBPK) modeling has the potential to significantly limit the need for animal experiments through cross-species extrapolation. Whole-body PBPK models for rabbits, mice, rats, and humans are available. This, together with a clear understanding of interspecies differences in ocular physiology, as outlined above, and the use of cell systems derived from humans and animals, significantly supports model-based cross-species extrapolation. Omics data (such as gene expression data) can be a further aid in the understanding of molecular mechanisms. In ophthalmology, the Ocular Compartmental Absorption and Transit (OCAT) model has been successfully applied to predict the PKs of levofloxacin, moxifloxacin, and gatifloxacin in humans based on a rabbit PBPK model. Extrapolation of semimechanistic rabbit models for proteins to clinical applications in humans has been outlined as well. However, concepts for model-based support of first-in-human studies are far from being standard in ophthalmology. Cell lines as well as primary cells of animal and human origin are used in ophthalmology, whereas organoid models of the eye are still in their infancy. Cell systems may include primary human cells of one type or co-cultures in which different cell types of one tissue are cultivated together to account for intercellular (e.g., cytokine-mediated) communication. Using co-culture systems, it is possible to replicate complex ocular structures, such as the retina, for experimental studies in vitro. The use of polymer-based scaffolds in co-culture systems allows colonization by the different cells and leads to the formation of cellular interactions within a spatial 3D structure in vitro, making it possible to partially reproduce the in vivo situation.
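As a side note, the species differences in vitreal half-life quoted above can be made concrete by converting the half-lives into first-order elimination rate constants. The following minimal sketch uses only the values from the text; treating the rate ratio as a naive rabbit-to-human scaling factor is our illustrative simplification, since real extrapolation relies on PBPK models as discussed.

```python
import numpy as np

# Vitreal half-lives (days) as quoted in the text; rabbit values are ranges.
half_lives = {
    "ranibizumab": {"rabbit": (2.18, 2.88), "human": 7.19},
    "bevacizumab": {"rabbit": (4.32, 7.56), "human": 9.82},
}

def k_el(t_half_days):
    """First-order elimination rate constant (1/day) from a half-life."""
    return np.log(2) / t_half_days

for drug, data in half_lives.items():
    k_rabbit = np.mean([k_el(t) for t in data["rabbit"]])
    k_human = k_el(data["human"])
    # Ratio of elimination rates as a naive cross-species scaling factor.
    print(f"{drug}: k_el rabbit ~{k_rabbit:.3f}/d, human {k_human:.3f}/d, "
          f"rabbit/human ratio {k_rabbit / k_human:.2f}")
```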
The cornea, for example, is a complex tissue containing five well-defined layers: the epithelium, Bowman's layer, stroma, Descemet's membrane, and the endothelium. The cornea is the main barrier that topically administered drugs must overcome. Corneal models based on human primary cells are now commercially available and can be used to assess drug permeability. Cytokine-mediated intercellular communication of the various corneal cell types mediates corneal homeostasis, but it also initiates a targeted response to challenges, such as injury or drug application. Concerning the conjunctiva, a simplified fibrotic cell culture model was developed based on the cultivation of human primary fibroblasts of Tenon's space, allowing fibrotic processes to be described in terms of gene expression. Comparing this description to changes in gene expression triggered in cancer cell lines made it possible to identify an antibiotic counteracting the fibrotic processes. The mechanism of action of this antibiotic is currently being investigated in cell systems of several species. The long-term goal of cell systems in ophthalmology is to understand the complex molecular processes in all cells of an affected tissue in order to be able to predict its response to a specific intervention. The abundance of genes and proteins involved in absorption, distribution, metabolism, and excretion (ADME) in specific cells or tissues can be identified from omics data. Likewise, omics data may provide insight into drug pharmacodynamics (PDs; e.g., by describing the regulation of drug targets, such as receptor proteins). The gene expression (transcriptome and proteome) of cells and tissues can thus be indicative of specific drug effects, and modern single-cell molecular data can provide detailed descriptions of tissues, down to the individual cell types and their molecular capabilities, sometimes at spatial resolution. High-throughput omics data are thus becoming an invaluable tool in drug development despite their inherent noisiness due to biological variability and measurement challenges. In particular, omics data hold great promise to enable the identification of molecular biomarkers, which may predict treatment response or toxicity. Aggregating genes into gene sets or pathways and checking their enrichment among the over- or underexpressed genes frequently matches existing knowledge or provides the basis for further analyses. For instance, in toxicology, "adverse outcome pathways" are frequently checked for their activation based on gene expression data, and "points of departure" can thus be estimated. In ophthalmology, omics data are still scarce. Tissues or cells from the eye are not usually covered by large databases, such as GTEx, ENCODE, or LINCS, irrespective of species. Most of the work in dedicated studies deals with the cornea and the retina. Human gene expression data are often derived from postmortem tissue, because, for many eye tissues, it is difficult to justify biopsies. The aim of describing, as closely as possible, the in vivo situation in humans is thus hard to accomplish, and artifacts caused by cell culture, tissue degradation, or the use of a model organism are often substantial. A long-established dataset is the OTDB database (GSE41102), which comprises microarray data of 10 human eye tissues, that is, retina, optic nerve head, optic nerve, ciliary body, trabecular meshwork, sclera, lens, cornea, choroid/RPE, and iris.
More recently, the Eye in a Disk: eyeIntegration collection provides data from cornea, eyelid, lens, retina, retinal epithelium, and RPE (choroid), whereas the Mega Single Cell Transcriptome Ocular Meta-Atlas focuses exclusively on retinal tissues; the eye-transcriptome.com dataset was derived from 10 healthy tissues (conjunctiva, cornea, eyelid, lacrimal gland, optic nerve, retina periphery, retina center, choroid/RPE, retinal microglia, and hyalocytes) and nine diseased tissues. For more details on these transcriptomics datasets with a focus on the anterior segments, please see ref. The low prominence of omics data use in ophthalmology may be explained in part by the scarcity of such data and by the difficulty of finding and handling them. The increasing adoption of findable, accessible, interoperable, reusable (FAIR) data principles and the increasing precision, sophistication, and utility of the data types (cf. single-cell) and of the methods of analysis and integration (e.g., deep learning and transfer learning), for an increasing number of ocular tissues and species, give hope that ophthalmology will increasingly profit from omics data efforts, similar to research in fields such as cellular senescence.

PK/PBPK modeling

Ocular PK models describe drug distribution in different regions of the eye. A basic example is a compartmental PK model for small molecules which describes the distribution of pilocarpine in the precorneal area as well as the aqueous humor following topical application in rabbits. Similarly, a four-compartment PK model was used to simulate drug disposition in the periocular space, a choroid-containing transfer compartment, the retina, and an additional distribution compartment. Molecular drug structures are important in semi-physiological compartmental PK models for small molecules. For example, vitreal clearance may govern drug PKs and can be estimated based on quantitative structure-property relationships (QSPR) or on correlations with physicochemical drug properties, such as lipophilicity, polar surface area, or molecular weight. The contribution of ADME genes to drug metabolism, in particular cytochrome P450 or phase II enzymes in the vitreous, appears to be limited to a few active proteins. Caco-2 cell permeability was used as a surrogate for permeability through the posterior segment tissues. Further ocular PK models were developed specifically addressing drug transport across the cornea and the conjunctiva, including clearance by tear drainage, or the simulation of diffusion kinetics of transient solute transport through the cornea for periocular drug administration. Distribution across blood-ocular barriers was simulated to describe the dependency of vitreous drug levels on the systemic circulation. In this model, vitreal clearance was also estimated from QSPR data and used analogously to describe the distribution between plasma and the vitreous. Simulations of vitreal drug concentrations correlated well with experimental measurements in rabbit eyes. For proteins, a two-compartment PK model of the vitreous and the aqueous chamber was used to simulate vascular endothelial growth factor (VEGF) suppression by the antibody ranibizumab. This semi-physiological model was extended by geometrical and biophysical considerations to investigate IVT PKs. It was found that the ocular half-life of large molecules is proportional to the vitreous diffusion time, which in turn can be estimated from the Stokes-Einstein relation for the diffusion coefficient.
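A minimal numerical sketch of this estimate follows. The Stokes-Einstein expression for the diffusion coefficient is standard; the vitreous viscosity, the vitreous length scale, and the hydrodynamic radii below are rough placeholder values chosen for illustration, and the geometry-dependent proportionality factor (discussed next) is deliberately left out.

```python
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant (J/K)
T = 310.15           # body temperature (K)
eta = 0.8e-3         # assumed vitreous viscosity (Pa*s); placeholder value

def diffusion_coefficient(r_h_nm):
    """Stokes-Einstein: D = k_B*T / (6*pi*eta*r_h), with hydrodynamic radius r_h."""
    return k_B * T / (6 * np.pi * eta * r_h_nm * 1e-9)

def vitreous_diffusion_time(r_h_nm, L_vit_m=0.01):
    """Characteristic diffusion time t ~ L^2 / D for an assumed vitreous scale L."""
    return L_vit_m**2 / diffusion_coefficient(r_h_nm)

# Hydrodynamic radii (nm) are rough literature-scale values for illustration.
for name, r_h in [("Fab (~50 kDa)", 2.7), ("IgG (~150 kDa)", 5.0)]:
    t_days = vitreous_diffusion_time(r_h) / 86400
    print(f"{name}: D = {diffusion_coefficient(r_h):.2e} m^2/s, "
          f"diffusion time ~ {t_days:.1f} days (half-life proportional to this)")
```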
The proportionality factor in turn follows from the fractional area of the vitreous/aqueous chamber interface. These results were confirmed in an extended model which additionally included the retina. The model predicts the same half-lives for all three compartments, calculated from the hydrodynamic radius of each molecule. The model also estimated the permeabilities of the RPE and the internal limiting membrane as well as the efficacy of clearance pathways between the retina and the choroid. Diffusion from the vitreous into the aqueous humor was found to be the main elimination pathway, whereas a minor part of the drug is transported between the vitreous and retina, and between the retina and choroid. This three-compartment model was then further extended by additionally including permeability coefficients between the retina and vitreous or choroid, which were identified from rabbit PK data. Ranibizumab-VEGF binding kinetics were next included in the model to simulate ranibizumab treatment of human patients with wet-AMD. Notably, the extended model makes it possible to couple IVT PKs with suppression profiles of VEGF in the retina and aqueous humor as a PD readout. An extension of compartmental PK models for proteins is partial differential equation-based models, pioneered by the work of Missel, who developed an anatomically accurate geometric model to simulate IVT injections in rabbits, monkeys, and humans. The model considers outer surfaces as well as interior structures in specific coordinate systems to account for mass flow, pressure, and concentration. It was validated for rabbits using drugs with a molecular weight of up to 157 kDa. Extensions of this anatomically accurate geometric model for rabbit eyes describe the diffusion of IgG and Fab in the retina and the RPE/choroid. The model was also used as a spatio-temporal model for drugs against macular degeneration as well as to describe drug delivery from an episcleral implant. First examples of PBPK modeling in ophthalmology include representations of the cornea, iris, lens, and aqueous humor to describe the intraocular distribution of pilocarpine in rabbits or of timolol in a physiological ocular model in rats. An extension of these earlier models is the OCAT model, which encompasses aqueous and vitreous humor, retina, ciliary body, iris, choroid, cornea, lens, and sclera. A PBPK model has also been developed for therapeutic antibodies, considering segmentation into the aqueous humor, retina, and vitreous humor. Physiologically relevant PK models have been used to compare ocular exposure for different topically administered drug forms. Moreover, PBPK models are of particular relevance for the design of treatment strategies to optimize dosing schedules or pharmaceutical formulations. The OCAT model was used to investigate the bioavailability of topical ophthalmic suspension formulations. PBPK models may be further informed by targeted animal model data.

Ocular effect models

The biochemical and physiological effect of a drug on the body can be described through PD modeling. Classically, such PD models describe the dose-response relation at a single molecular target or pathway. Alternatively, associated clinical end points can be considered. With regard to ophthalmology, there are several mechanistic studies on eye biomechanics as well as on eye development. In contrast, only a few descriptions of PD models of the eye are available in the literature. Among these are binding of VEGF-A or central macular thickness in a maximum effect model as PD end points.
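A maximum effect (Emax) model of the kind mentioned above can be written in a few lines. In the sketch below, the parameter values (baseline central macular thickness, maximal drug-induced reduction, EC50) are hypothetical and serve only to illustrate the functional form.

```python
import numpy as np

def emax_model(conc, e0, emax, ec50, hill=1.0):
    """Sigmoidal maximum-effect model: E = E0 + Emax * C^h / (EC50^h + C^h)."""
    c = np.asarray(conc, dtype=float)
    return e0 + emax * c**hill / (ec50**hill + c**hill)

# Hypothetical parameters for illustration only: baseline central macular
# thickness 420 um, maximal drug-induced reduction 120 um, EC50 1 ug/mL.
conc = np.logspace(-2, 2, 5)  # drug concentration (ug/mL)
cmt = emax_model(conc, e0=420.0, emax=-120.0, ec50=1.0)
for c, e in zip(conc, cmt):
    print(f"C = {c:8.2f} ug/mL -> predicted CMT = {e:6.1f} um")
```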
In another study, an indirect response PD model was used to simulate the decrease in intraocular pressure (IOP) following topical administration of a single dose of timolol maleate. Detailed network models from cellular systems biology, however, are even rarer. Such network models may generally involve stoichiometric models of cellular metabolism, ordinary differential equation-based intracellular signaling cascades, or interaction maps. The few studies using such extended models for the metabolism of the eye concern the identification of marker metabolites in the aqueous humor of patients with cataract or the role of sphingolipids in retinal pathophysiology. The reason for the limited availability of systems biology network models may largely lie in the lack of adequate large-scale molecular data from different eye tissues, which would otherwise provide a knowledge base for subsequent network analyses. An exception in this regard is the work of the EYE-RISK Consortium, focused on the cornea, which enables the cross-omics investigation of metabolomics, genomics, and disease pathways. Existing signaling models of the eye mainly address the functioning of photoreceptors to describe a phototransduction model in rod cells. A single-cell atlas was compiled of the cornea, iris, ciliary body, NR, RPE, and choroid in humans and pigs, which was also used to develop a disease map of genes involved in different eye disorders.

Ocular quantitative systems physiology

Quantitative systems physiology (QSP) aims for the integration of cellular models from computational systems biology into PBPK models to overcome the focus on isolated drug targets. Considering the whole body, such multiscale models make it possible to simultaneously describe drug exposure in plasma or ocular tissue as well as the resulting drug-induced response within the cellular networks. QSP models have, among other applications, been used to simulate cellular signaling models within the context of whole-body PBPK models. Likewise, metabolic network models have been integrated into PBPK models. Finally, gene expression data have been correlated with on- and off-target drug exposure. Not unexpectedly, many QSP studies deal with central internal organs, such as the liver, kidneys, or the heart. Applications for the eyes, however, are lacking. This is not surprising given the limited number of cellular effect models in ophthalmology, as discussed above. However, QSP concepts bear great promise in ophthalmology, because they allow PD analyses in segments of the eye which are experimentally not accessible, at least not in humans. Here, cellular systems play a crucial role because, ideally, they allow a systematic and dense sampling of data, in terms of both time and drug exposure, without the need for animal euthanasia. This makes it possible to experimentally track the longitudinal emergence of drug responses at different medium concentrations in cell systems. Such data can in turn be contextualized in QSP models to correlate downstream drug responses at the cellular scale with upstream dose administration at the patient level. This includes markers for both drug efficacy and toxicity. The latter can be described in detail by adverse outcome pathways. Ocular QSP models can thus be used to screen the therapeutic window of a drug for various treatment regimens. Ocular QSP models should be based on physiologically based descriptions of the eye, or at least of its relevant segments, to quantify in vivo drug exposure (Figure ).
Availability of ocular PBPK models for both small molecules and therapeutic proteins is an important prerequisite so that relevant regions of the eye are mechanistically represented. Drug effects can then be described with mechanistic effect (PD) models to support the establishment of dose-effect models. Given the rather limited number of cellular systems biology models in ophthalmology, molecular gene/protein interaction and regulation networks appear to be a promising approach. With the help of a dense sampling scheme, including different doses and time points, it is then possible to establish dose-effect correlations for marker genes or pathways and to perform differential network analyses. The integration of effect models into PBPK models supports the contextualization of omics data in a systemic context, allowing detailed dose-response correlations through reverse dosimetry. In the future, ocular QSP models can enable important insights in ophthalmology, including the identification of optimal dosing schedules or the comparison of different routes of administration as well as formulations. Ocular QSP models may also be used to analyze adverse side effects in the eyes by quantifying the effects of off-target drug exposure, for example, after oral drug administration. In the following, the application of ocular QSP to the development of new drug therapies will be discussed exemplarily for the case of glaucoma, one of the most severe eye diseases worldwide. The main risk factor for the development of glaucoma is an increase in IOP. The reduction of IOP is currently the only therapy that has been proven to slow the progression of the disease. In the majority of patients with glaucoma, IOP can be adjusted to physiological values by daily application of hypotensive eye drops. The active substances and the mechanisms of action vary and are selected according to the cause of the glaucoma disease, the patient's age, and the level of IOP. However, side effects of these therapies, such as dry eye disease caused by preservatives or allergic reactions, but also insufficient therapeutic efficacy or adherence, may make alternative, permanent forms of therapy necessary for the treatment of glaucoma. Besides coagulation procedures to reduce aqueous humor production, or laser trabeculoplasty, which increases the outflow of aqueous humor and thus lowers the IOP, surgical interventions are frequently used for long-term glaucoma therapy. Fistulating interventions, such as trabeculectomy and deep sclerectomy, predominantly drain aqueous humor into the subconjunctival space via surgically created drainage channels. Conventional surgical glaucoma therapies using trabeculectomy and deep sclerectomy, as well as the implantation of alloplastic glaucoma drainage implants, are often associated with problems in terms of long-term efficiency. The long-term success rates of trabeculectomies are estimated to lie around 40%-50%. Frequently, a renewed increase in IOP requiring additional interventions is caused by excessive fibrotic scar formation during healing, resulting in drainage resistance or closure and, ultimately, therapy failure. In order to prevent fibrosis in fistulating glaucoma therapy, cytostatic agents are currently used which inhibit the proliferation of fibroblasts and their transformation into myofibroblasts, thereby slowing down or preventing scar formation and maintaining the functionality of the surgically created drainage pathways in the longer term.
The main drugs used are mitomycin C and 5-fluorouracil, which both inhibit fibroblast cell division. Due to the non-specificity of the cytostatic effect, however, the use of these drugs is associated with side effects and often requires a renewed surgical intervention. As outlined above, establishment of an adequate cellular system is of key importance to analyze drug effects in vitro. In this use case, the anterior chamber, the trabecular meshwork (through which a major fraction of the aqueous humor exits the eye), the tenon (Tenon's space, which dominates the fibrotic scarring after glaucoma surgery), and the conjunctiva are of particular interest. Tenon fibroblasts are considered to occupy the conjunctival main drainage area, and their myofibroblast transformation triggers post-surgery fibrosis. RNA-seq gene expression data (for rabbit) in a time series after trabeculectomy have been reported, as well as a mouse dataset describing a glaucoma filtration surgery model. Primary human tenon fibroblasts (hTFs) have also been used to investigate fibrotic processes after trabeculectomy in glaucoma therapy. An important question here is the validity of this specific cell system, because cultivation of primary tenon fibroblasts causes molecular changes which may already turn them fibrotic, at least in part. These molecular changes are characterized by an increased proliferation rate compared to the in vivo situation. Nevertheless, protein analyses of the primary tenon fibroblast cell culture system showed that the typical fibrosis marker alpha smooth muscle actin (α-SMA), which also marks the transformation of fibroblasts into fibrotically active myofibroblasts, is only expressed to a very low extent. Only stimulation with cytokines, such as transforming growth factor beta (TGF-β1), induces notable α-SMA expression and thus indicates myofibroblast transformation; the resulting myofibroblasts are also characterized by an increased expression rate of other fibrotic markers of the extracellular matrix (ECM; collagens and fibronectin), whereby the cellular behavior resembles the in vivo situation. It can hence be concluded that primary hTFs are not already "profibrotic," so that analyses using this cell culture model provide valid insights into the mechanisms of fibrotic scar formation after glaucoma surgery. Having established a cellular system, drug-induced responses need to be identified and characterized next. For the cellular system of primary hTFs, specific molecular mechanisms behind fibrosis are the upregulation of actins, the downregulation of CD34, and the upregulation of inflammatory cytokines such as IL-6, IL-11, and inflammatory BMP6. The macrolide antibiotic Josamycin (JM) reverses these molecular mechanisms according to human cancer cell line data from the CMap, indicating that JM could be an inhibitor of fibrosis. Follow-up experiments validated the predictive value of the cellular system: JM indeed showed an inhibitory effect on hTF proliferation in a concentration-dependent manner and suppressed the synthesis of ECM components. In hTFs stimulated with TGF-β1, JM specifically inhibited α-SMA expression, suggesting that it inhibits the transformation of fibroblasts into fibrotic myofibroblasts. In addition, a decrease of ECM components, such as fibronectin, which is involved in in vivo scarring, was observed. Thus, JM may be a promising candidate for the treatment of fibrosis after glaucoma filtration surgery or drainage device implantation in vivo.
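Concentration-dependent inhibition data of this kind are typically summarized by fitting a Hill-type inhibition curve and reporting an IC50. The following sketch illustrates such a fit; the JM concentrations and proliferation readouts are hypothetical values, not data from the cited experiments.

```python
import numpy as np
from scipy.optimize import curve_fit

def inhibition(conc, ic50, hill):
    """Fractional proliferation relative to control under a Hill-type inhibitor."""
    return 1.0 / (1.0 + (conc / ic50) ** hill)

# Hypothetical proliferation readouts (fraction of untreated control) at
# increasing JM medium concentrations; illustrative numbers only.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])  # ug/mL
proliferation = np.array([0.98, 0.93, 0.80, 0.55, 0.30, 0.12])

(ic50, hill), _ = curve_fit(inhibition, conc, proliferation, p0=[2.0, 1.0])
print(f"Fitted IC50 = {ic50:.2f} ug/mL, Hill coefficient = {hill:.2f}")
```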
Generally, in vitro studies require that the in vitro medium concentration of any bioactive agent is chosen to reflect physiologically relevant conditions. To this end, in vivo drug concentrations in specific eye segments need to be identified through simulation with ocular PBPK models. Similar PBPK-based concepts for assay design have previously been applied for the liver. A two-dimensional assay design covering both different drug concentrations in the medium and different exposure times would be an ideal experimental setup. An accurate assay design significantly supports the identification of exposure-effect correlations for molecular markers. In the case of postoperative glaucoma, this could be α-SMA levels as a function of both concentration and time, such that drug PDs at the cellular level can be established. Exposure-effect correlations are an important prerequisite for PK/PD correlations. Thus, reverse dosimetry can be used to identify doses that have to be administered in vivo at the whole-body or whole-organ level such that an observed drug effect can be achieved. PBPK models are of particular interest here because they allow simulation of PK profiles in the tissues from which specific cell systems originate. With regard to glaucoma research, a physiologically based model representation of the anterior segment of the eye is of particular relevance. Once such a model has been developed, drug exposure in different eye segments can be specifically quantified and, combined with in vitro cell system data, be used to optimize a required treatment design. Ocular PBPK models could thus be used for the development of dedicated QSP models which simultaneously describe drug PKs and the resulting drug response at the molecular level. These QSP models could then be applied for forward dosimetry to convert drug doses at the whole-body or whole-organ level into the expected cellular biomarker levels. Of note, this includes markers for both efficacy and toxicity, enabling a systematic screening of the therapeutic window. QSP models can be validated further with animal model data. In particular, the accuracy of the computational models can be assessed by comparing the simulation results at the cellular scale with specific physiological end points in animals. In the case of postoperative fibrosis, that may involve the comparison of α-SMA concentrations in Tenon's space with IOP. Some of the end points may only be accessible in animal models where invasive studies are possible. However, the development of a dose-response correlation in a rabbit or rodent QSP model already significantly reduces the need for animal experiments, if the in vitro/in silico approach reflects the in vivo situation sufficiently well. Animal QSP models can further be embedded in a parallelogram approach, as inspired by ref., for interspecies comparisons between animals and humans (Figure ). In such a parallelogram approach, PBPK models are established for both species, together with species-specific cell systems and effect models with common molecular markers. In vitro PD measurements can then be correlated with in vivo drug effects in animal species through forward and reverse dosimetry in animal QSP models. In addition, at the whole-body level, PBPK modeling supports interspecies comparison of drug exposure by means of PBPK models for preclinical animal species and humans. At the cellular scale, gene expression similarity analysis may be used to compare in vitro drug responses for the different species.
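Such a gene expression similarity analysis can be as simple as correlating per-gene fold-changes across species. The sketch below uses hypothetical log2 fold-changes for a handful of fibrosis marker genes; the gene symbols echo the markers discussed above (ACTA2 encodes α-SMA), but the numbers are invented for illustration.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical log2 fold-changes of shared fibrosis marker genes after drug
# treatment, measured in human and rabbit tenon fibroblast cultures.
genes  = ["ACTA2", "CD34", "IL6", "IL11", "BMP6", "FN1", "COL1A1"]
human  = np.array([-1.8, 0.9, -1.2, -0.7, -0.5, -1.4, -1.1])
rabbit = np.array([-1.5, 0.6, -0.9, -0.8, -0.2, -1.6, -0.9])

# Rank-based correlation is robust to scale differences between platforms.
rho, pval = spearmanr(human, rabbit)
print(f"Spearman rho = {rho:.2f} (p = {pval:.3f}) across {len(genes)} marker genes")
```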
Following the parallelogram approach, human in vivo drug effects can be predicted from PK/PD correlations in animal and human QSP models, animal in vivo PD data, and animal and human in vitro PD assays. Of note, human in vivo PD end points (e.g., in the vitreous or the aqueous humor) are not accessible otherwise. However, even a validated animal QSP model does not validate its human equivalent, due to the numerous interspecies differences. Still, careful translation of the animal model to humans (e.g., through PBPK-based extrapolation of drug PKs and functional, structural, or evolutionary analyses of the similarity of the PD marker genes) may help to develop accurate human QSP models. Many eye diseases significantly reduce the quality of life of affected patients. Development of new therapies, however, is difficult due to an incomplete physiological understanding of the factors governing drug distribution in the eyes. Likewise, drug effects are almost impossible to characterize in vivo in the human eye. Ocular QSP models, building upon ocular PBPK and advanced effect (PD) models, hold the promise of providing new concepts for ocular drug development. We described and discussed the current status of the specific building blocks, that is, PK/PBPK models, dedicated cell systems and animal models, and systems biology effect (PD) models. We highlighted their application for the case of fibrosis after glaucoma surgery. Ocular QSP models shall eventually allow the prediction of drug disposition and action in the human in vivo situation. We are convinced that the concepts discussed here will increasingly be used in ocular pharmacology in the future. This work was supported by the BMBF, VIP+ (Validierung des technologischen und gesellschaftlichen Innovationspotentials wissenschaftlicher Forschung, 03VP06230). T.S. and G.F. are listed as inventors in pending patent applications on the use of Josamycin as an antifibrotic compound, submitted on behalf of the Rostock University Medical Center. The other authors declare no other competing financial interests.
Monogenic disease analysis establishes that fetal insulin accounts for half of human fetal growth

There was a substantial, global reduction in fetal growth in the absence of fetal insulin. Mean birth weight adjusted to 40 weeks' gestation was 51% of normal birth weight (1,697 g [95% CI, 1,586-1,808 g] vs. 3,320 g [50th percentile at 40 weeks]). Median birth length was greatly reduced, 12% lower than normal (adjusted 43.7 cm [95% CI, 40.9-46.5 cm] vs. 49.6 cm at 40 weeks). Female individuals without insulin were 196 g (95% CI, 80-312 g) lighter than male individuals, indicating that the sexual dimorphism in birth weight is accounted for by noninsulin-mediated fetal growth. In utero growth restraint due to absent fetal insulin secretion did not persist postnatally. In individuals with loss-of-function INS mutations, after birth there was evidence of rapid, early catch-up growth of weight and length. Compared with birth size, there was a median gain of 2.97 SDS (IQR, 2.29 to 3.07 SDS) in weight and 2.11 SDS (IQR, -0.58 to 3.68 SDS) in length. The most recently available weight (n = 16) and height (n = 15) after the age of 2 years were within the normal ranges. This study utilizing human monogenic disease as a model of absent fetal insulin has provided unique insights into the physiology of early growth. Insulin-mediated fetal growth in humans contributes approximately 49% to birth weight at term, which is the highest of all species studied. Absent fetal insulin also reduced birth length, but its greater effect on weight confirms that its main effects relate to fetal fat deposition. The high contribution of fetal insulin-mediated growth to birth weight could explain why, at birth, humans have a higher proportion of body fat compared with other species. This high proportion of fat at birth could confer a survival advantage, as lipids provide an efficient and vital fuel for the developing, large human brain. The relatively long length of gestation in humans could also result in a longer period of exposure to fetal insulin. We observed birth weight without fetal insulin to deviate further from the normal range as pregnancy progressed, indicating that insulin-mediated growth becomes more important later in pregnancy. A key regulator of fetal insulin-mediated growth is maternal glucose. Maternal glycemia has a substantial effect on birth weight (1 mmol/L higher maternal fasting glucose raises birth weight by 301 g; ref.). Similar to the situation in which insulin is replaced after birth following in utero deficiency, the effect of maternal glycemia is transient and rapidly lost in the first year of life, with catch-up and catch-down growth. In contrast, the lower birth weight in female individuals does not appear to have its origins in fetal insulin-mediated growth, and catch-up is not observed. Rapid catch-up growth once insulin is replaced in individuals with INS loss-of-function mutations is in marked contrast to the postnatal growth failure of those with severe insulin resistance secondary to biallelic mutations in the insulin receptor gene INSR (Donohue syndrome), despite similarly low birth weight. This is likely to reflect that, postnatally, it is not possible to correct the tissue insulin resistance in these individuals. In conclusion, monogenic diseases resulting in absent fetal insulin have enabled us to answer fundamental questions about early growth in humans.
We have used a monogenic human knockout of insulin to show that the absence of fetal insulin reduces birth weight by approximately half and that, postnatally, there is rapid catch-up in weight and length. This establishes that insulin-mediated and noninsulin-mediated growth are equally important in humans. Whether other key modulators of fetal growth apart from maternal glucose act through fetal insulin is uncertain. In the future, all studies looking at fetal growth should determine whether insulin- or noninsulin-mediated growth is impacted, because the short- and long-term outcomes are likely to be different.
Impact of pharmacogenetics on aspirin resistance: a systematic review

Cardiovascular disease (CVD) is the leading cause of mortality worldwide, and all healthcare systems face this very challenging issue. The World Health Organization (WHO) estimates that 31% of deaths worldwide are due to CVD, with approximately 17.7 million CVD-related deaths in 2015. Approximately 7.4 million of these deaths were due to heart disease and 6.7 million were due to stroke. Platelet activation plays an important role in the development of CVD. Acetylsalicylic acid (ASA), commonly known as aspirin, is an irreversible inhibitor of platelet cyclooxygenase (COX), which prevents the formation of thromboxane A2 from arachidonic acid and, therefore, prevents the formation of this activating agent of platelet aggregation and vasoconstriction. Aspirin is a widely used antiplatelet agent for the primary and secondary prevention of CVD events, such as stroke and heart attack. Nevertheless, several patients may still experience treatment failure with ASA and an increased risk of recurrent stroke events. There are several contributing factors for treatment failure, including medication adherence, drug-drug interactions, aspirin-independent thromboxane A2 synthesis, and also genetic variations. Even low daily aspirin doses (in the range between 75 and 150 mg) are able to suppress the biosynthesis of thromboxane, inhibiting platelet aggregation and reducing the risk of CVD. However, aspirin does not always prevent the formation of thromboxane A2, due to failure to inhibit platelet COX. Because of that, not all individuals respond to antiplatelet therapy in a similar way. In this sense, genetic variants have been related to aspirin resistance (AR) and may cause a reduction or increase in drug absorption and metabolism, contributing to AR. Aspirin resistance can be diagnosed by clinical criteria or by laboratory tests. Clinically, the patient has a new episode of CVD despite the regular use of aspirin, while, in the laboratory, the failure of aspirin to inhibit platelet function can be seen with the Platelet Function Analyzer (PFA-100) or light transmission aggregometry (LTA), for example. The field of pharmacogenetics, which aims to match specific pharmacological therapies to genetic characteristics with the intention of providing greater efficacy, is a constant target of research. Therefore, several studies have been published about candidate genes associated with a genetic predisposition to ASA resistance, such as COX-2, GPIIIA, and P2Y1. Resistance to antiplatelet therapy and the indiscriminate use of ASA can increase rates of recurrence and mortality from cardiovascular diseases, such as stroke. Hence, the aim of the present study was to perform a systematic literature review to determine the impact of genetic variants on AR. The present systematic review was conducted according to the recommendations of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement published by Moher et al. (2019). The following five databases were systematically screened: MEDLINE/PubMed, Cochrane, Scopus, LILACS, and SCIELO. The search was restricted to a period of 10 years (December 2009 to December 2019), and the following search terms were applied: Aspirin AND Resistance AND Polymorphism and Aspirin AND Resistance AND Genetic variation.

Eligibility criteria

Only articles published in English were included in this search.
Also, only articles describing the relationship between AR, proven by laboratory tests or a new case of CVD, and polymorphisms or genetic variations were included in the present systematic review. The final articles included (n = 21) in the present review were 20 case-control studies and 1 cohort study.

Assessment of risk of bias

The authors, using the combined search terms and based on the inclusion criteria, conducted the primary literature search. In this first stage, titles and abstracts were screened. All reports that appeared to be in accordance with the inclusion criteria were full-text screened. All studies that did not comply with the pre-established eligibility and inclusion requirements were excluded. In a second step, the researchers independently evaluated whether the previously selected full texts met the inclusion criteria. In case of disagreement between two authors, a third author was consulted, and a consensus was reached in a meeting between them. Furthermore, to assess and minimize the presence of potential biases, the Risk of Bias in Systematic Reviews (ROBIS) method was used as a reference.

Data extraction and synthesis

In the primary literature search, a total of 290 articles were found: 178 in SCOPUS, 104 in MEDLINE/PubMed, 5 in Cochrane, 2 in LILACS, and 1 in SCIELO. Of those, 19 were duplicates. Hence, 271 articles were screened by reading of title and abstract, 216 of which were excluded for not meeting our inclusion criteria. In the next step, the authors independently reviewed 65 full-text articles. Then, 44 articles were excluded for not meeting our inclusion criteria. So, in the end, 21 articles were included in the present systematic review.
In the 21 final articles selected, a total of 10,873 patients were analyzed, of whom 3,014 were aspirin resistant and 6,882 were aspirin sensitive (some articles reported semi-resistance values, which were disregarded, and another 2 articles did not classify their patients as sensitive or not sensitive). Of the 21 articles studied, 11 included patients with a cerebrovascular event, totaling 4,835 patients. The other 10 articles mostly analyzed cardiac outcomes. We also emphasize that the clinical conditions of the evaluated patients varied among the articles, with some articles evaluating patients with more than 1 disease: ischemic stroke (10 articles), coronary artery disease (9), peripheral arterial disease (3), acute vascular event (1), age > 80 years old (1), adults (1), and hypertension (1). Most of the patients in the selected articles are from the Asian continent (9 articles from China, 4 from India, 2 from Turkey, and 1 from Jordan); regarding the other works, 3 articles are from the American continent (all from the United States of America), 1 from the European continent (Belgium), and 1 from the African continent (Tunisia). Among the resistance analysis methods, 4 articles used clinical outcome and 17 used platelet aggregation measurement. Among those that performed platelet aggregation measurement, the most common method was LTA (8 articles), followed by the PFA-100 system (3), thromboelastography platelet mapping assay (TEG) (2), VerifyNow (2), PL-11 platelet analyzer (1), TXB2 ELISA kit (1), and urinary 11-dehydro TXB2 (1), with some articles using more than 1 method. In a summary table, we detailed the following information from the 21 final articles included in the present review: type of article, country, clinical condition, sample number, number of aspirin-resistant patients, number of aspirin-sensitive patients, gene, risk allele, protective allele, genetic variant, p-value, odds ratio (OR), CI, resistance assessment method, and daily aspirin dose. In addition, we highlighted in a separate table the genetic variants with relevant results for AR. As for relevance, of the 64 genetic variants evaluated by the articles, 14 had statistical significance (p < 0.05; 95% CI). Among them, the following polymorphisms have had concordant results so far: rs1371097 (P2RY1), rs1045642 (MDR1), rs1051931 and rs7756935 (PLA2G7), rs2071746 (HO1), rs1131882 and rs4523 (TBXA2R), rs434473 (ALOX12), rs9315042 (ALOX5AP), and rs662 (PON1). In turn, the results for the following genetic variants remain discordant regarding their real interference in AR: rs5918 (ITGB3), rs2243093 (GP1BA), rs1330344 (PTGS1), and rs20417 (PTGS2). To study the relationship between polymorphisms and AR, it is necessary to consider the mode of resistance analysis, which can be performed in two ways: clinically or in the laboratory. In the first, the patient is considered resistant if there is a negative outcome (death or stroke, for example). In the second, several types of tests can be used, such as PFA-100, VerifyNow Aspirin, TEG, the PL-11 platelet analyzer, serum and urinary TXB2, LTA, and the multiplate analyzer. However, it is important to highlight that the measurement of platelet response to aspirin is highly variable, likely due to the differing dependence of the techniques on the arachidonic acid pathway. In the articles we reviewed, the most used laboratory method was LTA, which is considered the gold standard for testing platelet function. The relationship between polymorphisms and AR has been described by Yi et al.
This study assessed the interaction with PTGS1 (rs1236913 and rs3842787), PTGS2 (rs689466 and rs20417), TXAS1 (rs194149, rs2267679, and rs41708), P2RY1 (rs701265, rs1439010, and rs1371097), P2RY12 (rs16863323 and rs9859538), and ITGB3 (rs2317676 and rs11871251) gene variants. In the laboratory analysis, only rs1371097 of the P2RY1 gene, in the CC x TT + CT comparison, obtained statistical relevance (p = 0.01), even after adjusting for other covariates (p = 0.002; OR = 2.35; 95%CI: 1.87-6.86). In addition, using the generalized multifactor dimensionality reduction (GMDR) method, the following 3 sets of gene-gene interactions were significantly associated with AR: rs20417CC/rs1371097TT/rs2317676GG (p = 0.004; OR = 2.72; 95%CI: 1.18-6.86); rs20417CC/rs1371097TT/rs2317676GG/AG (p = 0.034; OR = 1.91; 95%CI: 1.07-3.84); rs20417CC/rs1371097CT/rs2317676AG (p = 0.0025; OR = 2.28; 95%CI: 1.13-5.33). These high-risk interactive genotypes were also associated with a higher risk of early neurological deterioration (p < 0.001; hazard ratio [HR] = 2.47; 95%CI: 1.42-7.84). Peng et al. (2016) also assessed genes related to thromboxane and others. The analyzed polymorphisms were ABCB1 (rs1045642), TBXA2R (rs1131882), PLA2G7 (rs1051931 and rs7756935), and PEAR1 (rs12041331-rs1256888). There was statistical significance for 3 of them: rs1045642 (p = 0.021; OR = 0.421; 95%CI: 0.233-0.759), rs1131882 (p = 0.028; OR = 2.712; 95%CI: 1.080-6.810), and rs1051931-rs7756935 (p = 0.023; OR = 8.233; 95%CI: 1.590-42.638), while Wang Z. et al. (2013) researched the association with TBXA2R (rs4523), ITGB3 (rs5918), P2RY1 (rs701265), and GP1BA (rs6065) polymorphisms. The only polymorphism significantly associated with AR was rs4523 (p = 0.001; OR = 4.479; 95%CI = 1.811-11.077). Another study that assessed the TBXA2 and glycoprotein genes was done by Gao et al. GP1BA (rs6065), ITGB3 (rs5918), P2RY1 (rs701265), and TBXA2R (rs4523) genetic variations were researched, but only the TBXA2R (rs4523) polymorphism was related (p = 0.01). In addition, Patel et al. also studied the ITGA2B/ITGB3 polymorphisms. They analyzed the relationship with the CYP2C19 (rs4244285) and ITGA2B/ITGB3 (rs5918) polymorphisms. However, no association was observed (p = 0.171 and p = 0.960, respectively). Moreover, still in the scope of glycoprotein genes, Derle et al. conducted a study with 208 patients with vascular risk factors. The ITGB3 (rs5918) polymorphism was screened, and the results showed that there was no significant difference in the presence of the C allele between the groups (p = 0.277). In addition, regarding the relationship between the presence of the C allele and atherothrombotic stroke, no significant difference was found (p = 0.184). A study by Wang B. et al. also analyzed the rs5918 (PLA1/A2) polymorphism of the ITGB3 gene. All 214 patients in the aspirin-sensitive group had the PLA1/A1 genotype, and no patients with PLA2/A2 were found. However, of the 236 patients in the AR group, 12 had the heterozygous PLA1/A2 genotype (p = 0.002), a statistically significant difference. In the study by Pamukcu et al., 13 polymorphisms of 10 different genes were tested, including ITGB3. The genes F5 (rs6025, rs1800595), F2 (rs1799963), F13A1 (rs5985), FGB (rs1800790), SERPINE1 (rs1799889), ITGB3 (rs5918), MTHFR (rs1801133, rs1801131), ACE (rs1799752 - Ins/Del), APOB (rs5742904), and APOE (rs429358 - C112R and C158A) were evaluated. However, there was no significant result for any polymorphism (p > 0.05).
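Most of the associations above are reported as an OR with a 95% CI and a p-value. As a reminder of how these statistics arise from genotype counts, the following sketch computes a Woolf-type CI and a Fisher exact p-value from a 2x2 table; the counts are hypothetical and do not come from any of the reviewed studies.

```python
import numpy as np
from scipy.stats import norm, fisher_exact

# Hypothetical 2x2 table: rows = risk-genotype carriers / non-carriers,
# columns = aspirin-resistant / aspirin-sensitive patients.
a, b = 30, 70    # carriers: resistant, sensitive
c, d = 20, 130   # non-carriers: resistant, sensitive

or_point = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf (log-OR) standard error
z = norm.ppf(0.975)
ci = np.exp(np.log(or_point) + np.array([-z, z]) * se_log_or)
_, p = fisher_exact([[a, b], [c, d]])
print(f"OR = {or_point:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), Fisher p = {p:.4f}")
```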
Furthermore, in the case-control study by Voora et al., 11 polymorphisms of 11 different genes were assessed: GNB3 (rs5443), ITGA2 (rs1126643), ITGB3 (rs5918), GP6 (rs1613662), GP1BA (rs2243093), PEAR1 (rs2768759), VAV3 (rs6583047), F2R (rs168753), THBS1 (rs2228262), PTGS1 (rs3842787), and ADRA2A (rs1800544). When comparing the groups, there was no relationship (p > 0.05). Another study that investigated some of the same genes was conducted by Al-Azzam et al.: ITGA2 (rs1126643), GP1BA (rs2243093), and PTGS2 (rs20417). Of these, only the GP1BA (rs2243093) gene was related (p = 0.003), analyzing the presence of the C allele. Additionally, Wang et al. (2017) conducted a study on the following polymorphisms: the ITGA2 gene polymorphism at rs1126643 and the PTGS2 gene polymorphism at rs20417. The authors found no association: p = 0.21 for rs1126643 and p = 0.69 for rs20417. Moreover, Yi et al. used matrix-assisted laser desorption/ionization-time of flight (MALDI-TOF) to link PTGS1 (rs1236913 and rs3842787) and PTGS2 (rs689466 and rs20417) with AR. The analysis showed that there was no statistical relevance for the relationship. Only when the gene-gene interaction (rs3842787 and rs20417) was evaluated was there statistical significance: rs3842787/CT + rs20417/CC (p = 0.016; OR = 2.36; 95%CI: 1.12-6.86), rs3842787/TT, CT + rs20417/CC (p = 0.078; OR = 1.36; 95%CI: 0.82-2.01), and rs3842787/CT + rs20417/GC (p = 0.034; OR = 1.78; 95%CI: 1.04-4.58). It should be highlighted that, for the second combination, the CI includes 1, consistent with its nonsignificant p-value. Another study that investigated polymorphisms of the PTGS1 (rs1888943, rs1330344, rs3842787, rs5787, rs5789, rs5794) and PTGS2 (rs20417, rs5277) genes was conducted by Li et al.; in addition to these two genes, a genetic variant of the HO1 gene (rs2071746) was also tested. As a result, only two genetic variations were associated with AR. The rs2071746 polymorphism (HO1 gene) had statistical significance for the TT genotype (p = 0.04; OR = 1.40; 95%CI = 0.59-3.30) and the T allele (p = 0.04; OR = 1.70; 95%CI = 1.02-2.79), while rs1330344 (PTGS1 gene) had significant results only when G was the risk allele and analyzed separately (p = 0.02; OR = 1.77; 95%CI = 1.07-2.92). Still on the PTGS1 gene, Fan et al. investigated several polymorphisms of the PTGS1 gene (rs1888943, rs1330344, rs3842787, rs5787, rs5789, and rs5794), but rs1330344 was the only one significantly related to AR (p = 0.01; OR = 1.82; 95%CI = 1.13-2.92; allele value), and only in the LTA + TEG analysis. Moreover, another case-control study, by Chakroun et al., investigated the relationship between the rs3842787 polymorphism of the PTGS1 gene and AR. Patients with the allele showed no statistically significant difference using CEPI-CT (p = 0.1) or uTxB2 (p = 0.43). Sharma et al. evaluated 3 polymorphisms of 3 different genes, PTGS2 (rs20417), ALOX5AP (rs9315042), and ABCB1 (rs1045642), to assess their role in AR. The research was performed in 3 different studies, and all studies obtained statistical relevance for the CC allele of rs20417 (p = 0.016; OR = 3.157; 95%CI: 1.241-8.033), the GC allele of rs20417 (p < 0.001; OR = 2.983; 95%CI: 1.884-4.723), and for the rs9315042 variant (p < 0.001; OR = 2.983; 95%CI: 1.884-4.723).
For the variant rs1045642, 2 comparisons were made: one comparing cases and controls, for the TT x CC alleles (p < 0.001; OR = 2.27; 95%CI: 1.64-3.168) and for the TT x CT + CC alleles (p < 0.001; OR = 1.72; 95%CI: 1.335-2.239), and another comparing AR and aspirin-sensitive participants (p = 0.012; OR = 1.85; 95%CI: 1.142-3.017). Another study that tested the ALOX genes was conducted by Carroll et al. The study tested 4 genetic variants: rs434473 and rs1126667 of the ALOX12 gene, rs4792147 of the ALOX15B gene, and rs3892408 of the ALOX15 gene. Only the rs434473 polymorphism obtained a significant p-value (p = 0.043). Furthermore, Yeo et al. analyzed some variants of the PTGS1 (rs10306114, rs3842787, rs5788, and rs5789), ITGA2 (rs1126643, rs1062535, and rs1126643), ITGB3 (rs5918), GP6 (rs1613662), P2RY12 (rs1065776), and F13A1 (rs5985) genes, but only rs662 (A576G) of the PON1 gene was significantly relevant (p = 0.005) to AR. Lastly, a study by Strisciuglio et al. included 450 noncarriers of the T2238C polymorphism (rs5065, NPPA gene) and 147 carriers. The authors concluded that there was no statistical difference when comparing the groups, either in overall CAD patients (p = 0.7) or in the diabetic group (p = 0.6). As limitations of the present study, we highlight the nonuniform methodologies of the analyzed articles, as well as population differences. These divergences made it difficult to compare the results of the articles. Among the studies, there were great differences in the clinical conditions, as well as in the mode of resistance analysis and in the dosage of aspirin. Unfortunately, a meta-analysis was not performed due to the high clinical and methodological heterogeneity of the findings. Despite the heterogeneity of the findings in terms of methodology and results, it is clear that some polymorphisms are more studied than others. Among them, rs1126643 (ITGA2), rs3842787 (PTGS1), rs20417 (PTGS2), and rs5918 (ITGB3) were the most studied. In conclusion, pharmacogenetics is an expanding area that promises therapy tailored to the individual characteristics of each patient (personalized medicine) for better control of diseases, including cardiovascular diseases such as stroke. Finally, further studies are needed to better understand the association between genetic variants and AR and, therefore, to enable the practical application of the findings.
Machine learning approaches to predict drug efficacy and toxicity in oncology

Machine learning algorithms (MLAs) are a set of algorithms within the field of artificial intelligence (AI) that can learn relevant relationships within large datasets and develop ideal approaches to their analysis without prior specification. , , , MLAs have found many applications in drug development, including FDA approval prediction, clinical trial design, drug repurposing, and even the generation of new therapeutic targets. , , , The field has developed rapidly in the past decade and is now reaching a degree of maturity and sophistication that continues to improve. In the following sections we discuss the basics of MLAs and lay out a framework for how they can be used for drug development. We focus on the methods that have been developed for creating representations of both the therapeutics of interest and the disease to be targeted. We then present the models that leverage these representations to predict the efficacy and toxicity of new therapeutics. The field of oncology has been a particular focus for the development of new therapeutics, and key advances in machine learning (ML) technology have occurred within the cancer context. , , We delve into the details and highlight the resources available, principally in this field of research. We outline the general approach underlying MLA models in the therapeutics domain, as presented in , focusing mainly on models that predict the efficacy and toxicity of new therapeutics, which in turn inform their likelihood of approval. In particular, we summarize this layout by showing key features, model types, and the insights that each can provide (depicted in detail in ). In terms of features, we show in A that they can be split into two key domains: therapeutic and disease state representations. In the top-left panel, we focus on the small-molecule and protein therapeutic types and show their innate structure and the various methodologies that have been developed to represent them. For the disease state representation, we summarize the related -omic profiles and their corresponding analyses in the bottom panel. Next, as depicted in B, we demonstrate the types of models with which both feature types can be utilized, either separately or together. Specifically, we highlight the key model types in both the supervised and the unsupervised domains. Finally, we highlight (in C) the different predictions or insights each of those models can generate. The predictions can be characterized as either drug assessment or drug design. For assessment models, a therapeutic entity is pre-defined and the value to be predicted is its potential efficacy or toxicity. For drug design, the models themselves generate potential therapeutics for a particular disease state. Generative autoencoders can be trained on existing drugs and their efficacy and toxicity profiles to generate new examples of therapeutics that would be safe and efficacious.
In this section, we give a brief overview of the types of ML and AI models that have emerged broadly in the past decade. A more detailed exploration of each is provided in our previously published work. ,

Supervised learning

In supervised learning, we generally utilize a large dataset of labeled data to develop a model that is capable of classifying new entries with the correct label. It has broad applications for drug discovery and design, as it can be used to assess the efficacy, toxicity, and likelihood of approval of a new therapeutic. Here, we lay out the basics of the approach to inform the discussion in the rest of the review.

Bias-variance tradeoff

In any supervised learning approach there is an underlying tension known as the bias-variance tradeoff, which emerges from two primary concerns that must be accounted for: first, that there are insufficient relevant data to generate valid rules (the bias error), and second, that the rules generated are too specific to the particular dataset being trained on (the variance error). The bias error can be thought of as underfitting and refers to an algorithm missing the relevant relationships between the features of interest and what is being predicted. The variance error, also known as overfitting, can be considered the sensitivity of the algorithm to changes in the data, where a model makes very accurate predictions on the dataset it was trained on but fails to generalize and performs poorly on new data. Understanding the trade-off elucidates the rationale behind the approaches taken when developing these models.

Train-test splitting

The train-validate-test is one of a number of standard approaches that have emerged to deal with the bias-variance tradeoff. It requires splitting the dataset into two major subsets, the “train set” and the “test set.” The former can then be further split into a true train set and a validation set. The model is trained on the train set and its performance assessed on the validation set. The hyperparameters of the model (parameters set prior to training, as opposed to ones derived via training) can then be adjusted to improve performance on the validation set. Once training and tuning are optimized, the model can be assessed on the test set, which had not been used in any way. The approach is meant to avoid overfitting the model to the specifics of the data being used and, as a result, can produce a generalizable model that performs well on previously unseen data.

k-fold cross-validation

A major consideration with the train-test approach is whether the split is done in a truly random fashion and whether the resulting subsets are appropriately representative. The method of k-fold cross-validation builds upon the train-test approach by running the split multiple times. In k-fold cross-validation, the train set is split into k subsets, and one of the subsets is held out and used for validation; this process is repeated k times with a different subset being held out each time. The performance is then taken to be the average of the k models trained.
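To ground the splitting strategy just described, the following minimal scikit-learn sketch reserves a held-out test set, runs 5-fold cross-validation on the training portion, and scores the test set exactly once at the end. The data, model choice, and fold count are illustrative assumptions, not taken from the review.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

# Hypothetical labeled dataset: 500 samples x 20 features, binary labels
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = rng.integers(0, 2, size=500)

# Reserve a test set that is never used during training or tuning
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# 5-fold cross-validation on the train set; performance = mean over folds
model = RandomForestClassifier(random_state=0)
scores = cross_val_score(model, X_train, y_train, cv=5)
print(f"CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

# One-time final assessment on the held-out test set
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.3f}")
```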
There are also methods of introducing a degree of stochasticity into the training process, such as including slight variations in the datasets used for training or adding dropout layers (an ML technique in which certain neurons are ignored during training in a stochastic fashion), where certain learned processes are randomly inhibited, allowing for the development of more flexible programs that have a better chance of being truly generalizable.

Classification or regression modeling

Supervised learning models can be further categorized based on the type of prediction they are making. The two major model types are regression models and classifiers. In a classifier model, the prediction of interest takes one of a few discrete values (e.g., 0 or 1 in a binary setting). A model to assess whether a drug will be approved or rejected would be one such example. In regression models, the prediction of interest can take on any continuous value. If we wanted to predict the efficacy of a cancer drug by measuring the dosage required to inhibit 50% of the cancer cells in vitro (the IC50), we would use a regression model. Certain ML models are used exclusively for either classification or regression. For example, linear regression should only be used for regression problems, whereas logistic regression, despite the name, should only be used for classification problems. Other models, such as decision trees, random forests, support vector machines, k-nearest neighbors, and neural networks, have classifier and regressor versions and can be used for either problem type.

Unsupervised learning

In unsupervised learning, the dataset used as a starting point is unlabeled, and the models are intended to reveal insights into the underlying structure of the data. The primary outcomes are (1) dimensionality reduction, (2) data visualization, (3) feature extraction, and (4) clustering. These algorithms vary widely in terms of approach and outcomes, and we review their core concepts in the following sections.

Dimensionality reduction

In dimensionality reduction approaches, highly dimensional data (e.g., a transcriptomic profile of 20,000 genes for 10,000 patients) are condensed into their most informative dimensions (e.g., 2 dimensions per patient). There are a number of ways to distill the most important dimensions from such a dataset. These techniques have been reviewed previously , , , and include (1) principal-component analysis (PCA), (2) t-distributed stochastic neighbor embedding (t-SNE), (3) linear discriminant analysis, and (4) Uniform Manifold Approximation and Projection (UMAP). Each algorithm has a unique approach, but the underlying concept is the same: higher-order data are reduced into a smaller set of dimensions, which can then be used for visualization, feature extraction, or as predictive components in other MLA models. A clear application of dimensionality reduction in drug discovery is the use of these techniques for high-dimensional patient data such as RNA sequencing (RNA-seq) expression. To demonstrate the process of unsupervised learning, we show in A how training a t-SNE model on unlabeled cancer patient genomic information results in the natural emergence of clusters that correlate with the type of cancer the patient had. In B, t-SNE is used on the transcriptomic profiles of 1,086 breast cancer (BRCA) patients acquired from TCGA (The Cancer Genome Atlas Program, https://www.cancer.gov/about-nci/organization/ccg/research/structural-genomics/tcga ; see ).
The transcriptomic profile consists of the gene expression values of 17,715 genes. After the t-SNE analysis, the 17,715 genetic dimensions are reduced to just 2, allowing us to easily visualize the data as the two-dimensional plot shown. Each point is a tumor sample from a BRCA patient, and the distance between points indicates the degree of similarity between the patient samples. We label each sample with the BRCA subtype identified by TCGA and find that the clusters correspond well to the indicated cancer subtypes.

Clustering techniques

Clustering algorithms can be used to find large-scale structures within a dataset. These algorithms split the data points into a specified number of clusters and assign each point to one of them. Clustering can reveal higher-order structures within the dataset and help determine similarity between different entries. Types of such algorithms include , , , , , (1) k-means clustering, (2) hierarchical clustering, (3) fuzzy C-means clustering, (4) mean-shift clustering, (5) density-based spatial clustering of applications with noise (DBSCAN), and (6) Gaussian mixture models. Moreover, they can reveal mislabeling within certain datasets, where entries supposedly belonging to one group are revealed to belong to another. Clustering can be especially useful in the context of drug design, as it can reveal patient sub-populations that might be more or less sensitive to particular treatment regimes.

Neural network encoders

Autoencoders are a relatively new form of unsupervised learning model that learns to generate data resembling the input data it is presented with. The data are fed into a neural network and then regenerated from the reduced embedding that the neural network develops. These models are called generative models (a form of neural network that learns to create new examples of the data types used to train it), as they create new data points in accordance with the specifications of the input data. In drug discovery, these models can be used to generate new possible therapeutics with the requirement of having certain efficacy and toxicity profiles.
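To make the dimensionality-reduction and clustering workflow concrete, here is a minimal scikit-learn sketch that stands in for the TCGA example above with synthetic data; the matrix shape, perplexity, and cluster count are illustrative assumptions only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

# Synthetic stand-in for an expression matrix (samples x genes), drawn from
# three shifted distributions to mimic three tumor subtypes
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=mu, size=(100, 2000)) for mu in (0.0, 1.0, 2.0)])

# Reduce the 2,000 gene dimensions to 2 for visualization
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

# Cluster the embedded samples; with real data, clusters can be compared
# against known subtype labels, as in the BRCA example above
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedding)
print(embedding.shape, np.bincount(clusters))
```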
Constructing an ML algorithm that connects a molecular state reflecting a disease with the response to a particular therapeutic intervention, and more specifically with the actual drug molecule, faces certain challenges. One of them is selecting the best computer-readable form to represent the therapeutic agent under investigation. Here, we discuss the major approaches developed to address this question (see also ).

Small-molecule representation

A small molecule is generally defined as an organic compound with a molecular weight of less than 500 Da. The manageable size of small-molecule drugs allows for a tractable representation of their structures in a computer-readable way. In this section we review the various methods available for representing small molecules in a computer-readable manner.

SMILES

One approach to drug structure representation is the simplified molecular-input line-entry system (SMILES). This chemical annotation system uses a few syntactical rules to allow a molecular structure to be represented in a computer-readable form. SMILES strings use characters to represent each of the atoms within a molecule and special characters to represent the bonds between them, as well as higher-order structural properties of the molecule such as aromaticity or cyclicality. Interest in using SMILES in the context of ML and generative models revealed a major problem: generated SMILES strings might not correspond to valid molecules. Addressing this issue led to the development of self-referencing embedded strings (SELFIES), which modify the initial system to ensure that all generated strings refer to valid chemical molecules. Neither SMILES nor SELFIES can be used directly in ML models, as the models often require inputs in a vectorized or numerical form, whereas SMILES are character representations. Multiple approaches have emerged to confront this issue.

Fingerprinting

One method of embedding a SMILES structure is called fingerprinting, where a chemical structure is converted into a binary vector of pre-determined size that captures the structural information of the original compound. One of the most utilized fingerprinting techniques is Morgan fingerprinting. Binarizing the chemical structure allows for the utilization of model architectures that expect binary vector input. Other fingerprinting techniques have since been developed to expand the capacity of and improve upon Morgan fingerprinting. Vectorizing the molecular structure of a therapeutic through fingerprinting makes it possible to leverage a number of ML architectures that require numerical features.

Natural language processing

With advances in natural language processing (NLP) models, an NLP approach to chemical structure embedding has gained traction in recent years. In this context, the SMILES/SELFIES string is tokenized and a specific language model is trained to embed the chemical structure. NLP-inspired models can capture higher-order relationships across larger distances within the molecule of interest. The NLP approach has been found to outperform the fingerprinting technique in a number of different classification tasks. However, these NLP techniques are still somewhat underutilized, providing a ripe area for researchers in the field to improve the models and predictions.

Molecular graphs

Graphical representations of molecules are another way to capture the full complexity of the molecular therapeutic.
In this framework, each atom is encoded as a node in a graph and the connections between them constitute edges. Creating molecular graphs has become a routine operation that can be easily conducted through software modules such as RDKit in Python, where the Le Verrier-Faddeev-Frame approach is applied. The use of graphs to represent molecular structures has become a standard feature of many top-of-the-line drug efficacy models. , , However, graphs do require additional complexity in the architectures of the models that can utilize them. They are therefore better suited for larger therapeutics, such as proteins and peptides, where fewer adequate alternatives exist.

Protein/peptide representation

Representing protein therapeutics in a computer-readable form poses significant challenges that are not present with small molecules. The size and complexity of a protein therapeutic make the previous approaches untenable. Protein sequences can be embedded by their physical properties or by their amino acid sequences. Using physical properties poses a challenge, as it is difficult to know a priori which properties will be most relevant to a learning task. Multiple methods of embedding the amino acid structures have been developed in the past decade and are reviewed below.

NLP for protein encoding

NLP approaches such as word2vec and doc2vec have been used to develop learned embeddings of words or sentences based on their context and surrounding words. , A number of attempts have been made to apply these approaches to protein sequences by segmenting the protein sequences into fragments of length k (k-mers). , , , The protein embedding then learns which segments of a protein sequence are expected to appear next to one another. The approach can then be combined with task-specific learning to create embeddings that learn to extract the relevant aspects of the amino acid sequence.

Task-assisted protein embeddings

This approach builds upon the NLP and semi-supervised task ML paradigms described above. Task-assisted protein embedding (TAPE) utilizes biologically relevant tasks to create an informed protein embedding from an amino acid input. The tasks highlight three major areas of protein biology: (1) structure prediction, (2) detection of remote homology, and (3) protein engineering. Rather than utilizing the word2vec or doc2vec approach, TAPE utilizes other NLP paradigms, namely next-token prediction and masked-token prediction. , The TAPE embeddings have been widely adopted and used in a number of higher-order models such as IBM’s PaccMannRL.

Graphical representations

Graphical protein representations have been developed and have been quite successful in predicting protein function and interactions. In these graphs, each node is an amino acid residue, and the edges contain information regarding the distances and angles between residues. Such a representation scales more efficiently than the 3D structural representations used in convolutional neural nets.
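Returning to the small-molecule representations above, the following RDKit sketch turns a SMILES string into both a Morgan fingerprint and a graph adjacency matrix. Aspirin is an arbitrary example molecule; the radius of 2 and the 2,048-bit length are common conventions, not values prescribed by the review.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

# Parse a SMILES string (aspirin) into an RDKit molecule object
mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")

# Morgan fingerprint: a fixed-length binary vector usable by most MLAs
fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)
fp_vec = np.array(list(fp), dtype=np.int8)   # 2,048-dim 0/1 feature vector
print(fp_vec.sum(), "bits set")

# Graph view: atoms as nodes, bonds as edges, here as an adjacency matrix
adjacency = Chem.GetAdjacencyMatrix(mol)
print(adjacency.shape)                        # (n_atoms, n_atoms)
```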
The previous sections described the work conducted to develop representations of the therapeutic agent. The ML models of interest also require a representation of the disease state that the therapeutic intends to target. The classic approach is to think of the disease representation in terms of a genetic or protein target associated with disease progression, with which the drug would interact. Early interest in ML-assisted drug design focused on the intersection of molecular dynamics modeling and ML, utilized to design therapeutic molecules specifically targeting a disease-associated enzyme’s active site. The representation of a disease state as a single gene or protein target of interest has been covered extensively elsewhere , and can be best appreciated in the context of the quantitative structure-activity relationship, which will not be covered here. Instead, we center on higher-order representations profiling the disease state, including the genomic, epigenetic, transcriptomic, and proteomic profiles of the diseased cell, either in vitro or from patients suffering from a specific disease . We will look at these approaches specifically in oncology and consider how they might be extended to other indications.

Genomics

The genomic profile of a disease state can be identified through the genetic sequencing of patients or disease state models. The genetic sequence allows for the identification of key mutations that are present and may differentially affect the onset of the disease and the outcome possibilities. Genomic mutations can be single-nucleotide variants or single-nucleotide polymorphisms, insertions, deletions, inversions, copy number variations, tandem duplications, dispersed duplications, mobile element insertions, or translocations. The genomic mutational profile can then be used as a feature for ML models. Mutational status and copy number variation have been used repeatedly to predict the potential efficacy of new therapeutics and are summarized in . , ,

Epigenetics

Epigenetic modifications are critical to gaining a full understanding of the processes underlying a biological state. Comprehensive databases of epigenetic information are currently being developed and constitute a fast-growing field in bioinformatics. One highly informative structural feature that can provide epigenetic insights is accessible chromatin. Human assay for transposase-accessible chromatin with high-throughput sequencing data (a method to assess genome-wide chromatin accessibility) provide a detailed map of accessible chromatin, have been accumulating rapidly in recent years, and an effort has been undertaken to provide annotated data in a centralized, publicly accessible database. Beyond that, the Roadmap Epigenomics Mapping Consortium project, as part of the Encyclopedia of DNA Elements ( https://www.encodeproject.org/ ), has gathered information on DNA methylation, histone modification, chromatin accessibility, and small RNA transcripts in primary human tissues. , The epigenetic tracks provided by the Roadmap Epigenomics Mapping Consortium have been used to train a convolutional neural network to predict mutational rates within genomic regions and to find mutations positively associated with cancer subtypes.

Transcriptomics

One of the most ubiquitous -omic profiles used in computational bioinformatics today is the transcriptomic profile, which is captured through RNA-seq expression data.
Here, the degree of mRNA expression gives a sense of which genes are activated and which are inhibited in a given cell. RNA-seq profiling can be conducted on a bulk population of cells or in single cells. High-throughput sequential RNA-seq can also allow for spatiotemporal sequencing, showing how the mRNA expression profile shifts over time or across spatially separated cells.

Proteomics

Databases of protein structure, properties, interactions, and abundances all inform the proteomic profile of the diseased state. The structural properties and amino acid sequences of proteins found in UniProt are used to create reduced embeddings of protein targets and biologic therapeutics. The ChEMBL database ( https://www.ebi.ac.uk/chembl/ ) provides key features and ontologies for antibodies and therapeutically relevant protein targets, which can be used as features in drug prediction models directly, or to create protein networks and similarity metrics for possible drug targets. ProteomicsDB ( https://www.proteomicsdb.org/ ) provides mass spectrometry data on protein abundances in different biological tissues, yielding a proteomic profile for the disease state that can be combined with the other -omic profiles described.
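As a small illustration of turning a transcriptomic profile into model-ready features, the sketch below loads a hypothetical samples-by-genes expression table, log-transforms it, and keeps the most variable genes. The file name, gene count, and variance filter are assumptions of this example, not steps mandated by the review.

```python
import numpy as np
import pandas as pd

# Hypothetical RNA-seq matrix: rows = samples, columns = genes
expr = pd.read_csv("expression_matrix.csv", index_col=0)  # placeholder file

# Log-transform counts and keep the 2,000 most variable genes as features
log_expr = np.log2(expr + 1)
top_genes = log_expr.var().nlargest(2000).index
features = log_expr[top_genes]
print(features.shape)  # (n_samples, 2000)
```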
Dedicated databases capture the interactions between individual genes, transcription factors, mRNA, and proteins as biological pathways. Reactome, KEGG, Pathway Commons, and Omnipath are major databases that catalog biological pathways. They can be used to construct genomic networks to create disease signatures, and to find which pathways are particularly affected in the diseased state. The STRING database ( https://string-db.org/ ) provides information on both physical and functional protein-protein interactions (physical contacts of high specificity between two or more proteins), which can be used with network propagation algorithms to find genomic signatures of interest and to reduce the dimensional complexity of -omic data generally. These databases can be leveraged and integrated to create a holistic view of the biology underlying the diseased state. While each of the -omic data types described above can be used independently to predict drug response, models that combine multiple data types have been found to yield more accurate results. , Various architectures for combining clinical and genomic data for cancer patients have been developed. One approach is to use autoencoders to condense different data types into reduced embeddings and then combine the embeddings themselves. Another is COSMOS (causal oriented search of multi-omic space), an -omic integration method that systematically generates mechanistic hypotheses through causal reasoning. , COSMOS generates trans-omic networks that capture the relationships between entities across -omic levels. The trans-omic networks are used to find signatures, or fingerprints, of disease subtypes. Gene signatures allow researchers to use a smaller subset of genes as key markers, reducing the complexity of the -omic profiles generated.

Knowledge graphs

Another method of combining multiple data types is to use a knowledge graph embedding (KGE) reflecting the disease state. Multiple dedicated reviews have covered the subject of KGE recently. , Knowledge graphs are heterogeneous, which sets them apart from homogeneous graphs in that their edges and nodes can be of differing types. With this approach, the therapeutic and -omic profiles are embedded as entity features in a graph, and the interactions of the different entities are expressed as relations. The -omic relationships are captured through the following data types: (1) gene ontologies (a formal representation of the body of knowledge within the genomic domain), (2) gene-gene interactions (a set of functional associations between genes), (3) protein-protein interactions, (4) gene pathways (sequential steps mediated by gene function that operate together to determine a biological process), and (5) Pearson correlation coefficients (a measurement of the degree of similarity between two entities). Knowledge graphs can be considered as a series of triplets (e1, r, e2) describing the relationship r between two entities e1 and e2. The entities could refer to genes, therapeutics, or even broader biological concepts, appearing, for example, as (gene A, regulates, gene B), (disease A, downregulates, gene B), or even (drug A, treats, disease A). The relational datasets are noisy and incomplete; a relationship may appear as (disease A, downregulates, ?), (drug A, treats, ?), or (?, treats, disease A). The drug discovery process can then be reformulated as finding missing links between the various embedded entities.
Prediction models can be trained to find these missing links, which in turn may lead to new disease biomarkers, drug repurposing, and drug discovery, respectively. The network representations generated can then be used for the discovery of disease gene signatures. Possible KGE model architectures include ComplEx, DistMult, RotatE, TransE, and TransH; Hetionet and BioKG are biomedical knowledge graphs assembled specifically for drug discovery.
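To illustrate how link prediction works over such triplets, the sketch below scores candidate links with the TransE objective, under which a true triplet should satisfy head + relation ≈ tail. The toy entities, relations, and random vectors are assumptions for demonstration only; in practice the embeddings are learned from a large biomedical graph such as Hetionet by minimizing a margin-based ranking loss over known triplets.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 32

# Toy vocabulary; a real biomedical knowledge graph holds millions of
# entities (genes, drugs, diseases) and dozens of relation types.
entities = {n: rng.normal(size=DIM) for n in ["drug_A", "gene_B", "disease_C"]}
relations = {n: rng.normal(size=DIM) for n in ["treats", "downregulates"]}

def transe_score(head: str, rel: str, tail: str) -> float:
    """TransE: a plausible triplet has head + relation close to tail,
    so the negative distance serves as a plausibility score."""
    return -float(np.linalg.norm(entities[head] + relations[rel] - entities[tail]))

# Link prediction for the incomplete triplet (drug_A, treats, ?):
candidates = ["gene_B", "disease_C"]
ranked = sorted(candidates,
                key=lambda t: transe_score("drug_A", "treats", t),
                reverse=True)
print(ranked)  # most plausible tail entity first
```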
To monitor therapeutic efficacy, we need measures of effectiveness for a therapeutic. The values commonly used are discussed below.

Preclinical IC50

The cell line resources highlighted above also provide preclinical efficacy data in the form of an IC50 for each therapeutic-cancer cell line combination. Within the context of cancer treatment, the IC50 (half-maximal inhibitory concentration) is the drug concentration required to inhibit the growth of the cancer cells by 50%. While the IC50 is an indicator of potential efficacy, the relationship between the IC50 value and drug approval is unclear. The IC50 is an in vitro measurement, so translation into clinical efficacy is not guaranteed. Furthermore, it does not take into account the potential toxicity of the therapeutic being investigated.

Clinical outcomes

The focus of most drug efficacy models has been the preclinical IC50 measurement, as a number of public databases, such as the Cancer Cell Line Encyclopedia (CCLE) (Broad Institute, https://sites.broadinstitute.org/ccle/ ) and Genomics of Drug Sensitivity in Cancer (GDSC) ( https://www.cancerrxgene.org/ ), provide those data in a centralized location. In the case of clinical outcomes, the data are less centralized and require a fair amount of curation. The major resource for clinical outcomes is ClinicalTrials.gov , a registry of clinical trials run by the US National Library of Medicine; however, the data provided require manual amending and curation. In oncology, a number of key clinical endpoints are used to assess clinical efficacy.

1. Objective response rate (ORR): the percentage of patients who respond to treatment in a defined manner, e.g., the tumor shrinks or disappears.
2. Progression-free survival: the median or mean period of time that each patient spends without the disease progressing or advancing further.
3. Overall survival: the median or mean period of time that each patient who takes a particular treatment survives post-treatment.

A particular challenge in applying the preclinical cell line approach to patient data is that there are comparatively few datasets where the -omic profiles, treatment, and response of the patients are all available.

Models of interest to predict efficacy

A number of models have been developed to predict the IC50 of drug-cell line combinations. In the accompanying table we list a number of the models that have emerged in the past few years, together with their Spearman correlation as an assessment metric. All the models follow the same core idea of having therapeutic and disease state representations, with the goal of predicting the IC50 of a drug and cell line combination. The biggest differences are what the models use to represent the therapeutic and the disease state, and the underlying architecture of the neural network.

Clinical efficacy modeling

Similar approaches have yet to emerge to predict clinical efficacy directly. The limitations described above regarding clinical data make adopting the preclinical framework challenging. Specifically, the biggest issue is the lack of large databases of patient outcomes with multiple treatment options. To address this issue, we can create patient population representations, called virtual cohorts, based on: (1) cancer type, (2) stage, (3) demographic information, and (4) biomarkers. The response of these virtual cohorts to different therapies is then considered as independent data points, with a representative -omic profile generated for each cohort.
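Before moving to the modeling results, it may help to recall where the preclinical numbers come from. The sketch below fits a two-parameter Hill curve to synthetic viability measurements and reads off the IC50; the dose grid, viability values, and parameterization are illustrative assumptions, since resources such as CCLE and GDSC publish already-fitted values.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, ic50, slope):
    """Two-parameter Hill curve: fraction of cells surviving at each dose."""
    return 1.0 / (1.0 + (dose / ic50) ** slope)

# Synthetic viability measurements for one drug-cell line pair (doses in µM).
doses = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
viability = np.array([0.98, 0.95, 0.85, 0.60, 0.35, 0.15, 0.05])

(ic50, slope), _ = curve_fit(hill, doses, viability, p0=[1.0, 1.0])
print(f"Estimated IC50: {ic50:.2f} µM (Hill slope {slope:.2f})")
```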
In C we show the results of a mixed model where we take the transcriptomic profiles of cancer patients and use an IC50 predictor to model efficacy. C is the same plot as in B; however, rather than coloring by TCGA subtype, we show the predicted efficacy value for each patient and a representative therapeutic, in this case eribulin. As a result, we have a proxy for how effective each treatment is predicted to be for each patient with their unique expression pattern. We then integrate the predicted efficacy over the patients for each subtype and plot the expected efficacy of the therapeutic for each indication subtype, as shown in D. It is important to highlight that we could also have computed the predicted efficacy for clusters generated from the embedding itself; however, we chose to focus on the canonical subtypes to compare the predicted results with data in the literature. In this particular example, the results are consistent, as eribulin has been found to be more effective against triple-negative "basal" breast cancer. For a more systematic assessment of this approach, we can utilize a dataset of 194 therapeutics that have either been approved or rejected by the FDA for a set of 14 oncology subtypes, and assess how the predicted IC50 values correspond to their clinical potential. Before assessing the predicted values, we should first establish a baseline of how predictive the real IC50 values are of eventual approval for 74 distinct therapeutics present in CCLE. In the top panels of 4A and 4B we show the IC50 value distributions of drug-disease pairs collected from CCLE for both the approved and the rejected drugs. The results show that therapeutics with a low IC50 value against cancer cell lines have a higher historical approval rate than those with higher IC50 values. Notice, however, that a low IC50 is no guarantee that the drug gains approval, as a number of low-IC50 drug-disease pairs end up being rejected. The IC50 is a measure of how effective the drug is at inhibiting a cancer model cell line. On its own, it gives no information on the drug's ability to target healthy cells, nor does it indicate how the drug might behave within the context of the human body. However, it is quite clear that it has real predictive value. In the middle panels of 4A and 4B we show the results using the predicted IC50 values for the various therapeutic agents against the CCLE cancer cell lines. The relationship between IC50 and approval is consistent with the real data. In the bottom panels of 4A and 4B we also show predicted IC50 values using the patient transcriptomic profiles collected from TCGA. A similar pattern holds: drug-disease pairs expected to have low IC50 values have correspondingly higher historical rates of approval. The results are summarized in C, where the IC50 distributions shown are binned and their historical approval rate is calculated. There is a consistent pattern between the real and the predicted IC50 values, with the high-efficacy models having an increased probability of approval.
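The binning analysis behind that summary panel can be reproduced in a few lines. The sketch below uses synthetic drug-disease pairs in which lower IC50 values are assumed to raise the odds of approval, mirroring the historical trend described above; the data are fabricated purely to show the mechanics.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical drug-disease pairs: predicted log10(IC50) plus approval flag.
log_ic50 = rng.normal(0.0, 1.0, 500)
# Assumption for the toy data: lower IC50 -> higher probability of approval.
approved = rng.random(500) < 1.0 / (1.0 + np.exp(log_ic50))

df = pd.DataFrame({"log_ic50": log_ic50, "approved": approved})
df["bin"] = pd.cut(df["log_ic50"], bins=5)

# Historical approval rate per IC50 bin, as in the summary panel.
print(df.groupby("bin", observed=True)["approved"].mean())
```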
Any therapeutic that seeks to gain FDA approval must have an acceptable safety profile. Therefore, being able to predict the potential toxicity of a new therapeutic agent is just as important as assessing its efficacy. Developing models to predict toxicity requires access to reliable large-scale data for the assessment of various chemical agents. The US Tox21 program is an initiative that has developed a number of in vitro assays that utilize quantitative high-throughput screening to generate a large number of toxicity measurements for thousands of chemical agents. The Tox21 in vitro assays are reported to be as reliable as animal models in predicting human toxicity levels and have clear utility in predicting adverse effects of a drug. The massive Tox21 dataset has been used to develop multiple ML models for predicting toxicity as part of the Tox21 challenge. One of the best performing models achieved an ROC-AUC of 0.88 on predicting Tox21 data. The toxicity prediction can then be utilized by other higher-order models to assess the likelihood of approval for new possible therapeutics. In D we show the relationship between the predicted toxicity and the approval rate. As expected, the therapeutics with lower predicted toxicity have a higher historical approval rate. The relationship in itself is not surprising; however, it is worth noting that the toxicity value used is a purely predicted value from a model that only requires a representation of the therapeutic, in this case a SMILES string.
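A minimal version of this kind of structure-only toxicity model can be assembled from a SMILES featurizer and an off-the-shelf classifier. The sketch below pairs RDKit Morgan fingerprints with a random forest; the four training molecules and their 0/1 labels are placeholders, not real Tox21 annotations, so it illustrates the workflow rather than any validated predictor.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles: str) -> np.ndarray:
    """Morgan fingerprint (radius 2) as a fixed-length bit vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024)
    return np.array(fp)

# Toy training set; labels are illustrative placeholders.
train = {
    "CCO": 0,                     # ethanol
    "c1ccccc1": 1,                # benzene
    "CC(=O)Oc1ccccc1C(=O)O": 0,   # aspirin
    "Nc1ccccc1": 1,               # aniline
}
X = np.stack([featurize(s) for s in train])
y = np.array(list(train.values()))

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
query = featurize("CCN").reshape(1, -1)  # ethylamine
print(clf.predict_proba(query)[0])       # [P(class 0), P(class 1)]
```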
In E we present a simple random-forest classifier developed to predict whether a drug gains approval for a specific indication. The model uses only a few features, which are highlighted on the x axis: the clinical ORR, the predicted IC50, and the predicted toxicity. We show the results of a 10-fold cross-validated model for each feature set. The ORR is in itself predictive of approval (AUC = 0.83) but has a wide spread in AUC between folds. The inclusion of the predicted IC50 and toxicity improves the predictions and makes them more consistent (AUC = 0.89), with a much tighter standard deviation of 0.06 compared with 0.13 for the ORR alone.
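This feature-combination experiment can be emulated with scikit-learn directly. In the sketch below, the 194 drug-indication pairs are simulated (ORR, predicted IC50, and predicted toxicity drawn from arbitrary distributions, with approval generated from a noisy logit), so the printed AUC is a property of the synthetic data, not of the real dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 194  # mirrors the size of the curated approval dataset described above

X = np.column_stack([
    rng.beta(2, 5, n),        # clinical objective response rate
    rng.normal(0.0, 1.0, n),  # predicted log IC50
    rng.random(n),            # predicted toxicity probability
])
# Synthetic ground truth: higher ORR helps; higher IC50 and toxicity hurt.
logit = 4.0 * X[:, 0] - 1.0 * X[:, 1] - 2.0 * X[:, 2]
y = (logit + rng.normal(0.0, 1.0, n) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
print(f"ROC-AUC: {scores.mean():.2f} +/- {scores.std():.2f}")
```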
In this perspective, we lay out the basic schema of the approach that many AI models take in the domain of drug discovery and design. We also review the fundamentals in terms of model types, data sources, and the potential insights each provides. Subsequently, we show the ability of these models to inform the likelihood of approval by utilizing the predicted efficacy and toxicity of a potential therapeutic. Yet there are a number of areas of active research that we have not touched upon so far. Within the domain of therapeutic representation, most of the current work has focused on small-molecule therapies, as they are the most tractable. Methods for predicting the efficacy and toxicity of higher-order therapeutics, such as large proteins, mRNA therapies, and cell therapies, are still lacking. These advanced therapeutic types and their associated representations are an area of active research and are expected to advance significantly in the near term. For the representation of the disease state, we looked at the different -omic profiles as a way to capture the relevant information. While this approach is appropriate for diseases such as cancers and autoimmune conditions, it is not directly transferable to bacterial or viral diseases, where a representation of the pathogen of interest would be more appropriate. In terms of model types, we discussed supervised and unsupervised learning, but we did not delve into reinforcement learning (RL) (a form of ML wherein optimal strategies are found by defining an agent, an environment, and a cost function) or generative models. In RL the approach is quite different, as the researcher must a priori define a state space or "environment," an agent with well-defined actions within the environment, and a cost function to be optimized for a particular task. Moreover, these models can be combined with generative models and efficacy predictors to develop novel therapeutics that are designed to target specific disease states. The use of MLA for the purposes of drug discovery, assessment, and design is still in its infancy. Despite recent advances, it is quite evident that the future will bring even more rapid and consequential applications of MLA in this field.
High-efficiency pharmacogenetic ablation of oligodendrocyte progenitor cells in the adult mouse CNS

Oligodendrocyte progenitor cells (OPCs) are the principal mitotic cell type in the adult mammalian CNS. OPCs are known primarily for generating myelin-forming oligodendrocytes (OLs) during postnatal development and adulthood. Although OPCs are distributed throughout the CNS, including in brain regions where relatively little myelination occurs, there is evidence of OPC heterogeneity among brain regions, raising the prospect that they could possess additional functions beyond oligodendrogenesis. To investigate OPC function, several groups have developed strategies to selectively ablate OPCs, including X-irradiation, laser-mediated ablation, genetically induced cell ablation, or the use of anti-mitotic drugs. However, these approaches have enabled only partial and transient OPC ablation because of incomplete targeting of the OPC population and rapid repopulation by non-ablated OPCs. Consequently, it has not yet been possible to explore the functional consequences of long-term OPC ablation.

Designing a genetic approach to selectively ablate OPCs requires careful consideration of the promoter(s) used for OPC targeting. OPCs are defined by their expression of chondroitin sulfate proteoglycan 4 (CSPG4)/neuron-glial antigen 2 (NG2) and platelet-derived growth factor receptor alpha (PDGFRA). Although the Cspg4 promoter has been used to control transgene expression, NG2 is also expressed by pericytes and some microglia after injury. PDGFRA is also expressed by vascular and leptomeningeal cells (VLMCs) and choroid plexus epithelial cells. Therefore, using either the Cspg4 or Pdgfra promoter alone cannot direct the expression of a suicide gene exclusively to OPCs. To precisely target OPCs, we have generated a novel transgenic mouse model in which the expression of an inducible suicide gene is controlled by two different promoters, namely the Pdgfra and Sox10 promoters, whose overlapping transcriptional activity is restricted to OPCs in the postnatal CNS. The method to conditionally ablate OPCs must also be highly efficient, to overcome the proliferation of non-ablated OPCs that follows incomplete OPC ablation. Moreover, the method should be amenable to precise temporal control and have minimal effect on the animal's overall health. To date, no strategy for ablating the OPC population has been described that meets all these requirements. Here we describe the development of a highly efficient pharmacogenetic approach to ablate OPCs in the adult mouse CNS that overcomes many limitations of previous approaches. The model provides a valuable tool for studies aimed at better understanding the functions of OPCs in the adult CNS.

DTA-mediated ablation of OPCs induced rapid OPC regeneration

To specifically ablate OPCs in the adult mouse CNS, we used an intersectional genetic approach to direct the inducible expression of a suicide gene in cells expressing both PDGFRA and SOX10. This was achieved by crossing two transgenic mouse lines, the Pdgfrα-CreERT2 line and the Sox10-lox-GFP-STOP-lox-DTA (Sox10-DTA) line, to enable diphtheria toxin A (DTA) expression in adult OPCs upon delivery of tamoxifen (TAM) ( A).
As SOX10 is expressed exclusively by oligodendroglia in the postnatal CNS, this ensures that DTA expression is restricted to OPCs and is excluded from VLMCs and choroid plexus epithelial cells, which express PDGFRA but not SOX10. DTA expression is not expected to target Schwann cells in the peripheral nervous system, since most Schwann cells express SOX10 but not PDGFRA. Moreover, it has been demonstrated conclusively that the Pdgfrα-CreERT2 line used in our study does not target Schwann cells. TAM was administered to 8-week-old Pdgfrα-CreERT2+/+:Sox10-DTA+/− mice (hereafter denoted Pdgfrα+:DTA+), and Pdgfrα-CreERT2+/+:Sox10-DTA−/− littermates lacking the Sox10-DTA allele (denoted Pdgfrα+:DTA−), which served as non-ablated controls ( B). Immunohistochemistry on brains of non-ablated Pdgfrα+:DTA− controls sacrificed 4 days post-TAM revealed abundant PDGFRA+ OPCs throughout the brain, including the corpus callosum ( C). By contrast, TAM-administered Pdgfrα+:DTA+ mice assessed at the same time point had very few PDGFRA+ OPCs in the corpus callosum ( D), suggesting that Cre-mediated induction of DTA expression resulted in OPC ablation. In these OPC-deficient mice, Sox10 promoter-driven GFP expression was restricted to SOX10+ CC1+ cells ( A and S1B), indicating that Cre-mediated recombination of the Sox10-DTA allele targeted OPCs but not mature OLs.

Although Pdgfrα+:DTA+ mice exhibited marked OPC depletion 4 days post-TAM, OPC density in the corpus callosum returned to control levels by day 8 post-TAM and increased further over the subsequent 2 days ( E–1G). The marked increase in OPC density observed 10 days post-TAM suggests that OPCs exhibit robust proliferation following acute ablation. Indeed, most OPCs present after 10 days were newly generated, as demonstrated by the significant proportion of PDGFRA+ cells incorporating 5-ethynyl-2′-deoxyuridine (EdU) provided continuously in the drinking water after TAM delivery ( C and S1D). Most OPCs present 4 days post-TAM expressed GFP ( D), suggesting that surviving OPCs were principally those in which the Sox10-DTA allele had not recombined. Some EdU+ OPCs did not express GFP ( D), most likely reflecting low transcriptional activity of the Sox10 promoter that directs GFP expression. Supporting this idea, not all SOX10+ oligodendroglia in Pdgfrα+:DTA+ brains examined 4 days post-TAM expressed GFP ( A). Together, these data demonstrate that the vast majority of OPCs in Pdgfrα+:DTA+ mice were depleted after TAM administration. However, residual non-recombined OPCs exhibited a robust proliferative response to OPC ablation, resulting in restoration of OPCs to a similar or higher density than non-ablated controls within 8–10 days post-TAM.

Intracisternal infusion of AraC following TAM prevented rapid OPC regeneration

Given that incomplete OPC ablation triggered non-recombined OPCs that had escaped DTA-mediated apoptosis to proliferate and repopulate the CNS, we introduced a second intervention designed to kill these rapidly dividing OPCs. After TAM administration, the anti-mitotic cytosine-β-D-arabinofuranoside (AraC) was infused into the cisterna magna of Pdgfrα+:DTA+ mice to deplete proliferating OPCs. We elected to administer AraC directly into the cerebrospinal fluid rather than providing additional doses of TAM, given that we have noted toxicity when administering TAM for more than 4 days.
Osmotic minipumps were implanted on day 4 post-TAM and removed on day 10 ( H), to provide 6 days of AraC infusion during the period of marked OPC proliferation. Vehicle-only controls received artificial cerebrospinal fluid (aCSF) without AraC. Mice were sacrificed either immediately after removal of the osmotic minipump or 10 days later ( H). TAM-administered Pdgfrα+:DTA+ mice examined after 6 days of vehicle infusion, denoted as 0 days post-pump removal (dppr), had numerous PDGFRA+ OPCs in the corpus callosum ( I), similar in density to that observed in Pdgfrα+:DTA+ mice administered TAM alone and assessed 10 days later ( F). By contrast, no PDGFRA+ OPCs could be identified in the corpus callosum of TAM-administered Pdgfrα+:DTA+ mice sacrificed immediately after AraC infusion ( K). Indeed, we did not observe any OPCs in the cerebrum of OPC-ablated mice in sections of the rostral forebrain examined at 0 dppr, and only occasionally did we detect OPCs in the brainstem. Following ablation, OPCs remained depleted for at least 10 days post-AraC infusion ( L). Notably, AraC delivery to wild-type mice did not result in OPC loss ( F), consistent with the low proliferation rate of OPCs under basal conditions and the known homeostatic control mechanisms that maintain OPC density in equilibrium. Although PDGFRA+ OPCs were almost completely absent in TAM + AraC-administered Pdgfrα+:DTA+ mice, vascular-associated PDGFRA+ GFP− cells surrounding PECAM-1+ endothelial cells remained intact ( M). Vascular-associated PDGFRA+ GFP− cells in OPC-ablated mice did not exhibit typical ramified OPC morphology. We identified these cells as laminin-1+ VLMCs that are closely associated with, but distinct from, vascular-associated NG2+ PDGFRB+ pericytes ( N and G–S1J), consistent with the recent description of these cells.

To further explore the extent of OPC ablation, we generated Pdgfrα-CreERT2+/−:Ai14-tdTomato+/−:Sox10-DTA+/− mice (hereafter denoted Pdgfrα+:tdT+:DTA+) to enable simultaneous genetic fate-mapping and ablation of OPCs. TAM was administered to 8-week-old Pdgfrα+:tdT+:DTA+ mice to induce expression of both tdTomato and DTA from the recombined Ai14 tdTomato and Sox10-DTA alleles, respectively ( A). Starting 4 days post-TAM, mice received a 6-day infusion of AraC before being sacrificed. Pdgfrα+:tdT+:DTA− littermates administered TAM and infused with vehicle alone served as non-ablated controls. At the end of vehicle infusion, 97.0% ± 0.6% of PDGFRA+ OPCs in non-ablated controls expressed tdTomato ( B and 2E), irrespective of differences in the local density of PDGFRA+ OPCs along the rostrocaudal axis of the brain ( F). Cellular morphology was used to discriminate between OPCs and VLMCs, the former possessing fine ramified processes whereas the latter were devoid of fine processes and exhibited a circular morphology consistent with vascular localization. We also identified numerous tdTomato+ CC1+ OLs generated by these fate-mapped OPCs in non-ablated controls ( D). By contrast, virtually no tdTomato+ PDGFRA+ cells exhibiting typical OPC morphology were detected in the brains of Pdgfrα+:tdT+:DTA+ mice administered TAM + AraC ( B, 2F, and A lower panels), with only perivascular tdTomato+ PDGFRA+ cells remaining ( B and S2C). The mean density of tdTomato+ OPCs in the AraC-infused brains was 99.7% ± 0.2% lower than that observed in vehicle-infused brains (0.16 ± 0.07 versus 70.1 ± 5.8 cells/mm², p < 0.0001).
We also identified significantly fewer non-recombined (tdTomato−) OPCs in ablated mice compared with non-ablated controls (0.53 ± 0.14 versus 2.3 ± 0.2 cells/mm², p < 0.0001) ( G). When tdTomato+ and tdTomato− OPC counts were combined, this equated to a 98.6% ± 0.4% reduction in total OPC density across the entire brain of OPC-ablated mice compared with non-ablated control mice at 0 dppr (p < 0.0001) ( H). By 10 dppr, the mean density of tdTomato+ OPCs in ablated mice had increased marginally but remained 97.1% ± 0.6% lower than that observed in vehicle-infused mice (2.00 ± 0.93 versus 70.1 ± 5.8 cells/mm², p < 0.0001). At this time point, PDGFRA+ cells remained depleted in the cerebrum but began to repopulate caudoventral regions of the brain, particularly the brainstem ( D). By 20 dppr, PDGFRA+ cells were evident in both the brainstem and cerebrum ( E and S2F). Collectively, our results demonstrate highly efficient OPC ablation throughout the brain that persists for at least 10 dppr, after which PDGFRA+ cells started to reappear, first in the brainstem and later in the cerebrum.

Effects of OPC ablation on specific cell types in the brain

To assess whether the induction of DTA-mediated apoptosis was restricted to OPCs within the oligodendroglial lineage, we quantified the densities of CC1+ mature OLs in coronal brain sections of OPC-ablated and non-ablated controls. We observed similar OL densities between groups ( A–3C). The transient ablation of PDGFRA+ NG2+ cells resulted in a complete yet temporary disruption of oligodendrogenesis. This was evaluated by quantitating the density of ASPA+ EdU+ cells in the corpus callosum of OPC-ablated and non-ablated controls that were administered EdU continuously in their drinking water following infusion, until they were perfused at either 11, 20, or 34 dppr ( D). Consistent with this finding, the corpus callosum was deficient in both early (PDGFRA+ GPR17−) and late (PDGFRA+ GPR17+) OPCs, as well as committed oligodendrocyte progenitors (PDGFRA− GPR17+) that are in the process of transitioning into mature OLs, for at least 10 dppr before returning to control levels by 20 dppr ( A and S3B). Despite the transient reduction in oligodendrogenesis, the absolute density of callosal ASPA+ OLs was not significantly different from non-ablated controls ( C), and no differences in myelin abundance were detected ( E and 3F). Together, these findings suggest that OPC ablation resulted in a transient disruption to the production of newborn oligodendroglia but did not affect pre-existing mature OLs.

Next, we turned our examination to other glial cell types in the CNS of OPC-ablated mice. The density of ALDH1L1+ astrocytes was elevated at 0 dppr in Pdgfrα+:DTA+ mice administered TAM + AraC and returned to control levels by 10 dppr ( G). The transient increase in astrocyte density was not accompanied by any notable change in the expression of the intermediate filament protein GFAP, a marker of astrocyte activation ( H). Similarly, the morphology of GFAP+ astrocytes was equivalent in OPC-ablated and non-ablated controls, although soma size and the number and length of processes increased marginally in both groups with time post-infusion ( D–S3G). In terms of the microglial response, we observed a transient increase in the density of Iba1+ microglia in Pdgfrα+:DTA+ mice 4 days after the final TAM administration and at the end of AraC infusion (0 dppr) compared with non-ablated controls (0 dppr), which normalized by 20 dppr ( I).
Morphological analysis of Iba1+ microglia revealed that OPC-ablated mice exhibited changes in process complexity and somal area over time. At 0 dppr, microglia exhibited an increase in the number of secondary processes, reflective of a more active (hyper-ramified) state. Conversely, by 10 dppr, there was a significant reduction in both primary and secondary processes, as well as in process length, suggesting the presence of ameboid or dystrophic microglia ( J and H–S3K). These morphological changes were not associated with any significant shifts in the percentage of microglia that expressed the M1- or M2-associated markers CD16/CD32 or CD206, respectively ( L and S3M), although we noted that the level of expression of CD16/CD32 was elevated at 0 dppr in OPC-ablated mice before subsequently declining. Together, these data suggest that OPC ablation induced a modest and transient neuroinflammatory response that had largely resolved by 20 dppr. To assess whether extensive OPC ablation led to neuronal cell death through neuroinflammation, as demonstrated in a previous study, NeuN+ neurons were quantified in the cerebral cortex, where PDGFRA+ cells remained depleted until 20 dppr. The densities of cortical neurons were comparable between groups at all time points ( K), irrespective of cortical layer (data not shown), suggesting that OPC ablation does not compromise the viability of cortical neurons.

PDGFRA+ cells started to repopulate the cerebrum from 12 dppr but did not derive from the OPC lineage

To examine the kinetics and anatomical origin of PDGFRA+ cells that repopulated the cerebrum following OPC ablation, additional cohorts of TAM + AraC-administered Pdgfrα+:tdT+:DTA+ mice were sacrificed at 0, 12, 18, 20, and 34 dppr. TAM + vehicle-administered Pdgfrα+:tdT+:DTA− mice served as non-ablated controls ( A). In the corpus callosum of OPC-ablated mice, we confirmed highly efficient ablation of tdTomato+ PDGFRA+ OPCs at 0 dppr ( B). However, by 20 dppr, we observed high densities of PDGFRA+ cells that did not express tdTomato, indicating that the vast majority of these cells do not derive from surviving tdTomato+ OPCs ( B, A–S4C). PDGFRA+ cells repopulating the cerebrum first appeared at 12 dppr in a region of the corpus callosum adjacent to the V-SVZ. Unlike PDGFRA+ cells found in the corpus callosum of non-ablated controls, those in OPC-depleted mice co-expressed Nestin, an intermediate filament protein normally expressed by cells in the V-SVZ ( C). Newly generated PDGFRA+ NG2+ cells in OPC-depleted mice also expressed GFP, indicating that the Sox10-DTA allele in these cells was in a non-recombined state and hence that they do not derive from recombined OPCs ( D). In addition, repopulating PDGFRA+ cells were EdU+, indicating that they were born after AraC infusion (data not shown), and many possessed a unipolar or bipolar morphology consistent with migratory activity. The density of PDGFRA+ cells in the rostral cerebrum of OPC-depleted mice returned to levels similar to those of non-ablated controls in a spatiotemporally defined manner, normalizing first in the region of the corpus callosum adjacent to the V-SVZ at 12 dppr ( B), followed by the midline corpus callosum at 18 dppr ( C), and later the cerebral cortex at 20 dppr ( D). Other regions of the rostral cerebrum of OPC-depleted mice exhibited different latencies for PDGFRA+ cell density to return to control levels ( E–5H).
Overall, the mean density of PDGFRA+ cells in the rostral cerebrum returned to levels similar to those of non-ablated controls by 20 dppr, whereas caudal regions of the cerebrum remained deficient at the same time point ( F). Collectively, these data demonstrate that ablation of 98.6% ± 0.4% of OPCs was followed by a late-onset regenerative response resulting in the repopulation of PDGFRA+ cells. The finding that the vast majority of repopulating PDGFRA+ cells in the cerebrum were tdTomato− and GFP+ is inconsistent with the notion that PDGFRA+ cells arise through the proliferative expansion of surviving OPCs. Rather, the spatiotemporal pattern of PDGFRA+ cell regeneration, emerging first in a region of the corpus callosum adjacent to the V-SVZ whilst co-expressing the V-SVZ marker Nestin, raised the possibility that NPCs within the V-SVZ could be the primary source of PDGFRA+ cells that repopulated this region of the brain following OPC ablation.

Regeneration of V-SVZ-derived NPCs after AraC infusion

To investigate whether NPCs in the V-SVZ could serve as a reservoir to regenerate PDGFRA+ cells, we examined the response of NPCs to pharmacogenetic ablation of OPCs. A 6-day infusion of 2% AraC onto the surface of the brain was previously demonstrated to eliminate rapidly dividing cells in the V-SVZ. The subsequent activation and proliferation of quiescent neural stem cells in the V-SVZ resulted in complete regeneration of the neurogenic niche, including transit-amplifying cells and neuroblasts, within 10 days after AraC withdrawal. To assess the regeneration of NPCs in TAM + AraC-administered Pdgfrα+:DTA+ mouse brains, mice were given EdU via their drinking water beginning immediately after pump removal and continuing until they were perfused 10 days later ( H). Compared with control mice, DCX+ neuroblasts were virtually absent from the V-SVZ of AraC-infused mice at 0 dppr ( A). At 10 dppr, the number of DCX+ cells in the V-SVZ of AraC-treated animals was similar to that of vehicle-treated controls ( A and 6B). In addition, these DCX+ cells were positive for EdU ( C), indicating that they were born after AraC infusion. We conclude that the AraC infusion ablated rapidly dividing NPCs in the V-SVZ but that these cells were completely regenerated by 10 dppr.

V-SVZ-derived NPCs contributed to the regeneration of PDGFRA+ cells in the cerebrum after OPC ablation

Our earlier examination of OPC-ablated mice at 12 dppr had revealed that callosal PDGFRA+ cells adjacent to the V-SVZ co-expressed Nestin, a marker of NPCs in the V-SVZ, whereas callosal OPCs in the vehicle-infused controls did not ( C). We posited that repopulating PDGFRA+ cells could derive from NPCs located within the V-SVZ. In support of this possibility, we found that in the corpus callosum of OPC-ablated mice examined at 20 dppr, PDGFRA+ NG2+ cells co-expressed GFP, indicating that the Sox10-GFP allele was in a non-recombined state ( D). We also found that PDGFRA+ NG2+ cells in the dorsolateral corner, as well as in the dorsal and lateral walls of the V-SVZ, co-labeled with EdU, which was provided to the mice following AraC infusion (data not shown). To confirm that NPCs generate PDGFRA+ cells that migrate into the cerebrum following OPC ablation, we developed a Dre recombinase-based viral approach for genetic fate mapping of V-SVZ-derived NPCs ( A–S5D). Two weeks before ablating OPCs, we injected Pdgfrα+:tdT+:DTA+ mice with lentiviruses to transduce NPCs in the V-SVZ ( A).
This resulted in the expression of a Myc-tagged, membrane-targeted mKate2 fluorescent reporter protein by a subset of cells in the V-SVZ ( E–S5I). At 20 dppr, we detected mKate2 in a subpopulation of newly generated PDGFRA+ cells in the corpus callosum, indicating that they had derived from NPCs ( B and 7C). We corroborated the V-SVZ origin of newly generated PDGFRA+ cells by simultaneous ablation of both parenchymal OPCs and oligodendrogenic NPCs using Nestin-CreERT2+/−;Pdgfrα-CreERT2+/−;Sox10-DTA+/− transgenic mice (hereafter denoted Nestin+:Pdgfra+:DTA+ mice) ( D). The persistent absence of PDGFRA+ cells in the cerebrum of parenchymal OPC and oligodendrogenic NPC co-ablated mice at 20 dppr after AraC withdrawal indicates that regenerating PDGFRA+ cells in the dorsal cerebrum of OPC-ablated mice derive principally from Nestin-expressing NPCs residing in the V-SVZ ( E, 7F, and J). Of note, Nestin+:Pdgfra+:DTA+ mice exhibited hydrocephalus, which we believe reflects a degree of TAM-independent recombination in SOX10+ Nestin+ neural crest cells during development. Despite evidence of hydrocephalus, we observed a similar density of neurogenic NPCs expressing DCX in the lateral walls and dorsolateral corner of the V-SVZ of TAM + AraC-administered Nestin+:Pdgfra+:DTA+ mice compared with non-ablated controls ( L and S5M). Thus, although neuroblast production persists in TAM + AraC-administered Nestin+:Pdgfra+:DTA+ mice, PDGFRA+ cells are not viable. Collectively, these data indicate that PDGFRA+ cells that repopulate the dorsal cerebrum of OPC-depleted mice beginning at 12 dppr arise principally from oligodendrogenic NPCs residing in the V-SVZ.
Although Pdgfrα + :DTA + mice exhibited marked OPC depletion 4 days post-TAM, OPC density in the corpus callosum returned to control levels by day 8 post-TAM and increased further over the subsequent 2 days ( E–1G). The marked increase in OPC density observed 10 days post-TAM suggests that OPCs exhibit robust proliferation following acute ablation. Indeed, most OPCs present after 10 days were newly generated, as demonstrated by the significant proportion of PDGFRA + cells incorporating 5-ethynyl-2′-deoxyuridine (EdU) provided continuously in the drinking water after TAM delivery ( C and S1D). Most OPCs present 4 days post-TAM expressed GFP ( D), suggesting that surviving OPCs were principally those in which the Sox10-DTA allele had not recombined. Some EdU + OPCs did not express GFP ( D), most likely reflecting low transcriptional activity of the Sox10 promoter that directs GFP expression. Supporting this idea, not all SOX10 + oligodendroglia in Pdgfrα + :DTA + brains examined 4 days post-TAM expressed GFP ( A). Together, these data demonstrate that the vast majority of OPCs in Pdgfrα + :DTA + mice were depleted after TAM administration. However, residual non-recombined OPCs exhibited a robust proliferative response to OPC ablation, resulting in restoration of OPCs to similar or higher density than non-ablated controls within 8–10 days post-TAM. Given that incomplete OPC ablation triggered non-recombined OPCs that had escaped DTA-mediated apoptosis to proliferate and repopulate the CNS, we introduced a second intervention designed to kill these rapidly dividing OPCs. After TAM administration, the anti-mitotic cytosine-β-D-arabinofuranoside (AraC) was infused into the cisterna magna of Pdgfrα + :DTA + mice to deplete proliferating OPCs. We elected to administer AraC directly into the cerebrospinal fluid rather than providing additional doses of TAM, given that we have noted toxicity when administering TAM for more than 4 days. Osmotic minipumps were implanted on day 4 post-TAM and removed on day 10 ( H), to provide 6 days of AraC infusion during the period of marked OPC proliferation. Vehicle-only controls received artificial cerebrospinal fluid (aCSF) without AraC. Mice were sacrificed either immediately after removal of the osmotic minipump or 10 days later ( H). TAM-administered Pdgfrα + :DTA + mice examined after 6 days of vehicle infusion, denoted as 0 days post-pump removal (dppr), had numerous PDGFRA + OPCs in the corpus callosum ( I), similar in density to that observed in Pdgfrα + :DTA + mice administered TAM alone and assessed 10 days later ( F). By contrast, no PDGFRA + OPCs could be identified in the corpus callosum of TAM-administered Pdgfrα + :DTA + mice sacrificed immediately after AraC infusion ( K). Indeed, we did not observe any OPCs in the cerebrum of OPC-ablated mice in sections of the rostral forebrain examined at 0 dppr, and only occasionally did we detect OPCs in the brainstem. Following ablation, OPCs remained depleted for at least 10 days post-AraC infusion ( L). Notably, AraC delivery to wild-type mice did not result in OPC loss ( F), consistent with the low proliferation rate of OPCs under basal conditions , and known homeostatic control mechanisms that maintain OPC density in equilibrium. Although PDGFRA + OPCs were almost completely absent in TAM + AraC-administered Pdgfrα + :DTA + mice, vascular-associated PDGFRA + GFP – cells surrounding PECAM-1 + endothelial cells remained intact ( M). 
Vascular-associated PDGFRA + GFP – cells in OPC-ablated mice did not exhibit typical ramified OPC morphology. We identified these cells as laminin-1 + VLMCs that are closely associated with but distinct from vascular-associated NG2 + PDGFRB + pericytes ( N and G–S1J), consistent with the recent description of these cells. To further explore the extent of OPC ablation, we generated Pdgfrα-CreER T2+/− :Ai14-tdTomato +/− :Sox10-DTA +/− mice (hereafter denoted Pdgfrα + :tdT + :DTA + ), to enable simultaneous genetic fate-mapping and ablation of OPCs. TAM was administered to 8-week-old Pdgfrα + :tdT + :DTA + mice to induce expression of both tdTomato and DTA from the Ai14 tdTomato and Sox10-DTA recombined alleles, respectively ( A). Starting 4 days post-TAM, mice received a 6-day infusion of AraC before being sacrificed. Pdgfrα + :tdT + :DTA − littermates administered TAM and infused with vehicle alone served as non-ablated controls. At the end of vehicle infusion, 97.0% ± 0.6% of PDGFRA + OPCs in non-ablated controls expressed tdTomato ( B and 2E), irrespective of differences in the local density of PDGFRA + OPCs along the rostrocaudal axis of the brain ( F). Cellular morphology was used to discriminate between OPCs and VLMCs, the former possessing fine ramified processes whereas the latter were devoid of fine processes and exhibited a circular morphology consistent with vascular localization. We also identified numerous tdTomato + CC1 + OLs generated by these fate-mapped OPCs in non-ablated controls ( D). By contrast, virtually no tdTomato + PDGFRA + cells exhibiting typical OPC morphology were detected in the brains of Pdgfrα + :tdT + :DTA + mice administered TAM + AraC ( B, 2F, and A lower panels) with only perivascular tdTomato + PDGFRA + cells remaining ( B and S2C). The mean density of tdTomato + OPCs in the AraC-infused brains was 99.7% ± 0.2% lower than that observed in vehicle-infused brains (0.16 ± 0.07 versus 70.1 ± 5.8 cells/mm 2 , p < 0.0001). We also identified significantly fewer non-recombined (tdTomato − ) OPCs in ablated mice compared with non-ablated controls (0.53 ± 0.14 versus 2.3 ± 0.2 cells/mm 2 , p < 0.0001) ( G). When tdTomato + and tdTomato − OPC counts were combined, this equated to a 98.6% ± 0.4% reduction in total OPC density across the entire brain of OPC-ablated mice compared with non-ablated control mice at 0 dppr (p < 0.0001) ( H). By 10 dppr, the mean density of tdTomato + OPCs in ablated mice had increased marginally but remained 97.1% ± 0.6% lower than that observed in vehicle-infused mice (2.00 ± 0.93 versus 70.1 ± 5.8 cells/mm 2 , p < 0.0001). At this time point, PDGFRA + cells remained depleted in the cerebrum, but began to repopulate caudoventral regions of the brain, particularly the brainstem ( D). By 20 dppr, PDGFRA + cells were evident in both the brainstem and cerebrum ( E and S2F). Collectively, our results demonstrate highly efficient OPC ablation throughout the brain that persists for at least 10 dppr after which PDGFRA + cells started to reappear, first in the brainstem then later in the cerebrum. To assess whether the induction of DTA-mediated apoptosis was restricted to OPCs within the oligodendroglial lineage, we quantified the densities of CC1 + mature OLs in coronal brain sections of OPC-ablated and non-ablated controls. We observed similar OL densities between groups ( A–3C). The transient ablation of PDGFRA + NG2 + cells resulted in a complete yet temporary disruption in oligodendrogenesis. 
This was evaluated by quantitating the density of ASPA + EdU + cells in the corpus callosum of OPC-ablated and non-ablated controls that were administered EdU continuously in their drinking water following infusion, until they were perfused at either 11, 20, or 34 dppr ( D). Consistent with this finding, the corpus callosum was deficient in both early (PDGFRA + GPR17 − ) and late (PDGFRA + GPR17 + ) OPCs as well as committed oligodendrocyte progenitors (PDGFRA − GPR17 + ) that are in the process of transitioning into mature OLs for at least 10 dppr before returning to control levels by 20 dppr ( A and S3B). Despite the transient reduction in oligodendrogenesis, the absolute density of callosal ASPA + OLs was not significantly different from non-ablated controls ( C), and no differences in myelin abundance were detected ( E and 3F). Together these findings suggest that OPC ablation resulted in a transient disruption to the production of newborn oligodendroglia but did not affect pre-existing mature OLs. Next, we turned our examination to other glial cell types in the CNS of OPC-ablated mice. The density of ALDH1L1 + astrocytes was elevated at 0 dppr in Pdgfrα + :DTA + mice administered TAM + AraC and returned to control levels by 10 dppr ( G). The transient increase in astrocyte density was not accompanied by any notable change in the expression of the intermediate filament protein GFAP, a marker of astrocyte activation ( H). Similarly, the morphology of GFAP + astrocytes was equivalent in OPC-ablated and non-ablated controls, although soma size and the number and length of processes increased marginally in both groups with time post-infusion ( D–S3G). In terms of microglial response, we observed a transient increase in the density of Iba1 + microglia in Pdgfrα + :DTA + mice 4 days after final TAM administration and at the end of AraC infusion (0 dppr) compared with non-ablated controls (0 dppr) which normalized by 20 dppr ( I). Morphological analysis of Iba1 + microglia revealed that OPC-ablated mice exhibited changes in process complexity and somal area over time. At 0 dppr, microglia exhibited an increase in the number of secondary processes, reflective of a more active (hyper-ramified) state. Conversely, by 10 dppr, there was a significant reduction in both primary and secondary processes, as well as process length, suggesting the presence of ameboid or dystrophic microglia ( J and H–S3K). These morphological changes were not associated with any significant shifts in the percentage of microglia that expressed the M1- or M2-associated markers CD16/CD32 or CD206, respectively ( L and S3M), although we noted that the level of expression of CD16/CD32 was elevated at 0 dppr in OPC-ablated mice before subsequently declining. Together, these data suggest that OPC ablation induced a modest and transient neuroinflammatory response that had largely resolved by 20 dppr. To assess whether extensive OPC ablation led to neuronal cell death through neuroinflammation as demonstrated in a previous study, NeuN + neurons were quantified in the cerebral cortex where PDGFRA + cells remained depleted until 20 dppr. The densities of cortical neurons were comparable between groups at all time points ( K), irrespective of cortical layer (data not shown), suggesting that OPC ablation does not compromise the viability of cortical neurons. 
PDGFRA + cells started to repopulate the cerebrum from 12 dppr but did not derive from the OPC lineage

To examine the kinetics and anatomical origin of PDGFRA + cells that repopulated the cerebrum following OPC ablation, additional cohorts of TAM + AraC-administered Pdgfrα + :tdT + :DTA + mice were sacrificed at 0, 12, 18, 20, and 34 dppr. TAM + vehicle-administered Pdgfrα + :tdT + :DTA − mice served as non-ablated controls ( A). In the corpus callosum of OPC-ablated mice, we confirmed highly efficient ablation of tdTomato + PDGFRA + OPCs at 0 dppr ( B). However, by 20 dppr, we observed high densities of PDGFRA + cells that did not express tdTomato, indicating that the vast majority of these cells do not derive from surviving tdTomato + OPCs ( B, A–S4C). PDGFRA + cells repopulating the cerebrum first appeared at 12 dppr in a region of the corpus callosum adjacent to the V-SVZ. Unlike PDGFRA + cells found in the corpus callosum of non-ablated controls, those in OPC-depleted mice co-expressed Nestin, an intermediate filament protein normally expressed by cells in the V-SVZ ( C). Newly generated PDGFRA + NG2 + cells in OPC-depleted mice also expressed GFP, indicating that the Sox10-DTA allele in these cells was in a non-recombined state and therefore that they do not derive from recombined OPCs ( D). In addition, repopulating PDGFRA + cells were EdU + , indicating that they were born after AraC infusion (data not shown), and many possessed a unipolar or bipolar morphology consistent with migratory activity. The density of PDGFRA + cells in the rostral cerebrum of OPC-depleted mice returned to levels similar to those of non-ablated controls in a spatiotemporally defined manner, normalizing first in the region of the corpus callosum adjacent to the V-SVZ at 12 dppr ( B), followed by the midline corpus callosum at 18 dppr ( C), and later in the cerebral cortex at 20 dppr ( D). Other regions of the rostral cerebrum of OPC-depleted mice exhibited different latencies for PDGFRA + cell density to return to control levels ( E–5H). Overall, the mean density of PDGFRA + cells in the rostral cerebrum returned to levels similar to those of non-ablated controls by 20 dppr, whereas caudal regions of the cerebrum remained deficient at the same time point ( F). Collectively, these data demonstrate that ablation of 98.6% ± 0.4% of OPCs was followed by a late-onset regenerative response resulting in the repopulation of PDGFRA + cells. The finding that the vast majority of repopulating PDGFRA + cells in the cerebrum were tdTomato − and GFP + is inconsistent with the notion that PDGFRA + cells arise through the proliferative expansion of surviving OPCs. Rather, the spatiotemporal pattern of PDGFRA + cell regeneration, emerging first in a region of the corpus callosum adjacent to the V-SVZ whilst co-expressing the V-SVZ marker Nestin, raised the possibility that NPCs within the V-SVZ could be the primary source of PDGFRA + cells that repopulated this region of the brain following OPC ablation. To investigate whether NPCs in the V-SVZ could serve as a reservoir to regenerate PDGFRA + cells, we examined the response of NPCs to pharmacogenetic ablation of OPCs. A 6-day infusion of 2% AraC onto the surface of the brain was previously demonstrated to eliminate rapidly dividing cells in the V-SVZ. The subsequent activation and proliferation of quiescent neural stem cells in the V-SVZ resulted in complete regeneration of the neurogenic niche, including transit-amplifying cells and neuroblasts, within 10 days after AraC withdrawal.
To assess the regeneration of NPCs in TAM + AraC-administered Pdgfrα + :DTA + mouse brains, mice were given EdU via their drinking water beginning immediately after pump removal and continuing until they were perfused 10 days later ( H). Compared with vehicle-treated controls, DCX + neuroblasts were virtually absent from the V-SVZ of AraC-infused mice at 0 dppr ( A). At 10 dppr, the number of DCX + cells in the V-SVZ of AraC-treated animals was similar to that of vehicle-treated controls ( A and 6B). In addition, these DCX + cells were positive for EdU ( C), indicating that they were born after AraC infusion. We conclude that the AraC infusion ablated rapidly dividing NPCs in the V-SVZ but that these cells were completely regenerated by 10 dppr.

V-SVZ NPCs generate PDGFRA + cells in the cerebrum after OPC ablation

Our earlier examination of OPC-ablated mice at 12 dppr had revealed that callosal PDGFRA + cells adjacent to the V-SVZ co-expressed Nestin, a marker of NPCs in the V-SVZ, whereas callosal OPCs in the vehicle-infused controls did not ( C). We posited that repopulating PDGFRA + cells could derive from NPCs located within the V-SVZ. In support of this possibility, we found that in the corpus callosum of OPC-ablated mice examined at 20 dppr, PDGFRA + NG2 + cells co-expressed GFP, indicating that the Sox10-DTA allele was in a non-recombined state ( D). We also found that PDGFRA + NG2 + cells in the dorsolateral corner as well as in the dorsal and lateral walls of the V-SVZ co-labeled with EdU, which was provided to the mice following AraC infusion (data not shown). To confirm that NPCs generate PDGFRA + cells that migrate into the cerebrum following OPC ablation, we developed a Dre recombinase-based viral approach for genetic fate mapping of V-SVZ-derived NPCs ( A–S5D). Two weeks before ablating OPCs, we injected Pdgfrα + :tdT + :DTA + mice with lentiviruses to transduce NPCs in the V-SVZ ( A). This resulted in the expression of a Myc-tagged membrane-targeted mKate2 fluorescent reporter protein by a subset of cells in the V-SVZ ( E–S5I). At 20 dppr, we detected mKate2 in a subpopulation of newly generated PDGFRA + cells in the corpus callosum, indicating that they had derived from NPCs ( B and 7C). We corroborated the V-SVZ origin of newly generated PDGFRA + cells by simultaneous ablation of both parenchymal OPCs and oligodendrogenic NPCs using Nestin-CreER T2+/− ; Pdgfrα-CreER T2+/− ; Sox10-DTA +/− transgenic mice (denoted hereafter as Nestin + :Pdgfra + :DTA + mice) ( D). The persistent absence of PDGFRA + cells in the cerebrum of parenchymal OPC and oligodendrogenic NPC co-ablated mice at 20 dppr after AraC withdrawal indicates that regenerating PDGFRA + cells in the dorsal cerebrum of OPC-ablated mice derive principally from Nestin-expressing NPCs residing in the V-SVZ ( E, 7F, and J). Of note, Nestin + :Pdgfra + :DTA + mice exhibited hydrocephalus, which we believe reflects a degree of TAM-independent recombination in SOX10 + Nestin + neural crest cells during development. Despite evidence of hydrocephalus, we observed a similar density of neurogenic NPCs expressing DCX in the lateral walls and dorsolateral corner of the V-SVZ of TAM + AraC-administered Nestin + :Pdgfra + :DTA + mice compared with non-ablated controls ( L and S5M). Thus, although neuroblast production persists in TAM + AraC-administered Nestin + :Pdgfra + :DTA + mice, PDGFRA + cells are not viable.
Collectively, these data indicate that PDGFRA + cells that repopulate the dorsal cerebrum of OPC-depleted mice beginning 12 dppr arise principally from oligodendrogenic NPCs residing in the V-SVZ.

Discussion

The development of methods to conditionally ablate OPCs in the adult mouse CNS provides a powerful experimental paradigm to explore their function. Although several approaches have been described, the techniques established to date have not allowed the complete depletion of OPCs throughout the brain. Without complete OPC ablation, surviving OPCs enter the cell cycle and rapidly regenerate their population, complicating the interpretation of results. To avert rapid regeneration by surviving OPCs, we have developed a pharmacogenetic approach that allows high-efficiency ablation of OPCs throughout the brain. This approach consists of genetic elements permitting TAM-dependent induction of DTA expression exclusively in OPCs, followed by a pharmacological intervention to prevent repopulation by OPCs that evade genetic targeting. Genetic control of OPC ablation is achieved by crossing the Pdgfra-CreER T2 and Sox10-lox-GFP-STOP-lox-DTA transgenic mouse lines. Double transgenic mice enable highly specific ablation of OPCs at a prescribed time point due to the dependence of DTA expression upon three levels of regulation: (1) activity of the Pdgfra promoter, (2) activity of the oligodendroglial-specific Sox10 promoter, and (3) temporal control provided by TAM-dependent regulation of CreER T2 activity. However, delivery of TAM alone to Pdgfrα + :DTA + mice was insufficient for complete or durable OPC ablation because of the rapid proliferation of residual surviving OPCs. As cumulative toxicity precludes delivery of TAM beyond 4 days, following TAM delivery, mice were infused intracisternally with the anti-mitotic drug AraC to prevent OPCs that escape genetic targeting from re-entering the cell cycle and repopulating the brain. In isolation, intracisternal infusion of AraC to wild-type mice did not reduce OPC density, consistent with the fact that only a fraction of OPCs in the healthy adult CNS are dividing at a given time and with the homeostatic control mechanisms that maintain OPC density when OPCs are sparsely depleted. However, the combined use of both genetic and pharmacological approaches eliminated 98.6% ± 0.4% of all OPCs throughout the brain without causing any overt adverse effects on the health of the animals. The combined pharmacogenetic approach that we have developed to target OPCs is anticipated to result in the death of cells expressing both PDGFRA and SOX10 as well as of mitotic cells exposed to AraC. By examining the densities of various non-OPC cell types in the brains of OPC-ablated mice, we found that although OPCs, committed oligodendrocyte progenitors (PDGFRA − GPR17 + ), and DCX + cells in the V-SVZ are depleted, the density of mature OLs and neurons remains unchanged. Significantly, however, the ablation of OPCs is anticipated to trigger a broad range of direct and indirect effects upon other neural cell types that are not necessarily reflected by a change in cell density. For instance, the activation of microglia to facilitate phagocytosis of cellular debris of OPCs undergoing cell death is expected. Indeed, we identified an initial increase and subsequent decrease in the density of microglia over the course of OPC ablation and subsequent regeneration. We also observed changes in microglial phenotype suggesting that OPC ablation alters their activation status.
Given these observations, the effect that changes in microglial density and activation status have upon behavioral, cellular and molecular readouts should be taken into consideration in the future application of the model. One approach to address the role that microglia exert in this model would be to deplete microglia via inhibition of the colony-stimulating factor 1 receptor, in the context of OPC ablation, to discern the immune-mediated effects of OPC ablation from primary effects caused by depletion of OPCs. The effects of OPC ablation on other neural cell types, such as endothelial cells, remain to be determined. Despite the highly efficient ablation of OPCs throughout the brain, PDGFRA + NG2 + cells were eventually regenerated, beginning from around 12 dppr. In this context, it is important to note that AraC infusion also ablated almost all DCX + neuroblasts in the V-SVZ before their return to normal density by 10 dppr. The transient depletion of DCX + cells in the V-SVZ is consistent with previous reports demonstrating that AraC-mediated elimination of rapidly dividing transit-amplifying cells and neuroblasts in the V-SVZ activates quiescent neural stem cells to re-enter the cell cycle, leading to complete regeneration of neuroblasts in the V-SVZ within 10 days following AraC withdrawal. The kinetics of neuroblast regeneration in the V-SVZ aligned closely with the spatiotemporal pattern of PDGFRA + NG2 + cell regeneration in the cerebrum. First, we noted that recombined (tdTomato + ) OPCs did not contribute to the regeneration of PDGFRA + cells. Rather, we showed that non-recombined (GFP + ) PDGFRA + cells arose first in the vicinity of the V-SVZ and expressed Nestin, a known V-SVZ marker, shortly after DCX + cell density returned to control levels after ablation. This observation could be explained by two distinct possibilities. Our a priori view was that this reflected recruitment of oligodendrogenic NPCs from the V-SVZ that maintain their expression of Nestin for some time after they migrate out of the V-SVZ, given the known capacity of the V-SVZ to generate PDGFRA + NG2 + cells in health and disease. An alternate possibility is that Nestin + GFP + PDGFRA + cells observed at 12 dppr reflect non-recombined OPCs that survive pharmacogenetic ablation and transiently upregulate Nestin in response to neuroinflammatory cues, as has been described for astrocytes under ischemic conditions. Countering this alternate possibility, we found that lentiviral-mediated labeling of Nestin-expressing cells in the V-SVZ prior to OPC ablation allowed us to identify mKate2 + PDGFRA + cells in the corpus callosum following OPC ablation, providing additional evidence to support the V-SVZ origin of repopulating PDGFRA + cells in the cerebrum. Although this experiment demonstrates that NPCs lining the lateral ventricles can give rise to PDGFRA + cells during the regenerative phase that follows OPC ablation, the limited efficiency of this viral labeling approach cannot account for all PDGFRA + cells that are regenerated in the cerebrum. We addressed this issue by adopting a co-ablation strategy that directs Sox10-dependent DTA expression to both the NPC and OPC lineages. Using this approach, we found that the co-ablation of both parenchymal OPCs and oligodendrogenic NPCs resulted in failure to regenerate PDGFRA + cells in the dorsal cerebrum, providing robust evidence that repopulating PDGFRA + cells in the cerebrum derive from NPCs.
A potential caveat of the co-ablation strategy that warrants consideration is the fidelity with which the Nestin-CreER T2 driver is restricted to NPCs. Let us first consider the possibility that the PDGFRA + cells that we observe during early repopulation reflect surviving OPCs that express Nestin ectopically under inflammatory conditions rather than cells of NPC origin. If this were the case, ablation of Nestin-expressing OPCs that survive pharmacogenetic ablation in Nestin + :Pdgfra + :DTA + mice would require 100% of these putative Nestin-expressing OPCs to exhibit TAM-independent recombination of the Sox10-DTA allele. We were able to exclude this possibility by demonstrating that TAM-independent recombination in NPCs with a history of up to 10 weeks of postnatal Nestin expression occurs with a frequency of less than 1% ( P). Given these findings, the most parsimonious explanation is that following TAM gavage in Nestin + :Pdgfrα + :DTA + mice, the recombined Sox10-DTA allele starts to be expressed by NPCs upon their specification to an oligodendroglial fate, which induces DTA-mediated apoptosis of oligodendrogenic NPCs. Our findings expand on previous work demonstrating that NPCs originating in the V-SVZ give rise to oligodendrogenic progenitors, both under normal physiological conditions and in response to demyelination of proximal white matter tracts. Another study, in which mice undergo global Pdgfra inactivation resulting in OPC depletion, suggested that repopulation of OPCs occurred by expansion of OPCs originating from immature Nestin-expressing cells activated in the meninges and brain parenchyma and from OPCs that escape Pdgfra inactivation. However, our data do not support a meningeal origin for newly generated PDGFRA + NG2 + cells ( N and S5O). Collectively, our mouse model of conditional OPC ablation provides a long-sought-after methodology to eliminate OPCs in the adult mouse CNS. The model now provides the opportunity to explore the role that OPCs play in CNS homeostasis by probing the consequences of their ablation through various approaches including, but not limited to, single-cell RNA sequencing and proteomic analysis to identify the transcriptional and post-translational changes of OPC-ablated versus non-ablated mice.

Limitations of the study
The strength of our finding that OPC-deficient mice have no OPCs in their cerebrum at 0 dppr is limited by the tissue sampling strategy that we adopted. As we did not analyze every single section from each mouse brain, or collect sections from the entire rostral and caudal extent of the cerebrum, we cannot exclude the possibility that there could be some OPCs in sections of the cerebrum that were not examined. We conducted a probability analysis to predict the number of OPCs that could theoretically be present within the region of the cerebrum from which sections were sampled (see probability calculations). We calculated an even-chance probability that ∼97.2% of all sections of the cerebrum are devoid of OPCs ( E). We estimate on this basis that each OPC-deficient mouse is likely to have fewer than about 15 OPCs in the cerebrum within the region of interest (+0.25 to −5.07 mm A/P relative to bregma). Further analysis of OPC-deficient mice, including the more rostral and caudal extents of the cerebrum, would be required to determine the absolute number of OPCs in the cerebrum that escape ablation.
Consequently, we cannot exclude the possibility that there are some OPCs in the cerebrum that escape ablation and that subsequently contribute to the regeneration of cerebral PDGFRA + cells following OPC ablation, in addition to the large numbers of PDGFRA + cells that derive from oligodendrogenic NPCs residing in the V-SVZ. Although we were able to identify a V-SVZ origin for the majority of repopulating PDGFRA + cells in the cerebrum, we are unable to clearly identify the origin of PDGFRA + cells that repopulated the brainstem. Although most PDGFRA + cells in the brainstem were tdTomato − , there were many that were tdTomato + ( G). An even higher fraction of repopulating PDGFRA + cells was found to express tdTomato in the spinal cord and optic nerve, areas that also exhibited robust OPC depletion ( E–S4H). Repopulating tdTomato + PDGFRA + cells in the brainstem co-expressed GFP, indicating that they likely originate by clonal expansion of pre-existing OPCs in which the Ai14 tdTomato but not the Sox10-DTA allele had recombined. In addition, the regeneration of tdTomato − PDGFRA + cells in the brainstem could reflect repopulation by OPCs that escape recombination of both the Ai14 tdTomato and Sox10-DTA alleles and survive AraC infusion. Alternatively, it is also plausible that repopulating tdTomato − PDGFRA + cells could arise from immature progenitors residing within alternate stem cell niches located within the brainstem. An oligodendrogenic niche has recently been described in the median eminence of the hypothalamus bordering the third ventricle. Other putative stem cell niches include the ependymal and subependymal zones of the third and fourth ventricles. Notably, we identified high densities of PDGFRA + cells around the aqueduct ( D) and the fourth ventricle (data not shown) at 10 dppr. It thus seems likely that the homeostatic regeneration of PDGFRA + NG2 + cells following OPC depletion is mediated by distinct populations of progenitors in the cerebrum and brainstem.
Key resources table

Resource availability

Lead contact
Further information and requests for resources and reagents should be directed to and will be fulfilled by the lead contact, Tobias D. Merson ([email protected]).

Materials availability
Plasmids generated in this study, namely FUW-Nestin-NLS-HA-Dre and FUW-EF1α-FREX-Myc/mKate2-f-mem , will be made available upon request.

Experimental model and subject details

Animals
Animal experiments were conducted in accordance with the National Health and Medical Research Council guidelines for the care and use of animals. All animal studies were approved by the animal ethics committee of the Florey Institute of Neuroscience and Mental Health (Parkville, VIC, Australia) and the animal ethics committee of Monash University (Clayton, VIC, Australia). Both male and female mice were used in all experimental cohorts, with experimental interventions (i.e., tamoxifen gavage) starting between 8 and 10 weeks of age. We used Pdgfrα-CreER T2 PAC transgenic mice (MGI:3832569) expressing CreER T2 under the regulation of the Pdgfra gene promoter, and Sox10-DTA transgenic mice (MGI:4999728) expressing a P1-derived artificial chromosome DNA construct containing the gene cassette Sox10-lox-GFP-poly(A)-lox-DTA driven by the Sox10 promoter. These two mouse lines were crossed to generate Pdgfrα-CreER T2+/+ :Sox10-DTA +/− and Pdgfrα-CreER T2+/+ :Sox10-DTA −/− breeders that were used to produce experimental cohorts comprising male and female offspring. To generate Pdgfrα-CreER T2+/− :Sox10-DTA +/− :Ai14 tdTomato +/− mice, we crossed Pdgfrα-CreER T2+/+ :Sox10-DTA +/− mice with homozygous Ai14 tdTomato +/+ mice (MGI:3809524), which were purchased from the Jackson Laboratory.
We also generated Nestin-CreER T2+/− :Pdgfrα-CreER T2+/− :Sox10-DTA +/− transgenic mice for the combined ablation of both parenchymal OPCs and oligodendrogenic NPCs by crossing Pdgfrα-CreER T2+/+ :Sox10-DTA +/− mice with Nestin-CreER T2+/+ (line 5.1) mice (MGI:3641212), generously provided by Ryoichiro Kageyama. We noted that a number of Nestin + :Pdgfra + :DTA + mice did not survive beyond weaning due to hydrocephalus. In surviving adult mice that were administered TAM and AraC (n = 3 mice), we observed anatomical abnormalities consistent with hydrocephalus, including expanded lateral ventricles ( J and S5K). Hydrocephalus was noted in Nestin + :Pdgfra + :DTA + mice as early as postnatal day 16. The ratio of Nestin + :Pdgfra + :DTA + versus Nestin + :Pdgfra + :DTA – mice surviving beyond weaning (P21) was 33.6% (36 out of 107 offspring), and gross hydrocephalus was evident in 39% of mice (14/36 mice) that survived post weaning. By contrast, no Nestin + :Pdgfra + :DTA – mice exhibited evidence of gross hydrocephalus (0/71 mice). The incidence of hydrocephalus in Nestin + :Pdgfra + :DTA + mice likely reflects TAM-independent recombination of the DTA allele due to leaky Cre activity driven by the Nestin-CreER T2 allele during ontogeny, thereby resulting in congenital apoptosis of a subset of neural crest cells which express both SOX10 and Nestin. This possibility is supported by the observation that loss of neural crest cells during fetal development is documented to cause hydrocephalus. Finally, we generated Nestin-CreER T2+ :mTmG + mice to evaluate the degree of TAM-independent recombination among adult neural progenitor cells by crossing Nestin-CreER T2+/+ (line 5.1) mice with mTmG +/+ mice (MGI:3716464).

Method details

Timelines of experimental interventions for high-efficiency OPC ablation
Step 1: Starting at 8–10 weeks of age, Pdgfrα + :DTA + mice receive TAM by oral gavage for 4 consecutive days.
Step 2: Starting 4 days after the last day of TAM gavage, mice undergo surgical implantation of an osmotic minipump to deliver AraC into the CSF via the cisterna magna.
Step 3: After 6 days of intracisternal infusion, mice undergo surgery to remove the osmotic minipump. The day of minipump removal is recorded as 0 days post-pump removal (0 dppr).
Step 4: Mice are humanely sacrificed by perfusion fixation at the desired time point, noting that OPCs remain depleted in the brain until at least 10 dppr.

TAM gavage
Cre-mediated recombination was induced by oral gavage of TAM (Sigma) delivered at a dose of 300 mg/kg/d for 4 consecutive days, as described in previous studies. TAM was prepared at 40 mg/mL in corn oil (Sigma). No toxicity due to TAM administration was observed in any cohort of mice.

EdU administration
To label cells that proliferated during the first 10 days following AraC withdrawal, 5-ethynyl-2′-deoxyuridine (EdU; Life Technologies) was administered to mice in their drinking water at 0.1 mg/mL. EdU-supplemented drinking water was placed in light-proof water bottles and replaced every 3 days.

Preparation of AraC for intracisternal infusions
Cytosine-β-D-arabinofuranoside (AraC, Sigma) was prepared at a final concentration of 2% (w/v) in artificial CSF (aCSF, Tocris Bioscience). One hundred microliters of either 2% AraC or vehicle (aCSF) was injected into osmotic minipumps (Alzet Model 1007D, flow rate 0.5 μL/h, Brain Infusion Kit III) using a 1 mL syringe attached to a blunt fill needle.
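The dosing parameters above (300 mg/kg/d TAM from a 40 mg/mL stock, and a 100 μL fill of 2% AraC delivered at 0.5 μL/h over 6 days) reduce to simple arithmetic. The following is a minimal Python sketch of that arithmetic, assuming a hypothetical 25 g mouse; the pump-assembly description resumes after the sketch.

```python
# Dosing arithmetic for the protocol above. The 25 g body weight is a
# hypothetical example value; all other numbers are taken from the text.
body_weight_g = 25.0

# TAM gavage: 300 mg/kg/day from a 40 mg/mL stock in corn oil
tam_mg = 300.0 * body_weight_g / 1000.0      # 7.5 mg per day
gavage_ml = tam_mg / 40.0                    # ~0.19 mL per gavage
print(f"TAM: {tam_mg:.1f} mg/day in {gavage_ml:.2f} mL corn oil")

# Alzet 1007D: 0.5 uL/h for 6 days from a 100 uL fill of 2% (w/v) AraC
delivered_ul = 0.5 * 24 * 6                  # 72 uL infused of the 100 uL fill
arac_mg = delivered_ul * 20.0 / 1000.0       # 2% w/v = 20 mg/mL = 0.02 mg/uL
print(f"AraC: {delivered_ul:.0f} uL infused, delivering {arac_mg:.2f} mg over 6 days")
```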
The flow moderator was attached to a bespoke tubing assembly made by connecting PE-10 polyethylene tubing to the vinyl catheter tube provided with the Brain Infusion Kit III (Alzet). The flow moderator with attached tubing was slowly inserted into the filled osmotic minipump to create a complete pump assembly. The pumps were then transferred into 50 mL conical tubes containing sterile saline and placed in a 37°C water bath overnight to prime the pumps prior to surgical implantation.

Surgical implantation of osmotic minipumps
AraC or vehicle (aCSF) was infused into the CSF at the level of the cisterna magna via an osmotic minipump for a period of 6 days. Prior to anesthesia, mice received a subcutaneous injection of meloxicam (2 mg/kg, 0.25 mL/10 g body weight) in warm saline. Mice were then anesthetized by isoflurane inhalation (4% induction, 2% maintenance). The head of the anesthetized mouse was fixed in a stereotaxic frame using a nose cone and ear bars. The position of the abdomen was lowered so that the neck was flexed at an angle of 30–45° relative to horizontal, and the body was placed on a thermostatically controlled heat pad to maintain body temperature. Eyes were moistened with water-based lubricant and the fur was cleared over the head and shoulders with an electric shaver. A sterile cotton tip soaked in 80% ethanol was used to swab and clean the surface of the incision site, followed by 10% (w/v) povidone-iodine solution (Betadine). A midline skin incision was made using a sharp scalpel from a position just rostral of the external occipital protuberance to ∼1 cm cranial to the shoulders. The atlanto-occipital membrane was visualized after blunt dissection of the muscle layers to expose the position of the cisterna magna. Using a straight hemostat, a pocket was created by spreading the subcutaneous connective tissues apart, and the osmotic minipump was inserted into the pocket overlying the hindquarters. The cisterna magna was pierced superficially with a 25 G needle and the PE-10 tubing connected to the osmotic minipump was introduced into the hole before applying a small amount of superglue to fix the tubing in place. The position of the tubing was further anchored and fixed to the musculature using sutures. Bupivacaine (100 μL of 0.25% solution) was flushed over the musculature to provide rapid-onset analgesia. The skin was then sutured and 10% (w/v) povidone-iodine solution was applied to the sutured skin. The animal was placed in a warm recovery box for monitoring until it regained consciousness and normal mobility. Animals were monitored daily throughout the experiment and were administered meloxicam (2 mg/kg, 0.25 mL/10 g body weight) in warm saline once daily for the first 2 days post-surgery. All mice were provided with powdered chow mixed with fresh water daily in a small dish that was easily accessed within the animal cage.

Surgical removal of osmotic minipumps
Infusion of AraC or vehicle (aCSF) was ceased after 6 days by removing the osmotic minipumps. Animals were anesthetized as described above and the former skin incision site was reopened to gain access to the tubing connected to the osmotic minipump. The tubing was cut 2 mm from the glued/sutured musculature and the minipump was removed from the subcutaneous pocket. The tubing fixed to the musculature was left in place and the free end of the tubing was sealed with superglue.
The incision was closed with sutures and the animal was placed in a warm recovery box for monitoring until it regained consciousness and normal mobility. Animals were monitored daily throughout the experiment. AraC-administered mice experienced a mild reduction in body weight during AraC infusion. If mice showed signs of greater than 10% weight loss, they were given powdered chow mixed with fresh water daily in a small dish that was easily accessed within the animal cage. If mice maintained greater than 15% weight loss for more than 72 h, they were humanely euthanized. Following removal of the osmotic minipumps delivering AraC, mice returned to normal weight.

Generation of lentiviral vectors
The lentiviral vectors LV-FUW-Nestin-NLS-HA-Dre and LV-FUW-EF1α-FREX-Myc/mKate2-f-mem were used for fate-mapping of V-SVZ-derived NPCs. These vectors were designed using Geneious Prime bioinformatics software (RRID: SCR_010519) and constructed using standard molecular cloning techniques, including PCR using Phusion High-Fidelity DNA polymerase (New England Biolabs), restriction enzyme digestion, and Gibson assembly (New England Biolabs). To create these lentiviral vectors, the rat Nestin promoter sequence was amplified by PCR from plasmid DNA (Addgene Cat#32401). The DNA encoding NLS-HA-Dre was amplified by PCR from plasmid DNA (Addgene Cat#51272). The coding sequence for the rat Nestin second intron enhancer was amplified from rat genomic DNA. The EF1α promoter sequence was amplified by PCR from plasmid DNA (Addgene Cat#38770). The FREX-Myc/mKate2-f-mem DNA sequence was generated by DNA synthesis (Integrated DNA Technologies). The PCR products were cloned into the FUGW lentiviral vector backbone (Addgene Cat#14883) in place of the GFP coding sequence by Gibson assembly. Plasmid DNA was then extracted and purified using Plasmid Mini or Midi Kits (Qiagen). The DNA sequences of the lentiviral vectors were verified by Sanger sequencing (Micromon, Monash University). Sequence alignments were performed using SnapGene molecular biology software (RRID: SCR_015052).

In vitro validation of lentiviral vectors
Plasmid DNA was transfected into HEK293T cells cultured at 37°C and 5% CO2 and analyzed 48 h post-transfection for fluorescence. The pmKate2-f-mem plasmid (Evrogen Cat#FP186) served as a positive control for mKate2 fluorescence. HEK293T cells were plated in a 24-well plate and cultured in Dulbecco's Modified Eagle's Medium (DMEM, Gibco) supplemented with 10% fetal bovine serum (Invitrogen). At 80% confluency, cells were transfected with the plasmids using Lipofectamine 2000 Transfection Reagent (Thermo Fisher) according to the manufacturer's instructions. The growth medium was replaced with fresh medium containing 100 U/mL penicillin and 100 μg/mL streptomycin (Gibco) 4 h post-transfection. At 48 h after transfection, cells were post-fixed with 4% PFA/DPBS, processed for immunocytochemistry, and imaged for mKate2 expression using a Zeiss LSM780 confocal microscope.

Lentivirus production
Lentiviruses were produced and packaged in HEK293T cells by the Vector and Genome Engineering Facility, Children's Medical Research Institute (Westmead, Australia). To determine viral titer, HEK293T cells were transduced with lentivirus. Genomic DNA was extracted from transduced cells and viral DNA copy number/cell was determined by multiplex TaqMan qPCR.

Intraventricular injection of lentiviral vectors
Mice were anesthetized by isoflurane inhalation and positioned in a motorized stereotaxic frame as described above.
A small incision was made in the scalp and the injection site of the lateral ventricle was marked using the following stereotaxic coordinates, relative to bregma: anterior-posterior −0.22 mm, medial-lateral −1.00 mm, and dorsal-ventral −2.5 mm. A small hole was drilled into the skull to expose the brain surface. Five microliters of mixed lentiviral vectors were gently drawn up into a blunt-tipped 35 G needle attached to a 10 μL NanoFil syringe (World Precision Instruments, WPI). The syringe was then placed into a microinjection pump (UltraMicroPump III, WPI) attached to the stereotaxic frame and lowered slowly into the injection site of the lateral ventricle according to the dorsal-ventral coordinate. The microinjection pump controlled the infusion of a 5 μL total volume of lentiviruses at a flow rate of 0.5 μL/min, after which the needle was left in place for 5 min to ensure complete diffusion of the viruses and avoid backflow.

Tissue processing and immunohistochemistry
Mice were deeply anesthetized with 100 mg/kg sodium pentobarbitone and then transcardially perfused with PBS, followed by 4% PFA/PBS. Brains, optic nerves and spinal cords were removed and post-fixed in 4% PFA/PBS for 2 h on ice, transferred to PBS overnight, cryopreserved in 20% sucrose/PBS overnight, followed by embedding in Tissue-Tek OCT compound (Sakura FineTek). The tissues were stored at −80°C until sectioned. Ten-micron-thick coronal sections of the brain and spinal cord, and longitudinal sections of the optic nerve, were cut on a Leica cryostat, collected onto Superfrost Plus slides (Menzel Glaser), and air dried for 1 h before storing at −80°C until stained. Cryosections were air dried, then blocked with PBS containing 0.3% Triton X-100, 10% normal donkey serum, and 10% BlokHen (Aves Labs Cat# BH-1001) for 1 h at room temperature (RT). The sections were then incubated with primary antibodies at RT overnight, followed by 1 h incubation at RT with secondary antibodies. For multiplex immunohistochemistry, some primary antibodies were incubated simultaneously. The following primary antibodies were used: rabbit anti-ALDH1L1 (1:1000, Abcam Cat# ab87117), rabbit anti-ASPA (1:500, GeneTex GTX113389), mouse anti-CC1 (1:100, Calbiochem Cat# OP80), rat anti-CD16/CD32 (1:100, BD Biosciences Cat# 553142), rabbit anti-CD206 (1:200, Abcam Cat# ab64693), goat anti-DCX (1:100; Santa Cruz Biotechnology Cat# sc-8066), mouse anti-FoxJ1 (1:200, eBioscience Cat# 14-9965-82), mouse anti-GFAP (1:500, Millipore Cat# MAB360), chicken anti-GFP (1:2000, Aves Labs Cat# GFP-1020), rabbit anti-GPR17 (1:800, Cayman Cat# 10136), mouse anti-HA-tag (1:500, Sigma Cat# H9658), rabbit anti-Iba1 (1:200, Wako Cat# 019-19741), goat anti-Iba1 (1:500, Abcam Cat# ab5076), rabbit anti-Laminin-1 (1:400, Sigma Cat# L9393), mouse anti-Myc-tag (1:500, Sigma Cat# 05-419), mouse anti-Nestin (1:100, Millipore Cat# MAB353), mouse anti-NeuN (1:100, Millipore Cat# MAB377), rabbit anti-NG2 (1:200, Millipore Cat# AB5320), goat anti-PDGFRA (1:150, R&D Systems Cat# AF1062), rat anti-PDGFRA (1:150, BD Biosciences Cat# 558774), goat anti-PDGFRB (1:200, R&D Systems Cat# AF1042), mouse anti-α-SMA (1:500, Abcam Cat# ab7817), goat anti-SOX10 (1:100, R&D Systems Cat# AF2864), and rabbit anti-TagRFP for mKate2 labeling (1:500, Kerafast Cat# EMU113).
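The dilution factors listed above translate directly into working volumes. Below is a minimal Python sketch of that arithmetic, in which the 500 μL cocktail volume and the three antibodies chosen are illustrative examples only; the dilution factors are taken from the list above.

```python
# Dilution arithmetic for preparing antibody cocktails from the list above.
# The 500 uL batch volume is a hypothetical example value.
batch_ul = 500.0
dilutions = {
    "goat anti-PDGFRA (R&D AF1062)": 150,       # used at 1:150
    "rabbit anti-NG2 (Millipore AB5320)": 200,  # used at 1:200
    "chicken anti-GFP (Aves GFP-1020)": 2000,   # used at 1:2000
}
for antibody, factor in dilutions.items():
    stock_ul = batch_ul / factor
    print(f"{antibody}: add {stock_ul:.2f} uL stock to {batch_ul:.0f} uL blocking buffer")
```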
To label GFP + tdTomato + brain sections using primary antibodies against Nestin, PDGFRA, and NG2, as well as EdU and Hoechst, inactivation of GFP and tdTomato was performed by first treating the brain sections with 3% H2O2 and 20 mM HCl in PBS for 1 h at RT with light illumination. The slides were then washed three times with PBS, incubated with blocking buffer, and processed for immunostaining as described above. Secondary antibodies raised in donkey and conjugated to Alexa Fluor 488, FITC, TRITC, Alexa Fluor 594 or Alexa Fluor 647 were purchased from Jackson ImmunoResearch or Invitrogen and used at 1:200 dilution. Sections incubated with biotinylated rat anti-PECAM1/CD31 antibody (1:200, BD Biosciences Cat# 553371) were rinsed and further incubated with streptavidin-Brilliant Violet 480 (1:200; BD Biosciences Cat# 564876) for 30 min. Some slides stained without the fluorophore Brilliant Violet 480 were also counterstained with Hoechst 33342 (1 μg/mL, Thermo Fisher). For myelin analysis, slides were stained with Black-Gold II (Biosensis) according to the manufacturer's instructions. To detect EdU incorporation in proliferating cells, sections were first processed for immunohistochemistry as above, followed by EdU detection using the Click-iT EdU Alexa Fluor 647 Imaging Kit (Thermo Fisher) as per the manufacturer's instructions. Sections were coverslipped with Mowiol mounting medium and subjected to fluorescence and confocal microscopic analysis.

Imaging
Stained 10 μm-thick coronal sections were imaged by laser scanning confocal microscopy (Zeiss LSM510-META or Zeiss LSM780), which was used to detect up to four fluorophores by laser excitation at 405, 488, 561 and 633 nm wavelengths. For five-color imaging, such as for brain sections stained with Brilliant Violet 480 or Hoechst as well as Alexa Fluor 488, TRITC, Alexa Fluor 594 and Alexa Fluor 647, linear unmixing was performed during acquisition (online fingerprinting). Tile scanning was performed at a magnification of 10x or 20x for the cellular analysis of entire brain sections of transgenic mouse lines. For the analysis of Laminin-1/PDGFRA, NG2/PDGFRA or PDGFRB/PDGFRA colocalization, confocal images were acquired using a 63x objective to generate Z-stacks. For global analysis of cellular distributions in the entire mouse brain, 2–3 sections at each rostro-caudal position were scanned on an Olympus VS120 Virtual Slide Microscope with a 20x objective at Monash Histology Platform, Monash University. The resulting images have a pixel resolution of 0.65 μm/pixel. For the sections stained with Black-Gold II, 2–3 representative images per mouse were taken using a 10x objective on a Zeiss Axioplan upright fluorescent microscope and captured with an Axioplan HRc camera (Carl Zeiss) using the Axiovision 7.2 imaging software. All images were taken with the same exposure time.

Image processing
Confocal images were imported into Fiji image analysis software (Fiji for macOS, RRID: SCR_002285) for quantification of cellular density in the regions of interest. For image stacks, deconvolution was performed with the "Iterative Deconvolve 3D" plugin in Fiji. Colocalization analysis was performed using the "Colocalization threshold" plugin in Fiji to automatically determine a detection threshold for each channel and avoid subjective bias.
The extent of within-pixel fluorescent signal colocalization, as indicated by Pearson's correlation coefficient, was calculated in each optical slice and then flattened into a maximal Z-projection to reveal colocalized pixels across the entire image thickness. All analyses were performed in a blinded fashion.

Quantification and statistical analysis

Image analysis and cell quantification
For slide-scanned images, double- or triple-positive cells were counted manually in Fiji. For morphological analysis of microglia and astrocytes, Z-stacks of 1–4 cells were taken from the cortical region of each of the brain sections under a 40x objective. Microglial soma and branching measures were visualized using IBA1 immunofluorescence, whereas those for astrocytes were assessed with GFAP. Z-stacked images were converted to maximum intensity projections using Fiji; these images were background subtracted, contrast-enhanced to ensure full arborization could be detected, and a local threshold was applied to the image. Microglial and astrocytic soma areas were calculated using the 'Measure' command, and branch features (number of primary or secondary processes; maximum length of primary process) were manually counted and measured in Fiji. A total of 18–22 astrocytes and 18–39 microglia per group were analyzed in a blinded fashion. For quantification of myelin intensity, images from the sections stained with Black-Gold II were converted to grayscale in Fiji and automated measurements of myelin intensity were taken using the measurement function to record the mean gray value within the regions of interest. All analyses were performed in a blinded fashion.

Probability calculations
At 0 dppr, we did not find any PDGFRA + OPCs in sections of the cerebrum of TAM + AraC-administered Pdgfrα + :DTA + mice that were examined. However, since we did not analyze every single section of each mouse brain, we cannot exclude the possibility that there could be undetected OPCs in sections of the brains of OPC-ablated mice that were not examined. To estimate the theoretical number of undetected OPCs that could be missed based on our tissue sampling strategy, we calculated the probability that the cerebrum of sampled tissue sections contains no OPCs in circumstances where residual OPCs are sparsely distributed across the cerebrum. For OPC-ablated mice, we performed cell counts using six (6) ten-micron-thick sections per mouse for a total of 4 mice at 0 dppr, i.e., a total of 24 sections across the 4 mice. These sections were sampled from a region of interest (ROI) extending from +0.25 mm to −5.07 mm A/P relative to bregma, theoretically reflecting a total of 532 ten-micron-thick sections per mouse. We calculated the probability of detecting zero OPCs in the cerebrum of randomly sampled sections under various conditions, where we defined the number of sections (N) that contain no OPCs in the region of the cerebrum from which the sections were collected. The probability (P1) of selecting 6 sections that contain no OPCs is given by the formula P1 = (N/532) × ((N − 1)/531) × ((N − 2)/530) × ((N − 3)/529) × ((N − 4)/528) × ((N − 5)/527). Since we selected 6 sections from each of 4 mice, the probability (P2) of selecting a total of 24 sections that contain no OPCs is given by P2 = (P1)^4. The percentage (X) of sections devoid of OPCs was calculated as X = (N/532) × 100. We plotted P2 against X ( E) and determined the value of N at which P2 = 0.5.
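As a worked illustration of this calculation, the following is a minimal Python sketch (not the authors' own code) that scans N for the even-chance point at which P2 = 0.5:

```python
from math import prod

TOTAL = 532    # theoretical number of 10-um sections spanning the ROI per mouse
SAMPLED = 6    # sections examined per mouse
MICE = 4       # mice examined at 0 dppr

def p_all_sampled_empty(n_empty: int) -> float:
    """P2: probability that all 24 sampled sections contain no OPCs,
    given that n_empty of the 532 sections per mouse are devoid of OPCs."""
    p1 = prod((n_empty - k) / (TOTAL - k) for k in range(SAMPLED))  # P1 for one mouse
    return p1 ** MICE                                               # P2 = (P1)^4

# Find the smallest n_empty at which P2 still reaches 0.5 (the even-chance point)
n_even = next(n for n in range(TOTAL, SAMPLED, -1)
              if p_all_sampled_empty(n - 1) < 0.5)
print(n_even, round(100 * n_even / TOTAL, 1))  # -> 517 sections, ~97.2% devoid of OPCs
```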
Finally, we calculated the number of sections (S) that contain OPCs when P2 = 0.5 using the formula S = 532 − N, which gave a value of 15 sections per ROI. Working on the assumption that the cerebrum in each of these sections contains one OPC, we estimated that there are likely to be fewer than about 15 OPCs within the cerebrum ROI for each OPC-ablated mouse.

Statistical analyses
All statistical analyses were performed using GraphPad Prism software (v.9). Statistical significance was determined using an unpaired, two-tailed Student's t test or by two-way ANOVA with Bonferroni's, Tukey's or Sidak's multiple-comparison tests. Statistical significance was defined as p < 0.05. Quantitative data are reported as mean ± SEM.
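For readers without Prism, the unpaired two-tailed t test has a direct SciPy equivalent. The following is a minimal sketch on hypothetical per-mouse densities (the values are invented for illustration and are not data from the study); two-way ANOVA with post hoc multiple-comparison tests would require a package such as statsmodels instead.

```python
import numpy as np
from scipy import stats

# Hypothetical per-mouse OPC densities (cells/mm^2); not data from the study.
ablated = np.array([0.16, 0.10, 0.21, 0.17])
control = np.array([70.1, 64.3, 75.8, 68.9])

# Unpaired Student's t test; scipy's default is two-tailed, independent samples
t, p = stats.ttest_ind(ablated, control)
print(f"t = {t:.2f}, p = {p:.1e}, significant at p < 0.05: {p < 0.05}")
```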
The ratio of Nestin + :Pdgfra + :DTA + versus Nestin + :Pdgfra + :DTA – mice surviving beyond weaning (P21) was 33.6% (36 out of 107 offspring) and gross hydrocephalus was evident in 39% of mice (14/36 mice) that survived post weaning. By contrast, no Nestin + :Pdgfra + : DTA – mice exhibited evidence of gross hydrocephalus (0/71 mice). The incidence of hydrocephalus in Nestin + :Pdgfra + :DTA + mice likely reflects TAM-independent recombination of the DTA allele due to leaky Cre activity driven by the Nestin-CreER T2 allele during ontogeny thereby resulting in congenital apoptosis of a subset of neural crest cells which express both SOX10 and Nestin. This possibility is supported by the observation that loss of neural crest cells during fetal development is documented to cause hydrocephalus. , Finally, we generated Nestin-CreER T2+ : mTmG + mice to evaluate the degree of TAM-independent recombination among adult neural progenitor cells by crossing Nestin-CreER T2 +/+ (line 5.1) mice with mTmG +/+ mice (MGI:3716464). Animal experiments were conducted in accordance with the National Health and Medical Research Council guidelines for the care and use of animals. All animal studies were approved by the animal ethics committee of the Florey Institute of Neuroscience and Mental Health (Parkville, VIC, Australia) and the animal ethics committee of Monash University (Clayton, VIC, Australia). Both male and female mice were used in all experimental cohorts, with experimental interventions (i.e. tamoxifen gavage) starting between 8–10 weeks of age. Pdgfrα-CreER T2 PAC transgenic mice (MGI:3832569) expressing CreER T2 under the regulation of the Pdgfra gene promoter, and Sox10-DTA transgenic mice (MGI:4999728) expressing a P1-derived artificial chromosome DNA construct containing the gene cassette Sox10-lox-GFP-poly(A)-lox-DTA driven by the Sox10 promoter. , These two mouse lines were crossed to generate Pdgfrα-CreER T2+/+ :Sox10-DTA +/− and Pdgfrα-CreER T2+/+ :Sox10-DTA −/− breeders that were used to produce experimental cohorts comprising male and female offspring. To generate Pdgfrα-CreER T2+/− :Sox10-DTA +/− :Ai14 tdTomato +/− mice, we crossed Pdgfrα-CreER T2+/+ :Sox10-DTA +/− mice with homozygous Ai14 tdTomato +/+ mice (MGI:3809524), which were purchased from the Jackson Laboratory. We also generated Nestin-CreER T2+/− :Pdgfrα-CreER T2+/− :Sox10-DTA +/− transgenic mice for the combined ablation of both parenchymal OPCs and oligodendrogenic NPCs by crossing Pdgfrα-CreER T2+/+ :Sox10-DTA +/− mice with Nestin-CreER T2 +/+ (line 5.1) mice (MGI:3641212) generously provided by Ryoichiro Kageyama. We noted that a number of Nestin + :Pdgfra + :DTA + mice did not survive beyond weaning due to hydrocephalus. In surviving adult mice that were administered TAM and AraC (n=3 mice), we observed anatomical abnormalities consistent with hydrocephalus including expanded lateral ventricles ( J and S5K). Hydrocephalus was noted in Nestin + :Pdgfra + :DTA + mice as early as postnatal day 16. The ratio of Nestin + :Pdgfra + :DTA + versus Nestin + :Pdgfra + :DTA – mice surviving beyond weaning (P21) was 33.6% (36 out of 107 offspring) and gross hydrocephalus was evident in 39% of mice (14/36 mice) that survived post weaning. By contrast, no Nestin + :Pdgfra + : DTA – mice exhibited evidence of gross hydrocephalus (0/71 mice). 
The incidence of hydrocephalus in Nestin + :Pdgfra + :DTA + mice likely reflects TAM-independent recombination of the DTA allele due to leaky Cre activity driven by the Nestin-CreER T2 allele during ontogeny thereby resulting in congenital apoptosis of a subset of neural crest cells which express both SOX10 and Nestin. This possibility is supported by the observation that loss of neural crest cells during fetal development is documented to cause hydrocephalus. , Finally, we generated Nestin-CreER T2+ : mTmG + mice to evaluate the degree of TAM-independent recombination among adult neural progenitor cells by crossing Nestin-CreER T2 +/+ (line 5.1) mice with mTmG +/+ mice (MGI:3716464). Timelines of experimental interventions for high-efficiency OPC ablation Step 1 : Starting at 8–10 weeks of age, Pdgfrα + :DTA + mice receive TAM by oral gavage for 4 consecutive days. Step 2: Starting 4 days after the last day of TAM gavage, mice undergo surgical implantation of an osmotic minipump to deliver AraC into the CSF via the cisterna magna. Step 3: After 6 days of intracisternal infusion, mice undergo surgery to remove the osmotic minipump. The day of minipump removal is recorded as 0 days post-pump removal (0 dppr). Step 4: Mice are humanely sacrificed by perfusion fixation at the desired time-point, noting that OPCs remain depleted in the brain until at least 10 dppr. TAM gavage Cre-mediated recombination was induced by oral gavage of TAM (Sigma) delivered at a dose of 300 mg/kg/d for 4 consecutive days, as described in previous studies. , TAM was prepared at 40 mg/mL in corn oil (Sigma). No toxicity due to TAM administration was observed in any cohort of mice. EdU administration To label cells that proliferated during the first 10 days following AraC withdrawal, 5-ethynyl-2′-deoxyuridine (EdU; Life Technologies) was administered to mice in their drinking water at 0.1 mg/mL. EdU-supplemented drinking water was placed in light-proof water bottles and replaced every 3 days. Preparation of AraC for intracisternal infusions Cytosine-β-D-arabinofuranoside (AraC, Sigma) was prepared at a final concentration of 2% (w/v) in artificial CSF (aCSF, Tocris Bioscience). One hundred microliters of either 2% AraC or vehicle (aCSF) was injected into osmotic minipumps (Alzet Model 1007D, flow rate 0.5 μL/h, Brain Infusion Kit III) using a 1 mL syringe attached to a blunt fill needle. The flow moderator was attached to a bespoke tubing assembly made by connecting PE-10 polyethylene tubing to the vinyl catheter tube provided with the Brain Infusion Kit III (Alzet). The flow moderator with attached tubing was slowly inserted into the filled osmotic minipump to create a complete pump assembly. The pumps were then transferred into 50 mL conical tubes containing sterile saline and placed in a 37°C water bath overnight to prime the pumps prior to surgical implantation. Surgical implantation of osmotic minipumps AraC or vehicle (aCSF) was infused into the CSF at the level of the cisterna magna via an osmotic minipump for a period of 6 days. Prior to anesthesia, mice received a subcutaneous injection of meloxicam (2 mg/kg, 0.25 mL/10 g body weight) in warm saline. Mice were then anesthetized by isoflurane inhalation (4% induction, 2% maintenance). The head of the anesthetized mouse was fixed in a stereotaxic frame using a nose cone and ear bars. 
The position of the abdomen was lowered so that the neck was flexed at an angle of 30–45° relative to horizontal and the body was placed on a thermostatically controlled heat pad to maintain body temperature. Eyes were moistened with water-based lubricant and the fur was cleared over the head and shoulders with an electric shaver. A sterile cotton tip soaked in 80% ethanol was used to swab and clean the surface of incision site, followed by 10% (w/v) povidone-iodine solution (Betadine). A midline skin incision was made using a sharp scalpel from a position just rostral of the external occipital protuberance to ∼1 cm cranial to the shoulders. The atlanto-occipital membrane was visualized after blunt dissection of the muscle layers to expose the position of the cisterna magna. Using a straight hemostat, a pocket was created by spreading the subcutaneous connective tissues apart and the osmotic minipump was inserted into the pocket overlying the hindquarters. The cisterna magna was pierced superficially with a 25 G needle and the PE-10 tubing connected to the osmotic minipump was introduced into the hole before applying a small amount of superglue to fix the tubing in place. The position of the tubing was further anchored and fixed to the musculature using sutures. Bupivacaine (100 μL of 0.25% solution) was flushed over the musculature to provide rapid onset analgesia. The skin was then sutured and 10% (w/v) povidone-iodine solution was applied to the sutured skin. The animal was placed in a warm recovery box for monitoring until it regained consciousness and normal mobility. Animals were monitored daily throughout the experiment and were administered meloxicam (2 mg/kg, 0.25 mL/10 g body weight) in warm saline once daily for the first 2 days post-surgery. All mice were provided with powdered chow mixed with fresh water daily in a small dish that was easily accessed within the animal cage. Surgical removal of osmotic minipumps Infusion of AraC or vehicle (aCSF) was ceased after 6 days by removing the osmotic minipumps. Animals were anesthetized as described above and the former skin incision site was reopened to gain access to the tubing connected to the osmotic minipump. The tubing was cut 2 mm from the glued/sutured musculature and the minipump was removed from the subcutaneous pocket. The tubing fixed to the musculature was left in place and the free end of the tubing was sealed with superglue. The incision was closed with sutures and the animal was placed in a warm recovery box for monitoring until it regained consciousness and normal mobility. Animals were monitored daily throughout the experiment. AraC-administered mice experienced a mild reduction in body weight during AraC infusion. If mice showed signs of greater than 10% weight loss, they were given powdered chow mixed with fresh water daily in a small dish that was easily accessed within the animal cage. If mice maintained greater than 15% weight loss for more than 72 h, they were humanely euthanized. Following removal of the osmotic minipumps delivering AraC, mice returned to normal weight. Generation of lentiviral vectors The lentiviral vectors LV-FUW-Nestin-NLS-HA-Dre and LV-FUW-EF1α-FREX-Myc/mKate2-f-mem were used for fate-mapping of V-SVZ-derived NPCs. 
These vectors were designed using Geneious Prime bioinformatics software (RRID: SCR_010519) and constructed using standard molecular cloning techniques, including PCR with Phusion High-Fidelity DNA polymerase (New England Biolabs), restriction enzyme digestion, and Gibson assembly (New England Biolabs). To create these lentiviral vectors, the rat Nestin promoter sequence was amplified by PCR from plasmid DNA (Addgene Cat#32401). The DNA encoding NLS-HA-Dre was amplified by PCR from plasmid DNA (Addgene Cat#51272). The coding sequence for the rat Nestin second intron enhancer was amplified from rat genomic DNA. The EF1α promoter sequence was amplified by PCR from plasmid DNA (Addgene Cat#38770). The FREX-Myc/mKate2-f-mem DNA sequence was generated by DNA synthesis (Integrated DNA Technologies). The PCR products were cloned into the FUGW lentiviral vector backbone (Addgene Cat#14883) in place of the GFP coding sequence by Gibson assembly. Plasmid DNA was then extracted and purified using Plasmid Mini or Midi Kits (Qiagen). The DNA sequences of the lentiviral vectors were verified by Sanger sequencing (Micromon, Monash University). Sequence alignments were performed using SnapGene molecular biology software (RRID: SCR_015052).

In vitro validation of lentiviral vectors
Plasmid DNA was transfected into HEK293T cells cultured at 37°C and 5% CO2 and analyzed 48 h post-transfection for fluorescence. The pmKate2-f-mem plasmid (Evrogen Cat#FP186) served as a positive control for mKate2 fluorescence. HEK293T cells were plated in a 24-well plate and cultured in Dulbecco's Modified Eagle's Medium (DMEM, Gibco) supplemented with 10% fetal bovine serum (Invitrogen). At 80% confluency, cells were transfected with the plasmids using Lipofectamine 2000 Transfection Reagent (Thermo Fisher) according to the manufacturer's instructions. The growth medium was replaced with fresh medium containing 100 U/mL penicillin and 100 μg/mL streptomycin (Gibco) 4 h post-transfection. At 48 h after transfection, cells were post-fixed with 4% PFA/DPBS, processed for immunocytochemistry, and imaged for mKate2 expression using a Zeiss LSM780 confocal microscope.

Lentivirus production
Lentiviruses were produced and packaged in HEK293T cells by the Vector and Genome Engineering Facility, Children's Medical Research Institute (Westmead, Australia). To determine viral titer, HEK293T cells were transduced with lentivirus. Genomic DNA was extracted from transduced cells and viral DNA copy number per cell was determined by multiplex TaqMan qPCR.

Intraventricular injection of lentiviral vectors
Mice were anesthetized by isoflurane inhalation and positioned in a motorized stereotaxic frame as described above. A small incision was made in the scalp and the injection site of the lateral ventricle was marked using the following stereotaxic coordinates relative to bregma: anterior-posterior −0.22 mm, medial-lateral −1.00 mm, and dorsal-ventral −2.5 mm. A small hole was drilled into the skull to expose the brain surface. Five microliters of mixed lentiviral vectors were gently drawn up into a blunt-tipped 35G needle attached to a 10 μL NanoFil syringe (World Precision Instruments, WPI). The syringe was then placed into a microinjection pump (UltraMicroPump III, WPI) attached to the stereotaxic frame and lowered slowly into the injection site of the lateral ventricle according to the dorsal-ventral coordinate.
The microinjection pump controlled the infusion of the 5 μL total volume of lentiviruses at a flow rate of 0.5 μL/min, after which the needle was left in place for 5 min to ensure complete diffusion of the viruses and avoid backflow.

Tissue processing and immunohistochemistry
Mice were deeply anesthetized with 100 mg/kg sodium pentobarbitone and then transcardially perfused with PBS, followed by 4% PFA/PBS. Brains, optic nerves, and spinal cords were removed and post-fixed in 4% PFA/PBS for 2 h on ice, transferred to PBS overnight, cryopreserved in 20% sucrose/PBS overnight, and embedded in Tissue-Tek OCT compound (Sakura FineTek). The tissues were stored at −80°C until sectioned. Ten-micron-thick coronal sections of the brain and spinal cord, and longitudinal sections of the optic nerve, were cut on a Leica cryostat, collected onto Superfrost Plus slides (Menzel Glaser), and air dried for 1 h before storing at −80°C until stained. Cryosections were air dried, then blocked with PBS containing 0.3% Triton X-100, 10% normal donkey serum, and 10% BlokHen (Aves Labs Cat# BH-1001) for 1 h at room temperature (RT). The sections were then incubated with primary antibodies at RT overnight, followed by 1 h incubation at RT with secondary antibodies. For multiplex immunohistochemistry, some primary antibodies were incubated simultaneously. The following primary antibodies were used: rabbit anti-ALDH1L1 (1:1000, Abcam Cat# ab87117), rabbit anti-ASPA (1:500, GeneTex Cat# GTX113389), mouse anti-CC1 (1:100, Calbiochem Cat# OP80), rat anti-CD16/CD32 (1:100, BD Biosciences Cat# 553142), rabbit anti-CD206 (1:200, Abcam Cat# ab64693), goat anti-DCX (1:100; Santa Cruz Biotechnology Cat# sc-8066), mouse anti-FoxJ1 (1:200, eBioscience Cat# 14-9965-82), mouse anti-GFAP (1:500, Millipore Cat# MAB360), chicken anti-GFP (1:2000, Aves Labs Cat# GFP-1020), rabbit anti-GPR17 (1:800, Cayman Cat# 10136), mouse anti-HA-tag (1:500, Sigma Cat# H9658), rabbit anti-Iba1 (1:200, Wako Cat# 019-19741), goat anti-Iba1 (1:500, Abcam Cat# ab5076), rabbit anti-Laminin-1 (1:400, Sigma Cat# L9393), mouse anti-Myc-tag (1:500, Sigma Cat# 05-419), mouse anti-Nestin (1:100, Millipore Cat# MAB353), mouse anti-NeuN (1:100, Millipore Cat# MAB377), rabbit anti-NG2 (1:200, Millipore Cat# AB5320), goat anti-PDGFRA (1:150, R&D Systems Cat# AF1062), rat anti-PDGFRA (1:150, BD Biosciences Cat# 558774), goat anti-PDGFRB (1:200, R&D Systems Cat# AF1042), mouse anti-α-SMA (1:500, Abcam Cat# ab7817), goat anti-SOX10 (1:100, R&D Systems Cat# AF2864), and rabbit anti-TagRFP for mKate2 labeling (1:500, Kerafast Cat# EMU113). To label GFP+ tdTomato+ brain sections using primary antibodies against Nestin, PDGFRA, and NG2, as well as EdU and Hoechst, inactivation of GFP and tdTomato was performed by first treating the brain sections with 3% H2O2 and 20 mM HCl in PBS for 1 h at RT under light illumination. The slides were then washed three times with PBS, incubated with blocking buffer, and processed for immunostaining as described above. Secondary antibodies raised in donkey and conjugated to Alexa Fluor 488, FITC, TRITC, Alexa Fluor 594, or Alexa Fluor 647 were purchased from Jackson ImmunoResearch or Invitrogen and used at 1:200 dilution. Sections incubated with biotinylated rat anti-PECAM1/CD31 antibody (1:200, BD Biosciences Cat# 553371) were rinsed and further incubated with streptavidin-Brilliant Violet 480 (1:200; BD Biosciences Cat# 564876) for 30 min.
Some slides stained without the fluorophore Brilliant Violet 480 were also counterstained with Hoechst 33342 (1 μg/mL, Thermo Fisher). For myelin analysis, slides were stained with Black-Gold II (Biosensis) according to the manufacturer's instructions. To detect EdU incorporation in proliferating cells, sections were first processed for immunohistochemistry as above, followed by EdU detection using the Click-iT EdU Alexa Fluor 647 Imaging Kit (Thermo Fisher) as per the manufacturer's instructions. Sections were coverslipped with Mowiol mounting medium and subjected to fluorescence and confocal microscopic analysis.

Imaging
Stained 10 μm-thick coronal sections were imaged by laser scanning confocal microscopy (Zeiss LSM510-META or Zeiss LSM780), which was used to detect up to four fluorophores by laser excitation at 405, 488, 561, and 633 nm wavelengths. For five-color imaging, such as brain sections stained with Brilliant Violet 480 or Hoechst as well as Alexa Fluor 488, TRITC, Alexa Fluor 594, and Alexa Fluor 647, linear unmixing was performed during acquisition (online fingerprinting). Tile scanning was performed at a magnification of 10x or 20x for the cellular analysis of entire brain sections of transgenic mouse lines. For the analysis of Laminin-1/PDGFRA, NG2/PDGFRA, or PDGFRB/PDGFRA colocalization, confocal images were acquired using a 63x objective to generate Z-stacks. For global analysis of cellular distributions in the entire mouse brain, 2–3 sections at each rostro-caudal position were scanned on an Olympus VS120 Virtual Slide Microscope with a 20x objective at Monash Histology Platform, Monash University. The resulting images have a pixel resolution of 0.65 μm/pixel. For the sections stained with Black-Gold II, 2–3 representative images per mouse were taken using a 10x objective on a Zeiss Axioplan upright fluorescent microscope and captured with an Axioplan HRc camera (Carl Zeiss) using the Axiovision 7.2 imaging software. All images were taken with the same exposure time.

Image processing
Confocal images were imported into Fiji image analysis software (Fiji for macOS, RRID: SCR_002285) for quantification of cellular density in the regions of interest. For image stacks, deconvolution was performed with the "Iterative Deconvolve 3D" plugin in Fiji. Colocalization analysis was performed using the "Colocalization threshold" plugin in Fiji to automatically determine a detection threshold for each channel and avoid subjective bias. The extent of within-pixel fluorescent signal colocalization, as indicated by Pearson's correlation coefficient, was calculated in each optical slice and then flattened into a maximal Z-projection to reveal colocalized pixels across the entire image thickness. All analyses were performed in a blinded fashion.
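Within-pixel colocalization of the kind described above reduces to computing Pearson's correlation coefficient over paired channel intensities. The following minimal Python sketch illustrates that computation only; it is not a reimplementation of the Fiji "Colocalization threshold" plugin and omits the plugin's automatic per-channel thresholding. The function name and the synthetic inputs are hypothetical.

```python
import numpy as np

def pearson_colocalization(ch1: np.ndarray, ch2: np.ndarray) -> float:
    """Pearson's correlation coefficient between two same-sized
    single-channel images (e.g., one optical slice per channel)."""
    a = ch1.astype(np.float64).ravel()
    b = ch2.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt(np.sum(a * a) * np.sum(b * b))
    return float(np.sum(a * b) / denom) if denom > 0 else 0.0

# Synthetic example: a channel pair with partial signal overlap.
rng = np.random.default_rng(0)
ch1 = rng.poisson(5.0, size=(256, 256)).astype(np.float64)
ch2 = 0.3 * ch1 + rng.poisson(5.0, size=(256, 256))
print(round(pearson_colocalization(ch1, ch2), 3))
```

In practice the coefficient would be computed per optical slice on thresholded pixels, as the plugin does, before flattening into a maximal Z-projection.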
Image analysis and cell quantification
For slide-scanned images, double- or triple-positive cells were counted manually in Fiji. For morphological analysis of microglia and astrocytes, Z-stacks of 1–4 cells were taken from the cortical region of each of the brain sections under a 40x objective. Microglia soma and branching measures were visualized using IBA1 immunofluorescence, whereas those for astrocytes were assessed with GFAP. Z-stacked images were converted to maximum intensity projections using Fiji; these images were background subtracted, contrast-enhanced to ensure full arborization could be detected, and a local threshold was applied to the image. Microglial and astrocytic soma areas were calculated using the 'Measure' command, and branch features (number of primary or secondary processes; maximum length of primary process) were manually counted and measured in Fiji. A total of 18–22 astrocytes and 18–39 microglia per group were analyzed in a blinded fashion. For quantification of myelin intensity, images from the sections stained with Black-Gold II were converted to grayscale in Fiji and automated measurements of myelin intensity were taken using the measurement function to record the mean gray value within the regions of interest. All analyses were performed in a blinded fashion.

Probability calculations
At 0 dppr, we did not find any PDGFRA+ OPCs in the examined sections of the cerebrum of TAM + AraC-administered Pdgfrα+:DTA+ mice. However, since we did not analyze every single section of each mouse brain, we cannot exclude the possibility that there could be undetected OPCs in sections of the brains of OPC-ablated mice that were not examined. To estimate the theoretical number of undetected OPCs that could be missed based on our tissue sampling strategy, we calculated the probability that the cerebrum of the sampled tissue sections contains no OPCs in circumstances where residual OPCs are sparsely distributed across the cerebrum. For OPC-ablated mice, we performed cell counts using six ten-micron-thick sections per mouse for a total of 4 mice at 0 dppr, i.e., a total of 24 sections across the 4 mice.
These sections were sampled from a region of interest (ROI) extending from +0.25 mm to −5.07 mm A/P relative to bregma, theoretically reflecting a total of 532 ten-micron-thick sections per mouse. We calculated the probability of detecting zero OPCs in the cerebrum of randomly sampled sections under various conditions, where we defined \(N\) as the number of sections that contain no OPCs in the region of the cerebrum from which the sections were collected. The probability \(P_1\) of selecting 6 sections that contain no OPCs is given by

\[ P_1 = \frac{N}{532} \times \frac{N-1}{531} \times \frac{N-2}{530} \times \frac{N-3}{529} \times \frac{N-4}{528} \times \frac{N-5}{527}. \]

Since we selected 6 sections from each of 4 mice, the probability \(P_2\) of selecting a total of 24 sections that contain no OPCs is given by \(P_2 = (P_1)^4\). The percentage \(X\) of sections devoid of OPCs was calculated as \(X = (N/532) \times 100\). We plotted \(P_2\) against \(X\) (E) and determined the value of \(N\) at which \(P_2 = 0.5\). Finally, we calculated the number of sections \(S\) that contain OPCs when \(P_2 = 0.5\) using the formula \(S = 532 - N\), which gave a value of 15 sections per ROI. Working on the assumption that the cerebrum in each of these sections contains one OPC, we estimated that there are likely to be fewer than about 15 OPCs within the cerebrum ROI of each OPC-ablated mouse.

Statistical analyses
All statistical analyses were performed using GraphPad Prism software (v.9). Statistical significance was determined using an unpaired, two-tailed Student's t test or by two-way ANOVA with Bonferroni's, Tukey's, or Sidak's multiple-comparison tests. Statistical significance was defined as p < 0.05. Quantitative data are reported as mean ± SEM.
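As a cross-check of the section-sampling probability argument above, the following minimal Python sketch (an illustration, not the analysis code used for the study) reproduces the search for the value of \(N\) at which \(P_2 = 0.5\) and the resulting estimate \(S = 532 - N \approx 15\).

```python
# Sketch of the section-sampling probability argument described above.
TOTAL_SECTIONS = 532  # theoretical 10-um sections per mouse within the ROI
SAMPLED = 6           # sections examined per mouse
MICE = 4

def p1(n_empty: int) -> float:
    """P1: probability that all 6 sampled sections from one mouse contain
    no OPCs, given that n_empty of the 532 sections are OPC-free."""
    p = 1.0
    for i in range(SAMPLED):
        p *= (n_empty - i) / (TOTAL_SECTIONS - i)
    return p

def p2(n_empty: int) -> float:
    """P2 = (P1)^4: the same outcome across four independently sampled mice."""
    return p1(n_empty) ** MICE

# Smallest N at which P2 reaches 0.5, and S = 532 - N sections with OPCs.
n = next(k for k in range(SAMPLED, TOTAL_SECTIONS + 1) if p2(k) >= 0.5)
print(n, TOTAL_SECTIONS - n)  # prints 517 and 15, i.e., S of about 15
```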
Health care providers’ perspectives on providing end-of-life psychiatric care in cardiology and oncology hospitals: a cross-sectional questionnaire survey | a03ee174-e5a3-4644-9b87-6705e9d6f6f9 | 10014396 | Internal Medicine[mh] | Heart failure (HF) is potentially fatal, unless a heart transplantation is performed, and it is a serious healthcare and economic burden on patients and their caregivers. The World Health Organization estimated the worldwide mortality from cardiovascular disease at 15.2 million in 2016 , making it the most common cause of death (40%) among middle-aged and older adults . Despite the recent rapid progress in medical treatments, the median survival rate after patients’ first hospitalization is low in severe HF (2.1 years) . In addition, HF has inflicted a burden of $180 million on the global health system . Patients with advanced HF commonly experience psychological symptoms, the most common of which are depression and anxiety, as well as physical symptoms, such as dyspnea, pain, or fatigue . Severe clinical depression is diagnosed in 12 to 33% of all patients with heart disease and in 38 to 42% of those with severe HF, featuring New York Heart Association class III-IV symptoms . Among patients with HF, 29% exhibit severe and clinically significant anxiety symptoms, and 9% have anxiety disorders, including generalized anxiety disorders . In addition, psychological symptoms have a highly negative impact on the quality of life and are associated with poor treatment adherence, severe physical symptoms, long-term hospitalization, and a reduced survival rate . Therefore, psychological symptoms, such as depression or anxiety, are particularly challenging problems for patients with end-stage HF . Psychiatric care, including pharmacotherapy and psychotherapy, can be of benefit for patients with HF who have psychological symptoms. However, there is inadequate evidence for the efficacy of pharmacotherapy in patients with HF , and psychiatric pharmacotherapy, such as antidepressants, increases the risk of all-cause death among HF patients . Nevertheless, psychotherapy has received attention among patients with HF in recent years, and cognitive behavioral therapy in particular has been shown to improve psychological symptoms . Relaxation, meditation, and mindfulness-based psychoeducation can also alleviate these symptoms . However, there is limited evidence and guidance on the efficacy of such psychiatric care among patients with terminal HF . In patients with end-stage cancer, many of whom experience psychological symptoms similar to patients with end-stage HF, many studies have demonstrated the effectiveness of pharmacotherapy and psychotherapy . Workshops or guidelines for oncologists can also enhance their practical skills in providing end-of-life psychiatric care . A comparison between the difficulties in providing psychiatric care for patients with end-stage HF versus those with cancer could provide useful insights into potential barriers to providing psychiatric care for patients with end-stage HF. However, to date, no study has examined the barriers to providing psychiatric care for patients with HF. In addition, we believe that a qualitative study design, examining thee difficulties faced by health care providers in pain management, would be also helpful in investigating the difficulties with psychiatric management and identifying the barriers to providing psychiatric care . 
The aims of this study were to identify and compare the barriers faced by health care providers of cardiology and oncology hospitals in providing psychiatric care to end-of-life patients.
Design and participants
This was a national, cross-sectional survey conducted among Japanese health care providers of cardiology and oncology hospitals using self-completed questionnaires. We mailed the questionnaires to the departments of cardiovascular internal medicine of 427 implantable cardioverter defibrillator (ICD) specialized hospitals and to the departments of respiratory medicine of 347 designated cancer hospitals; we asked them to deliver the questionnaires directly to the chief physicians and the chief nurses in each department in March 2018. ICD specialized hospitals are equipped to perform implantation of ICDs and are the centers of cardiovascular medicine in Japan. Additionally, designated cancer hospitals, recommended by the prefectural governments, can provide high-quality cancer treatment, as guaranteed by the Ministry of Health, Labour and Welfare in Japan. These medical facilities provide palliative care by a team of medical professionals, provide specialized cancer treatments, establish local cooperation systems for cancer treatments, and provide consultation, support, and information for cancer patients.

Demographic and clinical characteristics
We collected demographic and clinical information from the self-completed questionnaires. First, we included the following data: sex, age, and medical license of the staff of each health care provider. Second, we included the following data: area (Hokkaido/Tohoku, Kanto/Koshinetsu, Chubu/Hokuriku, Kinki, Chugoku/Shikoku, and Kyushu/Okinawa), hospital type (national medical center, academic medical center, general hospital other than academic medical center, specialized hospital), the number of hospital beds, and the presence of a palliative care unit, palliative care team, liaison psychiatry team, palliative care physicians, psychiatrists, and psychologists at the hospitals.

Outcome measures
Difficulty in providing palliative care
The Palliative Care Difficulties Scale, a 15-item self-reported scale, was developed in Japan. Responses are scored on a 4-point Likert-type scale ranging from 0 to 3 (overall score range: 0–42). The scale consists of the following five factors, each having three items: (1) alleviating symptoms, (2) expert support, (3) multidisciplinary communication, (4) communication with patient/family, and (5) community coordination. The reliability and validity of this measure were sufficiently supported in an earlier study.

Difficulty in providing end-of-life psychiatric care
We developed the following original question (Sup.1) for assessing the difficulty in providing end-of-life psychiatric care: "Do you face challenges in providing psychiatric care for patients at their end of life?" The possible answers were "yes" or "no."

Barriers to providing end-of-life psychiatric care
To identify the barriers to providing end-of-life psychological care, we asked the following original question (Sup.1) to participants who answered "yes" to the above question: "What challenges do you face in providing psychological care to patients at their end of life?" Participants could respond freely to this open-ended question.

Qualitative analyses
Content analysis was used to analyze the free-text responses to the open-ended question. Content analysis is an objective and systematic procedure used to draw conclusions by creating categories of data from verbatim or unstructured data. We conducted a quantitative content analysis according to previous studies in palliative care settings.
Our content analysis procedure was conducted as follows: (1) all text data were divided into thematic units, i.e., units of words with one logical meaning; (2) two researchers, a clinical psychologist (KI) and a cardiovascular nurse (SM), extracted all statements from the free descriptions related to the study topic, such as the barriers to providing end-of-life psychiatric care; (3) a clinical psychologist (KI), a cardiovascular nurse (SM), and two psychiatrists in the palliative care team (EM and TT) carefully conceptualized similarities and differences in the content and defined all categories; and (4) two coders, a psychology student and a psychiatric clinical nurse, independently determined how each identified thematic unit corresponded with each category. The concordance rate and kappa coefficient of the category determinations were used as reliability indicators. The kappa coefficient was calculated using a random sample of 20% of the data, in line with the standard derived from a previous study of using more than 10% of the data or 50 units.

Statistical analyses
First, we summarized the characteristics of the participants and hospitals using standard descriptive statistics. Second, the mean difference in difficulties in providing palliative care was compared between oncological and cardiovascular hospitals using a t test, and the frequency of difficulties in providing end-of-life psychiatric care was compared between oncological and cardiovascular hospitals using the χ2 test. Third, the frequency of the thematic units categorized in the above content analysis was compared between health care providers in oncological and cardiovascular hospitals using the χ2 test. The significance level was set at 5%. All data were analyzed using IBM SPSS Statistics for Windows, version 24 (IBM Corp., NY, USA).
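For illustration, inter-coder reliability of the kind reported here can be computed as Cohen's kappa from the two coders' category assignments. The sketch below is a generic Python implementation under that assumption; the function name and category labels are hypothetical, and it is not the software used in this study.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders assigning one category per thematic unit."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: proportion of units on which the coders agree.
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement under independence, from each coder's marginals.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical assignments for 10 thematic units
a = ["patient", "family", "system", "patient", "communication",
     "patient", "system", "family", "patient", "end-of-life"]
b = ["patient", "family", "patient", "patient", "communication",
     "system", "system", "family", "patient", "end-of-life"]
print(round(cohens_kappa(a, b), 2))  # 0.73 for this toy example
```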
Demographic and clinical characteristics
From the 347 oncology and 427 cardiology hospitals, 130 oncological physicians (37.5%), 94 oncological nurses (27.1%), 120 cardiovascular physicians (28.1%), and 93 cardiovascular nurses (21.8%) were included in the analysis (Fig. ). The characteristics of the study participants and hospitals are listed in Table . More than 90% of the physicians were specialists, such as lung cancer or cardiovascular specialists, and approximately half of the nurses were certified in a specialized field, including cancer nursing or palliative care. The sex ratio (men:women) was 1.4:1. Regarding both oncology and cardiology hospitals, more than 90% were general hospitals, approximately 60% were large-scale facilities (≥ 500 hospital beds), more than 80% had palliative care teams, and approximately 70% had psychiatric or psychological care specialists.

Difficulty in providing end-of-life palliative and psychiatric care
We found that the Palliative Care Difficulties Scale scores were significantly higher among health care providers in cardiology hospitals than among those in oncology hospitals for "alleviating symptoms" and "expert support" (F [423] = 8.63, p = 0.00 and F [414] = 18.96, p = 0.00, respectively), whereas no significant differences were found for any other factor (F [426] = 3.50, p = 0.06 for multidisciplinary communication; F [424] = 2.82, p = 0.09 for communication with patient/family; F [423] = 1.11, p = 0.29 for community coordination) (Fig. ). The frequency of difficulties in providing end-of-life psychiatric care according to the χ2 test and exact probability test is shown in Fig. . A total of 135 (62.2%) oncological and 125 (59.8%) cardiovascular health care providers had difficulties in providing end-of-life psychiatric care. There was no significant difference in the frequency of difficulties between health care providers of oncology and cardiology hospitals (χ2 = 0.26, p = 0.62).

Barriers to providing end-of-life psychiatric care identified using qualitative methods
We extracted 52 attributes from the content analysis, 40 of which were classified by semantic content into "patients' personal problems," "family members' problems," "professionals' personal problems," "communication problems between professionals and patients," "problems specific to end-of-life care," "problems specific to psychiatric care," "problems of institution or system," and "problems specific to non-cancer patients" (Table ). The kappa coefficient derived from the two independent coders was 0.54 for the random 20% sample of the data in this study. The frequency of barriers to providing end-of-life psychiatric care is shown in Table . We found that "problems specific to non-cancer patients" were reported more frequently by health care providers in cardiology hospitals than by those in oncology hospitals (χ2 = 22.475, p = 0.00). There was no significant difference in the frequency of any other barrier between health care providers of oncology and cardiology hospitals.
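As a worked arithmetic check of the 2x2 comparison reported above, the χ2 statistic can be recomputed from the counts. The sketch below uses scipy; note that the denominators (217 oncological and 209 cardiovascular respondents to this item) are inferred from the reported percentages and are therefore an assumption, not figures stated in the text.

```python
from scipy.stats import chi2_contingency

# Providers reporting difficulty vs. not. The "no" counts rest on
# denominators inferred from the reported percentages
# (135/217 = 62.2%, 125/209 = 59.8%), which is an assumption.
table = [[135, 217 - 135],   # oncology: yes, no
         [125, 209 - 125]]   # cardiology: yes, no

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")  # about chi2 = 0.27, p = 0.61
```

Under these assumed denominators the result is close to the reported χ2 = 0.26, p = 0.62.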
This is the first study to investigate the barriers to providing psychiatric care for end-stage HF patients compared with end-stage cancer patients. Although we found no significant difference between the cardiology and oncology settings in the frequency of those who perceive barriers to providing end-of-life psychiatric care, there may be differences in the contexts in which they perceive those barriers. A particularly important result was that cardiovascular health care providers faced problems with psychiatric care specific to non-cancer patients, such as difficulty obtaining professional support, useful guidelines, or training opportunities. This study is useful for exploring solutions that would provide sufficient psychiatric care for end-stage HF patients by eliminating barriers, using a bottom-up qualitative approach. Our results indicated three challenges faced by health care providers in providing psychiatric care to end-of-life patients. First, knowledge of mental health issues specific to the end of life is necessary for health care providers to provide psychiatric care. Cardiovascular health care providers found it particularly difficult to improve their knowledge and skills for performing psychiatric assessments and for treating psychological and cardiac symptoms. In particular, depression, in addition to fatigue or pain, is one of the most common symptoms and imposes a heavy burden on patients with advanced HF. Some clinical practice guidelines on HF in Western countries emphasize the need for psychiatric care for HF patients with depression as part of symptom management. However, even these guidelines contain insufficient information on specific psychiatric assessment and treatment for patients with HF. Participants in this study also reported that they had little access to the information needed to improve their knowledge and skills in psychiatric care. For cancer patients, a lack of knowledge and training among health care providers is a barrier to providing psychiatric care, and some Japanese academic societies have therefore held seminars and workshops over the last few decades to promote psychiatric care knowledge among oncologists and other health care providers. Taken together, we recommend expanding the existing training and education system and providing detailed guidelines as a way to give access to methods of psychiatric assessment and treatment for psychological symptoms in patients with advanced HF. Furthermore, physical symptom management was also identified as a difficulty for cardiovascular health care providers compared with oncological health care providers in this study. Interventions directed at alleviating physical symptoms related to HF can lead to a reduction in psychological symptoms in palliative care. In the future, we recommend the development of a training system for end-of-life care professionals that covers both physical and psychiatric care. Second, cooperation among health care providers with different specialties is important in providing psychiatric care for end-stage patients. Many health care providers felt that it was difficult to coordinate professional-patient relationships in both cardiovascular and oncological settings. Interventions to enhance communication between professionals and patients can improve the latter's psychological well-being. The professional-patient relationship and communication are also important for the quality and outcome of medical treatment.
Particularly in palliative settings, a lack of communication between professionals and patients can lead to the inhibition of critical decisions such as ICD deactivations. In practice, both general and specialized education can improve communication skills among health care providers and facilitate professional-patient communication. Advance care planning can also encourage effective communication between professionals and patients with HF. Therefore, we conclude that a useful tool or training system for improving communication skills as well as psychiatric care skills among health care providers could enhance end-of-life care in cardiovascular settings. Third, health care providers' own difficulties and distress must be addressed so that psychiatric care for end-stage patients can be implemented smoothly. A professional's personal psychological or physical distress could be a barrier to providing psychiatric care. Professional participants in this study reported that many cardiovascular and oncological hospitals do not have sufficient staff and are consequently overwhelmed by the workload, leading to unsatisfactory psychiatric care for palliative patients. Health care providers also feel unable to provide sufficient spiritual and psychiatric care for end-of-life patients. Reducing the workload and ensuring adequate time management for health care providers remain critical goals in modern Japanese medical settings.

Limitations

Our study has three major limitations. First, recall bias may have occurred because of the self-reported nature of the questionnaires. However, the content analysis was conducted independently by two researchers to ensure objectivity. Second, although the study was conducted nationwide in Japan, the data may not be generalizable to other populations. Therefore, future studies investigating the same research questions in other countries will be essential to validate our findings and to add to the evidence base. Third, as this study was conducted before the COVID-19 pandemic, our findings may not be consistent with the current situation in the Japanese medical field. It is noteworthy, however, that the medical field remains overwhelmed with maintaining infection control, and health care providers' perception of the significance of providing psychiatric care at the end of life is also changing.
Our results demonstrated that (1) both cardiovascular and oncological health care providers perceive barriers to providing end-of-life psychiatric care; (2) both faced challenges in terms of patients' personal problems, family members' problems, professionals' personal problems, communication problems between professionals and patients, problems specific to end-of-life care, problems specific to psychiatric care, problems of institution or system, and problems specific to non-cancer patients; and (3) cardiovascular providers particularly faced challenges specific to non-cancer patients compared to oncology providers. These results suggest that health care providers in cardiovascular hospitals, in contrast to those in oncological hospitals, experience problems in obtaining useful guidelines or training opportunities. We recommend adequate staffing to provide psychiatric care for end-stage HF patients, and the provision of continuous educational opportunities for health care providers involved in psychiatric and palliative care for patients with HF. However, our study also indicates that both oncological and cardiovascular health care providers face challenges in providing end-of-life psychiatric care, which stem from patients' or health care providers' personal problems, among others. Therefore, we should also develop strategies to overcome not only understaffing in medical services but also gaps in professionals' psychiatric care skills.
Below is the link to the electronic supplementary material. Sup. 1: The questionnaire (English translated version).
Evidence of D-shaped wounds in the intrasomatic bullet path: two case reports

Forensic ballistics can be defined as the study of the projectile's behavior to reconstruct the defining events in the production of a gunshot wound (GSW) and is divided into internal, external, and terminal ballistics. While internal ballistics mostly focuses on the mechanisms of bullet ejection from the inside of the firearm, external ballistics describes the flight from the muzzle to the final target, and terminal ballistics, also referred to as wound ballistics, analyzes the injuries that occur in the different anatomical compartments. The course of the bullet outside the barrel greatly influences the appearance of the GSW, especially when interaction with interposed objects, i.e., intermediate targets, occurs, which can cause alteration of the angle of incidence and tumbling and yawing of the bullet, resulting in an unstable flight and, ultimately, in atypical entrance wounds. Intermediate targets are environmental objects opposing resistance to the projectile, causing deviation from the original trajectory, a decrease in velocity, and thus dispersion of kinetic energy, which can lead to: ricochet, when the angle of incidence is small; bullet deformation and fragmentation; penetration/perforation of the intermediate target. The GSWs produced by these modifications in the bullet's behavior greatly differ from those occurring when the initial trajectory and stability remain unaltered. An infrequently observed morphology of the entrance wound has been described in the literature as D-shaped, produced when the surface of impact with the cutaneous tissue is represented by the lateral projection of the bullet. This peculiar appearance is given by an object with an acute angle at the apex and a recognizable cylindric base with regular margins. The forensic pathologist examining the wounds needs to interpret this finding and reconstruct the dynamics and the manner of death based on the collection of anatomical and circumstantial evidence. For the sake of the analysis conducted in this small but paradigmatic case series, the Authors relied on the similarities observed during the study of the intrasomatic bullet paths. Two extremely different dynamics of production of D-shaped GSWs, in which interaction with multiple (Case 1) and human (Case 2) intermediate targets was documented, are hereby presented and analyzed. These examples are exceptional because in both cases the D-shaped conformation was demonstrated throughout the intrasomatic bullet path, from the entrance wound to the affected tissues.
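To make the notion of kinetic energy dispersion concrete, the short sketch below computes E = ½mv² for a bullet before and after interaction with intermediate targets. The 7.40 g mass matches the Case 1 bullet described later; both velocities are assumed, illustrative values rather than measurements from these cases.

```python
# Illustrative back-of-the-envelope calculation of kinetic energy dispersion
# through intermediate targets. Velocities are assumed values, not
# measurements from the cases reported here.

def kinetic_energy(mass_kg: float, velocity_ms: float) -> float:
    """E = 1/2 * m * v^2, in joules; note energy scales with velocity squared."""
    return 0.5 * mass_kg * velocity_ms ** 2

mass = 0.00740       # kg (7.40 g, the Case 1 cal. 9 Parabellum bullet)
v_muzzle = 350.0     # m/s, typical 9 mm handgun muzzle velocity (assumed)
v_residual = 150.0   # m/s, after ricochet/perforation losses (assumed)

e0, e1 = kinetic_energy(mass, v_muzzle), kinetic_energy(mass, v_residual)
print(f"initial {e0:.0f} J, residual {e1:.0f} J, "
      f"dispersed {100 * (e0 - e1) / e0:.0f}%")  # ~453 J, ~83 J, ~82%
```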
Case 1

The incident details and crime scene investigation

A man sitting in the back seat of a moving car was accidentally shot by a bullet fired from the opposite side of a highway. During the crime scene investigation, the metal wire fence placed between the two carriageways was examined, and a small, yet noticeable deflection was documented and sampled (Fig. a). There was a hole on the left rear window of the car with radial cracks in the glass (Fig. b). The victim was found in a sitting position with two GSWs to the neck, one on his left side and one on the right side, still wearing a necklace chain with two breaks at the level of the GSWs (Fig. a). The bullet, once retrieved during site examination, was found to be partially deformed at the ogive with a damaged jacket (Fig. ). The event's dynamics was initially reconstructed from the testimonies collected by bystanders who saw a man shooting from the opposite side of the highway, where a cartridge case was found. Based on the conducted investigations, it was postulated that the bullet had impacted the metal fence with subsequent deviation from its original trajectory (first intermediate target), the rear window with the fragmentation of the glass (second intermediate target), and, lastly, the necklace worn by the victim (third intermediate target) before reaching his neck and producing a perforating GSW to the neck. For the sake of this analysis, before providing a description of the injuries caused by the deviated and deformed bullet, each of the intermediate targets will be examined.

First intermediate target—metal wire fence

During the crime scene investigation, the judicial police recognized that the wire fence dividing the two lanes had acted as an obstacle to the projectile's motion. A deformation was detected on the central portion of one of the cylindrical elements, orthogonally everted with respect to the plane of the wire and curved towards the side where the car was located (Fig. a). According to the reconstructions, the bullet drew this portion of the intermediate target causing a glove-like semi-invagination, and then deviated to the left of the wire as suggested by the morphology of the impression, which was compatible with the cylindrical-ogival shape of the deforming agent and limited to the contact area (Fig. ).

Second intermediate target—glass of the rear window

By analyzing the shape of the hole in the glass of the rear window of the car, it was postulated that an impact with the lateral surface of the bullet had occurred. The morphology was attributed both to the perturbation of the projectile's trajectory after contact with the wire fence, and to the convexity of the glass itself (Fig. b).

Third intermediate target—necklace

Two breaks with deformed margins were found on the necklace: the one on the left side of the neck had produced some metallic fragments which deposited on the subcutaneous tissue of the cervical region, while the one on the right had branching-out metallic fibers, hence confirming the correct positioning with respect to the entrance and exit wounds (Fig. b–c). The bullet had caused an initial bending of the necklace, followed by tearing of the folded area, defeating the elastic capacity of the necklace. Field emission gun–scanning electron microscopy (FEG-SEM) and scanning electron microscopy–energy dispersive spectroscopy (SEM–EDS) analyses were conducted on the necklace's fragments and allowed the investigators to document additional blood traces at the level of the right break.
Final target—the victim

A D-shaped 1.3 × 0.4 cm entrance wound was found in the left latero-cervical region of the neck, and was surrounded by a slightly depressed, triangularly shaped abrasion collar and a brown-to-black oval contusion ring (Fig. a). On the internal side of the cutaneous layers, golden metallic fibers were visible. On the right side of the neck, 1 cm above the clavicle, a 1 × 0.4 cm oval exit wound was detected, which was surrounded by a small abrasion collar-contusion ring complex (Fig. b). Anterior and lateral neck dissection revealed a D-shaped hole in the middle third of the left jugular vein, perfectly reproducing the bullet's lateral surface, where the tip was cranially oriented (Fig. c). The ventral wall of the left common carotid artery was lacerated with irregular margins (Fig. ). Another D-shaped hole was detected at the level of the trachea, just beneath the cricoid cartilage (Fig. d). Both the lungs were pink with multiple red-to-purple polygonal punctuations on the pleural surface, which were compatible with bronchoaspiration. Based on the autoptic findings, it was possible to accurately reconstruct the intrasomatic bullet course: left cervical skin, left infrahyoid muscles, left jugular vein, left common carotid artery, trachea, right infrahyoid muscles, right cervical skin.

Bullet's characteristics

The 7.40 g, cal. 9 Parabellum bullet was analyzed (Fig. ):
The tip of the bullet was deformed and flattened by the impact with the glass of the rear window;
The latero-ogival region of the base of the bullet was curved, as a result of the impact with the wire fence, which was also demonstrated by SEM–EDS analysis;
The analysis of the bullet's surface also detected the presence of silver and cadmium residues, which were the main components of the necklace;
Additionally, silica fragments from the rear window were present on the projectile's ogive; therefore, contact with the intermediate targets was confirmed.

Case 2

The incident details and crime scene investigation

A family of three was assaulted by two robbers while walking on a sidewalk. A single bullet was fired from a revolver and struck both the man and his 11-month-old daughter. The emergency response system was activated, and cardiopulmonary resuscitation was performed, without any successful restoration of consciousness or perfusion of the victims. When the medical examiner arrived, the man was leaning on his right side on the sidewalk, while the infant had been moved into the ambulance by the rescuers for the resuscitation maneuvers. Several blood traces were found and documented during the site inspection. The right half of the infant's face was covered in blood, with diffuse blood traces on her clothes. A D-shaped GSW was detected at the level of the sternum of the man's chest, with no evidence of an exit wound. No cartridge case was retrieved during the crime scene investigation.

Intermediate target—the infant's head

Upon removal of the blood from the infant's forehead, a 0.6 cm diameter round entrance wound with a prominent V-shaped projection surrounded by an abrasion collar with a contusion ring was revealed on the superior portion of the glabella (Fig. a). On the left occipital region, a round exit wound, 0.5 cm in diameter, was detected (Fig. b). In the occipital region, the internal surface of the scalp was diffusely infiltrated by blood, while the frontal bone had a 0.8 cm diameter hole.
The two wounds were connected by an intrasomatic bullet path, with a ventro-dorsal, right-to-left, top-to-bottom bullet course. The left frontal lobe, the left temporal lobe, and the left cerebellar lobe were collapsed with diffuse parenchymal and subarachnoid hemorrhage accompanied by cerebral edema. The skull was affected by a comminuted cranial base fracture with fragment dislocation and fracture lines extending up to the temporal and occipital bones.

Final target—the victim

A chest and abdomen X-ray was conducted prior to the autopsy and highlighted the presence of a semiconical radiopaque object with an ogival tip localized on the right side of the twelfth thoracic vertebra. On the left side of the midsternal line, 10.2 cm below the left jugular notch, an oval entrance wound was detected. This wound was 1.7 × 1.2 cm in dimension and was surrounded by an abrasion collar with a contusion ring (Fig. a). A 1 × 0.5 cm irregularly shaped wound was documented on the body of the sternum at the level of the IV intercostal space. Fragments from the outer layer of the sternal bone were detected together with three black metallic fragments embedded in the trabecular bony tissue. Upon removal of the sternum, osseous fragments protruding towards the internal thoracic cavity were detected around a 1.2 × 0.6 cm wound (Fig. b). In the intraosseous bullet path, an additional black metallic fragment was retrieved. On the parietal surface of the pericardium, a D-shaped GSW was documented and massive hemopericardium was present (Fig. c). Two similar wounds were detected on both the anterior and the posterior walls of the right ventricle (Fig. d–e). On the diaphragmatic surface of the pericardium, an irregular wound was documented, with signs of left hepatic lobe involvement. A small round lesion was detected on the lesser omentum and a cal. 9-mm bullet was found in the peritoneal cavity, at the level of the posterior surface of the right hepatic lobe. Therefore, the intrasomatic bullet path was reconstructed: left midsternal line, body of the sternum, pericardial sac, anterior surface of the right ventricle, posterior surface of the right ventricle, diaphragmatic surface of the pericardial sac, left hemidiaphragm, left hepatic lobe, lesser omentum, epiploic retrocavity (XII thoracic vertebral body). The bullet course had an antero-posterior, left-to-right, up-to-down direction.

Bullet's characteristics

The cal. 9-mm bullet retrieved during the autopsy showed no signs of deformation or fragmentation, with an intact jacket.
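The elemental analysis in Case 1 lends itself to a simple illustration: each residue class detected on the bullet points back to a specific intermediate target. The mapping below restates the case findings as a lookup; it is a sketch of the reasoning only, with hypothetical variable names, not a forensic tool.

```python
# Illustrative only: restating the Case 1 SEM-EDS findings as a lookup from
# detected elemental residues to the intermediate targets they implicate.
RESIDUE_TO_TARGET = {
    "Si": "glass of the rear window (second intermediate target)",
    "Ag": "necklace (third intermediate target)",
    "Cd": "necklace (third intermediate target)",
}

for element in ["Si", "Ag", "Cd"]:  # residues reported on the bullet's surface
    target = RESIDUE_TO_TARGET.get(element, "no intermediate target identified")
    print(f"{element}: consistent with contact with the {target}")
```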
The interaction between the bullet and an intermediate target can result in peculiar GSWs which reflect the projectile's behavior in space. In the two cases presented in this article, the entrance wounds' morphology was characterized by a D-shaped appearance, which was first described in 1984 by Donoghue et al. and is produced by the impact of the lateral surface of the bullet with the cutaneous surface. In Case 1, the projectile's course was first altered by the metallic wire fence, producing the well-known phenomenon of ricochet, which occurs when the angle of incidence between the two objects is small and results in a tangential deviation from the original course. In this case, the contact with this intermediate target led to a dispersion of kinetic energy which was later accentuated by the impact with the rear window and the necklace worn by the victim. Specifically, the metal chain also produced an asymmetric abrasion collar and contusion ring, which was more prominent along the minor axis of the entrance wound. Moreover, the anatomical district itself, i.e., the cutaneous fold of the neck, contributed to the final appearance of both the entrance hole and the surrounding skin. For the same reasons, the exit wound showed findings similar to the entrance wound: while exiting the skin, the bullet encountered the other portion of the necklace, producing a shored exit wound. In Case 2, the D-shaped conformation was observed in the final target, i.e., the father, where the destabilizing factor was another human being who was perforated and acted as an intermediate target: the bullet perforated the infant's head and lost most of its kinetic energy, but still retained the capacity to cause damage and penetrate the man's chest. In this case, an additional element of interest is the comparison between the two entrance wounds, which are both atypical but are the result of two different physical processes: the lesion on the infant's head was produced by a stable bullet and had a long V-shaped projection because of the properties of the skull and the overlying scalp, while the lesion on the man's chest was produced by a wobbling, destabilized bullet that had changed its orientation in space. In the forensic literature, the D-shaped morphology has been described only at the level of the cutaneous surface, whereas in the present case reports several structures along the intrasomatic bullet path were also characterized by the same finding. The study of the internal wounds warrants a few considerations regarding the physical characteristics of the anatomical districts. In both incidents presented in this article, the destabilized bullet produced the same results in terms of wounding effects, although the distribution and the morphology of the GSWs varied according to the anatomical compartment. First, structures such as the muscles and arterial wall in Case 1 and the bone and liver in Case 2 showed irregularly shaped wounds, leaving no traces that could reveal information about the bullet's behavior. These tissues have limited stretch capacity in the case of GSWs, and they were lacerated with fragmented margins, leading to a non-specific appearance of the wound. On the other hand, the venous wall and the trachea in Case 1 and the pericardial sac in Case 2 showed the D-shaped morphology, allowing the examiner to clearly reconstruct the path, the orientation, and the rotation of the bullet along its axis throughout its course.
The GSWs at the level of the right ventricle were somewhat similar: the D-shaped morphology could still be recognized despite the tissue recoil and the greater wall thickness compared to the pericardium, which led to a less definite reproduction of the bullet's lateral surface. The behavior of the anatomical districts varied according to the intrinsic mechanical properties of the structures and the thickness of the tissues drawn by the bullet. The jugular vein, the trachea, and the pericardium have thin elastic walls, which make them more compliant to the energy and the displacement exerted by the bullet, especially when the velocity, and therefore the temporary cavity effect, is reduced or negligible. In contrast, other vascular structures, such as the carotid artery, are thick, semi-rigid tubes. Notably, the D-shaped morphology was seen exceptionally well at the level of the pericardium, perfectly reproducing the bullet's impression on the thin fibrous tissue composing this membrane. The discovery of the D-shaped morphology upon both external inspection and autoptic examination confirmed and supported the dynamics of wound production, which had been studied extensively through the crime scene investigation and FEG-SEM/SEM–EDS analyses, and allowed the forensic pathologist to postulate that in both cases interaction with intermediate targets had occurred, producing significant bullet destabilization with changes in orientation and velocity.
Regardless of the circumstantial evidence, a D-shaped morphology strongly suggests that the bullet's behavior has been altered by a significant impact with one or more intermediate targets before reaching the final target. Isolation and sectioning of the anatomical structures, coupled with the study of the lesion(s) during the autopsy, can lead to the direct visualization of D-shaped GSWs throughout the bullet's path and give additional information on the stability of the projectile. There is no single mechanism of destabilization, since the interaction with interposed objects can result in different phenomena; therefore, the correlation between crime scene investigation and autoptic findings is crucial in the determination of the event's dynamics.
The interaction between a bullet and one or more intermediate target(s) can produce atypical entrance wounds. The D-shaped morphology has previously been described exclusively on the skin in entrance wounds. In the two cases presented in this article, D-shaped wounds were observed throughout most of the intrasomatic bullet path as well as on the skin. The presence of intrasomatic D-shaped wounds during the forensic examination can suggest that interaction with an intermediate target has occurred.
The Small Business Innovation Research (SBIR) and the Small Business Technology Transfer (STTR) programs were established by the United States Congress to increase the commercialization of technologies developed by small businesses through federally funded research and development. The SBIR and STTR programs at the National Institutes of Health (NIH) have an annual budget of $1.2 billion and serve as one of the largest sources of seed funding for early-stage technology development by small businesses in the United States. The NIH SBIR and STTR programs play a critical role in the NIH mission of seeking and applying knowledge to enhance health, lengthen life, and reduce illness and disability, by supporting the translation of innovative products to the clinic where they can ultimately be used to improve human health. The National Cancer Institute (NCI) is the largest NIH Institute and has a mission to support cancer research to help all people live longer, healthier lives. In 2021, the NCI SBIR Development Center provided ~$180 million in non-dilutive funding to small businesses developing innovative technologies to prevent, diagnose, and treat cancer. The NCI SBIR Development Center has a centralized model for managing SBIR/STTR programs, and in addition to funding, its staff supports portfolio companies with a variety of critical commercialization resources, such as entrepreneurial training, access to investor networks, and, recently, regulatory assistance in collaboration with the US Food and Drug Administration (FDA). The FDA's mission is to protect and promote public health by helping to speed innovations that make medical products safer and more effective for the public. Many FDA-regulated technologies have origins in SBIR, including small molecule drugs, biologic drugs, and medical devices. For example, a recent report by the National Academies of Sciences, Engineering, and Medicine on the impact of the NIH SBIR/STTR programs provides a list of FDA-approved (authorized for marketing) drugs from 1996–2020 that received NIH SBIR/STTR funding to support product development. The report indicates that more than 12% of the total novel drugs approved by the FDA during this time period received NIH SBIR/STTR funding, and that a number of these medicines have had a profound impact on human health, with sales in the hundreds of millions of dollars. This is likely an under-reported figure because companies often experience multiple name changes and acquisitions during the lengthy time it takes for a potential drug to be developed. Metrics collected by the NCI SBIR Development Center from publicly available websites for 2021 indicate a similar pattern, with 38% and 20% of novel FDA-approved drugs involving NIH and NCI SBIR/STTR-funded companies, respectively. During 2020–2021, eight NCI SBIR portfolio companies filed an Investigational New Drug (IND) or Investigational Device Exemption application with the FDA, which allows investigational drug or device testing in humans to begin, and 12 clinical trials were started. In addition, 13 NCI-funded companies received 15 FDA authorizations or clearances for new drug therapies or devices (Table ), and these SBIR-funded products are now available to healthcare providers and their patients. Given the overlapping goals of the FDA and NIH, collaboration between the two Agencies is vital for supporting product development and can lead to a substantial impact on public health.
To better support small businesses developing cancer-related products, the FDA and NCI SBIR Development Center have developed an interagency assistance program called Connecting Awardees with Regulatory Experts (CARE). The CARE program provides small businesses with coordinated support from the FDA and NCI to help innovative, safe, and effective technologies reach patients, and it provides a model for developing future impactful interagency collaborations in other areas of mutual priority.
In 2018, a working group of external advisors to the NCI SBIR program was convened by the National Cancer Advisory Board to review the NCI SBIR program. External input is critical to ensure that the program reflects the current needs of small businesses, which are subject to change over time. Recommendations for enhancements included regulatory assistance for small businesses. NCI SBIR conducted a broad survey of its portfolio companies to identify key resource needs and regulatory hurdles that were pain points for early-stage companies and could lead to delays in federally funded technology development. They discovered that many companies wait until after they have completed critical research and development milestones to solicit FDA input, thus risking the need to redo experiments to align with FDA requirements. For example, a company developing a novel therapeutic might complete toxicology studies in a non-relevant species, or fail to include necessary critical assessments, only to find out later from the FDA that these toxicology studies are not adequate to support a first-in-human clinical trial. The NCI SBIR program is composed of three phases of funding (described in Figure ), and most companies enter the program with phase I award funding. To be eligible to apply for funding, the small business must be organized for-profit and based in the United States with 500 or fewer employees. Unlike established businesses, NCI-funded start-ups operate in lean mode with teams composed of entrepreneurs with deep scientific expertise, and they have difficulty hiring experienced regulatory consultants with product development experience. Without this expertise, many start-ups struggle with where to start in learning about the appropriate regulatory pathway for their technology and what resources are available from the FDA to help them. One valuable resource currently available to companies is the FDA's Regulatory Education for Industry (REdI) Conference. Hosted annually by the FDA, this free, week-long virtual event brings together regulatory experts from across the FDA, including the Center for Biologics Evaluation and Research (CBER), the Center for Drug Evaluation and Research (CDER), the Center for Devices and Radiological Health (CDRH), and the Oncology Center of Excellence (OCE). Regulatory policy on key aspects of drug, device, and biologic regulation is discussed, as well as other topic areas, such as combination products, companion diagnostics, and products for rare diseases and pediatrics. In addition, CBER, CDER, CDRH, and OCE each have an office that provides educational resources to industry. Developers can receive help in navigating the FDA's regulatory processes from knowledgeable staff who are available to answer general questions by telephone or email, as described below.

Center for Biologics Evaluation and Research

CBER is responsible for regulating biological products, including blood products, cellular and gene therapies, human tissue products, and vaccines. CBER's Manufacturers Assistance and Technical Training Branch (MATTB) responds to public inquiries for information from the biologics industry regarding these product types. The branch also coordinates speaker requests and conducts outreach events to increase public knowledge. During early-stage technology development, innovators can apply for an INitial Targeted Engagement for Regulatory Advice on CBER producTs (INTERACT) meeting.
For companies accepted, this informal consultation allows innovators to discuss their clinical study plans with the Agency prior to a formal pre-IND or IND application request (required before clinical testing can begin).

Center for Drug Evaluation and Research

CDER regulates both over-the-counter and prescription drugs. The Small Business and Industry Assistance (SBIA) office staff assists entrepreneurs with information on the development of therapeutic small molecules, imaging drugs, and radiopharmaceuticals as well as therapeutic biologic products that fall outside of CBER's purview (e.g., monoclonal antibodies, proteins, and cytokines). SBIA regularly hosts webinars and in-person workshops on topics of interest to industry and provides an online database of resources that is searchable by topic. Preclinical resources include FDA guidances and videos on the drug development process, how to request a formal pre-IND meeting with CDER, and information on IND applications.

Center for Devices and Radiological Health

CDRH regulates medical devices including in vitro diagnostics, surgical and therapeutic devices, imaging devices, companion diagnostics, radiation emitting products, and digital health technologies. The Division of Industry and Consumer Education (DICE) office has staff available to answer questions from companies on topics such as medical device classification and premarket submission types (e.g., De Novo, Premarket Notification 510(k), Premarket Approval). There are over 150 videos available on device-related regulatory topics as part of the CDRH Learn multimedia education for industry. In addition, CDRH encourages early-stage medical device innovators to leverage assistance programs, such as the Q-Submission Program, which includes Pre-Submissions as well as additional opportunities for companies to engage with FDA. A Pre-Submission provides the opportunity to obtain FDA feedback prior to a CDRH premarket submission. Last, the Early Payor Feedback Program is a voluntary opportunity for medical device sponsors to obtain payor input on clinical trial design or other plans for gathering clinical evidence needed to support positive coverage decisions.

Oncology Center of Excellence

In 2017, OCE was established to facilitate the development and clinical review of oncology products. OCE works across the FDA's product centers (CBER, CDER, and CDRH) to conduct expedited review of drugs, biologics, and devices that are related to cancer. OCE provides knowledge to early-stage oncology companies through the Project Catalyst's Oncology Regulatory Expertise and Early Guidance program, a new initiative that offers educational resources to expedite the availability of novel cancer treatments to the public. As part of the program, frequently asked oncology drug development questions and answers are available by topic, and entrepreneurs can ask questions directly to regulatory science experts as well. In collaboration with CBER, CDER, CDRH, and OCE, NCI SBIR developed a multipronged approach to the interagency support provided to small businesses as part of the CARE program, which includes:

Demystifying the FDA during early-stage technology development.
Communicating with oncology small businesses about existing FDA resources.
Providing joint NCI funding opportunities with the FDA.

Below, we discuss the CARE program and share outcomes from 2019–2022.
CBER is responsible for regulating biological products, including blood products, cellular and gene therapies, human tissue products, and vaccines. CBER's Manufacturers Assistance and Technical Training Branch (MATTB) responds to public inquiries for information from the biologics industry regarding these product types. The branch also coordinates speaker requests and conducts outreach events to increase public knowledge. During early‐stage technology development, innovators can apply for an INitial Targeted Engagement for Regulatory Advice on CBER producTs (INTERACT) meeting. For companies accepted, this informal consultation allows innovators to discuss their clinical study plans with the Agency prior to a formal pre‐IND or IND application request (required before clinical testing can begin).
CDER regulates both over‐the‐counter and prescription drugs. The Small Business and Industry Assistance (SBIA) office staff assists entrepreneurs with information on the development of therapeutic small molecules, imaging drugs, and radiopharmaceuticals as well as therapeutic biologic products that fall outside of CBER's purview (e.g., monoclonal antibodies, proteins, and cytokines). SBIA regularly hosts webinars and in‐person workshops on topics of interest to industry and provides an online database of resources that is searchable by topic. Preclinical resources include FDA guidances and videos on the drug development process, how to request a formal pre‐IND meeting with CDER, and information on IND applications.
CDRH regulates medical devices including in vitro diagnostics, surgical and therapeutic devices, imaging devices, companion diagnostics, radiation emitting products, and digital health technologies. The Division of Industry and Consumer Education (DICE) office has staff available to answer questions from companies on topics such as medical device classification and premarket submission types (e.g., De Novo, Premarket Notification 510(k), Premarket Approval). There are over 150 videos available on device‐related regulatory topics as part of the CDRH Learn multimedia education for industry. In addition, CDRH encourages early‐stage medical device innovators to leverage assistance programs, such as the Q‐Submission Program, which includes Pre‐Submissions as well as additional opportunities for companies to engage with FDA. A Pre‐Submission provides the opportunity to obtain FDA feedback prior to a CDRH premarket submission. Last, the Early Payor Feedback Program is a voluntary opportunity for medical device sponsors to obtain payor input on clinical trial design or other plans for gathering clinical evidence needed to support positive coverage decisions.
In 2017, OCE was established to facilitate the development and clinical review of oncology products. OCE works across the FDA's product centers (CBER, CDER, and CDRH) to conduct expedited review of drugs, biologics, and devices that are related to cancer. OCE provides knowledge to early‐stage oncology companies through the Project Catalyst's Oncology Regulatory Expertise and Early Guidance program, a new initiative that offers educational resources to expedite the availability of novel cancer treatments to the public. As part of the program, frequently asked oncology drug development questions and answers are available by topic, and entrepreneurs can ask questions directly to regulatory science experts as well. In collaboration with CBER, CDER, CDRH, and OCE, NCI SBIR developed a multipronged approach to the interagency support provided to small businesses as part of the CARE program which includes: Demystifying the FDA during early‐stage technology development. Communicating with oncology small businesses about existing FDA resources. Providing joint NCI funding opportunities with the FDA. Below, we discuss the CARE program and share outcomes from 2019–2022.
CARE PROGRAM

Demystifying the FDA during early‐stage technology development

NCI SBIR and FDA CDRH Innovation launched CARE as a pilot program in 2019. Following its success, the interagency collaboration was expanded the next year to include CBER and CDER as well. This allows for the inclusion of all oncology‐related technology types regulated by the FDA in the CARE program, including therapeutics, in vitro diagnostics, imaging agents, and other medical devices and digital health products. At any given time, the NCI SBIR program supports 300–400 portfolio companies who are developing these products. Each year, the NCI SBIR solicits CARE program applications from portfolio companies that include information on the current stage of technology development, knowledge of the regulatory path, familiarity with the FDA's resources for industry, and the regulatory questions that the company has for the FDA. Applicants are restricted to those who have received an NCI SBIR/STTR award in the past 2 years, have a technology that falls under the regulatory authority of CBER/CDER/CDRH, and have not previously met with the FDA for that particular product. The NCI SBIR team reviews applications for eligibility and fit with the program goals, and directs applications to the appropriate FDA Center. CBER, CDER, CDRH, and OCE review the applications for technologies that will come under their regulatory purview and match each company question to an FDA expert who can answer it. Depending on the complexity of the questions asked, companies receive responses from different discipline-specific experts within the FDA across the four centers (CBER, CDER, CDRH, and OCE) or the industry education offices (DICE, MATTB, and SBIA). For questions outside the scope of an informal response, the FDA confirms that a formal response is required and provides information on the process for formal meeting requests with the Agency.

Communicating with oncology small businesses about existing FDA resources

During the CARE pilot program in 2019, the NCI found that 84% (27/32) of companies in the program were unaware of the FDA's educational resources available for industry that are described above, including many free resources specific to small businesses. In response, NCI SBIR and FDA worked together to create a public-facing website of curated FDA links to specific regulatory resources that target the needs of oncology startups. The website includes information on how to contact the FDA based on the type of technology being developed, provides links to FDA training videos and educational series on topics of interest to early-stage companies, answers frequently asked questions that the NCI receives from entrepreneurs, and lists resources for topic areas where the FDA often sees early-stage companies struggle. Twice a year, the NCI SBIR holds a webinar for NCI-funded companies that are new to the SBIR/STTR program; as part of the webinar, information is provided on the regulatory resources webpage as well as how to contact the FDA's industry education offices.

Providing joint NCI funding opportunities with the FDA

In 2020, NCI SBIR began a collaboration with CDRH to develop targeted NCI funding opportunities for the small business community as part of the US Department of Health and Human Services NIH/CDC SBIR Contract Solicitation.
The CDRH's Office of Science and Engineering Laboratories (OSEL) is dedicated to accelerating patient access to innovative, safe, and effective medical devices through best‐in‐the‐world regulatory science and regulatory science tools. These tools expand the scope of innovative science‐based approaches to help improve the development and assessment of emerging medical technologies. NCI and OSEL created contract topics in product development areas of mutual interest that support the development of promising Medical Device Development Tools (MDDTs). The MDDT Program is a mechanism for CDRH to qualify tools that facilitate regulatory decision making by supporting a safety, effectiveness, or performance assessment of a medical device. During the MDDT qualification process, CDRH evaluates the proposed MDDT and supporting evidence to determine whether the tool can be used according to the proposed context‐of‐use to produce scientifically valid measurements that facilitate regulatory decision making. Once an MDDT is qualified, medical device sponsors can choose to use the MDDT during development and evaluation of medical devices without the need for CDRH to reconfirm the suitability and utility of the tool when used within the qualified context‐of‐use. Qualification of MDDTs helps to improve predictability and efficiency in device development and regulatory review.

Under these unique funding opportunities, companies apply for 1‐year NCI SBIR phase I projects with budgets of up to $400,000. During the NCI phase I contract period, companies work with CDRH to submit a Qualification Plan to CDRH's MDDT Program. By the end of the phase I contract, companies should work toward an accepted Qualification Plan. Successful companies are invited to apply for NCI phase II contract funding of up to $2 million over a 2‐year period, during which time companies complete development of their tool and may be invited to submit a qualification package to the MDDT Program. These funding opportunities incentivize the small business community to develop innovative tools for medical devices, which they can then commercialize and disseminate to industry or academic entrepreneurs who are developing new device technologies or evaluating existing technologies across the total product lifecycle.
Since the CARE program was launched in 2019, a total of 141 companies have participated. Since CBER, CDER, and OCE joined CDRH in the CARE program collaboration in 2020, 109 participating companies have been developing products in the preclinical stages of development (i.e., in vitro or in vivo animal testing, or developing/refining the design of a prototype device). The technologies under development spanned a wide range of technology types, similar to the makeup of the NCI SBIR portfolio. Half (50%; 55/109) of the companies were developing technologies under the regulatory authority of CDER, 37% (40/109) under the regulatory authority of CDRH, and 13% (14/109) under CBER (Table ). Regulatory questions from companies were triaged within each FDA Center and sent to the group with the appropriate expertise to respond.

Approximately 1 month after companies received responses from the FDA on their questions, the NCI SBIR collected feedback from companies via an online survey. During the 2020–2022 period, NCI SBIR obtained feedback from 88 of 109 companies, a response rate of 81%. Results from the feedback survey indicated that participation in the CARE program helped companies with their regulatory plans in several ways, including learning or identifying the FDA Center that will regulate their technology (97%; 84/87), developing the regulatory strategy for their technology (85%; 75/88), and planning the next regulatory steps they need to take (89%; 78/88; Figure ). For example, a point of uncertainty for many medical device developers is the type of regulatory submission pathway that will be required for their technology, such as De Novo, Premarket Notification 510(k), or Premarket Approval. Depending on the pathway required, a different level of scientific evidence is needed for potential FDA approval. When the correct submission pathway is known, small businesses can structure their SBIR/STTR project aims to collect the appropriate evidence for their future FDA submission. After participating in the CARE program, companies planned to contact the FDA's industry education offices (81%; 71/88), access information on a particular regulatory topic on the FDA's website (93%; 82/88), and submit a meeting request to the FDA (91%; 80/88; Figure ). Companies reported that as part of the program, they received information that informed their future SBIR/STTR project aims (81%; 71/88), suggesting future NCI SBIR/STTR grant applications will better align with FDA requirements. Overall, companies overwhelmingly reported the value of these early responses from the FDA and said that they would recommend the program to other companies (90%; 79/88).

Since NCI SBIR launched the regulatory resources webpage in October 2020, there have been over 2000 visitors to the site. There has also been a steady increase in returning visitors, who comprised 30% of all visitors in the past year. Of the NCI‐funded companies that participated in CARE from 2020 to 2022, 93% (81/87) strongly agreed or agreed that they found the webpage useful. The proportion of companies entering the CARE program that were familiar with the FDA's resources for industry grew from 16% (5/32) in 2019, prior to launch of the webpage, to 70% (30/43) in 2022 (16 months after launch of the webpage), indicating that NCI‐funded companies are connecting with the FDA and utilizing more of the guidances, webinars, and other free resources the FDA has available online.
In addition to the non‐funding regulatory support for companies described above, NCI SBIR and CDRH collaborated on funding opportunities to support regulatory science innovation for medical devices by the small business community. Three NCI contract funding opportunities were published to support the development of promising MDDTs in specific areas of mutual interest by both agencies. Topic 442 focuses on development of biomarker measurement tools that can be used for the evaluation of newly developed biomarker‐based tests for patient selection or for the evaluation of drug safety and effectiveness through prediction of clinically meaningful end points. Topic 444 supports the development of large, well‐curated, and statistically robust datasets for testing and assessing medical devices under development. Topic 454 focuses on the development of software that can evaluate artificial intelligence and machine learning algorithms found in medical devices for diagnostic imaging and radiation treatment planning within oncology settings. These quantitative software tools will monitor real‐world performance of radiation and imaging devices for device safety and efficacy purposes, which is critical for adoption of new devices into clinical practice. Tools developed under these awards and qualified by CDRH's MDDT program can be used by other entrepreneurs to evaluate safety, effectiveness, or performance of new medical devices under regulatory review, thus accelerating the translation of innovative next‐generation cancer technologies to the clinic.
The NCI and FDA collaborations support the development of novel cancer technologies by the small business community in several ways. To successfully reach the clinic and patients, companies must complete product development for their technology in a way that aligns with FDA requirements. The NCI and FDA CARE program successfully connects early‐stage NCI‐funded companies with the FDA to receive regulatory guidance on their specific technologies. The CARE program focuses on supporting early‐stage companies because there is often a gap from when a small business receives SBIR/STTR funding support until the company is able to hire expert regulatory consultants. By incorporating regulatory guidance early in product development, companies can decrease their time to first‐in‐human studies and overall time to market, and ensure efficient use of the federal taxpayer dollars that fund their early‐stage translational research.

OCE is also developing opportunities for early‐stage oncology companies to receive regulatory information to inform their drug development plans. In March 2021, OCE and CDER SBIA hosted a 2‐day workshop for entrepreneurs on oncology drug development and tailored the agenda to start‐up companies. Before the event, stakeholders provided recommendations on workshop topics relevant to the start‐up community. NCI SBIR provided some perspective on the pain points small business entrepreneurs experience when developing cancer therapeutics and how the CARE program collaboration has addressed some of these issues. Other workshop topics included guidances specific to oncology products, chemistry and manufacturing requirements, preclinical considerations, first‐in‐human trial design, and federal resources available through the NCI SBIR for startup companies.

To complement these non‐funding regulatory resources, the NIH provides funding opportunities for small businesses that can be used to cover activities such as regulatory consulting costs. This funding can be requested as part of a SBIR/STTR grant or contract through the NIH Technical and Business Assistance program, which allows funding requests of up to $6500 for phase I and up to $50,000 for phase II awards. Following a phase II award, companies are eligible to apply for Commercialization Readiness Pilot program funding to support non‐traditional research and development costs as companies transition their technologies to the commercialization stage. Small businesses can request up to $250,000 to cover costs for activities such as preparation of documents for an FDA submission, development of an intellectual property strategy, and/or planning for a clinical trial. In addition, the NIH Small business Education and Entrepreneurial Development Office offers NIH‐funded small businesses free regulatory and business development consultations with its expert Entrepreneurs in Residence to assist companies in navigating the FDA.

The majority of NCI‐funded portfolio companies enter the SBIR program during the early stages of translational research. Despite the long development time for medical products and the many obstacles small businesses face, NCI‐funded technologies have a strong track record of reaching the clinic. Specialized support programs like the CARE program, as well as regulatory‐related funding programs, can help advance these technologies to the later stages of product development and clinical testing.
The NCI SBIR and FDA CARE program supports small businesses as they learn about and navigate their regulatory pathways. The program connects entrepreneurs with the FDA to receive feedback during early‐stage product development and provides information to companies about existing educational resources as well as the FDA process. By incorporating regulatory guidance early on, we anticipate that small businesses can mitigate risk in the product development pathway, decrease their time to market, and ensure efficient use of the taxpayer dollars that fund their translational NCI SBIR/STTR projects. As the first program of its kind at the NIH, CARE demonstrates the benefits that interagency collaborations can provide to the private sector.
This work was funded by the National Institutes of Health.
The authors declared no competing interests for this work.
This reflects the views of the authors and should not be construed to represent FDA's views or policies.
Detecting DNA damage in stored blood samples
Similarly, the Investigator Quantiplex ® Pro kit (Qiagen) contains an autosomal target of 91 bp and a “degradation target” of 353 bp, and the Quantifiler ® Trio DNA Quantification Kit (Thermo Fisher Scientific) contains an autosomal target of 80 bp and a “degradation target” of 214 bp. In degraded samples, the DNA concentration detected with the degradation target is expected to be lower than that detected with the autosomal target. Thus, the ratio of the DNA concentrations detected with both probes (the “degradation index”) allows an assessment of DNA degradation before forensic STR analysis is performed. In casework, however, we regularly observe serious degradation patterns with ski-slope effects and drop-outs of longer amplicons even though the degradation index of quantitative real-time PCR kits remains unobtrusive. The aim of this work was to systematically investigate the ability of quantitative real-time PCR to actually detect DNA degradation and whether the use of UNG might increase the sensitivity of these systems for detecting degradation. In 1993, Lindahl first described that during DNA degradation, hydrolysis may lead to deamination of cytosine to uracil, making these positions potential targets for UNG. Furthermore, methods using electrophoresis to assess DNA quality were investigated as a potential alternative for detecting forensically relevant DNA degradation. Methods for DNA quality assessment based on electrophoresis are regularly used in massively parallel sequencing experiments to assess DNA fragment size distribution as a measure of degradation. An automated capillary electrophoresis system and a pulse-field capillary electrophoresis system were used to determine DNA integrity numbers (DIN) and Genomic Quality Numbers (GQN), respectively.
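Since the degradation index recurs throughout this study, a minimal sketch may help make the calculation concrete. The amplicon lengths below are taken from the kit descriptions above; the concentrations, the helper name, and the use of the PowerQuant threshold of 2 (cited in the Results) are illustrative assumptions, not part of the original work.

```python
# Degradation index (DI) from a duplex qPCR quantification:
# DI = [autosomal target] / [degradation target]. Intact DNA amplifies
# both targets with similar efficiency (DI ~ 1); fragmented DNA yields
# less of the longer target, so DI rises above 1.

KIT_TARGET_LENGTHS_BP = {          # (autosomal, degradation) amplicons
    "PowerQuant": (84, 296),
    "Investigator Quantiplex Pro": (91, 353),
    "Quantifiler Trio": (80, 214),
}

def degradation_index(conc_autosomal: float, conc_degradation: float) -> float:
    """Ratio of autosomal- to degradation-target DNA concentration (ng/uL)."""
    return conc_autosomal / conc_degradation

# Illustrative sample: 1.20 ng/uL (autosomal) vs. 1.00 ng/uL (degradation).
di = degradation_index(1.20, 1.00)
print(f"DI = {di:.2f} -> {'flagged as degraded' if di > 2 else 'unobtrusive'}")
```

Note that a sample with DI = 1.2 would pass the threshold of 2 as unobtrusive, which is exactly the situation described above for casework samples that nonetheless show ski-slope effects in STR profiles.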
Sample collection and DNA extraction

Blood samples were collected from one individual only to eliminate the risk of interindividual variations caused, e.g., by the general health status of the donors. A blood sample was collected by venipuncture, using EDTA as an anti-coagulant. Spots of 40 µL of blood were immediately placed on sterile cellulose pads, gently inverting the blood tube several times every 4 spots. Pads were stored under three different conditions: room temperature (RT, ~ 21 °C, indoors, protected from direct sunlight), 4 °C (fridge), and 37 °C (incubator). Further information on sampling times and conditions is given below. As a positive control, DNA was extracted and quantified immediately after collection. DNA was extracted using the Maxwell ® RSC Blood DNA Kit with the Maxwell ® 16 Forensic Instrument (Promega, Mannheim, Germany) and eluted in a final volume of 50 µL per blood spot. Extracts were stored at − 20 °C until further analysis.

DNA degradation in multiplex STR analysis

To confirm that DNA degradation producing ski-slope effects is visible under the storage conditions chosen, samples from three time points (0 days, 21 days, and 83 days) were analyzed using the PowerPlex ® ESX 17 System (Promega) following the manufacturer's recommendations but using a total reaction volume of 12.5 µL. Amplicons were visualized using the 3500 Genetic Analyzer, and data were interpreted using the GeneMapper™ ID-X Software v 1.6 (Thermo Fisher Scientific, Darmstadt, Germany). Degradation indices (DI) were analyzed using the PowerQuant System (Promega, Madison, US) with 2 μL sample, 5 μL 2 × Master Mix, 0.5 μL 20 × Primer/Probe/IPC Mix, and HPLC-grade water to a reaction volume of 11 μL. qPCR was performed using the 7500 real-time PCR system with the HID Real-Time PCR Analysis Software v1.2 (Thermo Fisher Scientific).

DNA degradation detection by forensic quantitative real-time PCR

For this part of the experiment, samples were stored for up to 83 days at three temperatures, and DNA was extracted every two or three days. Thus, 35 time points per temperature condition, totaling 105 samples, were analyzed. Per condition and time point, two replicate extractions were performed, each using four spots of 40 µL blood extracted separately, and the eluates were subsequently pooled to a total volume of 200 µL per replicate. Samples were quantified using the PowerQuant ® System as described above. The DNA concentration of each sample and the respective degradation indices were used for data analysis.

Influence of uracil DNA glycosylase on detection sensitivity

To investigate whether the use of uracil DNA glycosylase enhances the sensitivity of detecting DNA degradation, blood spots on cellulose pads were stored at RT for up to 316 days and extracted at 82 time points during this period. DNA was extracted in duplicate at each time point: one DNA extract was treated with AmpErase™ Uracil N-Glycosylase (UNG, Thermo Fisher Scientific) for 20 min at 50 °C, while the other remained untreated. The DNA concentration of each extract and the respective degradation indices were analyzed using the PowerQuant System as described above. To evaluate whether the results could be confirmed with an alternative quantification kit, the UNG experiment was repeated using a subset of samples (stored at RT for up to 176 days with 53 time points). Both UNG-treated and untreated DNA extracts were quantified using the Investigator Quantiplex ® Pro Kit (Qiagen, Hilden, Germany) following the manufacturer's recommendations.
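For batching the PowerQuant reactions described above, the stated per-reaction volumes imply 3.5 µL of HPLC-grade water per 11 µL reaction. A small helper (hypothetical; the 10% pipetting overage is a common laboratory convention, not taken from this study) scales the shared components for a plate of quantifications:

```python
# Per-reaction PowerQuant setup as described above: 2 uL sample,
# 5 uL 2x Master Mix, 0.5 uL 20x Primer/Probe/IPC Mix, and water
# to a final volume of 11 uL (i.e., 3.5 uL water per reaction).
SHARED_COMPONENTS_UL = {
    "2x Master Mix": 5.0,
    "20x Primer/Probe/IPC Mix": 0.5,
    "HPLC-grade water": 3.5,
}
SAMPLE_UL = 2.0  # added individually to each well, not to the master mix

def master_mix(n_reactions: int, overage: float = 0.10) -> dict:
    """Component volumes (uL) for n_reactions, with pipetting overage."""
    n_effective = n_reactions * (1.0 + overage)
    return {name: round(vol * n_effective, 1)
            for name, vol in SHARED_COMPONENTS_UL.items()}

print(master_mix(24))  # e.g., samples plus standards and controls
```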
Detection of DNA degradation by automated (pulse-field) electrophoresis

To investigate whether such electrophoretic methods are better suited for detecting DNA degradation in forensic casework samples, the 4150 TapeStation system (Agilent Technologies) with the Genomic DNA ScreenTape assay and TapeStation analysis software was used to calculate DNA integrity numbers (DIN). Furthermore, the FEMTO Pulse automated pulsed-field CE instrument (Agilent, Waldbronn) was used with the 165 kb analysis kit and ProSize data analysis software (with a size threshold of 50,000 bp). Blood spots were stored at three different temperatures (4 °C, 37 °C, and RT) for up to 26 days (TapeStation) or 37 days (Femto). TapeStation analysis was performed at 11 different time points and Femto analysis at 16 different time points, each in duplicate. TapeStation analyses were performed at Agilent Technologies, and Femto Pulse analyses were performed at the Genomics & Transcriptomics Laboratory (GTL) at Heinrich-Heine-University Düsseldorf.

Statistical analysis

For all calculations and figures, mean values of replicates were used. Statistically significant differences between degradation indices of different storage temperatures or durations were calculated using the Mann–Whitney U test in IBM SPSS Statistics version 28.0.1.0. The Mann–Whitney U test was chosen because it makes no strict assumptions about the data distribution (such as normality or bivariate normality) and can be applied even to small sample sizes. Correlations between degradation indices and time were calculated using the Pearson correlation in Microsoft Excel 2016. A value of p < 0.005 was considered statistically significant.
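The statistical workflow described above maps directly onto standard library calls. The sketch below (Python with SciPy; the data values and variable names are invented placeholders, not results from this study) reproduces the steps of averaging duplicates, comparing storage conditions with the Mann–Whitney U test, and correlating DI with storage time:

```python
import numpy as np
from scipy.stats import mannwhitneyu, pearsonr

ALPHA = 0.005  # significance threshold used in this study

# Degradation indices per time point, already averaged over duplicates
# (placeholder values for illustration only).
days   = np.array([0, 7, 21, 42, 83])
di_4c  = np.array([1.00, 1.15, 1.10, 1.12, 1.08])
di_37c = np.array([1.05, 1.30, 1.25, 1.20, 1.18])

# Between-condition comparison: distribution-free, usable for small samples.
u_stat, p_u = mannwhitneyu(di_4c, di_37c, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_u:.3f}, "
      f"{'significant' if p_u < ALPHA else 'not significant'}")

# Within-condition trend: Pearson correlation of DI with storage time.
r, p_r = pearsonr(days, di_37c)
print(f"Pearson r = {r:.3f}, p = {p_r:.4f}")
```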
DNA degradation in multiplex STR analysis

Apart from the control sample (0 days), DNA degradation was observed in samples from all storage conditions after 21 and 83 days. As an example, Fig. shows the results of the PowerPlex ® ESX amplification in a sample stored at 37 °C for 21 days. A ski-slope effect is visible, and a drop-out of allele 28.2 occurred in the longest amplicon, representing SE33. The degradation indices (DI) of all samples remained below the critical value of 2 recommended by the manufacturer. The sample depicted in Fig. , for example, revealed a DI of 1.2.

DNA degradation detection by forensic quantitative real-time PCR

First, the total DNA concentration within each extract was quantified to detect any overall loss of DNA or extraction efficiency over time. No statistically significant DNA loss over time was observed for any of the three storage conditions (Fig. ). Statistically significant differences in DNA concentrations were observed between storage conditions, with much higher DNA amounts extracted from samples stored at 37 °C compared to samples stored at 4 °C (p < 0.001) and RT (p < 0.001). Furthermore, instead of DNA loss over time, a clear increase of DNA concentrations over time was observed for samples stored at 37 °C (Fig. a, Pearson correlation coefficient 0.536, p = 0.0009). No such correlations were observed for samples stored at 4 °C and RT. Thus, DNA extraction efficiency was increased in samples stored at 37 °C for a prolonged time period. The DI was subsequently analyzed. Under all storage conditions, an increase of the DI was observed within the first seven days of storage, which seemed to level out over time (Fig. ). Over the total storage time, a slightly negative correlation between DI and time was observed for samples stored at 37 °C (Pearson correlation r = − 0.464; p = 0.0049) and RT (r = − 0.533; p = 0.0009). There was no significant correlation between DI and time in samples stored at 4 °C (r = − 0.236; p = 0.172). These data confirm previous studies, for example one by Bulla et al., who reported loss of DNA quantity but not integrity during long-term storage of blood samples.

Influence of uracil DNA glycosylase on detection sensitivity

Figure shows the differences in DI between samples treated with UNG and untreated samples measured with the PowerQuant ® System. No clear trend of an increasing degradation index over time was observed in either UNG-treated or untreated DNA extracts. Results obtained with the Investigator Quantiplex Pro Kit (Fig. ) provided a similar outcome, and no statistically significant correlation between DI and storage time was found. Samples analyzed with the Quantiplex ® Pro, however, showed a general difference in DI between samples treated with UNG and untreated samples (p < 0.001), indicating an increased sensitivity of degradation detection (Fig. a). This effect was not observed with the PowerQuant ® System (Fig. b).

DNA degradation analysis by automated (pulse-field) electrophoresis

No significant correlation between DIN and storage time was observed at any of the temperatures (37 °C, p = 0.99; 4 °C, p = 0.2; RT, p = 0.85). Comparing DIN values between storage conditions, the median was considerably higher in samples stored at 37 °C compared to the other two storage conditions (Fig. ). Similar to the DIN results, GQN did not correlate with storage duration at any storage temperature (4 °C, p = 0.3; 37 °C, p = 0.98; RT, p = 0.91).
Mean GQN values did not differ significantly between storage conditions (Fig. ).
DNA degradation in multiplex STR analysis

Our results confirm experiences from real casework samples, in which degradation was observed in STR analyses while the DIs obtained from quantitative real-time PCR remained unobtrusive. Recently, Lin et al. observed that after artificially degrading DNA to fragment sizes of 300 to 500 bp, full or nearly full STR profiles were obtained. Even when DNA was extremely degraded to fragment sizes of 150 bp, partial profiles were still obtained. When DNA was degraded to such a low level, the autosomal and degradation quantification values of all quantification kits compared in their study dropped and the DI increased. One explanation for the observations differing from our study might be that Lin et al. used artificial degradation until fragmentation was actually visible. Our study, on the other hand, did not artificially enhance degradation but relied on natural degradation over time. Thus, degradation might have been less severe in our samples, and degradation mechanisms other than fragmentation might have occurred (see below).

DNA degradation detection by forensic quantitative real-time PCR

No significant loss of DNA over time was observed under any of the storage conditions. On the contrary, DNA extraction efficiency was increased in samples stored at 37 °C for a prolonged time period. There is certainly no straightforward explanation for this observation. We hypothesize, however, that drying of samples (loss of humidity) might play a crucial role here: colder storage environments might lead to higher humidity, which, in turn, enables bacterial growth. Another hypothesis is that substances inhibiting extraction and/or detection processes might decompose faster at this temperature than DNA does. Several previous studies investigated the influence of storage temperature on DNA quantity and integrity in blood samples: Al Rokayan, for example, found in 2000 that DNA extracted from blood samples showed higher molecular weight and less shearing if blood samples were stored at − 20 °C rather than at 4 °C or RT. Huang et al. described in 2017 a loss in DNA concentration in blood samples stored at 24 °C over 15 days and reported that this loss correlated with a decrease in white blood cell (WBC) counts. They also reported that samples stored at a low temperature (4 °C) showed a stronger loss in WBC counts compared to storage at 24 °C, explaining this by cell lysis due to stress. Most laboratories use 4 °C for short-term storage of blood samples. Our results, along with previously published data, suggest that even for shorter storage periods of one to two weeks, storage at − 20 °C or an increased temperature is preferable to 4 °C. No significant increase of the DI over time was observed even though these samples showed signs of degradation in multiplex STR PCR analyses. Our data confirm previous studies, for example by Bulla et al., who reported loss of DNA quantity but not integrity during long-term storage of blood samples. Data obtained with the Investigator Quantiplex Pro without UNG treatment also showed no increase in DI over time. This means that the limited sensitivity of qPCR in detecting DNA degradation is not kit specific but might be explained by a more general underlying principle, which we will discuss below. Thus, DNA degradation affecting STR analysis proved surprisingly difficult to detect at all. This might be because we initiated natural degradation by storage over time, while previous studies mainly used artificially enhanced degradation.
DI only detects DNA fragmentation, but DNA fragmentation might not be the only mechanism behind the ski-slope effect. Other mechanisms of DNA degradation might play a crucial role here. For example, chemical alterations of bases as described above might reduce the efficiency of primer binding due to mismatched positions in the presence of uracil and hypoxanthine instead of cytosine and adenine. Mismatched positions in primer binding sites are known to have a stronger effect on STR markers with longer amplicons, such as SE33.

Influence of uracil DNA glycosylase on detection sensitivity

A slightly higher sensitivity in detecting degradation in samples treated with UNG compared to untreated samples was observed with the Investigator Quantiplex ® Pro but not with the PowerQuant ® System. This difference might be caused by the larger difference in amplicon length between the autosomal and degradation targets in the Quantiplex ® Pro compared to the PowerQuant ® System (262 bp and 212 bp, respectively). It is to be expected that larger length differences between the two targets lead to stronger differences in PCR efficiency. Our findings confirm observations recently described by Holmes et al., who observed generally higher DI values with the Investigator Quantiplex ® Pro compared to the PowerQuant ® System. We explain the difference between UNG-treated and untreated samples in the Investigator Quantiplex ® Pro by the samples having suffered deamination of cytosine to uracil due to hydrolytic damage. UNG directly attacks uracil positions and eliminates uracil by cleaving the N-glycosidic bond between the base and the sugar-phosphate backbone of DNA, creating an abasic site in the DNA structure. Such abasic sites comprise the weakest points in a DNA strand and enable the strand to break easily, for example, by heat stress during the first cycle of PCR. Changes in DI, however, were only minor. Further analyses of the correlation between degradation and storage time after UNG treatment, using a larger sample set and samples stored under different conditions, might provide interesting data on the detection of DNA degradation over time, which in turn might serve as a potential measure for the determination of the time since deposition of forensic traces.

DNA degradation analysis by automated (pulse-field) electrophoresis

No significant correlation between DIN or GQN and storage time was observed under any of the conditions analyzed. Consequently, electrophoretic systems for assessing DNA integrity did not prove to be a suitable alternative to quantitative real-time PCR and did not improve the detection of DNA degradation. The ability of some highly sensitive qPCR kits, such as the PowerQuant ® System, to reliably point out samples that contain no or too little DNA for successful STR analysis nevertheless makes these kits highly useful in forensic genetics. We did, however, observe a trend towards higher DIN values in samples stored at 37 °C compared to samples stored at RT and 4 °C. This trend is not statistically significant but correlates well with our finding of higher DNA recovery from samples stored at 37 °C compared to lower storage temperatures, as described above.
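To close the discussion, the argument made at the beginning of this section, that fragmentation alone cannot explain the ski-slope effect, can be made semi-quantitative with a toy model. Assuming random, position-independent strand breaks at a rate λ per base pair (an idealization introduced here for illustration, not a calculation from this study), the expected yield of an amplicon of length L is proportional to exp(−λL), so the DI depends only on the length difference between the two qPCR targets:

```python
import math

# Toy random-fragmentation model: yield(L) ~ exp(-lam * L), hence
# DI = yield(L_auto) / yield(L_deg) = exp(lam * (L_deg - L_auto)).
L_AUTO, L_DEG = 84, 296   # PowerQuant target lengths (bp)
DI_OBSERVED = 1.2          # DI of the degraded example sample in the Results

lam = math.log(DI_OBSERVED) / (L_DEG - L_AUTO)            # breaks per bp
print(f"implied mean fragment length: {1 / lam:.0f} bp")  # ~1160 bp

# Relative yield of a long STR amplicon versus the 84 bp autosomal
# target; 450 bp is an assumed size for large SE33 alleles.
rel_yield = math.exp(-lam * (450 - L_AUTO))
print(f"relative yield of a 450 bp amplicon: {rel_yield:.2f}")  # ~0.73
```

Under this idealization, a DI of 1.2 corresponds to mean fragment lengths above 1 kb, at which a 450 bp amplicon would still be expected to amplify at roughly 70% relative efficiency, far from allele drop-out. This supports the interpretation above that base modifications such as deamination, rather than strand breaks alone, contribute to the degradation patterns seen in STR profiles.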
STR multiplex PCR analysis of forensic trace samples can show signs of DNA degradation even though the degradation index (DI) measured by quantitative real-time PCR is unobtrusive. Degradation indices measured by quantitative real-time PCR showed no correlation with storage time in samples stored at three different temperatures for up to 83 days or at room temperature for up to 316 days, and no significant loss of DNA was observed. Adding uracil DNA glycosylase to target hydrolytic DNA damage slightly improved the detection of DNA degradation with one of the two quantification kits tested. Thus, degradation effects other than fragmentation, such as deamination of DNA bases, play a role in reducing the PCR efficiency of longer amplicons. Electrophoretic methods did not improve degradation detection in forensic samples and are not superior to conventional quantitative real-time PCR. Surprisingly, DNA recovery was significantly higher in samples stored at an elevated temperature (37 °C) than in samples stored at room temperature or at a low temperature (4 °C).
|
Intramuscular hemorrhages in the pathway of an electric current through the body — two case reports | a128d2fc-a186-4189-8446-f4161e35d1ec | 10014766 | Forensic Medicine[mh] | The diagnosis of a death by electrocution is mainly made on the basis of the external findings. The flow of electric current through the human body has specific effects on the excitable tissues, but morphological signs may be sparse or even absent . The problem is further accentuated by the fact that there are no specific internal findings suggesting death by electrocution, especially in cases without any externally visible electric marks. In a few cases, however, one may occasionally find intramuscular hemorrhages which are produced by tetany-induced muscle contractions . These hemorrhages are mostly seen in the skeletal muscles located in the current pathway, such as the upper limb and shoulder girdle muscles . A unique case of suicide by electrocution committed by an electrician, who used coin electrodes fixed to his chest and a time switch, has been reported by Anders et al. . During autopsy, a blackish linear mark was noticed on the parietal pleura of the left thoracic cavity topographically connecting the cutaneous current marks. Histologically, current- and heat-related changes, such as hypercontraction bands of the intercostal muscles and coagulative changes in the perineurium of peripheral nerves, were demonstrated. Anders et al. also reported a case of suicide by electrocution in an electrical engineer who used a home-made device consisting of a connecting plug, scissors, and a magnifying glass. At autopsy, intramuscular hemorrhages were found in the skeletal muscles of the arms and the upper back. Based on the topographical distribution and microscopic pattern of the skeletal muscle alterations, the authors concluded that the hemorrhages were of vital origin and caused by current-induced tetanic muscle contractions. Two more autopsy cases are hereby described, one relating to a right upper human limb while the other deals with a female child who died after sustaining a high-voltage electric shock. In each case, superficial and deep hemorrhages were seen in the skeletal muscles of the upper extremity that could be topographically associated with the current path in the body.
Case no. 1

Brief history and details of the scene

The police brought part of a right upper human extremity for autopsy. The extremity consisted of the right forearm and the hand, apparently fresh and well articulated, found in supine position on a muddy surface, and surrounded by yellow plant twigs (Fig. ). Purportedly, the limb was detected in a sugarcane field from where it was reported to the local police station by the owner of the field. The police arrived at the scene of discovery, took photographs, and conducted investigations. There was no source of electricity nearby, and no other body parts were found in the surrounding area, even after a thorough search.

Autopsy findings

The limb was apparently fresh and showed no signs of putrefaction. It was smudged with muddy brownish stains in some places. Rigor mortis had passed off from the small joints of the hand. Faint bluish-purple postmortem lividity was present on the anterior aspect of the forearm. The skin was lax, wrinkled, hyperpigmented, and atrophic, with complete loss of turgor. The forearm’s soft-tissue defect was circular in shape with pale, scalloped, and wrinkled margins, free of any ecchymoses, and smudged with foreign matter (i.e., of postmortem origin). The ulnar head was missing, while its exposed region showed a zig-zag defect with adjacent tiny punctures typical of animal scavenging. No sharp injuries were present on the skin and bones.

The following observations were made on the hand (Fig. ):

1. A blackish charred area with a depressed and irregular surface on the terminal phalanx of the right middle finger that, on closer scrutiny, was still emitting a slight smell resembling burnt paper. The margins of the wound showed signs of nibbling. On dissection, the underlying tissues were slightly congested, with some petechial hemorrhages, and smudged with clumps of dark grayish debris (suggesting metallic deposits). A similar lesion was present on the back of the proximal interphalangeal joint.
2. A wedge-shaped soft-tissue defect on the distal phalanx of the thumb near the interphalangeal joint. The defect showed a flattened, congested base and irregular margins. It was surrounded by prominent skin ridges.
3. Two brownish-ochre areas of scorched skin, one between the base of the right index and middle finger and the other near the first web space. The topography and appearance of the marks were consistent with electrical burns. In addition, confluent areas of bluish-gray discoloration were present at the thenar and hypothenar eminences as well as the distal region of the right thumb, possibly due to metallization.

Complete circumferential and careful layered dissection of the forearm did not reveal any ecchymoses in the subcutis. At this stage, it was noticed that one of the flexor muscles and its tendon were diffusely hemorrhagic (Fig. ). The overlying muscle fascia and tendon sheath were intact. The tendon was anatomically related to the right middle finger. Upon dissection, a hemorrhage was also seen in the depth of the muscle. The extensor compartment muscles of the forearm revealed similar superficial as well as deep subfascial hemorrhages. From anthropological evaluations and the appearance and texture of the soft tissue, the limb belonged to a middle-aged to old man with a height of about 175.1 ± 4.1 cm. A poorly visible greenish tattoo mark merged with the forearm’s soft-tissue defect. The tattoo revealed a Hindu male’s name, written in Hindi (probably belonging to the decedent).
Until today, this case has remained unsolved.

Case no. 2

Brief history and details of the scene

This case deals with a 13-year-old girl who, along with her mother, was cutting grass on the side of a trail in the middle of waterlogged fields. According to her mother, she sustained an electric shock from a bare wire running over the bottom and side pole of a nearby high-voltage transformer. The child was declared “dead on arrival” at the hospital about 1 h after the incident. No resuscitation was carried out. The body was brought in for autopsy on the next day, about 23 h after the incident.

Autopsy findings

The body was that of a female child of average build with a height of 142 cm and a body weight of 35.5 kg. The lower legs and feet were patchily smudged with mud stains. No conjunctival petechiae were present. No external injuries were visible except for the following electric marks:

1. Multiple current marks in the form of typical targetoid lesions (i.e., centrally flattened, charred, and metalized areas surrounded by zones of blistering, blanching, and hyperemia) were discernible on the posterior middle of the right index finger, the middle of the right arm, and the inferior aspect of the left forearm (Fig. ).
2. A moderately sized contact electric burn was present on each side of the midline of the lower anterior chest. The surface of each burn showed a fine net-like pattern from the white vest worn by the deceased. The right contact burn additionally showed spark burns in its vicinity. The overlying layers of clothing also revealed corresponding burn defects with curled-up margins.

Layered dissection of the soft tissues of the right arm, the forearm, the shoulder girdle, and the upper back did not reveal any bleeding into the skin or subcutaneous fat. On further dissection, the flexor compartment muscles of the right arm and forearm showed punctate to confluent areas of bleeding beneath an intact fascia (Fig. ). The bleedings displayed a flow-like pattern in some places. Similar areas of intramuscular hemorrhage were seen in the right deltoid and supraspinatus muscles. The tracheal lumen contained abundant coarse whitish froth that could be traced in diminishing quantity to the segmental bronchi. Pulmonary emphysema and edema were seen, along with Tardieu’s spots and occasional large patches of pleural ecchymoses. There was moderate cerebral edema with white matter petechiae, primarily focused on the thalamus and caudate nucleus. Some dotted subendocardial hemorrhages were present in the left ventricular papillary muscles and outflow tract. Pronounced generalized visceral congestion was observed. The cause of death was electrocution.

Due to the lack of technical possibilities, we were not able to perform histological examinations of the intramuscular hemorrhages or internal organs in either case.
Both our cases showed intramuscular bleedings of the upper limbs together with external electrocution marks. In both cases, there were no external signs of blunt injuries, and dissection did not show any bleeding in the subcutis. It is therefore very likely that the bleedings were caused by the electric current.

Intramuscular hemorrhages may result from mechanical trauma, but they have also been described in cases of drowning, hanging, hypothermia, electrocution, and natural deaths from a cardiac or pulmonary cause. The proposed mechanism responsible for these hemorrhages is convulsive spasms during the asphyxiation process that cause hypercontraction, overexertion, and strain-induced rupture. In hypothermia-related deaths, systemic vasoconstriction, hypoxia-induced endothelial damage, and mechanical vascular damage due to shivering have also been held responsible. In electrocutions, tetany-induced muscle contractions are likewise said to be responsible for rupture of and bleeding into the muscle fibers. These hemorrhages are described as tiny to moderately sized, confluent to strip-like bleedings localized in the superficial and deep compartment muscles of the arms and forearms, the shoulder girdle, the upper back, and the intercostal muscles, thereby suggesting the path taken by the current through the body. However, any other external injury and any postmortem artificial bleeding have to be ruled out before the lesions can be attributed to an electrical origin, which can be challenging in forensic casework.

Histological differentiation of vital (agonal) from postmortem intramuscular hemorrhages (e.g., those sustained during transportation or rough handling of the corpse) has been evaluated in several studies using routine staining methods as well as immunohistochemistry. Findings suggestive of a vital nature of muscular hemorrhages are discoid and segmental disintegration of the muscle fibers, funnel-like concavities with empty and intact sarcolemmal tubes, and the appearance of pathological longitudinal striation. A star-shaped or cobweb-like, centrifugally oriented bleeding pattern in the deep muscle fibers has been suggested to be helpful in differentiating a traumatic from a non-traumatic origin, as well as vital from postmortem bleedings. In a retrospective analysis of 37 cases of fatal electrocution, Karger et al. provided a histological account of the intramuscular hemorrhages in selected cases. The authors found ruptures of fibers and moderate bleeding in the flexor muscles of the forearms along the current pathway. However, the vital nature of these alterations was questioned in another study. Henssge et al. examined skeletal muscle samples taken from 20 human corpses to estimate the time since death by looking for an idiomuscular bulge or tetanic contraction in the supravital period. Additional light-microscopic examination of the muscles revealed that findings previously interpreted as being of intravital origin could also be produced post mortem. The authors concluded that structural changes in the muscle fibers cannot be used as sole proof of vital mechanical or electrical traumatization and may also be produced by postmortem trauma, especially in the supravital period.
The proposed vital nature of muscular hemorrhages and alterations in the agonal period could not be validated because muscular tissue remains excitable by a variety of mechanical, electrical, and pharmacological stimuli for a prolonged time in the (postmortem) supravital period of cellular life. It can therefore generate responses and alterations resembling vitality that are apparent at both the gross and the microscopic level. So, as in other cases, histological examination may be helpful but is not mandatory for distinguishing vital from artificial findings.

Intramuscular hemorrhages as an internal sign of electrocution are a rarely reported finding. Anders et al. reported only two cases with intramuscular bleedings among a total of eight cases with a secured current path through the upper extremities during a period of 16 years. An important problem in this context may be that layered dissection of the muscles, especially those of the limbs, is not always performed. These two case reports should underline the need for this simple but helpful technique at autopsies of suspected electrocution deaths.
Intramuscular hemorrhages found at autopsy can have a wide variety of causes, all of which require careful interpretation regarding their origin and vitality. Tetanic muscle contractions triggered by the current flow can lead to hemorrhages into the skeletal muscles and/or tendons that may be demonstrated at autopsy. These hemorrhages are topographically related to the current path through the body; the muscles most frequently involved are the flexor and extensor groups of the upper limbs. Histological changes may be helpful in indicating the vital nature of muscular lesions, but the findings are not absolutely conclusive. The presence of tetanic intramuscular hemorrhages, especially in the limb muscles, in the absence of visible current marks remains an area to be investigated.
|
Freiburg Neuropathology Case Conference | af4ef79b-da2b-4ae3-9259-eced8a2a7924 | 10014779 | Pathology[mh] | A 43-year-old male patient presented with a painless 5 mm exophthalmos of the right eye (OD), which had slowly developed over the past 12 months concurrently with a right-sided ptosis. He did not report any double vision. On examination the palpebral aperture measured 5 mm OD and 10 mm for the left eye (OS). There was no swelling, redness or hyperthermia of the right eyelid. Ocular motility OD was largely unremarkable except for a slight elevation and abduction deficit on wide gaze excursion. Best-corrected visual acuity (BCVA) was 20/25 Snellen (0.8 dec) OD and 25/25 Snellen (1.0 dec) OS. Ophthalmological examination showed no evidence of optic nerve compression OD with no relative afferent pupillary defect, inconspicuous fundoscopy and optical coherence tomography of the retina and peripapillary retinal nerve fiber layer. A transnasal biopsy of the tumor previously performed at a peripheral hospital had shown unspecific results. The multidisciplinary tumor board recommended surgical excision of the tumor. Surgery was performed via a transconjunctival approach in general anesthesia using a surgical microscope as described previously . Oculopression was performed preoperatively to lower eye pressure, facilitate lateral displacement of the globe within the orbit and thereby widen the surgical corridor. The conjunctiva was incised over 270° in the medial circumference. The superior and inferior rectus muscles were tethered with 4‑0 silk retraction sutures. The medial rectus muscle was detached from the globe. Another 4‑0 silk retraction suture at the muscle’s insertion point was used to displace the globe laterally. Into the medial quadrant of the parabulbar space, two narrow spatulas and one wide orbital spatula (Fig. ) were inserted to form a triangular viewing channel. Spatula blades were inserted flat sides together and rotated into an orthogonal position once in place, thereby carefully displacing orbital structures sideways. In the depth of the parabulbar space, orbital fat was found. Its fine septa were opened with scissors at the tumor’s suspected location, revealing a homogeneous, white-colored, smoothly encapsulated tumor abutting the medial rectus muscle. A primary docking attempt with a cryostat was insufficient, so tumor grasping forceps were used for removal. Macroscopically, the tumor was a homogeneous, whitish, encapsulated, clearly circumscribed mass. Close microscopic inspection of the tumor bed confirmed complete removal and sufficient hemostasis after coagulation. The patient reported no pain or double vision 1 day after surgery. The BCVA OD was 20/50 Snellen (0.4 dec), likely due to swelling and irritation of the conjunctiva. No clinical signs of optic nerve compression were observed. After 2 days the patient was discharged from the hospital. Magnetic resonance (MR) imaging showed an intraorbital, intraconal space-occupying lesion located medially of the right optic nerve (Figs. and , arrows). T2-weighted images (Fig. a, b, arrows) showed cystic components. Note the right-sided exophthalmos on axial images (Figs. a and a, b). On T1-weighted images the lesion appeared isointense (Fig. a, arrow). After administration of gadolinium (Gd) the lesion displayed distinct and homogeneous contrast enhancement (Fig. b, c, arrows). On diffusion-weighted images (B1000) the lesion showed no signs of restricted diffusion (not shown). 
Orbital Lymphoma

Primary lymphoma of the orbit is a B-cell non-Hodgkin lymphoma and one of the most common orbital tumors, accounting for as much as half of all orbital malignancies. On imaging, orbital lymphomas usually appear as a soft tissue mass, often located in the upper outer quadrant in association with the lacrimal gland. In distinguishing lymphomas from other orbital tumors, the extraocular muscles can be encircled or displaced; however, the extraocular muscles are usually not the origin of the mass lesion. Infiltration of the optic nerve or eyeball is also rare. Similar to intracranial lymphomas, they are homogeneous in density with high cellularity, resulting in restricted diffusion on diffusion-weighted imaging (DWI), isointensity to hypointensity compared to muscle on T1-weighted sequences, and an isointense to hyperintense signal compared to muscle on T2-weighted imaging. After administration of Gd they show homogeneous enhancement. Orbital lymphoma seemed a valid differential diagnosis, as the imaging criteria matched and lymphomas account for a large proportion of malignant orbital masses.

Metastases

Orbital metastases are relatively uncommon, with breast cancer being the most common malignancy to metastasize to the orbit, followed by prostate cancer, melanoma, and lung cancer. Extraocular orbital metastases are usually unilateral and only rarely primarily involve the extraocular muscles, although secondary involvement may commonly occur. Thyroid and prostate metastases can be located in the bony margins of the orbit. Radiographic features are variable on both computed tomography (CT) and MR imaging, with morphology ranging from well-defined to diffusely infiltrating lesions. Contrast enhancement is usually present but can be very variable, and bony destruction may be present. MR imaging is superior to CT owing to its greater contrast resolution, making it invaluable in the assessment of orbital masses. Fat-suppression techniques and post-contrast T1-weighted images with thin slices and a reduced field of view are paramount for the initial assessment. Although relatively uncommon in the orbit, metastases should always be considered as a differential diagnosis.

Orbital Rhabdomyosarcoma

Rhabdomyosarcoma (RMS) is a highly malignant tumor. It has been reported from birth up to the seventh decade of life, with the majority of cases presenting in early childhood, making it the most common soft tissue sarcoma of the head and neck in childhood. Orbital RMS is usually located extraconally, or extends both intraconally and extraconally, in close proximity to the extraocular muscles. In early stages the tumor is usually well circumscribed, whereas in later stages its borders become irregular. On imaging, RMS typically presents as a homogeneous soft tissue mass isodense to muscle and may show extension into the eyelid or through bony structures. MRI is the modality of choice for evaluating soft tissue tumors and plays an important role in the initial diagnosis and the assessment of tumor response after treatment. RMS appears with low to intermediate intensity and a signal isointense to adjacent muscles on T1-weighted sequences. It generally shows vivid contrast enhancement. Because of their high cellular density, RMS usually show restricted diffusion on DWI. In terms of imaging features and tumor growth, we considered RMS to be a possible diagnosis.

Orbital Schwannoma

Schwannomas are benign nerve sheath tumors that originate from the Schwann cells of the perineurium of peripheral nerves.
They are the most common benign peripheral nerve tumors in adults but rarely occur in the orbit. Orbital schwannomas account for only 1% of all orbital tumors and commonly arise from supraorbital and supratrochlear nerves in the upper anterior orbital cavity. It is difficult to differentiate orbital schwannomas from other intraorbital tumors. They are homogeneous, elongated, and oval to spindle-shaped lesions with a density similar to extraocular muscles. General imaging features include cystic and fatty degeneration. In larger schwannomas, cystic degeneration or hemorrhage may occur; calcifications are rare. CT has less diagnostic value but may show characteristic expansion into bone. On MR imaging, orbital schwannomas are usually hypointense in T1-weighted imaging and hyperintense on T2-weighted imaging. After administration of Gd, schwannomas enhance either homogeneously or heterogeneously. In our case, we considered orbital schwannoma a valid differential diagnosis based on its location and its cystic and solid appearance with vivid Gd enhancement.

Solitary Fibrous Tumor

Solitary fibrous tumors (SFT) are rare mesenchymal neoplasms which account for less than 2% of all soft tissue tumors. They usually present as a solitary well-circumscribed mass located in the intraconal and extraconal spaces of the orbit. The lesion may show calcifications and necrosis with high vascularization. Remodeling of the adjacent bone may also be seen in larger tumors. An isointense to hypointense signal on T2-weighted images and vivid enhancement with a probable washout pattern are the main MRI characteristics of orbital SFT. Internal hemorrhage, cysts, or fibrosis are likewise best demonstrated on T2-weighted sequences. Although a rare entity, if the imaging criteria are met, SFT may be included in the differential diagnosis of orbital soft tissue masses.
In the hematoxylin-eosin (H&E) stained section of the formaldehyde-fixed and paraffin-embedded biopsy material, an isomorphic tumor with moderately increased cellularity was detected (Fig. ). The tumor cells were mostly isomorphic and spindle-shaped. An increased number of blood vessels and a collagenous stroma with streaming of cells between the collagen were observed. No mitotic figures were identified. Fresh hemorrhages were present in a few small regions. No traces of old hemorrhages were identified with the Prussian blue reaction (not shown). The tumor cells reacted positively in the immunohistochemistry for vimentin (Fig. a). In the immunohistochemistry for signal transducer and activator of transcription 6 (STAT6), a strong positive signal was observed in the nuclei of the tumor cells (Fig. b). The immunohistochemistry for inhibin, S100, pan-cytokeratin (PanCK), epithelial membrane antigen (EMA), and glucose transporter 1 (GLUT1) was negative in the tumor cells (not shown). The reaction for Ki-67 (Mib1) marked about 1% of all tumor cells (Fig. c, asterisks). Numerous blood vessels (less so in number and intensity than the tumor cells) were marked by the immunohistochemistry for CD34 (Fig. d, asterisk) and Wilm's tumor protein (WT1, not shown). Positive immunohistochemistry for CD34 is characteristic (albeit nonspecific) of solitary fibrous tumors (SFT), especially low-grade SFT. The positive reaction for STAT6 in the tumor nuclei constitutes a highly sensitive and specific marker for SFT. About 98% of SFT cases have been described to show nuclear expression of STAT6, making it the most specific immunohistochemical marker. Nuclear positivity for STAT6 thereby reliably differentiates SFT from meningioma, meningeal Ewing's sarcoma, mesenchymal chondrosarcoma, malignant peripheral nerve sheath tumor, and synovial sarcoma. To further rule out the differential diagnosis of a malignant peripheral nerve sheath tumor, immunohistochemistry for S100 was performed, which produced a negative result (not shown). Likewise, immunohistochemistry was used to check for a monophasic synovial sarcoma, again yielding a negative result (not shown). The nuclei in this sample were oval but lacked the pseudoinclusions typical of meningioma. In addition, no calcifications or psammoma bodies were observed. Both observations rule out the presence of a meningothelial neoplasm.

Solitary fibrous tumor (SFT) of the orbit

Solitary fibrous tumors are rare mesenchymal tumors that arise at a plethora of anatomic sites, especially in deep soft tissues, and particularly in the thigh, pelvic fossa, retroperitoneum, and serosal surfaces. The tumor cells carry a NAB2:STAT6 gene fusion, which is the result of a paracentric inversion involving chromosome 12q13. Orbital SFT appear mostly in middle-aged patients and are predominantly located in the superior aspect of the orbit. Previous reports have concluded that orbital SFT are mostly benign tumors with a low recurrence rate of approximately 16%. Surgical excision is the treatment of choice but can be difficult to achieve. Head and neck solitary fibrous tumors demonstrate a significantly larger local recurrence rate as compared with the rate of metastasis. They can recur many years after initial treatment, warranting long-term surveillance and follow-up to assess for tumor recurrence.
Malignant SFT is extremely rare, and it can be difficult to distinguish between benign and malignant SFT. Generally, malignant SFTs are larger than benign SFTs, and common gross features of the malignant neoplasm are hemorrhage and/or necrosis. Recent studies suggest that the presence of a telomerase reverse transcriptase (TERT) promoter mutation resulting in its overexpression may be associated with a shorter disease-free survival.
Detection of | c77e023f-0cd6-4e88-81fc-045b3e822456 | 10015049 | Microbiology[mh] | Escherichia coli is a Gram-negative, facultative, anaerobic bacterium considered to be a commensal organism in the human body . However, the E. coli strain O157:H7 is a pathogen that poses a threat to human life by causing several diseases, such as haemolytic–uraemic syndrome (HUS), which may be fatal in some cases . The primary reservoir of E. coli O157:H7 is meat, although it has also been isolated from fruits and vegetables , . The O157:H7 strain was first detected in 1982. Within only two decades (1982–2002), it has been responsible for 73,000 illnesses annually in the United States alone, causing as many as 350 outbreaks . Illnesses caused by E. coli O157:H7 have been reported in over 30 countries across six continents . Escherichia coli strains that produce Shiga toxins (Stx1 and Stx2) are called Shigatoxigenic E. coli (STEC) , while those that produce Shiga-like toxins (verotoxins) are called verotoxigenic E. coli (VTEC) . The pathogenicity of STEC is associated with virulence factors such as enterohaemolysin (encoded by hlyA ), intimin (encoded by eae ) and Stx1 and Stx2 (encoded by stx1 and stx2 ) . STEC isolates are further divided into two groups: O157 and non-O157 . O157 isolates belong to the H7 and NM serogroups, whereas non-O157 isolates belong to the O26, O45, O103, O111, O121, and O145 serogroups , . Notably, O157, O26, O103, O111 and O145 are also classified as enterohaemorrhagic E. coli (EHEC) . Interestingly, a comprehensive E. coli O157:H7 clade-typing study (clades 1–9) of 269 HUS patients and 387 asymptomatic carriers (ACs) in Japan between 1999 and 2011 reported that clades 6 and 8 were frequently found in HUS patients . Furthermore, the norV gene, which codes a nitric oxide reductase (Shiga toxin inhibitor in anaerobic conditions), was found intact in clade 1–3 isolates but not in clade 4–8 isolates . In Saudi Arabia (SA), no E. coli O157:H7 outbreak has been reported to date, and the prevalence of this pathogen remains unknown. However, it has been isolated from several local cattle farms . Reporting outbreaks in SA is challenging because of its inefficient data collection system . For this reason, since 2003, the Saudi Food and Drug Authority (SFDA) has taken control of all food safety regulations, which has also helped avoid overlapping with other authorities . As a member of the Gulf Cooperation Council (GCC), SA is required to apply the GCC Standardization Organization’s (GSO) microbiological criteria for foodstuffs [GSO/1016/2015 (E)] E: referring to the English version . Accordingly, the SFDA labs follow the GSO 2015 guideline stating that all kinds of food must be free from E. coli O157:H7. Statistical information on food imported into SA over the past decade is limited. A recent study identified the main source of imported meat only in 2017 . Approximately 80% of the food available in Saudi Arabian markets is imported, and 15.71% of it is meat-based . Therefore, the main aim of this study was to compare imported meat contaminated with E. coli O157:H7 with the total meat imported in 2017. To that end, the study evaluated the possibility of detecting E. coli O157:H7 in meat products imported into SA in 2017 using the SFDA’s monitoring system to provide foundational data for creating a database of the O157:H7 serotype.
Sample collection

The data used in this study were extracted from the laboratory information management system (LIMS) of the SFDA database, an online tool for data management operated by LabVantage Solutions, Inc. Typically, when shipments of imported consumable meat arrive at Saudi port customs, SFDA inspectors collect samples and send them to SFDA labs for analysis. Thereafter, the inspected samples are referred for E. coli O157:H7 detection. The data used in this study pertained to analyses of raw (not ready-to-eat, 'RTE') products only. Sample-specific details can be found in Supplementary Tables 1, 2, 3 and 4.

E. coli O157:H7 detection

Enrichment. Samples weighing 25 g selected for enrichment were placed in sterilised sample bags. They were then homogenised with 225 mL of modified tryptone soya broth (mTSB) supplemented with novobiocin to obtain a mTSB + sample ratio of 1/10 (mass to volume; see the worked sketch after this section). The sample bags were massaged by hand and then incubated at 41.5 °C for 12–18 h. Escherichia coli O157 strain ATCC 43895 and a blank were included as positive and negative controls, respectively. After incubation, the samples were subjected to immunomagnetic separation. Subsequently, 50 µL of each sample was streaked out on pre-dried cefixime tellurite sorbitol MacConkey (CT-SMAC) agar plates using sterile loops to obtain many well-isolated colonies and incubated at 37 °C for 18–24 h.

Colony selection. After incubation, at least five presumptive colonies were selected randomly from each plate and placed into polymerase chain reaction (PCR) tubes containing 10 µL of distilled water (dH2O) as a preparation step for DNA extraction.

DNA extraction. The samples were prepared using a PrepMan™ Ultra Sample Preparation Reagent Kit (lot number 1809191) according to the manufacturer's protocol.

PCR detection. Real-time PCR (RT-PCR) was performed to amplify the O157:H7-specific target DNA sequences using a MicroSEQ™ E. coli O157:H7 Detection Kit (lot number 1804034) according to the manufacturer's protocol. Non-pathogenic E. coli ATCC 25922, the non-O157 serogroups O111 and O26, and the Salmonella serovars Enteritidis and Arizonae were included as negative controls. A 7500 Fast System and Sequence Detection System (SDS) software v1.4.2 were used for the analysis. Each sample was analysed in triplicate. The thermal cycling conditions are displayed in Supplementary Table 5. The International Organization for Standardization (ISO) standards 17025 (2017) and 13136 (2012) were followed in the SFDA labs and for the isolation of E. coli O157:H7, respectively.

Statistical analysis

Statistical analyses were performed using Microsoft Office Excel Professional Plus 2019. For pairwise comparisons, the t-test was used to assess differences in the prevalence of E. coli O157:H7 between samples. Values of P < 0.05 were considered statistically significant.
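The 1/10 (mass/volume) enrichment ratio reduces to simple arithmetic: for a sample of mass m grams, the broth volume is 9·m millilitres, so a 25 g sample takes 225 mL of mTSB. A minimal Python sketch, with a function name of our own choosing:

```python
def mtsb_volume_ml(sample_mass_g: float, ratio: int = 10) -> float:
    """mTSB volume (mL) so that sample : (sample + broth) = 1 : ratio (m/v).

    Treats 1 mL of broth as 1 g, as implied by the mass-to-volume ratio.
    """
    return sample_mass_g * (ratio - 1)

print(mtsb_volume_ml(25))  # 225.0 -> matches the 25 g + 225 mL protocol
```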
Escherichia coli O157:H7 strains were detected at varying frequencies in imported beef, sheep and chicken meat. The O157:H7 strain was most prevalent in chicken (6.07%) and beef (5.90%) and least prevalent in sheep (2.00%), with a significant difference (P < 0.05; Table ). Regarding chicken, the greatest proportion of samples contaminated with E. coli O157:H7 was imported from Brazil (6.96%), followed by Ukraine (3.57%), while no contaminated samples were imported from Jordan, India, or Tunisia, with a significant difference (P < 0.05). Regarding beef, the greatest proportion of contaminated samples was imported from India (6.80%), followed by Brazil (2.20%). Finally, all sheep meat samples contaminated with E. coli O157:H7 were imported from India (2.1%; Table ). The highest frequency of E. coli O157:H7 contamination was found in products imported from Indian companies (30 of 476 samples: eight from company A, five from company B, four from company C, three from company D, two from company E, two from company F and six from other companies; Table , Supplementary Table , and ). More beef than sheep meat samples imported from India were screened, given the high demand for the former in SA in 2017; the prevalence of E. coli O157:H7 was higher in the beef samples than in the sheep meat samples (6.80% and 2.13%, respectively). Products imported from Brazilian companies were also frequently contaminated (18 of 321 samples: four from company G, two from company H, two from company I, two from company K and eight from other companies; Table , Supplementary Table , and ). In this case, however, the prevalence of E. coli O157:H7 was higher in chicken samples than in beef samples (6.96% and 2.20%, respectively). To ensure anonymity, the companies' names have been replaced with the letters A to K.
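From the counts above, the overall prevalences can be recomputed directly, e.g., 30/476 ≈ 6.30% for India and 18/321 ≈ 5.61% for Brazil. The authors report using a t-test for pairwise comparisons; the sketch below instead applies a two-proportion z-test, a standard alternative for comparing two prevalences, to these overall counts. It is illustrative only and not a reanalysis of the study data.

```python
from math import sqrt, erf

def prevalence_pct(positives: int, total: int) -> float:
    """Prevalence in percent."""
    return 100.0 * positives / total

def two_proportion_z(x1: int, n1: int, x2: int, n2: int):
    """Two-sided two-proportion z-test with a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 1 - erf(abs(z) / sqrt(2))  # equals 2 * (1 - Phi(|z|))
    return z, p_value

# Overall counts reported above: India 30/476, Brazil 18/321
print(f"India:  {prevalence_pct(30, 476):.2f}%")   # ~6.30%
print(f"Brazil: {prevalence_pct(18, 321):.2f}%")   # ~5.61%
z, p = two_proportion_z(30, 476, 18, 321)
print(f"z = {z:.2f}, p = {p:.3f}")
```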
Contaminated raw meat is the source of 90% of foodborne infections . Thirty-one pathogens, including E. coli O157:H7, were responsible for 10 million annual episodes of foodborne illnesses in the United States . In the present study, samples of imported raw meat were obtained at the ports of SA, and the prevalence of E. coli O157:H7 in these samples was confirmed (Table ). Meat products imported from India and Brazil were the most frequently contaminated (Table ). The prevalence of E. coli O157:H7 was the highest in raw meat products imported from India, posing a threat to public health in the Kingdom of Saudi Arabia (Table ). According to Shinde et al. (2020), E. coli O157:H7 was frequently isolated from healthy Indian cattle on both organised and non-organised farms in and around the Pune District in India during 2015. This can be explained by the fact that new generations of cattle may carry the pathogen without presenting any symptoms, thus appearing as healthy livestock; however, the consumption of meat from such asymptomatic carriers of E. coli O157:H7 may affect humans, representing a severe public health concern. Furthermore, subsequent studies in the same region revealed the presence of E. coli O157:H7 isolates resistant to a number of common antibiotics used for livestock animals against this pathogen, including cefotaxime, streptomycin, penicillin G, kanamycin, ampicillin, tetracycline, gentamicin and piperacillin. These findings, in addition to our results, emphasise the need for further assessment of imported meat, specifically from India, to ensure public health safety. In another recent study in China, clinical isolates of E. coli exhibited high resistance to conventional antibiotics for livestock, including sulfamethoxazole, trimethoprim/sulfamethoxazole, tetracycline, nalidixic acid and ampicillin . Amongst samples of meat imported from Brazil, E. coli O157:H7 was detected at different frequencies in products from several companies (Table ). The presence of E. coli O157:H7 in samples from only specific companies (G, H, I, J, K and others) indicates internal contamination through air during rearing at the livestock farms , slaughter , or processing (Fig. ). According to Santos et al. (2018), the prevalence of STEC in Brazilian food products was approximately 9.50%, which was primarily attributed to the development of multi-resistance to antibiotics in these strains. Notably, Brazil is the second largest exporter and the third major producer of beef worldwide . The detection of E. coli O157:H7 in samples of meat imported from one company each in Ukraine and the UAE also indicates unhygienic handling that led to contamination (Table ), highlighting the need for the revision of processing and packaging steps in these regions . Of note, the present report only includes results from products that have undergone E. coli O157:H7 testing at the port of SA. Many shipments may have been excluded from the examination for approval, with owners asked only to produce a list of essential documents . In addition, to import food products into SA, the SFDA mandates a registration certificate authorised by the Saudi health ministry, an industry certificate authorised by the commerce ministry and a quality certificate (e.g. ISO 9001 or 22000, Good Manufacturing Practice and Hazard Analysis Critical Control Point) .
Therefore, to ensure public safety, the SFDA has announced a list of countries from where the import of food into SA is prohibited (available at https://www.sfda.gov.sa/en/list_countries ).
The presence of E. coli O157:H7 in samples of imported raw meat highlights the need for more regular surveillance at the borders of SA before the products are made available on the market for public consumption. Our results underscore the necessity of more stringent control protocols for the approval of imported food products, particularly from India and Brazil, which are the major suppliers of meat to SA. Moreover, the detected E. coli O157:H7 isolates should be tested against antibiotics that are commonly used to treat livestock. For future investigations, and as a complementary method, we suggest tracking the sources of E. coli O157 contamination by clade typing .
Performance of infectious diseases specialists, hospitalists, and other internal medicine physicians in antimicrobial case-based scenarios: Potential impact of antimicrobial stewardship programs at 16 Veterans’ Affairs medical centers

The Cognitive Support Informatics for Antimicrobial Stewardship project enrolled 8 university-affiliated VA sites across the nation to participate in implementation of electronic antimicrobial dashboards that allow inter- and intrafacility comparisons of antimicrobial utilization across common inpatient conditions (eg, skin and soft-tissue infection, pneumonia, and urinary tract infection) over the duration of the typical hospital admission. To assess facility-level physician knowledge of appropriate antibiotic use, we administered an electronic survey via REDCap ( www.project-redcap.org ) to physicians who provide inpatient medical services at all 8 intervention sites along with 8 control sites, matched by complexity and geographic location, during October–December 2017. The full survey instrument is included in the Supplementary Materials (online). We contacted medical leadership at each participating facility to provide rosters of physicians who had provided inpatient acute general medicine services during the prior year. We invited those physicians to participate in the survey anonymously via e-mail. Over the course of 30 days, we sent 1 prenotification e-mail, 1 invitation with a survey link, and up to 8 reminder e-mails to initial nonrespondents. No incentives were provided for participation. The first portion of the survey collected information regarding physicians’ VA appointments, practice characteristics, attitudes toward antimicrobial use, and antibiotic prescribing practices. Questions about agreement with certain statements used Likert scales that were converted into numerical scores for analysis (1, strongly disagree; 2, disagree; 3, neutral; 4, agree; and 5, strongly agree). The second part of the survey explored how respondents would manage 4 clinical scenarios: cellulitis, community-acquired pneumonia (CAP), non–catheter-associated asymptomatic bacteriuria (NC-ASB), and catheter-associated asymptomatic bacteriuria (C-ASB). A final part of the survey addressed the availability and use of antibiotic prescribing resources. For each scenario, responses were scored by assigning +1 for answers most concordant with Infectious Diseases Society of America guidelines at the time , – (ie, “correct”), 0 for less concordant but acceptable answers (or no answer given), and −1 for guideline-discordant answers (ie, “incorrect”). Guidelines were interpreted with emphasis on antimicrobial stewardship and practicality. One generalist (P.A.G.) and 2 infectious diseases physicians (C.J.G. and M.B.G.) collectively assigned a value to each answer to each subquestion a priori; free-text responses were analyzed post hoc, blinded to the type of practitioner giving the answer. For questions that allowed for multiple answers, 0 points were assigned when a less guideline-concordant answer was combined with a guideline-concordant answer, and −1 point was assigned when a guideline-discordant answer was combined with either a guideline-concordant or less guideline-concordant answer. Scores were then compiled across all questions within each scenario and were normalized from 100% concordant (all “correct”) to 100% discordant (all “incorrect”).
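To make the scoring scheme concrete, the sketch below re-expresses it in Python. The function names and the example answer sets are ours for illustration; only the scoring rules themselves come from the text above.

```python
# Minimal sketch of the scenario scoring rules described above.
# Answer values: +1 = guideline-concordant, 0 = less concordant but
# acceptable (or unanswered), -1 = guideline-discordant.

def score_question(selected_values):
    """Score one question; handles single- and multiple-answer items."""
    if not selected_values:
        return 0                 # no answer given
    if -1 in selected_values:
        return -1                # any discordant choice dominates
    if 0 in selected_values:
        return 0                 # concordant mixed with less concordant
    return 1                     # all selections concordant

def normalize_scenario(question_scores):
    """Compile scores: +100% = all correct, -100% = all incorrect."""
    return 100.0 * sum(question_scores) / len(question_scores)

# Hypothetical scenario with four questions:
answers = [[1], [1], [1, 0], [-1]]
print(normalize_scenario([score_question(a) for a in answers]))  # 25.0 (% concordant)
```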
Mean scores were calculated across respondents who self-identified as belonging to 1 of 3 categories: infectious diseases (ID) specialists, hospitalists, and other internists (general internal medicine and non-ID internal medicine subspecialists). For each question within a scenario, we tabulated percentages of responses based on the total number of survey participants in each physician category rather than the number in each category that responded to the individual question. Statistical significance of differences between groups was calculated using the Kruskal-Wallis rank-sum test, Pearson χ2 test, or the F test where appropriate. This study was approved by the Veterans’ Health Administration Central Institutional Review Board.
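As a brief illustration of the between-group comparison, the following sketch applies the Kruskal-Wallis rank-sum test to hypothetical per-respondent scenario scores; the values are placeholders, not survey data.

```python
# Sketch: Kruskal-Wallis rank-sum test across the three physician groups.
# Scores are hypothetical normalized scenario scores, not study data.
from scipy.stats import kruskal

id_scores = [76, 80, 70, 72]
hospitalist_scores = [58, 60, 55, 62]
other_internist_scores = [52, 50, 56, 48]

h_stat, p_value = kruskal(id_scores, hospitalist_scores, other_internist_scores)
print(f"H = {h_stat:.2f}, P = {p_value:.4f}")
```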
Practice characteristics and antimicrobial attitudes, prescribing practices, and resource utilization

In total, 467 physicians who provided service on inpatient wards from all sites were contacted regarding participation in the survey. Among them, 159 answered at least 1 question (19 ID physicians, 71 hospitalists, and 69 other internists) and 140 respondents answered up to the first scenario (30.4% overall response rate): 19 ID physicians, 62 hospitalists, 58 other internists, and 1 respondent who did not identify a specialty. The respondent who did not identify a specialty was excluded, leaving 139 respondents for analysis. Of the 58 non-ID, nonhospitalist “other” internist respondents, 43 (74.1%) identified as generalists, 3 as rheumatologists, 3 as nephrologists, 2 as geriatricians, 2 as endocrinologists, 2 as pulmonologists, 2 as oncologists, and 1 as an endocrinologist and rheumatologist. No remarkable differences were identified between physician characteristics at intervention and control sites for any portion of the survey (data not shown). Practice characteristics and attitudes toward antimicrobial use are shown in Table . Significant differences were detected in proportion of time in clinical care ( P = .023) and in inpatient care ( P < .001), with hospitalists having the highest proportions. Attitudes toward antimicrobial use were largely similar across the 3 groups, though ID physicians more frequently felt that antibiotics were overused by clinicians at their facility ( P = .002) and were less likely to feel that the harm of antibiotic overuse in livestock is exaggerated ( P = .001). Of 139 physician respondents, 94 (67.6%) felt that antimicrobial stewardship programs were of at least moderate benefit to patient care at their institution, and 108 (77.7%) were satisfied or very satisfied with the assistance they had received from their facility regarding antibiotic prescribing over the prior year. Answers regarding antibiotic prescribing practices and resource utilization among provider groups are shown in Supplementary Table 1 (online). ID physicians were significantly more confident of their optimal use of antibiotics in the inpatient setting ( P < .001) and were less likely to believe they may be overprescribing antibiotics in the inpatient setting ( P = .019). ID physicians relied more on antibiograms ( P = .017) than hospitalists and other internists in making antibiotic prescribing decisions. Numerically, they tended to rely less on electronic health record (EHR) templates ( P = .144) and local infectious diseases online resources ( P = .181) than the other 2 groups, but these differences were not statistically significant. Hospitalists and other internists frequently noted that they would find feedback on prescribing practices to be extremely or very helpful (82.3% of hospitalists, 72.4% of other internists, and 52.6% of ID physicians; P = .033), and hospitalists frequently noted that additional education or guidance on antibiotic prescribing would be extremely or very helpful (74.2% of hospitalists, 46.6% of other internists, and 26.3% of ID physicians; P < .001). Although non-ID respondents infrequently noted that their facility provided any new general guidance for antibiotic prescribing for skin and soft-tissue infection, pneumonia, and urinary tract infection across different time points of a typical hospital course, they frequently noted that the guidelines, when present, did influence their antibiotic prescribing practices (Supplementary Table 2 online).
Among non-ID physicians, guidance regarding tailoring antibiotic courses after 3 days affected antibiotic prescribing practices for pneumonia (97.1%) significantly more frequently than guidance regarding skin and soft-tissue infection (80%) or urinary tract infection (90%) ( P = .0079). No significant differences across these conditions were detected for guidance regarding the initial choice and completion of an antibiotic course.

Clinical scenario performance

Clinical scenario scores are summarized in Table , with full descriptions of each scenario and all responses listed in Supplementary Table 3 (online). Scenario 1 describes a case of simple spreading cellulitis of the lower extremity with blood cultures on admission that turned positive for group A Streptococcus . We detected a significant difference in scores for this scenario (ID physicians, 76% concordant; hospitalists, 58% concordant; other internists, 52% concordant; P = .0087), driven mostly by differences in appropriately classifying the clinical condition as cellulitis alone ( P = .019). Scenario 2 describes a case of community-acquired pneumonia in which high-quality respiratory cultures grow Streptococcus pneumoniae . Scores were numerically but not significantly different across specialties for this scenario (ID physicians, 75% concordant; hospitalists, 60% concordant; other internists, 56% concordant; P = .0914), though ID physicians were significantly more likely to select appropriate oral antimicrobial therapy on day 3 ( P = .006). Scenarios 3 and 4 presented cases of asymptomatic bacteriuria (in a noncatheterized patient in scenario 3 and a catheterized patient in scenario 4), and 2 questions were given for each scenario: (1) What is the clinical presentation? The guideline-concordant answer was asymptomatic bacteriuria. (2) What is the antibiotic treatment? The guideline-concordant answer was none. All specialties (including infectious diseases) did poorly on both scenarios: on scenario 3, ID physicians answered 65% concordant, hospitalists 55% concordant, and other internists 40% concordant ( P = .322). For scenario 4, ID physicians answered 27% concordantly, but hospitalists actually had mean negative scores of 8% (consistent with guideline discordance) and other internists had mean negative scores of 13% ( P = .12). Other internists were more likely to incorrectly select an antibiotic in scenario 3 ( P = .034). Physicians were asked after each scenario what resources they would most likely use in management of the case at hand. General medical resources (eg, UpToDate or a medical textbook) were most frequently selected, though prespecified guidance from the facility and input from an inpatient ward pharmacist were also commonly selected (Supplementary Table 4 online). After each scenario, physicians were asked about their confidence in making antibiotic prescribing decisions for the patient in the scenario without the use of those resources. ID physicians were significantly more confident in all scenarios, particularly for scenario 1 (the cellulitis scenario; 84.2% “very confident” vs 27.4% for hospitalists and 39.7% for other internists; P < .001) (Supplementary Table 5 online), but confidence did not correlate with performance (data not shown).
Finally, we examined whether non–ID physician awareness of new guidance within the prior 12 months from their facilities’ ID division or antimicrobial stewardship team on the initial choice, tailoring, and completion of an antibiotic course was associated with their performance on the clinical scenarios. Although no significant associations were detected for hospitalists, other internists’ overall awareness of this guidance was associated with higher performance across all scenarios ( P = .011), driven mostly by awareness of guidance regarding management of pneumonia ( P = .001, data not shown).
We detected significant differences in survey responses between ID physicians, hospitalists, and generalists on how to manage infectious conditions that are commonly seen in the practice of inpatient internal medicine and are frequently targets for antimicrobial stewardship interventions. Most notably, the low overall scores in management of asymptomatic bacteriuria (both non–catheter-associated and catheter-associated) point to the difficulties inherent in recognizing and/or avoiding antimicrobial treatment for this situation and the need for education and interventions in this domain that target all physicians who practice inpatient internal medicine, even ID physicians. Implementation of algorithm-based peer feedback has been shown to be successful in this regard. A knowledge gap between ID physicians and other specialties on the management of cellulitis also points to opportunities for developing stewardship interventions targeted at non-ID physicians and focusing on the management of skin and soft-tissue disease. All specialties scored highest on the community-acquired pneumonia scenario, but opportunities exist for improvement in this domain as well. As with cellulitis, de-escalation of antimicrobial therapy when culture data return and the patient is improved clinically was a particular weak point, indicating an opportunity for targeted interventions. Specialties likely differ in terms of how they can best be targeted by stewardship interventions. A recent study of inpatient services at an academic medical center demonstrated that generalist-led services prescribed more broad-spectrum therapy than hospitalist-led services. In our survey, hospitalists and other internists tended to rely less on antibiograms than ID physicians in their clinical practice. Although hospitalists and other internists tended to rely more on EHR templates and local infectious diseases online resources, overall reliance on these modalities was low. This finding illustrates an antimicrobial stewardship principle that occurs frequently in the literature: educational or informational resources make an impact when accompanied by patient-level antimicrobial stewardship team intervention. – More involvement of antimicrobial stewardship teams in provider-facing activities, such as audit and feedback and in-person presence on rounds (“handshake stewardship”), may be particularly effective. , – Physicians in our survey indicated that online general medical resources (eg, UpToDate, Wolters Kluwer) are the most frequently referenced when making antibiotic prescribing decisions. Antimicrobial stewards should routinely ensure that these resources reinforce antimicrobial prescribing principles at their facilities. Hospitalists and other internists particularly noted a desire for more feedback on prescribing practices, signifying awareness of their knowledge gaps and interest in improving upon them. Other internists seemed particularly influenced by guidance on antimicrobial prescription for pneumonia, particularly when tailoring therapy around hospital day 3. Our study had several limitations. A low number of ID physician respondents significantly limited our ability to make inferences about the ID community at large. The overall response rate was also relatively low. The survey was lengthy; not all respondents answered all questions, and there may be a bias toward those who had more available time, altruism, or interest in the subject. 
Although clinical scenarios can be effective in demonstrating physician proficiency independent of patient case mix and other factors that may influence patient care-related metrics, our scenarios may have been worded in a way that was less clear or not fully representative of real-life circumstances. For example, we noted in the community-acquired pneumonia case that the patient presented “from home” but did not give details that further suggested community versus healthcare-associated acquisition. Factors such as this may have influenced respondents to invoke underlying biases and experiences that may not truly reflect antibiotic prescribing expertise. Finally, the small number of questions pertaining to management of asymptomatic bacteriuria increased the variance in our estimate of provider understanding of its management. However, the overall detailed information we received on antimicrobial prescribing practices should serve as a useful roadmap for stewards who are trying to balance the attitude, knowledge, and practice differences of the practitioners at their facility in planning antimicrobial stewardship interventions.
Heidenhain Variant of Sporadic Creutzfeldt-Jakob Disease with a Variety of Visual Symptoms: A Case Report with Autopsy Study

Prion diseases are rapidly progressive and eventually fatal neurodegenerative disorders caused by accumulation of the transmissible scrapie form of the prion protein (PrP Sc ) following conversion of the normal cellular form of the prion protein (PrP) to PrP Sc in the central nervous system . Prion diseases are classified into 3 types: hereditary types caused by mutations in the PrP gene ( PRNP ); acquired types caused by prion infection, such as iatrogenic Creutzfeldt-Jakob disease (CJD), kuru, or variant CJD; and sporadic types caused by unknown etiologies . Whereas 85% of prion diseases worldwide are of the sporadic type, 75.5% are of the sporadic type in Japan [ – ]. Furthermore, most cases of sporadic prion diseases are sporadic CJD (sCJD), which has been seen in Europe, North America, Central America, South America, Africa, Asia, and Australasia, with a global incidence of 1 to 2 per million people per year [ , , ]. In Japan, the incidence is reportedly 0.55 to 0.66 per million people per year . Sporadic CJD is classified into 6 categories, divided by western blot for PrP Sc into 2 categories (type 1 and type 2) and by the combinations of methionine and valine in the PRNP polymorphism at codon 129 into 3 categories (MM, MV, and VV) . The mean survival time of patients with sCJD in Japan is reportedly 15.7 months after presentation with rapidly progressive dementia (RPD), myoclonus, or akinetic mutism . The Heidenhain variant of sCJD is characterized by a variety of visual symptoms including blurred vision, deterioration of visual acuity, disturbance in color vision, and restricted vision without any ocular disease, with a short mean survival time of only 5.7 months . In addition, its incidence is extremely low, accounting for 3.7% to 4.9% of all cases of sCJD . The rapidly progressive symptoms and the extremely low incidence lead to low suspicion of the Heidenhain variant of sCJD in its early stage. Therefore, cerebrospinal fluid (CSF) examinations, including measurement of tau protein, measurement of 14-3-3 protein, and real-time quaking-induced conversion (RTQuIC), tend not to be performed at an appropriate time. In such situations, autopsy regrettably becomes the only option to obtain the correct diagnosis . We herein report an autopsy case of the Heidenhain variant of sCJD with both MM type 1 and MM type 2 cortical forms in a patient who developed a variety of visual symptoms at onset, progressing to akinetic mutism and eventually death 4 months later. We were able to examine tau protein, 14-3-3 protein, and RTQuIC in the CSF while the patient was still alive, leading to the correct diagnosis after death despite the absence of periodic synchronous discharges (PSD) on electroencephalography or pathognomonic findings on cranial magnetic resonance imaging (MRI).
A 72-year-old woman who had no history of transplantation of the dura mater or cornea and who had enough independence in her activities of daily living to drive a car without glasses presented with a 3-month history of photophobia and blurring vision in both eyes. Visual acuity tests performed by her local ophthalmologist had revealed mild cataract and deterioration of unaided vision to 20/63 in the right eye and 20/32 in the left eye. One month later, she developed diplopia, left homonymous hemianopia, and restricted downward movement of her left eye. However, her pupillary light reflex and the findings of ophthalmoscopy and optical coherence tomography angiography were normal. Her visual acuity continued to deteriorate to 20/100 in both eyes 2 months later and 20/2000 at 7 days before admission. She was no longer able to drive a car and required assistance even when transferring to and from a chair, standing up, or maintaining the standing position. Because she also exhibited disorientation with respect to time and place, impairment of word repetition, and difficulty performing simple calculation, she was admitted to our hospital. On admission, her body temperature was 37.1°C, blood pressure was 159/89 mmHg, pulse rate was 108 beats/min, respiratory rate was 28 breaths/min, and oxygen saturation was 94% on room air. She was emaciated, with a body weight of 32.4 kg and body mass index of 14.4 kg/m2. Neurological examination revealed cervical dystonia that caused her neck to remain bent backward, rigidity of the cervical muscles, and deep-tendon hyperreflexia in her right upper and lower extremities with flexor plantar response on toe examination; she had no paralysis or rigidity of her 4 extremities. The sizes of both pupils (3 mm) were normal, and the light reflexes were intact. Her visual acuity and eye movements could not be evaluated because of her disturbance of consciousness. Laboratory findings on admission are presented in . Only aspartate aminotransferase concentration and lactate dehydrogenase concentration were elevated. There were no findings indicating the presence of syphilis or other infectious diseases, autoimmune diseases, or abnormal endocrine functions including thyroid or adrenocortical function. Because of the patient’s rapidly progressive cognitive dysfunction and deterioration of activities of daily living, sCJD was suspected. Cranial MRI (MAGNETOM 1.5T Avanto Fit; Siemens, Erlangen, Germany) on admission, 3 months after onset, using a fluid-attenuated inversion recovery (FLAIR) sequence and diffusion-weighted imaging showed no abnormality ( ). Electroencephalography (NIHON KODEN, Tokyo, Japan) on the second hospital day showed generalized slow waves without PSD, consisting mainly of delta waves ( ). CSF examinations on the second and sixth hospital days showed a normal cell count, protein concentration, and CSF/plasma glucose ratio ( ). Residual CSF taken on the sixth hospital day was sent to the laboratory of Nagasaki University, Nagasaki, Japan, for examination of tau protein, 14-3-3 protein, and RTQuIC. Cheyne-Stokes respiration and decorticate rigidity were observed 2 weeks after admission, followed by myoclonus 3 weeks after admission. The patient then rapidly developed akinetic mutism and died 4 weeks after admission (4 months after onset). After her death, the CSF analysis revealed the presence of tau protein and 14-3-3 protein and a positive result of RTQuIC.
According to these findings and the clinical symptoms, including a variety of visual symptoms at the onset with rapidly progressive cognitive dysfunction, the probable diagnosis was determined to be the Heidenhain variant of sCJD. A pathological autopsy revealed a brain weight of 1400 g ( ), enlarged cerebral sulci in the bilateral frontal lobes, and focal atrophy and thinning of the cerebral cortex at the medial basal aspect in the right occipital lobe ( ). Hematoxylin and eosin staining of the medial basal aspect of the right occipital lobe revealed spongiform changes of the cortex, neuronal loss, and neuropil rarefaction ( ). Furthermore, in the same area, immunostaining for PrP with 3F4 antibody showed diffuse synaptic-type deposits of abnormal PrP ( ), and immunohistochemical staining for glial fibrillary acidic protein showed hypertrophic astrocytes ( ). However, these findings were unclear in the putamen and thalamus. The PRNP polymorphism at codon 129 revealed the genotype of methionine homozygosity. Western blot using 3F4 antibody showed the presence of protease-resistant PrP ( ). Furthermore, western blot using polyclonal antibodies specific for type 1 and type 2 PrP Sc (Tohoku1 and Tohoku2; Tohoku University Graduate School of Medicine, Sendai, Japan) showed bands compatible with both antibodies ( ). These findings of western blot and the findings of immunostaining mainly in the cortex led to the diagnosis of sCJD with both MM type 1 and MM type 2 cortical forms.
Pathological findings obtained by brain biopsy or brain autopsy are usually required for a definitive diagnosis of prion diseases, including sCJD . Therefore, pathological examiners must take appropriate preventive measures against infection by PrP Sc by recognizing the possibility of sCJD before performing such examinations . Electroencephalography and cranial MRI can be helpful for suspecting or diagnosing sCJD without brain biopsy or autopsy . The presence of PSD on electroencephalography is a representative finding observed in 67% to 94% of patients with sCJD [ – ]. It is also observed in 77.8% of patients with the Heidenhain variant of sCJD . As for the phenotype of sCJD, such typical findings on electroencephalography are reportedly much more common in patients with MM1 than in those with MM2-cortical, MM2-thalamic, MV2, and VV2 subtypes . Furthermore, the sensitivity and specificity of cranial MRI for the diagnosis of sCJD are reportedly 92.3% and 93.8%, respectively . Hyperintensity of the cerebral cortex, basal ganglia, and thalamus on diffusion-weighted imaging and FLAIR sequences is pathognomonic for sCJD, which can be observed in patients with all subtypes except the MM2-thalamic subtype . In addition, a study showed that 55.5% of patients with the Heidenhain variant of sCJD had brain atrophy and white matter degeneration in the occipital lobe, parietal lobe, and basal ganglia . However, only 70% of patients with the Heidenhain variant of sCJD show the typical findings on cranial MRI . Our patient with the Heidenhain variant of sCJD developed a variety of visual symptoms consisting mainly of deterioration of visual acuity without any abnormal findings of her eyeballs, followed by rapidly progressive cognitive dysfunction, consciousness disturbance, akinetic mutism, and eventual death 4 months after onset. Diagnosis of the Heidenhain variant of sCJD was challenging while she was alive because of her rapidly progressive clinical course and the lack of characteristic findings on electroencephalography, especially PSD, or typical findings on cranial MRI, even in the presence of characteristic symptoms of sCJD (including progressive cognitive dysfunction, myoclonus, and akinetic mutism) . The lack of typical findings on electroencephalography and cranial MRI in this case may have been because the sCJD was of the Heidenhain variant containing the MM2-cortical form. It is essential to perform examinations for tau protein, 14-3-3 protein, and RTQuIC in the CSF at the earliest stage possible to diagnose the Heidenhain variant of sCJD, especially in a patient with normal electroencephalography or cranial MRI findings, as in the present case. Various positive and negative predictive values of CSF 14-3-3 protein for sCJD have been reported: 31% and <10%, 35% and 99%, 69% and 88%, 72% and 86%, 76% and 97%, 81% and 68%, 87% and 88%, 87% and 94%, 92% and 98%, and 96% and 76%, respectively . In one study, tau protein and 14-3-3 protein in the CSF were detected in 45.5% and 55.5% of patients with the Heidenhain variant of sCJD, respectively . The World Health Organization diagnostic criteria also indicate that the detection of 14-3-3 protein in CSF at less than 2 years from onset indicates “probable” sCJD . Furthermore, RTQuIC detects abnormal PrP in CSF with a sensitivity of >70% . However, the rapidly progressive clinical course may make it impossible to perform such tests in a timely manner .
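The wide range of reported predictive values largely reflects differences in disease prevalence across the studied cohorts. A minimal sketch of the underlying Bayes calculation illustrates this; the sensitivity, specificity, and pretest probabilities below are assumed round numbers for illustration, not values from any cited study.

```python
# Sketch: how PPV/NPV of a CSF marker shift with pretest probability.
# Sensitivity and specificity here are assumed illustrative values.
def predictive_values(sensitivity, specificity, prevalence):
    tp = sensitivity * prevalence               # true positives
    fp = (1 - specificity) * (1 - prevalence)   # false positives
    fn = (1 - sensitivity) * prevalence         # false negatives
    tn = specificity * (1 - prevalence)         # true negatives
    return tp / (tp + fp), tn / (tn + fn)       # (PPV, NPV)

for pretest in (0.05, 0.30, 0.60):              # rising clinical suspicion of sCJD
    ppv, npv = predictive_values(0.85, 0.80, pretest)
    print(f"pretest {pretest:.0%}: PPV {ppv:.0%}, NPV {npv:.0%}")
```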
To avoid failing to perform them at an appropriate time, it is important to know the typical symptoms of the Heidenhain variant of sCJD and to suspect it at an earlier stage. The main features of the Heidenhain variant of sCJD are a variety of rapidly progressive visual symptoms, as follows: deterioration of visual acuity (27.8–36.4%), blurring vision (27.3–38.9%), and restricted vision (4.5–38.9%) . Our patient initially developed photophobia and blurring vision 3 months before admission, followed by diplopia and homonymous hemianopia 1 month later; she eventually presented with visual acuity of light perception on admission. The clues to the diagnosis of probable Heidenhain variant of sCJD in this case were the detection of tau protein and 14-3-3 protein and the positive RTQuIC in the CSF examination performed on the sixth hospital day. Knowing these results while the patient is alive could help medical personnel explain the significance of a definitive diagnosis by autopsy to the patient’s family and take precautions against the serious risk of prion infection. In addition to the above-mentioned visual symptoms, RPD is a major symptom of the Heidenhain variant of sCJD. RPD is mainly caused by central nervous system infections, autoimmune encephalitis or encephalopathy, neurodegenerative disorders, malignant tumors, and cerebrovascular disease; a much less common cause is prion diseases . Prompt diagnosis and treatment of RPD are mandatory because causative diseases can be treatable, and they can be serious public health threats due to their infectivity . While most RPDs commonly progress within 1 or 2 years, RPD due to prion diseases can progress more rapidly, even within several days or weeks . Among 22 autopsy cases of patients who presented with RPD and died within 4 years, the largest number of cases [8 (36.3%)] were caused by sCJD. The survival period of those 8 patients was about 1 year, which was shorter than that of the remaining 14 patients with RPD of other causes . Recently, a three-step diagnostic procedure for RPD has also been reported: patient history and clinical examination; standard technical procedures (blood tests, imaging such as CT or MRI, CSF examination, and electroencephalography); and advanced diagnostics (biomaterials, imaging such as positron emission tomography, anti-inflammatory therapy, and brain or leptomeningeal biopsy) . In a patient with rare presentations of an uncommon disease, such as in the present case, this procedure may be useful to rule out other more common diseases. Our patient presented with cognitive dysfunction 7 days before admission and died 1 month after admission. Such extremely rapidly progressive cognitive dysfunction is a useful sign suggesting a possible diagnosis of prion diseases such as sCJD.
When a patient presents with various rapidly progressive visual symptoms, it is essential to suspect the Heidenhain variant of sCJD and examine tau protein, 14-3-3 protein, and RTQuIC in the CSF as soon as possible, even without typical findings on electroencephalography or cranial MRI.
Developing a Novel Case-Based Gastroenterology/Hepatology Online Resource for Enhanced Education During and After the COVID-19 Pandemic

Free open access medical education (FOAMed), accessible via computers and mobile devices, was becoming increasingly preferred to traditional resources by trainees even before the COVID-19 pandemic . Digital medical education platforms offer exciting and innovative modalities to augment the quality and variety of didactic resources available to trainees . E-learning provides the advantages of individualized, self-directed, independent, and on-demand education, and has been recognized as an efficient tool for opportunistic learning among residents . In certain interactive forms, e-resources may additionally be more engaging and enjoyable than traditional formats , while allowing for greater global outreach and more rapid dissemination of peer-reviewed medical knowledge. The COVID-19 pandemic reshaped the delivery of medical education , necessitating new modes of online instruction that support asynchronous learning , while retaining the interactive nature of in-person learning. Gastroenterology (GI) and hepatology are visual fields that require specialists to be proficient in recognizing pathology through endoscopic and radiographic information. The ability to apply fundamental knowledge of pathophysiology to clinical contexts and to generate appropriate differentials and management plans are critical clinical reasoning skills that trainees are expected to develop. Traditionally, these skills were imparted by high-volume patient care and instructor-led sessions; however, COVID-19-related distancing practices and limitations on elective procedures significantly reduced such opportunities for in-person training . Thus far, very few online resources have offered case-based visual learning combining endoscopic, radiographic, and histologic images with hypothetical clinical scenarios to simulate real-life patient encounters. The COVID-19 pandemic created a surge in online learners and a demand for innovative, interactive online learning tools across all specialty areas, including GI/hepatology, that would support clinical reasoning skill development . To address this need, we created GISIM, a free, mobile-optimized, case-based GI/hepatology educational resource designed for medical students, internal medicine (IM) residents, and GI/hepatology fellows.
(1) To describe the creation of a mobile-optimized, GI/hepatology educational resource for medical trainees, and (2) to report on trainee feedback on completing and authoring GISIM cases.
Our website, www.GiSIM.com , was created on WordPress and modeled after NephSIM ( www.nephsim.com ), an innovative e-learning platform created by our nephrology collaborators that features a case-based approach to teaching key nephrology topics .

Instructional Design

Each of GISIM’s clinical scenarios or “cases” focuses on a unique GI/hepatology-related chief complaint. Cases are designed to teach pathophysiology and disease management to trainees, while supporting active learning and the development of critical thinking and effective problem-solving skills through an interactive, case-based design . Hypothetical patient complaints vary in acuity level, introducing trainees to a range of potential scenarios encountered in both ambulatory and acute care settings. Cases aim to engage users through an interactive question-based format, challenging users’ process of selecting diagnostic workup and treatment options, their understanding of treatment side effects and disease course, and their approach to patient counseling, follow-up, and disease monitoring. To build a learner-oriented, user-driven resource, IM residents and GI/hepatology fellows are involved in authoring cases and selecting case topics. Key Accreditation Council for Graduate Medical Education (ACGME) Core Competencies addressed within GISIM’s content include patient care skills, as well as multiple domains of medical knowledge and practice-based learning. Cases are designed to supplement clinical training and to help trainees acquire the skills and knowledge needed to reach specific performance milestones toward mastery within the ACGME Core Competencies .

Content Development

Individual cases evolve sequentially, starting with history and examination details for a hypothetical patient encounter, leading to laboratory and imaging findings, endoscopy, and pathology results, and eventually ending with a final diagnosis (Fig. ). Cases incorporate labeled histopathologic, radiologic, and endoscopic images depicting common findings, along with up-to-date diagnostic and treatment algorithms (Fig. ). A case summary page concluding each case journey describes how the data unfolded to reach the final diagnosis; summarizes key takeaway points; and highlights references, guidelines, and resources for additional learning. Single-answer multiple choice questions are embedded throughout the case, prompting users to develop differential diagnoses and select next best steps in assessment and treatment. Real-time iterative feedback is provided for multiple-choice responses, including a rationale for both correct and incorrect selections (Fig. ). Cases were drafted on a voluntary basis by the Icahn School of Medicine at Mount Sinai GI/hepatology faculty, fellows, and IM residents in collaboration with faculty from the Departments of Pathology and Radiology. A volunteer team of three GI/hepatology faculty members with active experience as medical educators assisted with case development and review. Case topics were proposed by case authors and were selected in conjunction with an overseeing GI/hepatology faculty member. Bloom’s taxonomy model (Fig. ) was then used to establish case objectives. Cases were drafted independently by case authors using a standardized case template created by GISIM faculty members.
Drafts were subsequently reviewed by an overseeing GI/hepatology faculty member for content accuracy and case complexity, with the goal of promoting higher-order thinking and advancing cognitive learning beyond knowledge recall to analysis, evaluation, and application . Endoscopic and radiographic images and histologic slides were provided by case authors or collaborating faculty members. Once finalized, cases were uploaded to GISIM’s website by a team of two volunteer IM residents.

Content Distribution

GISIM ( www.Gi-SIM.com ) was launched in February 2021 with four cases. Website information was disseminated to IM residents, GI/hepatology fellows and attendings across the Mount Sinai health system via institutional email subscriber lists, Twitter (@GISIM_website), and the GI/Hepatology divisional webpage. New cases were uploaded to GISIM on a near-monthly basis and a Twitter alert was posted to notify social media followers. A total of ten cases were available on GISIM by February 2022, when this manuscript was compiled.

Content Evaluation

WordPress analytics were used to track GISIM website visitor and viewership numbers. Website user and case author surveys were developed on Google Forms. Website users accessed the user survey through a link embedded into the summary page of each case. User surveys evaluated users’ demographic information and GISIM experience with questions on website usability, content quality, and difficulty, as well as perceived educational value of cases, corresponding to Level 1 on Kirkpatrick’s evaluation model . Users were also asked about their likelihood of completing additional GISIM cases and recommending the resource to peers. The user survey included 14 questions (six closed-ended multiple-choice questions, seven Likert-scale questions with four responses, and one open-ended question for general feedback). A separate survey was sent to case authors via email. The author survey included 22 questions (five closed-ended multiple-choice questions, 12 Likert-scale questions with four responses, four five-point scale questions, and one open-ended question for general feedback), evaluating case authors’ demographics and GISIM experience, as well as their perceived educational value of case authorship. Surveys were developed by GISIM team members (two IM residents and three GI/hepatology faculty). Survey participation was voluntary and all survey questions were optional. Google Forms’ integrated analytics software was used to anonymously aggregate and analyze survey responses. Case completion rate was calculated as the ratio of case summary page views to case introduction page views. Website user survey response rate was calculated by dividing the total number of completed user surveys by the total number of completed cases (approximated by the number of case summary page views).
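For concreteness, the two evaluation metrics reduce to simple ratios. The minimal sketch below (function names are ours, not from the WordPress or Google Forms tooling) applies them to the figures later reported in the Results.

```python
# Minimal sketch of the two evaluation metrics defined above.
def case_completion_rate(summary_page_views, intro_page_views):
    """Completed cases (summary-page views) over started cases (intro-page views)."""
    return summary_page_views / intro_page_views

def survey_response_rate(completed_surveys, summary_page_views):
    """Completed user surveys over completed cases."""
    return completed_surveys / summary_page_views

# Figures from the Results section: 61 surveys, 629 completed cases.
print(f"survey response rate: {survey_response_rate(61, 629):.1%}")  # ~9.7%, reported as ~10%
```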
Results

GISIM website/Twitter analytics and survey responses were collected from February 1, 2021, when GISIM was launched, until February 28, 2022, when data were analyzed for this manuscript.

Website Usage and Twitter Analytics
During the analysis period, GISIM had 12,184 website views and 2,003 unique visitors from 76 countries. Views were primarily from users in the USA (64%), followed by India (8%) and Canada (3%). Case completion rate was 45%. GISIM's Twitter account had 119 followers and nearly 3,000 views.
Survey Results

Website User Surveys
Sixty-one user surveys were collected and 629 cases were completed, corresponding to a survey response rate of approximately 10%. First-time SIM-series users contributed 80% of the submissions. Respondent training level was varied (Table ). GI/hepatology fellows submitted the majority of responses (38%), followed by IM residents (26%), attendings (21%), others (8%), and medical students (7%). Most users completed cases in < 5 min (32%) or 5–10 min (53%), without a clear correlation between completion time and training level. Case length was reported as "just right" by 92% of users, suggesting overall satisfaction with the level of detail and time required for completion. Users' evaluations of GISIM's website and cases are shown in Fig. . All users found the website easy to use. Ninety percent of users agreed that cases were interactive. Sixty-seven percent of users agreed that cases enhanced their confidence and 86% agreed that cases improved their understanding of selected topics. Eighty-three percent of users reported that they would use the resource again and 87% agreed with the statement that they would recommend it to their colleagues. A subgroup analysis of nonattending users showed largely similar trends in results (Fig. ).
Case Author Surveys

Author Demographics
Nine author surveys were collected. Surveys were completed by two attendings, two fellows, and five residents.
Evaluation

Primary motivators for author participation included an interest in medical education and a desire to contribute to GISIM as a learning resource. Authors primarily selected topics that they wanted to learn more about (78%) and felt would be useful to others (100%). After writing cases, authors uniformly indicated feeling "comfortable" obtaining history/examination, ordering/interpreting diagnostics, and following guideline-directed management within chosen topic areas, representing an improvement from prior to case authorship (Table ). All authors intended to incorporate GISIM into future teaching sessions with trainees, and the majority planned to volunteer to write additional cases and intended to recommend the authorship opportunity to their peers (Table ). Most authors felt the experience provided significant learning (75%) and had high utility (100%).
Discussion

The unique strains on trainee education created by the COVID-19 pandemic have highlighted the importance of effective virtual learning platforms. While a variety of self-assessment modules, video and PowerPoint presentation libraries, question banks, and weekly webinars are offered by GI/hepatology societies for trainees, the majority of these resources support passive learning and focus heavily on knowledge recall, with limited opportunities for application-based learning. Clinical reasoning, data synthesis, and evidence-based practice are emphasized as important competencies by the ACGME; however, imparting these skills has traditionally required intensive, in-person, preceptor-led approaches. GISIM uses an interactive case-based format, which enables integration of multifaceted and interdisciplinary curricular concepts to support critical thinking skill development and to enhance retention of basic science knowledge. Survey data demonstrate that GISIM modules strengthen trainees' clinical confidence and decision-making, suggesting GISIM to be a potentially valuable e-learning resource for both case authors and website users.

GISIM serves a global audience and provides asynchronous, on-demand distance GI/hepatology learning. Survey data show that GISIM subjectively improves users' understanding of covered topics and provides a perceived enhancement in medical knowledge. Author surveys illustrate a reciprocally beneficial experience for case writers as well, with authors overwhelmingly reporting that writing cases enhanced clinical confidence across the continuum of patient care, from obtaining a history and physical to developing a management plan for the particular topic. While the significance of authors' enhanced clinical confidence is difficult to quantify, their notably positive feedback about the utility and learning provided by the authorship experience, as well as their intention to volunteer for additional authorship opportunities, indicates that the experience was at least subjectively beneficial for this group. Additionally, our observation that the majority of authors selected topics they wished to learn more about suggests that the process of case authorship itself can promote independent, self-directed learning.

A unique aspect of GISIM is its foundation in peer-to-peer or peer-assisted learning (PAL), an approach in which students take on the role of tutor and trainee interchangeably. GISIM case authors and website users are primarily residents and fellows, with similar average training levels between groups. With regard to medical education, PAL has been shown to improve teaching skills, reinforce prior knowledge, and enhance academic performance, and is a particularly effective tool to support independent learning. In the context of GISIM, the PAL approach enables learners to focus on topics that have been identified as high yield and training-level appropriate by the peers authoring the cases. GISIM's mobile-optimized platform offers a portable resource for residents and fellows to incorporate into impromptu bedside teaching sessions, tapping the potential of trainees as front-line educators. GISIM modules may additionally be used to prepare for in-person didactic sessions, to review specific content areas after morning reports, or to augment gaps in clinical exposure during rotations.
To further promote on-the-go learning and optimize student engagement, cases were kept short and concise, with the majority taking less than 10 min to complete. GISIM's Twitter page provides an open line of communication with current and prospective followers and is key to expanding the website's reach and user base. Our Twitter page has received feedback through tweets and direct messages from national and international trainees stating that they found the website useful and shared/retweeted a link with their colleagues. Faculty from outside institutions have also provided feedback through Twitter and email that they plan to incorporate GISIM into teaching sessions. Both trainees and instructors have offered to contribute to GISIM, and several have written cases for the website, demonstrating the importance of social media in the resource's continued growth.

Several of our study's limitations arise from our method and mode of data collection. Relying on self-reported survey data instead of objective knowledge assessments could compromise the internal validity of the study. Furthermore, given that survey completion was optional, selection bias could have skewed survey completion towards more positively impacted users. Our relatively low user response rate (10%) presents another limitation given the potential impact on generalizability; however, it must be pointed out that optimum response rates are a matter of ongoing debate, with some researchers suggesting rates of 5–10% still provide reliable results for sample sizes of at least 500. A few respondents evaluated multiple cases in a single survey, creating the potential for recall bias due to delays between case completion and assessment. GISIM usage and viewership were also possibly overestimated by website host analytics.

User experience is critically important to the success of online learning resources and was a central focus of this study. While this study only provides a Kirkpatrick level 1 assessment of our learning tool by evaluating user satisfaction, additional Kirkpatrick levels will be evaluated in subsequent phases of our study. In our next phase, we will introduce GISIM cases into medical student workshops and IM residents' outpatient didactic sessions and will use pre- and posttest surveys to objectively measure knowledge acquisition (Kirkpatrick level 2). To guide future case development, we will add a question to our online user survey inquiring about how the resource is being used (i.e., bedside teaching tool, on-the-go tutorial, or traditional computer-based learning).

In summary, we have developed a novel GI/hepatology educational resource that supports on-the-go learning and caters to learners at multiple levels of training. Availability of this case-based resource on an open-access website could enable independent, self-paced learning and clinical reasoning development both during and after the pandemic.
Good clinical scores, no evidence of excessive anterior tibial translation, a high return to sport rate and a low re-injury rate is observed following anterior cruciate ligament reconstruction using autologous hamstrings augmented with suture tape | 3641d44b-3148-4d6a-9986-cc9089233c90 | 10015537 | Suturing[mh] | Anterior cruciate ligament reconstruction (ACLR) is common and, whilst a primary post-operative goal for many patients is a return to sport (RTS), it has been reported that across all patients, only 65% of patients return to their pre-injury level of sport . Furthermore, an overall secondary re-injury rate of 7% has been reported, along with an 8% incidence of contralateral ACL tear, with a combined (ipsilateral and contralateral) ACL injury rate of 23% specifically in patients < 25 years of age who do RTS . The reasons for re-injury are multifactorial , though a recent systematic review reported no significant differences in graft failure rates across varied graft types (quadriceps, hamstring and patellar tendon autografts, or allografts) . In addition to ensuring that strength and functional performance is best restored given their link with re-injury risk , surgical reconstruction techniques involving autograft (or allograft) augmentation have been proposed [ – ] in an attempt to improve outcomes and reduce re-injury rates. ACLR augmentation may permit early ACL reinforcement and graft stability prior to graft incorporation, also expediting post-operative recovery and accelerating rehabilitation . A range of augmented procedures and devices have been reported . Encouraging clinical and RTS outcomes have been more recently reported when using a LARS ligament (LARS, Ligament Augmentation Reconstruction System, Corin Pty. Ltd.) to augment a hamstrings autograft [ , , ], with patient outcomes of those undergoing augmented ACLR better than those undergoing non-augmented ACLR . However, earlier use of synthetic augmentation, including LARS, appeared to present with excessive synovitis and in higher ACL graft failure rates [ – ]. A more recently employed device to augment an ACLR is FiberTape® (Arthrex, Naples, Florida, USA) [ , , , ], with a retrospective comparison of outcomes in patients undergoing ACLR with and without suture augmentation with FiberTape® demonstrating improved outcomes with augmentation . However, studies using FiberTape® augmentation are limited and a greater number of published papers exist related to the use of FiberTape® reinforcement in the context of ACL repair [ – ], rather than reconstruction, although even then many of these are technical notes and not studies reporting patient outcomes. This study presents the clinical outcomes of a prospective patient cohort undergoing ACLR employing autologous hamstrings augmented with suture tape, combined with a progressive, structured rehabilitation programme. With the aforementioned reported re-injury and RTS rates in mind, it was hypothesized that: (1) no significant post-operative differences in anterior tibial translation would exist between the operated and non-operated limbs, (2) a low re-injury rate (< 5%) would be observed over the 24-month period, (3) a high RTS rate (> 70%) would be observed at 12 and 24 months and (4) a significant improvement in patient-reported outcome measures (PROMs) and objective outcomes would be observed following surgery.
Methods

Patients
Between March 2018 and November 2019, 57 patients scheduled for ACLR employing a hamstrings autograft augmented with a suture tape were referred by a single surgeon in a private orthopaedic clinic for study discussion, recruitment and subsequent pre-operative review, of whom 53 elected to participate (Fig. , Level IV prospective case series). Patients were candidates for surgery based on history, current symptoms and orthopaedic clinical examination, whilst magnetic resonance imaging (MRI) confirmed the ACL rupture in all patients. Patients were invited to participate in the study if they were deemed candidates for surgery, were 16–50 years of age (and skeletally mature) and required an isolated primary ACLR, with or without concomitant meniscal surgery. Patients were excluded from study participation if they presented with a body mass index (BMI) ≥ 40 or were unwilling or unable to participate in the post-operative rehabilitation protocol (outlined below), although no such patients were encountered. Ethics approval was provided by the relevant Human Research Ethics Committee (HREC) and the written consent of all participants was obtained prior to review.
Surgical technique

All surgeries were performed by the senior author. Examination under anaesthesia was performed prior to tourniquet application to assess laxity of the injured ACL knee in comparison to the contralateral knee and clinically confirm a rupture of the ACL. Knee arthroscopy was subsequently performed to confirm the clinical diagnosis and further evaluate concomitant meniscal and/or chondral damage, which was addressed initially if required. Unstable ACL remnant tissue was then removed. The ACL tunnels were routinely dictated by the anatomical positions of the existing ACL remnants. The tibial footprint of the ACL was initially identified, and all unstable remnant was removed. The tibial jig was placed centrally in the tibial footprint, and the tibial tunnel was prepared within the centre of the tibial ACL remnant (Fig. ). Femoral tunnel preparation was performed in a similar way. The femoral anteromedial bundle soft tissue footprint was identified and an awl mark was created. A secondary check confirmed a prepared tunnel position 2–4 mm off the posterior notch wall, generally in the 2.00 o'clock (left knee) or 10.00 o'clock (right knee) position (Fig. ), with femoral tunnels drilled in maximal knee flexion. The ACL tibial remnant was cleared from the tibia to allow unobstructed passage of the graft within the knee. Semitendinosus and gracilis tendons were harvested from the ipsilateral knee through a 2–3 cm transverse incision approximately 1 cm above the pes anserinus, and prepared as doubled grafts. The combined diameter was measured to establish bone tunnel size reaming, with a minimum graft diameter of 8 mm confirmed for all cases. The harvested hamstring grafts were then passed through the ACL TightRope RT (Arthrex, Naples, Florida, USA) implant loop of the suspensory button, creating a 4-strand hamstring graft. A FiberTape® (Arthrex, Naples, Florida, USA) was then attached by a half hitch to the femoral button to act as a 'seat belt' augmentation of the graft construct, creating a two-strand internal brace that was essentially placed alongside the autograft (Fig. ). The graft was passed after placing a suture via a shuttle technique from the tibia through to the button tunnel on the femur. The graft was seated with maximal manual tension whilst cycling the knee ten times. The tibial fixation was performed with a peek interference screw (Arthrex, Naples, Florida, USA), 1 mm larger than the tunnel and positioned in full knee extension. The two internal brace strands were fixed in an accessory position with a knotless anchor 1 cm distal to the tibial tunnel. The knee was placed in full extension and the TightRope femoral suture was toggled to optimize maximum graft tension. The final graft construct is shown in Fig. .
Rehabilitation

A standardized rehabilitation programme was implemented for all patients, aiming for a supervised therapist session every 2 weeks (starting from 2 weeks post-surgery) for the first 5–6 months (12 supervised sessions in total), with ongoing periodic review beyond 6 months post-surgery as required. These sessions were supplemented with an independent home- and/or gym-based programme, aiming for 2–3 sessions in total per week. Whilst the home/gym-based programme was not closely monitored, 88.7% (47 of 53) of patients attended ≥ 75% of the designated supervised sessions, with the remaining 11.3% (6 of 53) of patients attending 58–67% of the designated sessions. This was generally due to geographical location and/or COVID-19 restrictions, and these patients were more closely monitored from afar as needed. All supervised rehabilitation was undertaken in a single, private out-patient therapy clinic. Table provides an overview of the programme implemented. In brief, early post-operative management included weight bearing as tolerated, early circulatory (such as foot/ankle pumps) and knee range of motion (ROM) exercises, followed by a progressive programme aiming to restore strength and load capacity, with progression towards running and activities that better prepared the patient for an eventual RTS. Whilst late-stage progression through sport-specific training-based activities was also dependent on the patient's specific sport, these aspects were not documented as part of the current patient cohort, and patients transitioned through these components of training at their own discretion in collaboration with their sporting team. Whilst RTS was not advised until ≥ 9 months post-surgery and patients were counselled on specific objective criteria that should be attained before returning to sports activities (such as restoration of active knee extension ROM, a flexion ROM LSI ≥ 90%, and LSIs ≥ 90% in hop tests and peak isokinetic knee extensor and flexor strength, as sketched below), this was not enforced and remained largely at the final discretion of the patient.
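The RTS counselling criteria listed above can be expressed as a single check. This is an illustrative sketch only: the study did not enforce these criteria, and the function and parameter names are ours.

```python
def meets_rts_criteria(months_post_op: float,
                       full_active_extension: bool,
                       flexion_rom_lsi: float,
                       hop_lsis: list[float],
                       quad_strength_lsi: float,
                       ham_strength_lsi: float) -> bool:
    """True if all counselled return-to-sport criteria are satisfied."""
    return (months_post_op >= 9
            and full_active_extension
            and flexion_rom_lsi >= 90
            and all(lsi >= 90 for lsi in hop_lsis)  # all four hop tests
            and quad_strength_lsi >= 90
            and ham_strength_lsi >= 90)


# Example: 10 months post-op, full extension, symmetric hops, weak quadriceps.
print(meets_rts_criteria(10, True, 95, [92, 94, 91, 93], 82, 96))  # -> False
```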
Patient assessment

First, all patients underwent a formal knee laxity exam performed in the clinic by the senior author (PA) at 4 months post-surgery, specifically to assess rotatory laxity grading via pivot shift evaluation. Anterior tibial translation (mm) was measured on both knees during a maximal manual test (MMT) using the KT-1000 knee arthrometer (MEDmetric Corp., San Diego, CA, USA) at 6, 9, 12 and 24 months post-surgery. Active knee flexion and extension range of motion (ROM, degrees) using a hand-held long-arm goniometer was assessed on the operated limb at 6 weeks, as well as 4, 6, 9, 12 and 24 months post-surgery. Patients underwent a 4-hop battery and assessment of peak isokinetic knee extensor and flexor strength (Nm) at 6, 9, 12 and 24 months. The 4-hop battery included the single hop for distance (SHD, m), the 6 m timed hop (6MTH, s), the triple hop for distance (THD, m) and the triple crossover hop for distance (TCHD, m). Peak isokinetic knee extensor and flexor strength was measured at 90°/s, using an isokinetic dynamometer (Isosport International, Gepps Cross, South Australia). These reviews and all nominated assessments (apart from the laxity exam undertaken by the senior author at 4 months) were performed by a qualified therapist with 20 years of experience in all of the aforementioned assessments.

Several patient-reported outcome measures (PROMs) were undertaken pre-surgery and at various post-operative time-points. These included the International Knee Documentation Committee (IKDC) Subjective Knee Evaluation Form, the Knee Outcome Survey (KOS) Activities of Daily Living Scale, the Cincinnati Knee Rating System (CKRS), the Lysholm Knee Score (LKS), the Tegner Activity Scale (TAS), the Anterior Cruciate Ligament Return to Sport after Injury (ACL-RSI) and the Noyes Sports Activity Rating Scale (NSARS). A satisfaction score was employed at 24 months post-surgery, evaluating patient satisfaction with the surgery overall, as well as with the surgery to relieve pain, improve the ability to perform normal daily and work activities, improve the ability to return to recreational activities (including walking, swimming, cycling, golf, dancing), and improve the ability to participate in sport (including sports such as tennis, netball, soccer and football). A Likert Response Scale was employed with descriptors Very Satisfied, Somewhat Satisfied, Somewhat Dissatisfied and Very Dissatisfied.
Data and statistical analysis

For this prospective study, an a priori sample size power calculation was performed based on the recommendations of Cohen and employing data previously collected and published in patients undergoing ACLR with a hamstrings autograft augmented with LARS. Therefore, using this existing data and for an anticipated moderate effect size (d = 0.67) in the primary outcome (anterior tibial translation as evaluated via side-to-side difference in anterior tibial translation in mm for the KT-1000 at 6 months post-surgery), assuming an SD of 3 mm, an alpha level of 0.05 and a power of 0.9, the sample size was estimated at 49 patients to demonstrate a significant difference in anterior tibial translation between the operated and non-operated knees. Overall, 53 patients were recruited to allow for attrition over the assessment period.

For all subjective (PROMs) and objective outcomes, the means (SD, range) were presented at the designated assessment time-points, whilst repeated-measures analysis of variance (ANOVA) was employed to assess change in these outcomes over time. Limb Symmetry Indices (LSIs) were calculated and presented for the hop and strength tests, further categorized by the number and percentage of patients with LSIs ≥ 90% for all four hop tests (at each time-point), as well as all hop tests combined with peak isokinetic knee extension and flexion torque. For KT-1000 anterior tibial translation measures, t tests were employed to compare the operated and non-operated limbs at 6 months post-surgery, whilst repeated-measures ANOVA assessed any change in the side-to-side limb anterior tibial translation difference over time. KT-1000 anterior tibial translation measures were further categorized based on side-to-side difference as normal (< 3 mm), nearly normal (3–5 mm), abnormal (6–10 mm) and severely abnormal (> 10 mm). The NSARS was employed to present the number (and percentage) of patients participating in Level 1 (participation 4–7 days/week) or Level 2 (participation 1–3 days per week) activities that included jumping, hard pivoting and cutting sports pre-injury and at 12 and 24 months post-surgery. The number (and percentage) of patients reporting 'Very Satisfied', 'Somewhat Satisfied', 'Somewhat Dissatisfied' and 'Very Dissatisfied' within each of the satisfaction domains at 24 months post-surgery was presented. The number (and type) of surgical complications, adverse events, re-operations and re-injuries were presented. Where appropriate, statistical analysis was performed using SPSS software (SPSS, Version 27.0, SPSS Inc., USA), with statistical significance determined at p < 0.05.
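For reference, the reported sample size is approximately reproducible with a standard two-sample t-test power routine. This sketch assumes Cohen's conventional framework; the authors' exact calculation software is not stated.

```python
from math import ceil

from statsmodels.stats.power import TTestIndPower

# Moderate effect size d = 0.67 on the side-to-side KT-1000 difference
# (SD ~3 mm), two-sided alpha = 0.05, power = 0.90.
n = TTestIndPower().solve_power(effect_size=0.67, alpha=0.05,
                                power=0.9, alternative="two-sided")
print(ceil(n))  # ~48, in line with the 49 patients reported
```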
Results

Patient demographics and injury/surgery parameters of the 53 patients who were recruited and underwent surgery are demonstrated in Table .

Objective results
With respect to the 4-month knee laxity exam undertaken by the senior author, all patients presented with a normal (or near normal) pivot shift clinical examination, with no Grade II or III pivot laxity outcomes. For the later-stage KT-1000 assessments, there were no significant anterior tibial translation differences between the operated and non-operated knees at 6 months post-surgery (p = 0.433), with no significant increase (p = 0.841) in side-to-side anterior tibial translation from 6 to 24 months (Table ). At 24 months, KT-1000 measurements demonstrated normal (< 3 mm) or near normal (3–5 mm) side-to-side differences in 98.0% of patients (Table ). Knee flexion and extension ROM significantly improved (p < 0.0001) over time, as did the LSI for peak isokinetic knee extensor torque (p < 0.0001), the SHD (p = 0.001), THD (p = 0.001) and TCHD (p < 0.0001) (Table ). At 12 months post-surgery, 72.3% of patients presented with an LSI ≥ 90% for every hop test, which dropped to 53.2% of patients when combined with LSIs ≥ 90% for peak isokinetic knee extensor and flexor strength (Table ). This was 79.6% of patients (all four hops) and 61.2% of patients (all four hops combined with strength measures) at 24 months post-surgery (Table ).
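The side-to-side categories reported above follow mechanically from the cut-offs defined in the methods; a minimal illustrative sketch (the function name is ours):

```python
def kt1000_category(side_to_side_mm: float) -> str:
    """Grade the side-to-side difference in anterior tibial translation."""
    if side_to_side_mm < 3:
        return "normal"
    if side_to_side_mm <= 5:
        return "nearly normal"
    if side_to_side_mm <= 10:
        return "abnormal"
    return "severely abnormal"


print(kt1000_category(2.1))  # -> "normal"
print(kt1000_category(6.5))  # -> "abnormal"
```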
Subjective results and return to sport

All PROMs significantly improved over time (p < 0.0001) (Table ). As per the NSARS, 90.6% of patients were actively participating in Level 1 or 2 sports that included jumping, hard pivoting, cutting, running, twisting and/or turning pre-injury, which was 70.2% and 85.7% at 12 and 24 months post-surgery, respectively (Table ). At 24-month review, 98.0% of patients were satisfied overall with their surgical outcome, with 93.9% satisfied with their ability to participate in sport (Table ).
Complications, re-injuries and secondary surgical procedures

Over the course of the 24-month follow-up period, one patient presented with an early wound infection that was treated accordingly without further issue. Three patients underwent secondary surgical procedures, including one patient who underwent arthroscopic lateral meniscectomy for recurrent symptoms at 18 months after his primary ACLR (with an intact ACL at time of secondary surgery) and one patient who underwent lateral meniscal repair at 10 months after his primary ACLR (with an intact ACL at time of secondary surgery, albeit the meniscal tear was new and followed a secondary incident). The third patient underwent medial meniscectomy at 6 months after his primary ACLR for recurrent symptoms and, whilst he was doing well and had returned to pivoting sports by 12 months, experienced an ACL re-tear at 17 months after his primary ACLR, which continues to be managed non-operatively. This patient had a graft diameter of 9 mm. There were no further ipsilateral re-tears or contralateral tears. The data collected from these patients were still included in the results analysis.
Discussion

The most important finding from the current study was that an ACLR technique using autologous hamstrings augmented with a suture tape, combined with a structured post-operative rehabilitation programme, produced high-scoring PROMs and patient satisfaction with encouraging performance scores and RTS rates, without evidence of excessive anterior tibial translation and/or a high re-injury rate.

No difference in anterior tibial translation between the operated and non-operated limbs was observed, with 98% of patients demonstrating normal (< 3 mm) or near normal (3–5 mm) side-to-side differences up until 24 months post-surgery (the only patient who demonstrated side-to-side anterior tibial translation > 5 mm had suffered a known re-tear). This was in support of the first hypothesis. Further to this, as reported recently by Fiil et al., excessive post-operative anterior tibial translation may be associated with worse knee-related quality of life, reduced function in sports and an increased revision rate. Whilst the rationale for graft augmentation is largely focussed on early graft reinforcement, the true nature of this reinforcement capacity remains unknown, given the relative lack of biomechanical research on suture tape augmentation. A biomechanical study published by Massey et al. reported a higher load to failure, stiffness and energy to failure when augmenting a graft with an internal brace, though this was in the context of ACL repair (not reconstruction).

In the current study, only one patient (2%) suffered an ACL re-injury, with no contralateral ACL tears up until 24 months, also in support of the second hypothesis. However, whilst Grindem et al. reported an increased re-tear rate up until 9 months post-surgery, after which time no further reduction in re-tear risk was observed, an elevated re-tear risk may theoretically extend well after the patient's RTS, so ongoing review is required. Whilst excessive synovitis and high failure rates had limited the ongoing early use of synthetics in ACLR [ – ], these complications were not observed in the current study.

In the current study, 70.2% of patients were actively participating in pivoting sports at 12 months post-surgery, which had increased to 85.7% at 24 months (noting that 90.6% of patients were actively participating in pivoting sports pre-injury). This supported the third hypothesis and, of further interest, the 24-month post-operative mean TAS was actually higher than the pre-injury TAS. Whilst similar RTS rates were previously reported in patients following ACLR augmented with LARS, Ardern et al. reported that only 65% of patients return to their pre-injury level of sport, with 55% returning to competitive sport. The higher RTS rates may be influenced by a range of factors including participation in and ongoing progression of rehabilitation, which was well adhered to in the current study. Further to this, the underlying rationale for the use of ACLR augmentation is that it may permit early ACL reinforcement and graft stability prior to graft incorporation, also accelerating rehabilitation. Of importance, the encouraging RTS rates currently observed did not appear to increase the risk of excessive anterior tibial translation or re-injury risk.
It should be reiterated that RTS was not advised until ≥ 9 months post-surgery and patients were counselled on specific objective criteria that should ideally be attained before RTS, though this could not be enforced and was at the final discretion of the patient. High-scoring PROMs and high levels of patient satisfaction were reported, whilst mean LSIs ≥ 90% were reported at all post-operative time-points for peak isokinetic knee flexor strength and all hop measures. Furthermore, the mean LSI for peak isokinetic knee extensor strength was ≥ 90% at 12 and 24 months, albeit 75% and 82% at 6 and 9 months, respectively. This was largely in support of the fourth hypothesis. However, when grouped in the form of a performance test battery, 72% and 80% of patients presented with an LSI ≥ 90% for every hop test at 12 and 24 months, respectively. When this test battery further included LSIs ≥ 90% for the knee extensor and flexor strength measures, this was only 53% and 61% at 12 and 24 months, respectively. Despite the low re-injury rate currently observed, existing research has reported an increased re-injury risk if patients fail to meet LSIs ≥ 90% across a range of tests including strength and hop performance measures. In contrast, other research has suggested an increased risk of contralateral ACL injury in the presence of improved strength and/or hop performance symmetry. Therefore, the limitations of employing LSIs to present performance outcomes should be acknowledged, such as the variation in LSI 'cut-off' values employed [ , – ] and the potential for LSIs to overestimate function.

Whilst the current subjective, objective and RTS outcomes appear similar to those reported previously in patients undergoing ACLR augmented with LARS, and more recent longer-term follow-ups of reconstruction/repair with and without other ligament augmentation devices have reported sound clinical results, limited published outcomes exist presenting outcomes specifically after ACLR augmented with FiberTape®. Bodendorfer et al. presented a retrospective comparison of outcomes in patients undergoing ACLR with and without FiberTape® suture augmentation, with augmentation demonstrating less pain, improved PROMs and improved early return to activity, without evidence of over-constraint. A retrospective cohort study published by Barnas et al. reported comparable functional outcomes in patients undergoing surgery for partial ACL tears with synthetic augmentation using either a polyethylene terephthalate tape (Neoligaments) or FiberTape® suture augmentation. A recent retrospective comparison published by Hopper et al. reported comparable re-injury and secondary surgery rates in patients undergoing ACLR versus those undergoing ACL repair with suture tape augmentation, in the context of acute proximal ACL ruptures. Finally, a recent systematic review published by Zheng et al. specifically on the use of suture augmentation for ACLR reported overall favourable clinical outcomes and, whilst augmentation was associated with better sports performance compared to standard ACLR, it was comparable in most functional scores, knee stability measures and graft failure rates. Most other ACLR papers employing FiberTape® augmentation are technical notes without patient outcomes [ , , ]. A prospective 2-year study published by Heusdens et al. reported improved post-operative outcomes of suture augmentation in the context of ACL repair, with a 4.8% re-rupture rate over the period, but other published papers using FiberTape® augmentation for ACL repair are also limited to technical notes.

A number of limitations are acknowledged within the current study. First, it was a single-centre study in patients undergoing a specific augmented ACLR technique, which does not permit generalization. Furthermore, we acknowledge that there was no comparative group in the current study; based on the early clinical experience our group had with this augmented ACLR technique, our initial plan was to undertake a robust prospective evaluation of patients undergoing this ACLR technique with close and frequent assessment of outcomes and adverse events, with comparison to existing literature where appropriate. This now provides a framework for a subsequent randomized comparative study. Additionally, it may be argued that it was a heterogeneous group, with a wide age range (16–45 years) and almost 50% of patients undergoing concomitant meniscal surgery, though this is also a strength in presenting outcomes in a common community-level cohort embarking on ACLR. Second, we acknowledge that the primary study aim and sample size calculation were focussed on excessive anterior tibial translation (KT-1000 measurements), and both the 4-month pivot shift clinical review and the 6-, 9-, 12- and 24-month KT-1000 reviews were undertaken on the patient (on both limbs for the KT-1000) in an awake condition, which may be less reliable than an anaesthetized environment. Third, whilst an aim was to report on RTS rates at 12 and 24 months, the actual time to RTS was not documented. Finally, whilst rehabilitation can affect strength and function after ACLR [ , , ] and patients underwent a structured rehabilitation programme following surgery (with rehabilitation adherence also documented), it is acknowledged that in many community-level ACLR patients, rehabilitation will differ, as will individual patient motivation and exercise diligence.
Conclusion

The current study has demonstrated that ACLR using autologous hamstrings augmented with suture tape, combined with a structured post-operative rehabilitation programme, produced high-scoring PROMs and patient satisfaction with encouraging performance scores and RTS rates, without evidence of excessive anterior tibial translation and/or a high re-injury rate. Particularly given the high RTS rates at 24 months post-surgery, ongoing patient review is required to further investigate later-stage re-injury rates.
Individual and environmental determinants associated with longer times to access pediatric rheumatology centers for patients with juvenile idiopathic arthritis, a JIR cohort study | 537ae31f-13e7-43cc-87f7-f62d6e758daf | 10015663 | Internal Medicine[mh] | Juvenile idiopathic arthritis (JIA) is the most common chronic pediatric rheumatic disease . It is defined by the onset, before age 16 years, of arthritis of unknown cause persisting for at least 6 weeks . The term JIA encompasses a heterogeneous group of different diseases classified into seven categories of varying severity and long-term consequences depending on clinical manifestations and response to treatment. JIA qualifies as a rare disease (prevalence less than 1/2000 children) but is widely underdiagnosed . Prompt referral to a pediatric rheumatology (PR) center and effective care is known to be critical in changing the natural history of the disease and improving long-term prognosis . Delay in diagnosis can also be a source of anxiety, or of non-adherence to treatment, especially in case of loss of confidence in healthcare providers (HCPs). The care pathway for JIA patients can be complex, and reasons for delayed referral may depend on several factors such as individual patient characteristics and local and regional healthcare organization. JIA can also be under-recognized by HCPs because of its low prevalence and subtle clinical manifestations. International guidelines advocate that whatever the level of income of the country, new patients with suspected JIA should be assessed by a pediatric rheumatologist (PRst) within 4 weeks from the time of referral . In addition, the British Society for Paediatric and Adolescent Rheumatology Standards of Care (BSPAR) and the Arthritis and Musculoskeletal Alliance advocate that children with suspected JIA should be assessed by a PR team within 10 weeks of symptom onset . Despite guidelines, poor access for JIA patients to appropriate care remains a global issue. Literature reports give a median time to access the PRst of 3–10 months, with many medical stakeholders involved and a broad variability in JIA subtypes . It was found that some clinical characteristics and biological factors such as joint swelling, fever, and elevated C-reactive protein/erythrocyte sedimentation rate were associated with a shorter time to first PR visit. Conversely, enthesitis, older age at symptom onset/diagnosis or pain were associated with a longer time to access PR centers . Data on the impact of socio-economic status on time to access to PR center are scant, and available only in North America and the United Kingdom . Because health in childhood is influenced by socio-economic determinants , we sought to identify potential socio-economic determinants of delayed referral to a PRst for JIA patients in a cohort set up in France and Switzerland.
Study design
The Juvenile Inflammatory Rheumatism cohort (JIRcohort) is an international multicenter prospective data repository in which patients with juvenile inflammatory rheumatisms are collected in a web-secured database (clinicaltrial: NCT02377245) . The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by independent ethics committees for each participating center.
Definitions and variables
Time to first PR visit was defined as the time from the onset of symptoms to the first visit to a PR center. Time to first HCP visit was defined as the time between symptom onset and first assessment by an HCP. Time between first HCP and first PR visit was defined as the time between first consultation with an HCP and first PR assessment. HCP specialties recorded were pediatric rheumatologist (PRst), general pediatrician, general practitioner, emergency care practitioner (ECP), orthopedist, and other. The distance from the parents' dwelling place to the PR center was calculated using an Internet-based route calculator (URL: https://fr.mappy.com ) as the shortest distance to the PR center by road. Patients' living areas were classified as rural, intermediate (i.e., both rural and urban parts or rural under strong influence of an urban area), or urban (according to INSEE, the French National Statistical Office , and the Swiss Federal Statistical Office ). Parental profession was recorded using the International Standard Classification of Occupations (ISCO) . Parental educational attainment was recorded using the International Standard Classification of Education (ISCED) . To assess the impact of closeness to medical and/or health systems on time to access a PR center, we separated parental professions into two categories: parents with a health care profession (e.g., physician, nurse, laboratory technician, pharmacist, or physiotherapist) and others. By using parental occupation as a proxy, we aimed to study whether having parents in the medical field had an impact on access to care, as suggested in other studies .
Population
All patients diagnosed with JIA (according to the International League of Associations for Rheumatology classification ) presenting at one center of the JIRcohort in France or in Switzerland were included, with their referral pathways recorded (HCPs met during referral and dates of visits). The overall cohort was started in 2013. The data analyzed here were for a subcohort for which socio-economic data (which were not initially collected) had been collected from January 2018 to April 2019. The patients included were those who had been managed since the data were collected and those being followed in the PR center (and for whom data could be completed).
Data collection
Data were collected by a PRst in each center during a follow-up visit. Data on patients' characteristics at first visit to the PR center (age, JIA subtype) and parents' characteristics (profession and educational attainment) were extracted from the JIRcohort database. The referral pathway to the PR center was also described, i.e., the specialty of each HCP met by the patient for JIA-related symptoms and the timing (the date of the first medical appointment with this specialist). If parents had forgotten the exact date, and only the month and year were available, an approximation to within 15 days was recorded.
Analysis
Statistical analysis was performed using the Stata software (version 15; StataCorp, College Station, Texas, USA). All tests were two-sided, with an alpha level set at 0.05.
Categorical data were expressed as number of subjects and associated percentages, and continuous data as median [25th; 75th percentile]. The primary outcome was estimated using the Kaplan–Meier approach, and factors associated with time to first PR visit were studied using the log-rank statistic in univariate analysis. A multivariable analysis was then performed using a Cox proportional hazards model, considering covariates determined according to univariate results and clinical relevance. The results were expressed as hazard ratios (HR) and 95% confidence intervals (CIs). An HR of 1 indicates an equal likelihood of first PR visit in the presence of the variable in question as in its absence, HR > 1 indicates an increased likelihood (shorter time), and HR < 1 a reduced likelihood (longer time). A logarithmic transformation of the distance from the patient's dwelling place to the PR center was carried out to achieve normality. Factors associated with the time between symptom onset and first consultation with an HCP and the time between first consultation with an HCP and first PR assessment were also studied using the log-rank statistic.
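For orientation, a minimal sketch of this analysis pipeline in Python with the lifelines package is given below; the original analysis was performed in Stata, and the variable names and simulated data here are hypothetical stand-ins for the study variables:

import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Simulate a toy dataset (hypothetical variables mirroring those of the study)
rng = np.random.default_rng(0)
n = 250
df = pd.DataFrame({
    "era_subtype": rng.integers(0, 2, n),          # 1 = ERA subtype
    "orthopedist_visit": rng.integers(0, 2, n),    # 1 = orthopedist seen during referral
    "mother_postgrad": rng.integers(0, 2, n),      # 1 = mother with post-graduate level
    "distance_km": rng.uniform(5, 400, n),
})
df["log_distance"] = np.log(df["distance_km"])     # normalising transformation, as in the study
df["time_to_pr"] = rng.exponential(3, n)           # months from symptom onset to first PR visit
df["event"] = 1                                    # 1 = first PR visit observed (0 would be censored)

# Kaplan-Meier estimate of the time to first PR visit
kmf = KaplanMeierFitter().fit(df["time_to_pr"], event_observed=df["event"])
print(kmf.median_survival_time_)

# Univariate comparison with the log-rank test, e.g., ERA vs. non-ERA
era = df["era_subtype"] == 1
print(logrank_test(df.loc[era, "time_to_pr"], df.loc[~era, "time_to_pr"],
                   event_observed_A=df.loc[era, "event"],
                   event_observed_B=df.loc[~era, "event"]).p_value)

# Multivariable Cox proportional hazards model; HR < 1 indicates a longer time to the PR visit
cph = CoxPHFitter()
cph.fit(df[["time_to_pr", "event", "era_subtype", "orthopedist_visit",
            "mother_postgrad", "log_distance"]],
        duration_col="time_to_pr", event_col="event")
cph.print_summary()  # hazard ratios with 95% CIs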
Of the 1342 JIA patients enrolled in the initial JIRcohort database, socio-economic factors and referral pathways were collected for 250 children (41 in Switzerland, 209 in France) in 20 centers (Additional file ).
Characteristics at first visit to PR centers
Characteristics of JIA patients and parents at first visit to a PR center are reported in Table . The median age at onset was 4.3 years [2.1; 8.4], and 76% of the children were female. The median distance from the patient's dwelling place to the PR center was 37 km [17; 82], with a maximum of 407 km, and more than half of the patients lived in an urban area (134/250). Regarding the educational attainment level of parents, 68% of mothers and 63% of fathers had a post-graduate level (university degree or equivalent). Fourteen percent of children had at least one parent working in a medical or paramedical profession (12% of mothers and 4% of fathers).
Time to referral
Time to access the PR center is reported in Table and broken down by JIA subtype. The most frequent JIA subtype was oligoarticular (oJIA) (50%), followed by polyarticular (pJIA) (22%) and enthesitis-related arthritis (ERA) (11%). The overall median time between onset and first PR assessment was 2.4 months [1.3; 6.9] and varied considerably across the JIA subtypes, from 1.4 months [0.6; 3.8] for children with sJIA to 5.3 months [2.0; 19.1] for children with ERA. More precisely, the median time between first symptoms and first visit to an HCP was very short (0.0 months [0.0; 0.7]), whereas the median time between this first consultation with an HCP and the first PR visit was 2.1 months [1.0; 5.0]. Only 47% of children were assessed by a PRst within 10 weeks after onset of symptoms (BSPAR guidelines), and about one quarter of the patients (27%) were seen by a PRst 6 months or more after first symptoms. Among ERA patients, the time to PR visit was more than 6 months for approximately half (48%) and more than 12 months for one third (33%).
Factors associated with delay in access to PR centers
Based on univariate analysis, an appointment with an orthopedist during the referral pathway and a diagnosis of ERA were significantly associated with a longer time before the first PR visit (HR 0.71 [95% CI: 0.53; 0.94] and HR 0.47 [95% CI: 0.30; 0.73], respectively) (Table ). By contrast, patients with an appointment with an ECP and those with a mother with a post-graduate educational level were more likely to experience a shorter time before the first PR visit (HR 1.36 [95% CI: 1.06; 1.75] and HR 1.38 [95% CI: 1.04; 1.83], respectively). Country and distance to the PR center were not associated with time to access a PRst (HR 0.96 [95% CI: 0.68; 1.34] and HR 0.92 [95% CI: 0.82; 1.02], respectively). In multivariable analysis, ERA subtype (HR 0.50 [95% CI: 0.29; 0.84]) and an appointment with an orthopedist (HR 0.68 [95% CI: 0.49; 0.93]) remained independent factors associated with a longer time to access a PRst, whereas a visit to an ECP was borderline significantly associated with a shorter delay (HR 1.31 [95% CI: 0.99; 1.72]) (Fig. ). Similarly, having a mother with a post-graduate educational level showed a trend toward association with a shorter time before the first PR visit (HR 1.32 [95% CI: 0.99; 1.78]). Based on univariate analysis, having a mother with a post-graduate level and living in a rural area were significantly associated with a shorter time between symptom onset and first visit to an HCP (HR 1.36 [95% CI: 1.02; 1.81] and HR 1.34 [95% CI: 1.00; 1.78], respectively).
Conversely, a diagnosis of ERA was significantly associated with a longer time (HR 0.45 [95% CI: 0.28; 0.70]) (Table ). Regarding the time between first consultation with an HCP and first PRst assessment, a longer distance from the patient's dwelling place to the PR center was associated with a longer time, while a diagnosis of sJIA was associated with a shorter time (HR 0.84 [95% CI: 0.75; 0.93] and HR 1.84 [95% CI: 1.17; 2.87], respectively) (Table ).
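To give an intuition for these hazard ratios: under a simple exponential (constant-hazard) assumption, which is not the model used in this study but makes the arithmetic transparent, the median time to an event scales with the inverse of the hazard, so an HR of 0.50 roughly doubles the median time. A hypothetical sketch:

import math

def median_time(rate_per_month: float) -> float:
    """Median of an exponential time-to-event distribution."""
    return math.log(2) / rate_per_month

baseline = 0.30          # hypothetical monthly rate of reaching a first PR visit
hr_era = 0.50            # multivariable HR for the ERA subtype reported above

print(round(median_time(baseline), 1))             # ~2.3 months at baseline
print(round(median_time(baseline * hr_era), 1))    # ~4.6 months with ERA: HR < 1 means a longer time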
The aim of this study was to highlight factors associated with a longer time to access a PRst in France and Switzerland. To our knowledge, this is the first multicenter study in these two countries analyzing access to PR care. Few data are available in France, where studies have covered limited geographical areas and focused mainly on clinical and biological characteristics . No studies had been conducted in Switzerland. In the present study, the median time to first PR visit was short (2.4 months) compared with other studies and close to the British guidelines (children with suspected JIA are to be assessed by a PR team within 10 weeks of symptom onset). However, the data show broad variability and an excessively long time to access PR centers for many patients (more than 6 months for 27% of patients, and more than 1 year for 14%), even though late referral is known to be associated with significant damage (both articular and ophthalmologic) in most JIA subtypes. This can also affect the quality of the relationship with the HCPs involved in disease management. As reported previously, the sJIA subtype is associated with prompt referral due to eruptive symptoms (fever, rash, profound asthenia, laboratory evidence of inflammation) . This is confirmed by our study, with a shorter delay between the first HCP and first PR visit. In contrast, children with the ERA subtype experienced a significantly longer time to access PR centers. Subtle presentations of JIA with indolent symptoms (e.g., enthesitis without swelling, low-grade laboratory inflammation, transient morning stiffness, and well-preserved function), as frequently described in the ERA subtype, led to a longer time to first PR visit and require specific training and experience from HCPs . In addition, it is possible that children with suspected ERA are less likely to bring these symptoms to the attention of family and HCPs. Although this form is generally accompanied by fewer sequelae when treatment is delayed, the negative psychological effects of delayed access to appropriate care, and doubts about the diagnosis for both patients and family caregivers, should not be overlooked. The presence of an orthopedist in the referral pathway was significantly associated with a longer time of referral to the PRst. However, in our study, the orthopedist referred the patient to a PRst in most cases (75%, data not shown). The time needed to get an appointment with an orthopedist and the invasive procedures (e.g., arthroscopy, bone biopsy) frequently performed by orthopedists could explain the overall increased time to the first PR visit . However, studies focusing on care pathways, taking into account the specific organization of health systems in each country, would be necessary. The care pathway for JIA patients, from first symptoms to appropriate diagnosis and care by a PRst, contains two successive intervals: (i) the interval between symptom onset and first assessment by an HCP, which depends mostly on patients' and families' personal and environmental characteristics, and (ii) the referral pathway, namely the interval between first consultation with an HCP and the first PRst assessment, which depends on physicians and on healthcare organization performance and effectiveness. Although previous research found a significant social gradient in health from early childhood , the impact of social determinants on children with JIA is poorly understood.
In Canada, higher levels of parental education seem to be associated with a shorter time to first PR consultation . Conversely, in the United Kingdom, socio-economic status was not correlated with time to first PR consultation. However, studies found a link between socio-economic factors and different types of pathways: patients with lower socio-economic status were mainly referred to the PRst via the ECP, while patients with higher socio-economic status were mostly referred by a general pediatrician . In the United States, community poverty was associated with delayed time to rheumatology care for patients with pJIA . In our study, there was a tendency toward shorter symptom duration among children whose mothers had a post-graduate educational level, but this correlation was not statistically significant in multivariable analysis. In our study, no association was observed between time to access a PR center and rural/intermediate/urban type of living location. This is consistent with two previous studies that took place in two different areas of France, a densely populated metropolitan area with the highest medical density in the country and a less populated area with lower medical density encompassing more rural areas , in which no differences were observed in access to PR care. Patients in rural areas were more likely to experience a shorter time between symptom onset and first visit to an HCP. Although this result may seem surprising (given the low medical density in rural areas), it could be explained by greater pressure on health services due to higher population density in urban areas. Several studies reported a correlation between socio-economic level and health literacy . The World Health Organization defines health literacy as the ability of individuals to gain access to, use, and understand health information and services in order to maintain good health . The impacts of low health literacy are multiple and adversely affect parents' ability (especially that of mothers, who are generally more involved in their children's health) to use health information, make health decisions for their child, and navigate the healthcare system (more medication errors, more emergency department use, etc.) . A broader concept of health literacy, such as that measured by the Health Literacy Questionnaire (HLQ), also includes the ability to actively engage with healthcare providers (sixth domain of the HLQ) . Patients (or parents in the present case) who are passive in their approach to healthcare (i.e., who do not proactively seek or process information, advice, and/or service options) tend to accept information unquestioningly. In contrast, parents who are proactive about their health, and feel in control in relationships with healthcare providers, are able to seek advice from additional healthcare providers when necessary and until they are satisfied. Recent qualitative studies have shown how parents' determination and self-confidence are important in offsetting the insufficient knowledge of non-specialist physicians. Parental involvement is a key factor in referral to appropriate care, and it has been shown that parents play a central role at every step of the referral pathway . As reported by Rapley et al., besides the experience and skills of health professionals, "parental persistence" (i.e., persistence in seeking action, such as repeated visits to primary and hospital care to report stubborn symptoms in their child) is crucial for JIA children to access appropriate care .
In the present study, although we did not directly measure health literacy, we used the educational level of the mother and father as a proxy for health literacy. We observed a significantly shorter time between first symptoms and access to the first HCP when mothers had a post-graduate educational level, but there was no association between the mother's educational level and the time between the first HCP visit and access to a PR center. This is consistent with the fact that the time lag before referral was more often attributable to the referral step after the first HCP than to a long time before accessing the first HCP (2.1 months [1.0; 5.0] vs. 0.0 months [0.0; 0.7]). This is in line with Shiff et al., who found that children with rheumatic disease saw an HCP within a median of 2 weeks after onset of symptoms, whereas the median time to first PR visit was 24 weeks . These results suggest that delays in access to a PR center depend mostly on healthcare organization rather than on patients' literacy. This study has some limitations, mainly owing to the small number of participants in the JIRcohort database whose data were sufficiently complete to be usable (only 250 JIA patients across 20 centers). However, the characteristics of the JIA children included in the study are closely similar to the data reported for Europe (such as frequency of JIA subtypes, age at symptom onset, and female/male ratio) . Moreover, the data observed on the educational attainment level of the parents in our sample are fairly close to the results for the general population in France and Switzerland . The date of the referral letter from the referring physician was not collected, so we could not evaluate the time lag between referral and assessment by the PRst. Another limitation is the absence of direct measurement of parents' health literacy. Although there is an overall correlation between educational level and health literacy, some individual domains of health literacy measured by the HLQ may be less closely correlated: the ability to engage actively with professionals may depend on other personal characteristics such as self-confidence or psychosocial competencies. Educational attainment level may thus not be an accurate proxy for health literacy. It would be of interest to supplement these conclusions with data from other sources. However, data on the psychosocial determinants of delay are scant in other databases. Finally, the dates of symptom onset and of HCP assessment were reported by the parents, so recall bias cannot be excluded. A strength of this study is that it was conducted in a prospective cohort based on 20 centers, which lessens the risk of selection bias observed in single-center studies.
In France and Switzerland, the time to first PR visit was most often short compared with other studies, and close to the British recommendations. However, this time was still too long for many patients. We did not observe any social inequities in access to a PRst, but this study does show the need to improve referral pathways and access to a PRst for JIA patients. Qualitative studies are now needed to explore the reasons for the delay between the first visit to a practitioner and appropriate referral to a PRst.
Additional file 1. Location of the 20 pediatric rheumatology centers (using Google Maps).
Perspectives for Cancer Care and Research in Central and Eastern Europe | e9f91a3d-90ed-4506-836d-9b6dc820ceff | 10015746 | Internal Medicine[mh] | The present article reviews the domains of medical oncology education, human resources in oncology, cancer care, and clinical research in Central and Eastern Europe (CEE) in order to comprehensively assess the current situation and needs, describe important initiatives, and propose ways of improving cancer outcomes in the region.
The best care for cancer patients is achieved when diagnostic procedures and different treatment modalities are provided by well-trained and qualified specialists working together in a multidisciplinary team. Education in oncology starts at medical school as undergraduate education. Since there is no uniform curriculum for undergraduate education in oncology in Europe, substantial differences in undergraduate oncology teaching may appear. While the results of a survey performed at the turn of the century revealed that oncology was present in the medical students' core curricula in only 41 out of 100 institutions taking part in the survey, data from a more recent period show progress . The survey conducted among academic teachers at 32 institutions from 19 European countries revealed that oncology was taught as either an independent discipline or along with other disciplines at all institutions in all participating countries. This includes 7 CEE countries (Croatia, Czech Republic, Poland, Romania, Serbia, Slovakia, and Slovenia) . Nevertheless, it is important to note that the time devoted to oncology-related topics varied significantly among institutions and countries. At this point, CEE countries seem to perform quite well, with Polish institutions reporting the highest number of hours (approx. 120) devoted to teaching oncology per academic year among the participating countries. In all other CEE countries except Hungary, the number of hours devoted to oncology appears to be at the upper end of the average range. In contrast to undergraduate education, there is an aspiration to deliver uniformly high-quality cancer care across Europe by harmonizing postgraduate education in oncology. The European Union of Medical Specialists (UEMS) set up European training requirements for the specialty of radiation oncology and the specialty of medical oncology , while recommendations on curricula, including the length of the specialization and the competencies that need to be acquired, have been made by various societies. The European Society for Radiation Oncology (ESTRO) developed a core curriculum for radiation oncology , while the European Society for Medical Oncology (ESMO), in collaboration with the American Society of Clinical Oncology (ASCO), developed a global curriculum in medical oncology . Both specialties were quickly recognized and introduced in the vast majority of CEE countries. Based on a survey performed in 2014, radiation oncology was recognized as a standalone specialty in 7 (Bulgaria, the Czech Republic, Hungary, Poland, Romania, Serbia, and Slovakia) out of 8 participating CEE countries . Only in Croatia did radiation oncology remain part of the common specialty of Clinical Oncology. Unfortunately, some CEE countries with a long tradition of specialty training in radiation oncology, such as Slovenia, where radiation oncology was recognized as a standalone specialty as early as 1957, were not part of the survey. Given the fact that radiation oncology was recognized as a standalone specialty in only 21 out of 28 participating European countries, the situation in CEE countries seems good. The length of the specialization was in line with the recommended 5 years in the vast majority of CEE countries, and the number of new trainees per year in CEE countries did not differ much from the numbers in the Western European (WE) countries.
Even though medical oncology was recognized as a separate specialty by the European Union (EU) only in March 2011, many CEE countries were among the pioneers in setting up medical oncology as an independent specialty much earlier. In Slovenia, medical oncology was recognized as an independent specialty with an established national curriculum already in 2000, while in Croatia, subspecialty training in medical oncology was set up even earlier, in the late nineties. Based on the results of a global survey performed by the ESMO/ASCO global curriculum (GC) working group in 2019 , medical oncology was recognized as a standalone specialty in all CEE countries (Bosnia and Herzegovina, Bulgaria, Croatia, Czech Republic, Hungary, Montenegro, Poland, Romania, Serbia, Slovakia, and Slovenia). At that time, the ESMO/ASCO GC on medical oncology was fully or partly adopted in the vast majority of CEE countries (Fig. ). Of the 11 CEE countries, only Bulgaria, Poland, and Romania had not adopted the ESMO/ASCO GC into their national curricula. In all CEE countries, the duration of training in medical oncology was in line with the EU Directive and ESMO/ASCO GC recommendations, i.e., a minimum duration of 5 years. It is encouraging that the share of countries recognizing medical oncology as a standalone specialty in the CEE region is comparable to the WE region and much higher than the average recognition rate of 75% worldwide. In terms of ESMO/ASCO GC adoption, the adoption rate in CEE countries is quite comparable to the 68% rate reported for all participating countries, as well as to the adoption rate observed in the WE countries. There is still room for improvement in the adoption of the GC in the whole of Europe, and by and large in CEE countries, where awareness of the GC's ability to unify training and decrease inequities in cancer care still needs to be strongly advocated. In addition, only in Slovenia among the CEE countries and Switzerland among the WE countries is the ESMO examination a mandatory part of the final exam in medical oncology. The incorporation of the European examination into the final exam might enable CEE countries to further improve professional standards and qualifications and ease the free movement of well-trained specialists across borders. Surgical oncology is not recognized as a specialty in the EU. In most European countries, surgical specializations are organ based and are likely to remain so in future. To increase professional competence in oncology, the European Society of Surgical Oncology (ESSO) developed a core curriculum in surgical oncology . The curriculum contains all aspects of multidisciplinary cancer care, with a focus on surgery, needed for future candidates who plan to train and eventually sit the European exam in surgical oncology. The exam is organized jointly by the Oncology Division of the European Board of Surgery (EBS) and UEMS and offers candidates the European Board of Surgery Qualification (EBSQ) in surgical oncology. Unfortunately, the participation of surgeons from CEE countries in this exam is not as high as that from other European countries. In recent years, mainly candidates from Slovenia, Hungary, and Croatia took part in the exam, while participation from other CEE countries was poor or nonexistent . Therefore, increased awareness of this exam and encouragement of young surgeons from CEE to take the exam are warranted .
In addition to undergraduate and postgraduate education, there are multiple courses on different topics in oncology at which oncologists from the CEE region can gain additional knowledge and skills. The European School of Oncology (ESO) organizes specific courses dedicated to young oncologists from the CEE region, while ESMO, in the period 2016–2019, led a particularly useful Integration Fellowship program dedicated to young oncologists from the countries that joined the EU after 2004. The interest of young oncologists from the CEE region in those courses and fellowships is rather high; however, because places are awarded competitively, only the most committed candidates manage to secure them. The education of other professions taking part in multi-professional cancer care in CEE seems quite comparable to the education in WE countries as well. The ESTRO GC makes the role of nonmedical experts more explicit , and in countries where radiation oncology is recognized as a specialty, as it is in CEE countries, physicists and radiotherapy technicians are also educated according to those proposals. Oncology nurses from most CEE countries are members of the European Oncology Nursing Society (EONS), and some of them took a leading role in preparing the EONS Cancer Nursing Education Framework program . The status of oncology pharmacy varies widely both globally and at the European level. The need for harmonized and EU-recognized oncology pharmacist training and education was identified as an important issue. An important step forward was the creation of a comprehensive educational program for pharmacists, named the European Specialization in Oncology Pharmacy, by the European Society of Oncology Pharmacy . This is especially important in countries where oncology pharmacy as a specialty is still developing, as is the case in many CEE countries, where board-certified oncology pharmacists practicing in oncology are not yet common. Taken together, the overall conditions for the education of oncologists in the CEE region seem rather good. Both basic specializations, radiation oncology and medical oncology, are recognized in almost all CEE countries, and oncologists have many opportunities to upgrade their knowledge within the framework of various courses and fellowships. Additionally, the internet and social networks nowadays supply unlimited access to international literature and learning resources. Despite this, an open question remains as to why these relatively satisfactory educational conditions do not result in a level of cancer care in CEE comparable to that in WE countries. The reasons certainly include the lack of personnel and money, but important reasons also remain the insufficient involvement of oncologists from the CEE region in international research and development activities after completing their formal training and a lack of education and skills in the fundamentals of cancer care organization adjusted to available resources. To improve cancer care in CEE, it is imperative to focus efforts on improving the education of both oncologists and other health care providers in the field of cancer care organization. With this in mind, it is truly encouraging that various organizations, such as ESMO and ESO, provide educational leadership programs for young oncologists, and especially that CECOG developed the so-called Open CEEiling leadership program for future leaders in oncology for young oncologists from the region of Central and Southeastern Europe.
Higher wealth and higher health care expenditures are associated with both increased cancer incidence and decreased cancer mortality within the EU . Inequality in care leads to up to 40% higher cancer survival rates in WE than in CEE . Postcommunist political, economic, and social transformation has led to a gradual improvement in health system outcomes in CEE countries over the last few decades . Human resources not only account for a substantial proportion of health care expenditures but also represent the most important input into the provision of health care . The positive recent overall trends in health system outcomes in CEE countries may come at the cost of overwhelming workload pressure on health professionals, including oncologists, and on the corresponding health care infrastructure. Medical oncologists play an essential role in the multidisciplinary oncologic team, which is required for high-quality cancer care and cancer research . The suggested international maximum annual caseload of new patient consults per medical oncologist is between 150 and 175 . The key findings of the recent global study that also investigated the clinical workload of European medical oncologists were the following: (i) the median number of annual consults per medical oncologist is 225 in CEE countries compared with 175 in WE countries (p < 0.001), (ii) the proportion of medical oncologists seeing more than 300 consults/year is 35% in CEE countries compared to 18% in WE countries, (iii) the median number of patients seen in a full clinic day is 25 in CEE countries and 15 in WE countries (p < 0.001), and CEE medical oncologists report spending a median of 25 min per new consultation compared with 45 min in WE (p < 0.001) . It is concerning that half of medical oncologists in the CEE see several hundred new cases instead of the proposed reasonable new case volumes (175–225 new consults per year) . Moreover, the gap in the workload of medical oncologists between CEE and WE countries might widen further in the near future. In WE countries, the mean annual increase in the total number of medical oncologists was 5.3% (range: 1.8–8.7%) during the last decade . Unfortunately, no comprehensive information about future planning of the medical oncology workforce is currently available for CEE. Radiotherapy is a capital-intensive cancer treatment modality that requires both sufficient infrastructure and specialized, trained personnel, including radiation oncologists, medical physicists, and radiation therapy technologists. The aim of the ESTRO QUAntification of Radiation Therapy infrastructure and Staffing needs (QUARTS) project was to provide health care planners and policymakers with objective estimates of infrastructure and staffing needs for radiotherapy . It was suggested that one linear accelerator could serve 450 patients annually, whereas the personnel needs were defined as one radiation oncologist per 200–250 patients and one physicist per 450–500 patients . In 2010, the ESTRO Health Economics in Radiation Oncology (HERO) project was launched to conduct a detailed evidence-based estimation of radiotherapy infrastructure and personnel in Europe. The final results of this project show that the average staffing figures in Europe are now consistent with, or even more favorable than, the QUARTS recommendations . However, there are large variations between countries for most parameters studied.
For example, averages and ranges for personnel numbers per million inhabitants are 12.8 (range: 2.5–30.9) for radiation oncologists, 7.6 (range: 0–19.7) for medical physicists, and 26.6 (range: 1.9–78) for radiation therapy technologists. Radiation oncologists on average treat 208.9 courses per year (range: 99.9–348.8), physicists and dosimetrists conjointly treat 303.3 courses (range: 85–757.7), and radiation therapy technologists 76.8 (range: 25.7–156.8). In less affluent countries, including CEE, all personnel categories treat more courses per annum than in wealthier WE countries . The results of the ESTRO HERO project reflect differences between CEE and WE countries in cancer incidence, socioeconomic situation, stage of technology adoption, and the different professional roles and responsibilities within each country. Currently, there are no high-level data available on workload and staffing in surgical oncology and oncologic nursing in the CEE. However, it is encouraging to see recent initiatives for intensive networking of surgeons in the CEE. The Central Eastern European Breast Cancer Surgical Consortium (CEEBCSC) was officially established in 2018, and the main aims of the consortium are to increase the quality of breast surgical care in the region and to facilitate international breast surgical scientific relationships and the education and training of breast surgeons . In the coming years, workforce needs across Europe may further increase due to professional burnout. Published data suggest that the rate of burnout has been increasing among physicians over time . According to the results of a recent survey, 72% of oncologists in the CEE were at high risk of burnout, and younger oncologists are the most vulnerable group . An analysis by the European Commission found that the migration of health professionals is especially pronounced from Eastern and Southern Europe to wealthier Western and Northern European countries. It is also concerning that 39–85% of medical students from Eastern Europe plan to seek employment abroad after graduation . These observations may negatively affect medical oncology not only directly but also indirectly, as contemporary cancer care is becoming strongly dependent on other segments of health care. In the following years, demand for oncology services is expected to rise further as a result of population aging, the introduction of new technologies and novel therapeutic drugs, and improvements in cancer survival rates. European health policymakers and national governments should jointly initiate appropriate activities to reduce disparities between CEE and WE countries and to ensure a sustainable future for the oncology workforce in Europe, especially in the CEE.
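The QUARTS-style benchmarks quoted above lend themselves to simple capacity arithmetic. The sketch below estimates equipment and staffing needs for a hypothetical annual caseload; the ratios are those cited earlier (one linear accelerator per 450 patients, one radiation oncologist per 200–250 patients, one physicist per 450–500 patients), while the caseload figure is purely illustrative:

import math

# QUARTS-style benchmarks quoted above (patients served per resource per year)
PATIENTS_PER_LINAC = 450
PATIENTS_PER_RAD_ONC = (200, 250)     # recommended range
PATIENTS_PER_PHYSICIST = (450, 500)   # recommended range

def required(caseload: int, per_resource) -> str:
    """Number of resources needed to cover an annual caseload (rounded up)."""
    if isinstance(per_resource, tuple):
        lo = math.ceil(caseload / per_resource[1])  # optimistic end of the range
        hi = math.ceil(caseload / per_resource[0])  # conservative end of the range
        return f"{lo}-{hi}"
    return str(math.ceil(caseload / per_resource))

annual_courses = 12_000  # hypothetical national radiotherapy caseload
print("linear accelerators:", required(annual_courses, PATIENTS_PER_LINAC))          # 27
print("radiation oncologists:", required(annual_courses, PATIENTS_PER_RAD_ONC))      # 48-60
print("medical physicists:", required(annual_courses, PATIENTS_PER_PHYSICIST))       # 24-27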
Several studies have demonstrated profound disparities in cancer care across Europe, which mainly result from wealth differences between WE and CEE countries . Until the early 1990s, the CEE countries were part of the former Soviet Union or were under its influence. This resulted in their socioeconomic underdevelopment. After the collapse of the communist regimes, countries of this region have made enormous economic and civilizational progress, and some became high-income countries. Nevertheless, on average, the financial situation still greatly favors WE. Most CEE countries have a social security-based health care system, delivering health care via public funds. Even after adjustment for purchasing power parity, per capita health care spending on cancer differed five-fold in 2019 between the richest and poorest EU countries: €70 in Romania versus €352 in Luxembourg . Annual per capita expenditures on cancer drugs ranged from around €13 to €16 in Czechia, Latvia, and Poland to about €92 to €108 in Austria, Germany, and Switzerland. Expenditures on oncology care in CEE countries have been steadily increasing, and differences in oncology spending across Europe have grown smaller. However, a rapid rise in treatment costs has largely nullified this increase. One of the prerequisites for improving cancer outcomes at the population level is the development of comprehensive national cancer control plans. However, by 2016, seven of 13 CEE countries had not developed cancer control plans, compared with 90% coverage in WE . Additionally, some CEE countries that have created plans have faced problems in their implementation. Finally, comprehensive population-based cancer registries are not available in all CEE countries, and their validity and reliability are uncertain. Approximately 50% of cancers can be prevented. However, many CEE countries cannot afford to implement effective prevention measures. Screening for cervical cancer, breast cancer, and colorectal cancer in most CEE countries has either been introduced late or not launched at all. Additionally, screening participation by targeted populations remains too low to reduce overall mortality from these malignancies. Access to novel anticancer drugs remains lower in CEE than in WE countries, and these figures have not changed over time . Again, the major factor contributing to inequity of access to anticancer medications is their cost and affordability . This also applies to countries within CEE, favoring those with higher incomes . A critical issue in medical oncology in CEE is an insufficient number of specialists, making their clinical workload substantially higher than in WE . Another gap in cancer care in Europe is radiotherapy. Many CEE countries face critical shortages of equipment, particularly state-of-the-art machines. The ESTRO HERO project showed a clear relation between socioeconomic status and the availability of radiotherapy equipment . The number of megavoltage units per million inhabitants ranged in 2014 from 1.4 in Albania and 1.8 in Bulgaria to 8.3 in Norway and 9.5 in Denmark. An important shortcoming of oncology care in CEE countries is their archaic organizational structure. For example, according to Eurostat, in 2016, CEE countries had, on average, a higher number of hospital beds than WE countries, e.g., 603, 314, and 215 per 100,000 population in Bulgaria, France, and Sweden, respectively.
However, this is not an indicator of abundance but rather of wasteful use of resources through maintaining Soviet-style hospital-based care instead of less costly ambulatory or day hospital treatment . The shortfalls mentioned above result in persistently poor cancer treatment outcomes in CEE countries. For colon cancer, the average 5-year survival rates in CEE and WE countries are 52% and 63%, respectively , and for breast cancer 75–77% and 82–87%, respectively . For all types of cancer, the 5-year survival rates range from 40% in Bulgaria to 64% in Sweden . Cancer incidence and mortality across Europe show significant differences; the overall incidence is higher in WE, whereas mortality is higher in CEE . Further, overall cancer mortality has generally been decreasing in WE, whereas in CEE it has reached a plateau or is growing . The mortality-to-incidence ratio, a surrogate for treatment outcomes, ranges from 0.30–0.37 in WE to 0.37–0.56 in CEE and is strongly related to GDP in each country . According to 2016 estimates, increasing overall cancer survival in countries with low rates to the EU median would have avoided approximately 50,000 additional cancer deaths per year . Lower efficacy of cancer treatment in CEE has often been attributed to variations in cancer detection and later stage at diagnosis. However, in a large observational study including 15 European countries, the risk of death in CEE countries was still higher after adjusting for age, sex, and cancer stage . These results strongly suggest inadequate cancer management as a major cause of poorer outcomes in CEE. Facing financial barriers and fundamental cancer care shortcomings, patients and their families in CEE have developed several coping strategies to access diagnostics and modern therapies. These include paying out of pocket, visiting a private practitioner, or resorting to informal payments or personal connections . Patients in CEE also express distrust of cancer treatment and its success. A good illustration of these attitudes is a survey comparing the public perception of cancer treatment in Poland and Austria . Both countries are members of the European Union but show large differences in health-related per capita spending. Polish patients, compared with Austrian patients, less frequently rated positively the overall treatment efficacy (29% vs. 80%), hospital care (44% vs. 84%), and ambulatory care (28% vs. 76%). Only 10% of Polish, compared to 48% of Austrian patients, believed that treatment offered by their health care system was equivalent to that in other EU countries.
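The mortality-to-incidence ratio used above is a simple quotient of annual deaths to annual new cases; a minimal sketch with purely illustrative figures (not data from the studies cited) is:

def mortality_incidence_ratio(annual_deaths: float, annual_new_cases: float) -> float:
    """Crude mortality-to-incidence ratio: a rough surrogate for treatment outcomes."""
    return annual_deaths / annual_new_cases

# Illustrative figures only: 33,000 vs. 48,000 deaths per 100,000 new cases
print(round(mortality_incidence_ratio(33_000, 100_000), 2))  # 0.33, within the WE range quoted above
print(round(mortality_incidence_ratio(48_000, 100_000), 2))  # 0.48, within the CEE range quoted above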
Clinical research is a complex process that relies on many organizations and factors. Owing to its multifactorial determinants, the assessment of research activity remains controversial . In a specific region or country, access to clinical trials may partially reflect clinical research performance. Important concerns about performing clinical trials in the CEE region were put forward in the early 2000s , but nowadays the situation has dramatically changed. However, some disparities in access to oncology clinical trials are still present across European countries. For example, during 2009–2019, 18,454 clinical trial entries were recorded in Europe, of which 78% were phase II and III . In the CEE countries, the distribution of clinical trial entries is heterogeneous. Fewer than 200 trial entries were noted in Bulgaria, Slovakia, Serbia, Croatia, Lithuania, Latvia, Estonia, Slovenia, Bosnia and Herzegovina, Macedonia, Albania, and Montenegro; between 200 and 500 trial entries were documented in Romania; and between 500 and 1000 trial entries were noted in Hungary, the Czech Republic, and Poland. Of note, countries with >1000 trial entries are found only in WE. When the distribution was adjusted for the number of inhabitants per country, the heterogeneity was preserved. Per 100,000 inhabitants, <1 clinical trial was noted in Bosnia and Herzegovina, North Macedonia, Montenegro, and Albania; between 1 and 3 clinical trials were recorded in Serbia, Romania, Croatia, Poland, Slovenia, Lithuania, Bulgaria, Slovakia, and Latvia; and between 4 and 6 in Estonia, Hungary, and the Czech Republic. Of note, two countries stand out with the highest number of clinical trials/100,000 inhabitants: Hungary with 5.26 and the Czech Republic with 5.28. Notably, the majority of countries with 5–10 trials/100,000 inhabitants belong to WE. When the number of clinical trials/100,000 inhabitants was evaluated against gross domestic product (GDP), a positive correlation was found across European countries in general. However, discrepancies remain in the CEE region, with consistent differences in the number of clinical trials at similar average GDP per capita. For instance, in the interval of 15–20k USD/capita, there are 5.26 and 5.28 trials/100,000 inhabitants for Hungary and the Czech Republic but 2.18 and 2.8 trials/100,000 inhabitants in Poland and Slovakia. At the European level, a positive correlation between the number of trials and cancer incidence was found. In the CEE region, this correlation could not be confirmed. For cancer incidences between 550 and 650 (age-standardized rate/100,000), we find the Czech Republic and Hungary with the highest number of trials (>5/100,000) compared with Croatia (1.84/100,000), Slovenia (2.47/100,000), and Slovakia (2.8/100,000). The dynamics of the number of clinical trials between 2010 and 2018 are also heterogeneous in the CEE region. Only 2 countries showed a positive growth rate, Poland (growth rate 0.34) and the Czech Republic (growth rate 0.24), whereas all other CEE countries showed a negative trend, suggesting an overall contraction in the number of clinical trials during this period. Another way of evaluating research performance is to look at the number of published articles and their international impact. Such an evaluation was performed using the Web of Science for the Science Citation Index Expanded and the Proceedings for the years 2007–2016 .
The distribution of oncology-related papers, the annual average percentage growth, and cancer research activity as a fraction of all biomedical research in the CEE countries are presented in Table . Poland is by far the leading country, with the highest number of published papers, followed by the Czech Republic and Hungary. On the other hand, the highest growth rate was recorded in Romania and the Baltic countries, whereas the lowest was found in Croatia and Bulgaria. Interestingly, cancer research activity as a fraction of all biomedical research looks fairly similar across the majority of CEE countries (around 10–12%), except for Estonia, where it was 4.5%. Research outputs tend to correlate fairly well with GDP and population size. In the majority of CEE countries, the amount of international cooperation with other EU states is around 60–78%, except for Latvia and Estonia, where it is close to 50%. For most of the CEE countries, international collaboration increased in the period between 2012 and 2016 compared with 2007–2011, especially in Bosnia and Herzegovina, Slovenia, and Croatia. On the other hand, the impact of the published research activity, according to citation scores, shows a different pattern (Table ). The Czech Republic and Hungary are still at the top (with mean "world-scale" [WS] scores of 40.6 and 33.1). However, Poland (first by number of papers) had a mean WS score of 26.8, whereas the Baltic countries (last by number of publications) had high WS scores, in the range of 21.8–31.2. When the amount of cancer research was evaluated according to the main anatomical sites and the percentage of their burden, malignant melanoma, central nervous system tumors, and blood cancers dominated relative to other cancers, whereas pancreatic, gastric, and esophageal cancers were underrepresented by a factor of at least two and lung cancer by a factor of more than four. When the citation scores were classified according to research domain, the most consistent international impact was obtained for targeted treatments and other "clinical trials" (5-year mean citation score, actual citation impact >80%), whereas other domains such as epidemiology, screening, palliative care, and quality of life have a low representation (actual citation impact <20%). One can conclude that in the CEE countries, participation in clinical trials is reasonable but needs improvement. Factors associated with the negative growth rate in some countries should be identified, and specific measures should be taken to counteract this trend. The positive experience of the Czech Republic and Hungary needs to be shared with the rest of the CEE countries. National oncology societies and patient organizations should exert pressure on governmental and health authorities to treat research activity as part of the quality of the health care system and as a strategy for providing more affordable cancer care . Therefore, they must assume the responsibility of establishing effective means to sustain the logistical and financial needs of current research sites and to stimulate the foundation of new research units in academic and nonacademic medical institutions. This support should be the subject of specific "official" plans, with clearly defined objectives, initiatives, timelines for implementation, financial resources, and responsibilities.
In Romania, for example, the recently elaborated "National Cancer Control Plan" recognizes the lack of investment in clinical research, but specific directions of action are not clearly stated. Tumor registries should be considered a priority, and more resources should be allocated in countries where this important reference tool is not operational. Epidemiological studies are mandatory for providing data on the country-specific tumor burden and morbidity trends, which may be used to elaborate specific clinical trials oriented toward addressing national needs. Cooperation with other EU countries is paramount, but some region-specific drawbacks can also be identified through regional cooperation. Besides industry-sponsored trials, academic-driven research is expected to contribute more, especially in the fields of palliative care, quality of life, and screening, which are underrepresented in the CEE space compared with other EU countries.
Over the past 30-plus years, CEE countries have made enormous economic and societal progress. Nevertheless, challenges persist, especially in the health care sector. Research has been conducted to better understand the causes of these challenges and to propose solutions. As a consequence, educational initiatives aiming at standardized, high-quality education in medical oncology, surgical oncology, and radiation oncology have been implemented. Despite these educational efforts, and in conjunction with economic precarity, health care professionals in the region face a higher workload, a higher risk of brain drain, and lower research activity compared with WE countries. However, activities are under way to address these issues through national action plans that direct funding into oncology-related education, research, the purchase of equipment, and the attainment of modern hospital organization and structures.
T.C.: receipt of honoraria or consultation fees from AstraZeneca, Boehringer Ingelheim, Bristol-Myers Squibb, Roche, MSD, Pfizer, and Takeda; B.S.: honoraria and consultancy fees from Astellas, Janssen, and AstraZeneca; M.D.: advisory role and speaker fees from Aventis, Astellas, AstraZeneca, Amgen, Ipsen, Janssen, Novartis, Pfizer, Roche, Sandoz, BMS, MSD, Eli Lilly, Servier, and Takeda; J.J.: advisory roles in AstraZeneca, MSD, and Exact Sciences. C.T. and C.C.Z.: institution (CECOG): BMS, MSD, Pfizer, AstraZeneca, Merck KgA, Amgen, Servier, Eli Lilly, Takeda, Daiichi Sankyo, Roche, Boehringer Ingelheim, Celgene, and Halozyme. C.C.Z.: consultancies and speaker's honoraria from Athenex, MSD, Imugene, AstraZeneca, Servier, and Eli Lilly; patents for Imugene.
The authors did not receive funding for this work.
T.C. contributed the chapter on education of oncologists in CEE. B.S. wrote the chapter on human resources in oncology in CEE. J.J. prepared the chapter on cancer care in CEE. M.D. contributed the chapter on clinical research activity in CEE countries: current status and further perspectives. C.T. agreed to be accountable for all aspects of the work. C.T. and C.C.Z. contributed to drafting the work, revising it critically for important intellectual content, and the final approval of the version to be published.
Navigating the electronic health record in university education: helping health care professionals of the future prepare for 21st century practice | 50047142-c80f-414a-8cb0-697d97589230 | 10016237 | Patient-Centered Care[mh] | |
Cost-effectiveness of running a paediatric oncology unit in Ethiopia | 1a8a84ac-283a-4037-9952-2d7722465c27 | 10016307 | Pediatrics[mh] | Globally, childhood cancer (age 0–19 years) represents 0.5%–4.6% of the total cancer burden in a population, and nearly 90% of this burden falls on low and middle-income countries (LMICs). In 2017, childhood cancer represented a disease burden of 11.5 million disability-adjusted life years (DALYs) globally and ranked as the sixth and ninth leading causes of disease burden in total cancer and childhood disease, respectively. Over the past few decades, high-income countries have dramatically improved the treatment outcomes of childhood cancers. In the UK, for example, the 5-year survival rate has increased from less than 30% in the 1960s to almost 80% on average in the 2000s. By contrast, survival rates in Africa generally remain lower than 20%, and these avoidable deaths are largely due to late diagnosis, misdiagnosis, lack of access to quality therapeutic and supportive care, high treatment abandonment rate, treatment adverse effects and avoidable high rate of relapse. In general, there is a significant lack of reliable data on the disease burden of childhood cancers in Ethiopia. The latest estimates from GLOBOCAN 2018 put the incidence of cancer among children aged 0–14 at 3800 cases annually, or 8.9 per 100 000 children. Another study on cancer incidence in Ethiopia estimated 3707 annual cases as of 2015. The most common childhood cancers in Ethiopia are acute lymphoblastic leukaemia (25.7%), non-Hodgkin’s lymphoma (8.9%), rhabdomyosarcoma (8.9%), Wilms tumour (8%) and neuroblastoma (7.8%). Sadly, as in other low-income countries (LICs), most childhood cancers in Ethiopia are not successfully treated. One Ethiopian study examined all children below 15 years of age admitted to the paediatric wards of Gondar University Hospital due to cancer in 2010–2013 and found that only 20% improved, while 65% were discharged without improvement and 7% died in the hospital. The main reason for discharge was the unavailability and unaffordability of chemotherapeutic drugs. In addition to the challenge of obtaining supplies and the unaffordability of treatment, there is also a large gap in the availability of equipped facilities and trained staff. As of 2019, Ethiopia had only six qualified paediatric hemato-oncologists for the entire nation, and access to diagnostic or treatment centres is very limited. Until recently, Tikur Anbessa Specialized Hospital (TASH) had the country’s only paediatric oncology unit. Cognizant of these factors, the Ethiopian Federal Ministry of Health (FMoH) recently developed a National Childhood and Adolescent Cancer Control Plan (NCACCP) for the years 2019–2023 with the aim of improving survival rates through early detection and diagnosis, quality treatment and supportive care. The overall goal is to achieve at least a 40% cure rate for common and curable childhood and adolescent cancers. The timing of the NCACCP plan aligns with the WHO Global Initiative for Childhood Cancer, launched in 2018, which aims to improve survival to at least 60% and to decrease cancer-related suffering for all children with cancer by 2030. One means by which the FMoH aims to achieve these targets is by increasing the number of fully equipped and functional paediatric oncology centres in the country from three in 2019 to eight before the end of 2023. 
In general, there is limited evidence on the cost, cost-effectiveness and affordability of paediatric cancer units in LMICs, but a few studies have found that treatment of certain paediatric cancers can be highly cost-effective in such settings. A 2019 systematic review of childhood cancer treatment in LMICs indicates that the cost per DALY averted could range from US dollars (USD) 22 to 4475, which is less than the gross domestic product (GDP) per capita of the studied countries, indicating that selected interventions are cost-effective; the wide range of results is explained by differences in cost-component accounting among studies. Similarly, a study conducted in 2021 in four African countries (Kenya, Zambia, Nigeria and Tanzania) found that costs per DALY averted were less than 0.3 times the GDP per capita of Tanzania and Zambia. A 2013 study on the cost-effectiveness of acute lymphoblastic leukaemia and Burkitt's lymphoma treatment in Brazil and Malawi concluded that running a paediatric oncology unit in LMICs would be highly cost-effective by the standard of the WHO-CHOICE cost-effectiveness threshold. Other studies conducted at paediatric oncology units in El Salvador and Ghana support these findings, with cost per DALY averted estimates of USD 1624 and USD 1034, respectively, which is very cost-effective according to the countries' cost-effectiveness thresholds as determined by the WHO-CHOICE framework. Despite this promising evidence from other LMICs, a need remains for more country-level evidence, because disease burdens, patients' survival rates, cost-of-care profiles and willingness to pay (WTP) in Ethiopia differ from those in other LMICs. Furthermore, local cost-effectiveness evidence could enhance advocacy, trust and policy prioritisation for childhood cancer programmes in the national priority-setting process. As an example, the Ethiopia Essential Health Service Package (EEHSP) classifies most childhood cancer diagnostic and treatment services as either low or medium priority despite the aspirational goals of the NCACCP and the recent global attention and advocacy for countries to invest in childhood cancer control; this represents a setback in Ethiopia's childhood cancer control efforts, which will continue to be underfinanced and outside the leadership's attention. These priority rankings were partly influenced by a lack of contextualised cost-effectiveness evidence, and the decision was based on experts' judgement. Therefore, this research aimed to fill the local evidence gap regarding the cost-effectiveness of childhood cancer treatment (specialised paediatric oncology care delivery) to inform the revision of the EEHSP and harmonise the conflicting priority levels of childhood cancer treatment between the NCACCP and the EEHSP. Study setting Ethiopia, a country with a population close to 110 million in 2019, formerly had only one paediatric oncology unit nationally, located at TASH in Addis Ababa, Ethiopia's capital. Recently, three additional paediatric oncology centres (in Jimma, Gondar and Mekelle University Hospitals) were added. The costing part of this study was conducted at TASH, which has 81 clinical departments, a 735-bed capacity and close to 500 000 outpatient department (OPD) visits per year in 2019. TASH's paediatric oncology centre has a capacity of 42 beds, and most suspected cases of childhood cancer (age <15 years) across the country have until recently been referred to this centre.
The paediatric oncology unit is financed mainly by the government. The unit has an inpatient department embedded in the main compound of TASH and a satellite clinic proximal to TASH (around 1 km away). The satellite clinic serves mainly as an OPD but also provides inpatient services for short admissions to administer chemotherapy. Although the paediatric oncology unit is far from ideally staffed and equipped, it has paediatric oncologists, nurses trained in paediatric oncology services, social workers and dedicated pharmacists. Some clinical support services are shared with other departments, such as the laboratory, pharmacy, imaging, pathology, surgery, intensive care unit (ICU), emergency, radiotherapy and blood bank, as well as non-medical central services, such as food, laundry, utilities (eg, electricity and water) and other operational costs. Decision analytic model We built a decision analytic model—a decision tree—to estimate the cost-effectiveness of running a paediatric oncology unit compared with a do-nothing scenario from a provider perspective. As time and recurrence are important considerations in shaping the natural course of cancer, state transition models (cohort-level or individual-level microsimulations) applied to specific childhood cancer types would have been an ideal approach, but that would require very detailed epidemiology and effectiveness data for each cancer type from Ethiopia, or at least from similar settings, to properly map the various clinical scenarios of patients over time (eg, remission, disease progression, recurrence, death) and justifiably populate the state transition models. Lacking such data, we used a decision analytic model and limited the scope of the study to providing a gross overview of the cost-effectiveness of paediatric oncology care (at a service-platform level) compared with no paediatric oncology care, to inform the national-level policy dialogue. Cancer-specific cost-effectiveness will be incorporated and addressed as more data become available in the future. We created a generic model simulating a child with cancer (without specifying the diagnosis) who receives services from the paediatric oncology unit (labelled as paediatric oncology care) compared with a do-nothing scenario (labelled as no paediatric oncology care). To estimate costs and effects, the model depicts 2 years of treatment (the average cancer treatment duration) divided into 8-month treatment intervals. We considered the average treatment duration to be around 2 years, as acute lymphoblastic leukaemia (which can take more than 3 years of treatment) was the dominant type of cancer at TASH, and we took estimates from other centres with comparable cancer patterns. An 8-month treatment interval was chosen because the reported median time for events to occur (abandonment, or death related to relapse, disease progression, treatment toxicity or background mortality) is around 8 months. For the no paediatric oncology care scenario, we assumed that all patients would die at the end of 6 months. For cured children, our model assumes that some survivors will develop late-treatment chronic complications that will affect their quality of life and shorten their life expectancy compared with other children, who face only background mortality.
Two outcomes—survival (event-free survival (EFS)) and death (non-survival)—were used to estimate costs and effects at the end of each 8-month treatment interval, and the probabilities of EFS and death were taken from a literature review of similar settings. Abandonment, a significant problem in Ethiopia (around 34%), was treated as an event and captured as equivalent to death in our model for the following reasons: (1) most childhood cancer patients in Ethiopia and LICs are diagnosed at a late stage (stage 3–4), and most patients abandon care at an early stage of the treatment phase (due to refusal to start or early discontinuation); thus, the chance of survival after abandonment is likely very low; (2) TASH was the only oncology centre in Addis Ababa, making it unlikely that children would find alternative, better treatment elsewhere in the country after abandoning care at the oncology unit unless they travelled abroad; (3) if children accessed treatment in private health facilities (in the country or abroad), the cost would fall on the patients' guardians and could not be captured in our model, which takes the provider perspective. Surviving patients were assumed to have lower disability than non-surviving patients in each treatment interval. Surviving patients in each treatment interval were also assumed to have better utility than in their earlier treatment interval, to account for response to treatment and reduced risk of treatment-associated toxicity. Hence, the disability weight progressively fell as patients moved from the first 8-month interval to the second (9–16 months), the third (17–24 months), and cure. The disability weight in the first 8-month treatment interval was 0.37, while it was 0.29 at 9–16 months, 0.20 at 17–24 months and 0.07 once cured. The disability weights are taken from the 2019 Institute for Health Metrics and Evaluation estimates for childhood cancer and are measured on a scale of 0–1, in which 0 equals perfect health and 1 equals death. Model parameter inputs and assumptions The cost-related model parameters were generated through primary data collection (described below), and the health benefit parameters were taken from a literature review of comparable settings, as no local data were available. We conducted a scoping literature review to identify studies documenting the effectiveness of childhood cancer treatment in African LICs. The literature search was done in six electronic databases (PubMed, Embase, ScienceDirect, Scopus, Web of Science and African Journals OnLine) by combining terminologies covering the spectrum of childhood cancer types, country names (LICs in Africa) and treatment outcomes (survival or mortality). We identified 14 studies fulfilling our criteria and prioritised the evidence by systematic review or meta-analysis, followed by prospective studies based on cancer registries, multicountry/multicentre studies, and those with large sample sizes, broad cancer coverage, long survival periods and recent publication. We substantiated the survival rate findings from the scoping review using experts' judgements and local evidence on treatment abandonment and survival rates. We set a modest survival rate in our model to avoid biased cost-effectiveness conclusions.
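To make the tree's bookkeeping concrete, the sketch below rolls the model forward through the three 8-month treatment intervals, accruing expected years lived with disability (YLD) using the disability weights quoted above. The per-interval EFS probabilities are illustrative assumptions chosen so that their product is roughly the 25% two-year survival rate adopted in the next paragraph; the paper's actual probabilities come from its literature review.

    # Roll-forward of the decision tree's treatment phase: expected YLD
    # accrued per treated child over three 8-month intervals.
    # Disability weights are those stated in the text; the per-interval
    # EFS probabilities below are illustrative assumptions only.

    INTERVAL_YEARS = 8 / 12                  # each interval lasts 8 months
    disability_weights = [0.37, 0.29, 0.20]  # months 0-8, 9-16, 17-24
    p_efs = [0.50, 0.72, 0.70]               # assumed per-interval EFS;
                                             # product ~0.25 (2-year survival)

    alive = 1.0          # probability mass still event-free at interval start
    expected_yld = 0.0   # expected treatment-phase YLD per treated child

    for dw, p in zip(disability_weights, p_efs):
        # Simplification: everyone entering an interval is charged the full
        # interval's disability, including those who die during it.
        expected_yld += alive * dw * INTERVAL_YEARS
        alive *= p  # only event-free survivors enter the next interval

    print(f"P(event-free at 24 months) = {alive:.3f}")           # ~0.252
    print(f"Expected treatment-phase YLD = {expected_yld:.3f}")  # in years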
We assumed the 2-year childhood cancer survival rate at TASH to be 25%, with a 95% CI of 15% to 35%, despite commonly reported overall survival rates ranging from 35% to 45% in paediatric oncology centres in African LICs. Further details on the scoping review process, key findings and the transfer approach are provided in the supplementary material. Estimation of cost We conducted a costing study (8 July 2018–7 July 2019) to estimate the annual cost of running the paediatric oncology unit at TASH from a provider perspective, using a mixed (top-down and bottom-up) costing approach (for further details, see Mirutse MK, Palm MT, Tolla MT, Memirie ST, Kefyalew ES, Hailu D, Norheim OF. Cost of childhood cancer treatment in Ethiopia, submitted for publication). We identified, measured and valued the cost inputs used in running the unit. Direct cost inputs—costs directly attributable to a specific department or service output, such as costs of human resources, drugs/supplies and medical equipment—were computed by estimating the amounts consumed by the unit in a year (consumed quantity) multiplied by their unit costs. The costs of shared departments or services—including laboratory, radiation, imaging, pathology, surgical operating room, ICU, paediatric emergency services, inpatient food services, laundry, utilities (rent, electricity, telecommunication, water and other utility charges) and other overhead costs (operating expenses such as office supplies, printing, educational supplies, fuel, per diems and training costs)—were costed by allocating the share of those services used by the paediatric oncology unit; we used allocation bases appropriate to each case (for further details, see Mirutse MK, Palm MT, Tolla MT, Memirie ST, Kefyalew ES, Hailu D, Norheim OF. Cost of childhood cancer treatment in Ethiopia, submitted for publication). Finally, the total cost of the unit was computed by adding the direct costs, the indirect costs from the intermediate departments and the overhead costs. We converted the total cost to USD using the mean exchange rate for 2019. We computed the number of OPD visits per patient during the 8 months, the cost per OPD visit, the 8-month bed days per patient and the cost per bed day. The 8-month OPD visits per patient were computed by dividing the total annual OPD visits of the paediatric oncology unit (7842) by the annual number of patients (1345) and adjusting this annual estimate to 8 months (taking an 8-month share). The same technique was used for the 8-month bed days per patient, using the total annual bed days (12 180) and the annual number of patients. The costs per OPD visit and per bed day were calculated by integrating the annual OPD and IPD cost estimates with the annual OPD and IPD utilisation statistics. Then, for each 8-month treatment interval, we estimated the cost of OPD and IPD in each arm and aggregated the total cost. We set the OPD and IPD costs of non-surviving patients at 1.5 and 2 times the OPD and IPD costs of surviving patients, respectively, as they are likely to use more and/or more expensive services. These estimates were derived from the costing study at TASH, taking into account the cost distribution between regular OPDs and departments related to critical patients, and the anticipated service utilisation patterns of surviving and non-surviving patients.
However, it is also possible that non-surviving patients cost less than surviving patients, given the high rate of treatment abandonment in Ethiopia, which affects the non-surviving arm of our model. Such an assumption would lower the estimated cost of running the paediatric oncology unit at TASH (as the model assumes an overall survival rate of 25%) and hence shift the conclusion towards cost-effectiveness; the reverse holds under the assumption that surviving patients cost more than non-surviving patients. We chose the more conservative assumption (non-surviving patients costing more than surviving patients) so as not to bias the results towards overstating cost-effectiveness, and because the alternative assumption would not change the conclusion. We discounted costs at the global discounting rate (3%) for 1 year, as costs were captured only over the 2-year treatment period. Estimation of health benefits We used the number of DALYs averted as the effectiveness metric. The following formula was used to compute DALYs: DALYs = years of life lost (YLL) + years lived with disability (YLD). For the no paediatric oncology scenario, we estimated the YLD by assuming that patients would survive for only 6 months without treatment (multiplying the disability weight without treatment by the average survival duration), and we computed the YLL as the difference between the age at death and the life expectancy at that age. We compared both scenarios to a theoretical worst-case situation in which a child dies immediately after cancer diagnosis. To estimate DALYs averted, we used combinations of model variables: the annual number of new cases, average age at diagnosis, average duration of treatment, EFS rate at the end of each treatment interval, life expectancy at a specific age, the life expectancy gap related to late recurrence or late treatment adverse effects, and disability weight. Further details on the model variables, ranges of values and assumptions are given in the accompanying table. As there is no cancer survival registry and no previously conducted childhood cancer health outcome study in Ethiopia, treatment outcome-related data were taken from evidence in similar settings. We did not use treatment outcome data from high- and middle-income countries, as such outcomes would require further investments in quality improvements that were not captured in our costing estimate. We discounted DALYs averted by 3% over a lifetime horizon to bring future benefits to present value. Cost-effectiveness analysis Cost-effectiveness in this generic model was expressed as the incremental cost-effectiveness ratio (ICER), computed by dividing the incremental cost (IC) of introducing a specialised oncology unit by the incremental effect (IE), that is, the incremental DALYs averted: ICER = IC / IE. An intervention was considered cost-effective if the ICER was less than 50% of the Ethiopian GDP per capita, and not cost-effective otherwise. We used TreeAge software to build the decision model and run the cost-effectiveness analysis. Uncertainty We varied cost, EFS, the life expectancy gap after treatment and the disability weights, using the 95% CI reports from the literature review, to estimate the effect of the model variables' uncertainty on the estimated results. We conducted a one-way sensitivity analysis and a probabilistic sensitivity analysis (PSA) with 100 000 Monte Carlo simulations using various distributions.
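Two of the calculations above are easy to sanity-check in code. First, the per-patient utilisation figures follow directly from the numbers quoted in the cost-estimation paragraph (7842 OPD visits, 12 180 bed days, 1345 patients, and the 8-month share); this is a sketch of the stated arithmetic, not the authors' code.

    # Reproducing the per-patient utilisation figures stated in the text.
    annual_opd_visits = 7842    # paediatric oncology OPD visits in the year
    annual_bed_days = 12_180    # paediatric oncology inpatient bed days
    annual_patients = 1345      # children treated during the costing year
    EIGHT_MONTH_SHARE = 8 / 12  # annual figures adjusted to 8 months

    opd_visits_8m = annual_opd_visits / annual_patients * EIGHT_MONTH_SHARE
    bed_days_8m = annual_bed_days / annual_patients * EIGHT_MONTH_SHARE

    print(f"OPD visits per patient per 8-month interval: {opd_visits_8m:.2f}")  # ~3.89
    print(f"Bed days per patient per 8-month interval: {bed_days_8m:.2f}")      # ~6.04

Second, the DALY and ICER definitions can be exercised end to end. The 3% discount rate, the 6-month untreated survival, the disability weights, the USD 876 incremental cost and the USD 477 threshold are from the text; the age at diagnosis, life expectancy and 5-year late-effects gap are placeholder assumptions, and the comparison is per cured child, so the printed numbers illustrate the mechanics rather than reproduce the paper's 2.43 DALYs averted or USD 361 ICER.

    # DALYs = YLL + YLD, discounted at 3%; ICER = IC / IE.
    # A simplified two-branch comparison (cured child vs no care); the
    # paper's full model also weights non-survivors across intervals.
    DISCOUNT = 0.03

    def pv_years(years: float) -> float:
        """Present value of a stream of one life-year per year over `years`."""
        return (1 - (1 + DISCOUNT) ** -years) / DISCOUNT

    AGE_DX, LIFE_EXP, GAP = 7.0, 65.0, 5.0  # placeholder assumptions

    # No paediatric oncology care: 6 months lived with DW 0.37, then death.
    dalys_no_care = 0.37 * 0.5 + pv_years(LIFE_EXP - (AGE_DX + 0.5))

    # Cured child: treatment-phase YLD, lifelong DW 0.07 afterwards, and a
    # late life-expectancy gap of GAP years discounted to when it occurs.
    yld_treatment = (0.37 + 0.29 + 0.20) * (8 / 12)
    years_to_gap = (LIFE_EXP - GAP) - AGE_DX
    yll_late = (1 + DISCOUNT) ** -years_to_gap * pv_years(GAP)
    dalys_cured = yld_treatment + 0.07 * pv_years(years_to_gap) + yll_late

    ie = dalys_no_care - dalys_cured  # DALYs averted per cured child
    ic = 876.0                        # incremental cost per treated child (USD)
    icer = ic / ie
    print(f"DALYs averted per cured child ~ {ie:.1f}; ICER ~ {icer:.0f} USD/DALY")
    print(f"Cost-effective at WTP = 477 USD/DALY: {icer < 477}")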
Patient and public involvement This project did not include patients or the public in developing the research questions or designing and conducting the study. There is a plan to disseminate the results of the study to various stakeholders, including associations and civil societies working on childhood cancer control programmes in Ethiopia.
A total of 1345 children with cancer were treated at TASH from 8 July 2018 to 7 July 2019. The most common cancer types were acute lymphoblastic leukaemia (28%), Wilms tumour (15%) and Hodgkin's lymphoma (12%), followed by rhabdomyosarcoma, retinoblastoma, neuroblastoma and non-Hodgkin's lymphoma. The total cost of running the paediatric oncology unit per treated child (over 2 years) was USD 901, while it was USD 18 for the no paediatric oncology care scenario (6 months); the IC was therefore USD 876 per treated child. The DALYs averted per treated child were 2.49 for an operating paediatric oncology unit and 0.06 for no paediatric oncology care, giving an IE of 2.43 per treated child. The ICER was USD 361 per DALY averted. The tornado diagram presents the variables and ranges of values tested in the one-way sensitivity analysis. The length of the horizontal bar indicates an individual variable's potential level of parameter-impact uncertainty on the ICER estimate: the longer the bar, the greater the impact in the direction of the bar (to the left or right). Accordingly, the five parameters with the greatest potential influence on the ICER estimate were cost per bed day, EFS rate in the first 8 months, cost per OPD visit, EFS rate at 17–24 months and life expectancy gap. In the one-way sensitivity analysis, the uncertainty of individual parameters did not alter the cost-effectiveness conclusion, as the level of impact was lower than the WTP threshold for all individual parameters. We varied the cost of the no paediatric oncology scenario down to zero, but this had a minimal effect, slightly increasing the ICER from USD 362 per DALY averted in the base case to USD 370 per DALY averted. The PSA results show that at a WTP of <USD 361, the no paediatric oncology care scenario had a higher probability of being cost-effective. At a WTP of USD 361, the two scenarios had an equal probability of being cost-effective (where the red and blue lines cross), and the probability of cost-effectiveness was higher for paediatric oncology care at a WTP of >USD 361. The probability of paediatric oncology care being cost-effective was 100% at a WTP of >USD 600. In our model, running a paediatric oncology unit was cost-effective compared with the no paediatric oncology care scenario in 90% of the Monte Carlo simulations (100 000 simulations) at a WTP of USD 477 (based on 50% of GDP per capita for Ethiopia in 2019), as indicated by the broken brown line. The highest ICER estimate from the PSA was around USD 600 per DALY averted.
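The PSA just described can be sketched as follows. The 100 000 iterations and the USD 477 WTP are from the text; the distribution families and their parameters below are assumptions for illustration, centred on the base-case IC of USD 876 and IE of 2.43, since the paper's table of distributions is not reproduced here.

    import math
    import random

    random.seed(1)
    N = 100_000   # Monte Carlo iterations, as in the text
    WTP = 477.0   # USD per DALY averted (50% of 2019 GDP per capita)

    # Assumed parameter distributions centred on the base case.
    MU_COST, SIGMA_COST = math.log(876.0), 0.15  # lognormal incremental cost
    MEAN_IE, SD_IE = 2.43, 0.40                  # normal DALYs averted

    hits = 0
    for _ in range(N):
        ic = random.lognormvariate(MU_COST, SIGMA_COST)  # incremental cost draw
        ie = max(random.gauss(MEAN_IE, SD_IE), 0.1)      # DALYs averted, floored
        if ic / ie < WTP:                                # below the WTP threshold?
            hits += 1

    print(f"P(cost-effective at WTP {WTP:.0f}) = {hits / N:.3f}")

Repeating the loop over a grid of WTP values yields the cost-effectiveness acceptability curves described above.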
Running a paediatric oncology unit is more effective (2.43 DALYs averted per child treated) than a no paediatric oncology care scenario, but it also costs more (USD 876 per child treated). The ICER of running a paediatric oncology unit compared with the no paediatric oncology care scenario is USD 361 per DALY averted, and it is cost-effective using a USD 477 WTP threshold (50% of Ethiopia's 2019 GDP per capita), which is lower than the commonly used WHO-CHOICE threshold for very cost-effective interventions (1× GDP per capita, USD 953 for Ethiopia). The results of the Monte Carlo simulation (100 000 iterations) indicate a 90% chance that the ICER will be below the WTP threshold, that is, that the intervention is cost-effective. As indicated by the one-way sensitivity analysis, the chance of being cost-effective increases with an improvement in the survival rate, which is currently very low in Ethiopia. The WHO Global Initiative for Childhood Cancer and the Disease Control Priorities cancer module indicate that investing in childhood cancer control programmes will improve survival and is highly cost-effective, affordable and feasible in LMICs when certain cancer types are prioritised, such as acute lymphoblastic leukaemia, Hodgkin's lymphoma, Burkitt's lymphoma, retinoblastoma, Wilms tumour and low-grade glioma (brain tumour). Our ICER finding in the generic model is similar to estimates from Tanzania (USD 323 per DALY averted), higher than reports from Uganda (USD 97 per DALY averted) and lower than reports from Zimbabwe (USD 537 per DALY averted), Ghana (USD 1034 per DALY averted) and Nigeria (USD 2940 per DALY averted). The lower ICER estimate in Ethiopia may be related mainly to the low annual cost estimate, which is possibly explained by Ethiopia's low human resource payment scale, heavily subsidised utility costs (eg, water, electricity), service quality differences, unconsidered cost inputs (explained in the limitations discussion), differences in the volume of services provided (the high patient volume at TASH compared with centres in the other countries could reduce the cost per treated patient) and differences in treatment protocols, childhood cancer patterns and cost-effectiveness analysis approaches. With an annual cost of USD 577 per treated child (which could be as high as USD 1085 when adjusted for suboptimal care), the budget impact of investing in childhood oncology care is likely to be manageable, as the population in need of care is small (an annual childhood cancer incidence of around 3800). Beyond its high potential for cost-effectiveness and low budget impact (hence affordability), investing in paediatric oncology treatment could contribute to reducing financial hardship and improving equity. According to a 2014 WHO report, Making Fair Choices on the Path to Universal Health Coverage, one definition of the worst off is those with the largest individual disease burden, and children with cancer meet that definition, as they face a high risk of premature death. Furthermore, the Ethiopia Health Sector Transformation Plan and Health Equity Strategic Plan place due emphasis on addressing inequity, and children are among the prioritised groups.
In the current EEHSP, childhood cancer services are less prioritised; for example, three of the six high-priority childhood cancers identified in the WHO Global Initiative for Childhood Cancer and the Disease Control Priorities—Burkitt's lymphoma, retinoblastoma and Wilms tumour—are classified as low priority, and two (acute lymphoblastic leukaemia and Hodgkin's lymphoma) are classified as medium priority. This may be due to various factors, including a lack of local cost and cost-effectiveness data (leading to a decision based on expert judgement), limitations in transferring evidence from other countries to Ethiopia's context, and the general perception that cancer care is costly and unaffordable in Ethiopia. Suboptimal engagement and alignment with key stakeholders (within and outside the sector) in the childhood cancer programme may also contribute; for example, the goals and targets set in the NCACCP contradict the priority results of the EEHSP revision, although both were developed by the same organisation and the EEHSP was endorsed soon after the NCACCP. Our results support recent calls by WHO to emphasise childhood cancer, and they provide evidence for the NCACCP strategy to expand paediatric oncology units in Ethiopia. Our study has many limitations in terms of cost and effect estimation. The true cost of running a paediatric oncology unit may be larger than our estimate for the following reasons: (1) our estimate did not capture start-up capital investment, such as building costs and the cost of training specialists (eg, oncologists, specialised nurses and pathologists); (2) the availability of critical diagnostic services, imaging, drugs and supportive care may be suboptimal; (3) direct non-medical costs (eg, transport, lodging) and indirect costs were not captured in our costing exercise; (4) the cost of late treatment adverse effects was not captured; (5) cancers that require advanced and costly diagnosis and treatment, such as radiotherapy, may not be well represented in our study, as such treatment was not readily available at TASH; and (6) despite the rigorous data validation conducted, data quality concerns persist with regard to hospital records in general, and it is almost certain that not all data errors could be corrected; this may have introduced bias in the form of both overestimation and underestimation of costs, but underestimation is the more likely case. Since the cost-effectiveness analysis was conducted for a service delivery platform using average costs and average health outcomes, our model does not capture the clinical scenarios a patient might encounter during the treatment period, and the heterogeneity of childhood cancers could produce differences in unit costs and health outcomes and, consequently, in ICER values. As we lacked a survival registry and previous local health outcome estimates, our model relied on reports from similar settings, which may not be as comparable as assumed. However, we tried to mitigate this limitation by adopting cautious survival values. Furthermore, the potential impact of these limitations on the ICER estimate was explored in the sensitivity analysis, which considered a reasonable range of input parameters and found minimal to no effect on the final conclusions. Around 90% of the ICER iteration results were below the WTP threshold, supporting the robustness of our results. The highest ICER estimate in the PSA is USD 600 per DALY averted, which is fairly close to the WTP.
The provision of paediatric cancer services through a specialised oncology unit is most likely cost-effective in Ethiopia, at least for easily treatable cancer types in centres with minimal to moderate capability. Our findings support Ethiopia's NCACCP strategy to expand childhood oncology units in the country. We recommend reassessing the priority-level decision regarding childhood cancer treatment in the current EEHSP. Cancer-specific cost-effectiveness estimates, along with budget, financial risk protection and equity impact analyses (which can reveal heterogeneity), could better inform prioritisation among childhood cancers. Improving the childhood cancer information system, including establishing a cancer registry in Ethiopia, is crucial to informing the childhood cancer control programme with robust evidence.
Continuing medical education during a pandemic: an academic institution’s experience | a5a11794-1195-4b93-8520-3e4b3233c5ae | 10016839 | Preventive Medicine[mh] | |
Ophthalmological manifestations of COVID-19 and its transmissibility via ocular route | fc45d7b1-cbbb-4c4c-befe-0f35b95872c3 | 10016920 | Ophthalmology[mh] |
Rethinking infrastructure design: evaluating pedestrians and VRUs’ psychophysiological and behavioral responses to different roadway designs | 01d42e00-c3a7-4df3-ab1a-2598494345bd | 10017812 | Physiology[mh] | At its core, infrastructures are in fact an engineering product that have significant impact on people’s day to day lives. However, unlike many other products (e.g., smartphones, computers, etc.), we often overlook the importance of changing the design based on user feedback within the design phase. This is partly due to the fact that such a process can become costly and often not practical in the context of designing infrastructures at the community and city scales . For instance, it is not possible to build different replicas of the same road for testing driver distraction in each alternative design of the road. As a result, many times, design features are chosen by the designer and engineers with minimal feedback (if any) from all end-users (e.g., drivers, bicyclists, pedestrians, scooterstis that will use the road in the future). Over the recent years, due to advancements in technology, designers and decision makers have started to take into account the end user and human factors considerations, especially in the areas of human–building interaction , and human–vehicle interaction . This approach, which is often referred to as a human-centric approach in design, tends to put the user’s needs, comfort levels, and preferences at the center of the design process , . The integration of human-centric approaches in the infrastructure design has gained more attention recently due to their benefit in different infrastructure systems such as construction safety , accident prevention in traffic safety , energy saving for lighting luminaries , and outdoor comfort in urban spaces . For the design to become human-centric, it is crucial to measure the factors affecting the human–infrastructure interaction which can be divided into internal and external factors. Internal factors which are mostly related to the user are concerned with users’ preferences, needs, states, and behaviors, while the external factors are related to the outside context that is shaping the users’ environment such as the road environment in a driving condition, and the indoor built environment in a building case . This requires a platform to holistically monitor, model, and analyze the the relationship between the external and internal factors within a human–infrastructure interaction problem. Additionally, it requires methods that can quantitatively provide insights on the perception of the infrastructure from different end-user perspectives (e.g., demographic backgrounds, personality), and simulate different alternative designs within the simulated platform prior to the construction phase. As a new emerging technology, Immersive virtual environments (IVEs) simulators are a promising tool for behavioral studies and to identify how end-users perceive and react to different design alternatives, while holistically monitoring both internal and external factors. Additionally, within IVEs, users have the ability to realistically visualize and interact with the infrastructure before construction and to change the design accordingly. IVE has been applied in many indoor human-building interactions, such as buildings – as well as human–transportation interaction problems such as vehicles – , and cyclists and other road users. 
To monitor internal factors, human psycho-physiological metrics such as heart rate, skin temperature, skin conductance, and eye gaze patterns have been used in the literature for assessing human states such as stress level, emotion, and cognitive load. Further, these physiological measures have been shown to be more sensitive than task performance measures in identifying task difficulty when using a new technology or exploring a new environment. It is also easier to implement physiological sensors within IVEs compared to traditional methods such as observational or naturalistic studies. Coupling IVEs with psycho-physiological sensing of users allows researchers to measure the internal factors (end-user perception and response) associated with alternative infrastructure designs, as well as to simulate various external factors, which can help objectively prevent faulty design features prior to construction. This paper describes a system framework to evaluate infrastructure design for different types of road users (e.g., drivers, pedestrians, cyclists, and construction workers) by leveraging a multimodal IVE system. We conduct a case study within the proposed IVE system to test and assess users' feedback on specific design alternatives for a mid-block crossing infrastructure. For the case study, we focus on a transportation infrastructure evaluation for vulnerable road users due to the limited existing research in this area. Designing proper roadway and transportation systems is of high importance, as it is strongly associated with users' well-being, injuries, fatalities, and overall quality of life. However, limited attention has been paid to how roadway systems need to be designed to be inclusive of all users. The majority of research on roadway design has focused heavily on studies evaluating drivers' behavior, safety, and responses to different design conditions and contextual settings. As a result, few studies have focused on other road users, such as pedestrians' and bicyclists' responses to different roadway designs and conditions. Among all road users, vulnerable road users (VRUs) such as pedestrians and bicyclists require more attention due to increasing fatalities in recent years and the increasing number of these users. Within VRUs, pedestrians face greater safety challenges on the road, especially during mid-block crossing, as they have less protective equipment and lower speeds than vehicles, scooters, and bicycles. Ensuring the safety of pedestrians is a challenge for researchers, as pedestrians' decisions to cross and their crossing behavior may be affected by many internal factors, such as visual/cognitive distraction, and external factors, such as pedestrian infrastructure, roadway design, traffic volumes, vehicle speed, and visibility of the road environment. Accidents involving pedestrians are especially common at unsignalized and mid-block crosswalks, where vehicles are less likely to yield to pedestrians. To increase pedestrian safety at mid-block crossings, different safety treatments have been introduced, such as the rapid flashing beacon (RFB), the vibrotactile wristband, the countdown timer, and the pedestrian footbridge. However, each of these methods has its own shortcomings. For example, pedestrians' response rate to the vibrotactile wristband is low, and the countdown timer could make pedestrians overestimate their speed, resulting in a higher chance of red-light running.
The development of connected vehicles and autonomous vehicles has changed the communication environment between pedestrians and vehicles. Recent studies have focused on how to communicate the awareness and intent of autonomous vehicles to pedestrians. However, very few studies have taken a pedestrian-centered design perspective, that is, how to communicate pedestrians' crossing intentions to vehicles, especially in IVEs. Within the limited number of pedestrian studies, some researchers have reported the potential of integrating the aforementioned physiological signals into data collection and analysis. Kitabayashi et al. used heart rate as the biosignal and found that pedestrians' stress while walking is affected by road congestion. Additionally, pedestrians' physiological measures were shown to be significantly correlated with certain urban features, such as uneven sidewalks, as well as with subjective ratings of walkability. Physiological data can also be used to quantify, to a certain level, the 'perception-decision-execution' ability in avoiding danger. These studies have been conducted either within naturalistic settings or in simulated environments. Within naturalistic settings, researchers are not able to manipulate existing roadway designs or features; meanwhile, within simulated environments, participants' sense of immersion is limited, especially in studies conducted on 2D screens. Thus, combining physiological metrics with the IVE, where many design alternatives can be evaluated with a high sense of immersion, helps us better understand the effect of each pedestrian-centered design on road users. The provided case study will utilize the framework described to: (1) identify the benefits and limitations of using IVEs for collecting and modeling VRUs' behaviors and psycho-physiological responses, while highlighting how such information could improve design decision making; (2) evaluate the objective and subjective measures of the perceived safety rating across different alternative designs; and (3) evaluate pedestrians' crossing behavior and psychophysiological responses across different conditions. We will introduce the system framework to collect VRUs' behavior and physiological responses. The system has integrated data collection methods (pedaling/walking performance, eye tracking, heart rate, and video) in virtual reality, and its modularized components make it applicable to evaluating infrastructure design for all roadway users, whether they are pedestrians, bicyclists, scooterists, construction workers, or drivers. Through a case study of pedestrian crossing, 51 pedestrians' stated preferences, crossing behaviors, and physiological responses are collected and analysed under three different mid-block crossing safety treatments—a painted crosswalk (as-built), rapid flashing beacons (flashing beacon), and a connected vehicle phone application (smartphone app). The goal of this study is not only to identify which design is best but also to explore the potential benefit of future technology in infrastructure design by implementing a virtual reality (VR) method. This study hypothesises that: H1: Null—There are no significant differences in pedestrians' subjective ratings of perceived safety across the three scenarios. Alternative—At least one scenario's subjective safety rating will differ from the other two alternatives. H2: Null—Pedestrians' crossing behaviors (wait time, number of stops during the crossing) are similar in all three scenarios.
Alternative—The pedestrians' crossing behaviors will differ in at least one of the two alternative designs.

H3: Null—Pedestrians in all conditions will have a similar level of cognitive workload, as indicated by mean fixation length, fixation rate, gaze entropy, and mean heart rate. Alternative—Pedestrians will experience significant differences in psychophysiological responses in the flashing beacon and/or smartphone app conditions.

Stated preference survey response

In the post-experiment survey, all participants are asked to rate the realism of the IVE on a 5-point Likert scale in several aspects: whether the virtual environment feels appropriate to scale (scale), how immersed they felt in the virtual environment experience (immersive), and the extent to which their experiences in the virtual environment were consistent with real-world experiences of crossing a street (consistency). With respect to the scale ratings, an overwhelming majority of participants felt that the virtual environment was to scale (4.1% response '3', 20.4% response '4', 75.5% response '5', mean score = 4.71). Most of them felt immersed in the VR IVE (6.1% response '3', 34.7% response '4', 61.2% response '5', mean score = 4.53). Their experience in the VR IVE was consistent with their real-world experiences when crossing the street (2.0% response '1', 12.2% response '3', 36.7% response '4', 49.0% response '5', mean score = 4.31). Except for one participant who responded with '1' on the consistency rating, participants reported a high realism of the simulator.

On average, participants give a higher safety rating to the flashing beacon scenario (4.56 on a 5-point scale), followed by the smartphone app (3.6) and the as-built environment (3.0). The differences between the safety ratings are all significant at a 95% confidence level (α = 0.05). Additionally, when asked to rank the three environments by perceived safety from safest to least safe, participants' responses supported the previous metric, with the flashing beacon ranked as the safest and the as-built environment as the least safe condition (Fig. ). 69% of the participants rate the flashing beacon scenario as the safest of the three options, and none rate it as the least safe. For the smartphone app scenario, 12% rate it as the safest and 27% as the least safe. For the as-built scenario, only 8% rate it as the safest and 61% rate it as the least safe.

Crossing behavior

Crossing time

For the crossing time, as shown in Fig. a, participants had a significantly lower crossing time in the flashing beacon (β = -3.604, SE = 0.717, p = 0.0026) and smartphone app cases (β = -3.417, SE = 0.720, p = 0.00013) as compared to the as-built environment. No significant differences are found between the flashing beacon and smartphone scenarios.

Wait time before crossing

The wait times before crossing for the as-built, flashing beacon, and smartphone app scenarios are 20.34 s, 22.20 s, and 21.47 s, respectively. A marginally significant difference between the as-built and smartphone app scenarios is found (β = 2.284, SE = 1.177, p = 0.0548).
Wait time after crossing decision

The wait time after the crossing decision in the smartphone app scenario (mean = 4.23 s, sd = 2.92 s) is lower than in the flashing beacon scenario (mean = 5.20 s, sd = 3.35 s), but the difference is not significant.

Head movement

The results show a significant difference between the as-built and the two other scenarios, with both p values less than 0.001: for the flashing beacon scenario, β = -0.118, SE = 0.0284, p = 0.0000657, and for the smartphone app scenario, β = -0.144, SE = 0.0241, p = 4.50e-08. However, we did not find a difference between the flashing beacon and smartphone app scenarios. As shown in Fig. b, participants had a higher variation of head movement direction in the as-built environment as compared to the other two scenarios. The results also show that low prior VR experience contributes to higher head movement variation (β = 0.069, SE = 0.027, p = 0.0141).

Stop during crossing

We manually annotated the experiment videos to determine whether the pedestrians stopped in the middle of their crossing. Two participants' (participants 42 and 46) data are excluded due to a failure in video recording; therefore, 49 participants' stop behaviors are recorded. As shown in Table , pedestrians in the as-built scenario stop significantly more often in the middle of the crosswalk compared to the other two scenarios. Interestingly, the flashing beacon and smartphone app scenarios have exactly the same number of stops across the participants: in both scenarios, 10 participants stop in the middle of crossing to wait for the vehicle's response, although they are told that the vehicles will stop for them after they send their request by pushing the buttons.

Eye tracking

For the eye tracking data, five participants' data are excluded due to hardware failure during data collection. The eye tracking results in this section are based on 46 participants' data.

Fixation

Participants in the smartphone app scenario had a significantly higher fixation rate as compared to the as-built environment (β = 0.235, SE = 0.111, p = 0.0369). We did not find any significant differences between the other scenarios, as shown in Fig. a. Furthermore, male participants' fixation rates are significantly lower than female participants' (β = -0.296, SE = 0.145, p = 0.0475), and participants with low familiarity with VR devices have a lower fixation rate (β = -0.531, SE = 0.050, p = 0.000894). For mean fixation duration, there is a significant difference between the as-built and the smartphone app scenarios (β = -0.0179, SE = 0.00787, p = 0.0259). As shown in Fig. b, participants had a lower mean fixation duration in the smartphone app scenario, with an average of 0.184 s, as compared to the as-built environment, with an average of 0.201 s; the mean fixation duration of the flashing beacon scenario lies in between (0.193 s).

Gaze entropy

There are two types of gaze entropy measures: stationary gaze entropy (SGE) and gaze transition entropy (GTE).
The results for SGE show that participants had a significantly lower SGE in the smartphone app scenario as compared to the as-built environment (β = -0.343166, SE = 0.092471, p = 0.00036), as shown in Fig. a. Older pedestrians have an overall lower SGE than younger pedestrians (β = -0.011, SE = 0.005, p = 0.0439). The results for GTE show that GTE is significantly lower in both the flashing beacon (β = -0.0764, SE = 0.0403, p = 0.0444) and smartphone app scenarios (β = -0.0830, SE = 0.0411, p = 0.0435) as compared to the as-built environment. No significant differences are found between the flashing beacon and smartphone app scenarios, as shown in Fig. b.

Heart rate

The heart rate results indicate that there are no significant differences between the three scenarios at a 95% confidence level. A marginal difference in the mean heart rate during crossing is found between the smartphone app scenario and the as-built scenario (β = -1.909, SE = 1.109, p = 0.0886). The mean HR (beats per minute) of the as-built, flashing beacon, and smartphone app scenarios are 86.40, 86.29, and 84.63, respectively.
Overall, from the stated preference results, both the flashing beacon and smartphone app scenarios are perceived to be safer than the as-built scenario, and the participants show a higher preference for the flashing beacon scenario based on both the subjective and objective ratings. The majority of the participants (69%) choose the flashing beacon as the safest scenario, which could imply their trust in this technology (as well as their familiarity with it, as it already exists on some roads). Interestingly, the results from crossing behavior and physiological responses differ slightly from the stated preferences. For average crossing time, both the flashing beacon and smartphone app scenarios have a lower average crossing time compared to the as-built scenario, and there is no significant difference between the flashing beacon and smartphone app crossing times. The pedestrians have a lower wait time before crossing but spend more time during the crossing; this is aligned with an observational study conducted at mid-block crosswalks in which pedestrians who waited little or not at all at the curbside generally lost time during the crossing. It is important to also note that some participants indicated that they were not sure about the smartphone app's performance, so they chose to wait until the vehicle came to a complete stop for them. Our records showed that seven participants stated that they were not sure what would happen after they pressed the button on the smartphone; more feedback in the smartphone app scenario is desired, as indicated by comments such as "It will great to know if the nearby vehicles received my request when using the App, maybe a feedback on your phone, like message saying received by coming vehicles." (P6) and "I'm little concerned about using the phone app to inform the drivers, because I have no experience on that." (P21). However, when checking the waiting time after the crossing decision, the smartphone app scenario actually has a lower average waiting time (4.23 s) than the flashing beacon scenario (5.20 s), although the difference is not significant. This may be explained by the different reactions required by the two interactions (pressing a button on the phone vs. physically reaching out to the button), or by a difference in gap acceptance between the two scenarios. With respect to head movement, the larger head movement variation in the as-built scenario indicates that participants are more hesitant during crossing, while no significant differences are found between the two alternative designs. Furthermore, visual inspection of the videos qualitatively verifies that the proportion of stop behaviors during crossing is the same for the two alternative designs, and both are lower than in the as-built scenario. For the eye tracking data, the difference in fixation rate and mean fixation duration between the as-built and smartphone app scenarios reveals pedestrians' different visual scanning strategies. The longer fixation duration in the as-built scenario means that pedestrians spent longer searching the environment for potential hazards. As reported by previous studies, longer fixation durations and lower fixation rates are related to higher cognitive load. An earlier pedestrian eye tracking study also found that 'safe' pedestrians have a lower mean fixation duration than 'rogue' pedestrians after they get used to the environment.
Lower SGE and GTE are observed in the smartphone app scenario; to the best of our knowledge, there are no existing studies on pedestrian gaze entropy. In flight situations, low gaze entropy is usually accompanied by high situation awareness, and across different tasks, the gaze entropy of the group that succeeded in the task was low. Therefore, our results may indicate that the smartphone app scenario imposes a lower cognitive workload on the crossing pedestrian. Due to the relatively low HR data frequency, only a limited number of HR data points are utilized for the mean HR comparison. A marginally significant lower mean HR is found for the smartphone app scenario in this study, which may reveal a lower stress level in the smartphone app scenario as compared to the other two scenarios. As mentioned before, previous studies show that lower HR values are generally associated with calmer, less stressful states. However, we note that this finding needs to be validated in future studies with more professional HR data collection devices. In addition, the fidelity of the IVE system may be another reason contributing to the significance of the HR results; although our framework features a simulation of a real-world environment, a head-mounted display for visualization, real-time agency of movement, and environmental sound, more steps can be taken to further improve the fidelity, such as simulating other pedestrians, weather conditions, and haptic feedback.

The qualitative feedback collected from participants may also help to explain the differences between subjective ratings and objective responses. A couple of participants stated that they were not sure what would happen in the smartphone app scenario after pressing the button on the screen, although instructions were given before the experiment. This may be the reason why more participants prefer the flashing beacon scenario. However, the crossing behavior data show that the waiting time after the crossing decision for the smartphone app is not significantly different from the flashing beacon. For the other crossing behavior variables, we also do not find significant differences between the flashing beacon and smartphone app scenarios. In addition, for physiological responses, the smartphone app scenario seems to have a slightly better overall performance, with a shorter fixation duration, higher fixation rate, and lower HR, which is related to lower cognitive load. Given that there is still much room for improvement in the smartphone app scenario, a better physiological performance can be expected if such limitations are addressed. Our results further emphasize the importance of objective measurement for the evaluation of infrastructure designs, as users' subjective answers may not reflect their actual behaviors. The difference between subjective ratings and objective responses also highlights that public education is an important step in the implementation of new technology. In our study, although the smartphone app scenario shows a good overall performance, participants do not give it high safety ratings because they have no prior experience with the technology. IVE-based simulation offers a risk-free and low-cost platform for the public to get familiar with new technologies, which will help to increase their acceptance.
The eye tracking analysis in our study only considers overall fixation information (fixation rate and mean fixation duration) and the general distribution of fixations (gaze entropy); it would be more informative to extract contextual information about fixations. Defining Areas of Interest (AOIs), such as the button on the flashing beacon, the smartphone, the crosswalk path, or other vehicles, would help to gain a better understanding of what the pedestrians are looking at. The visual attention allocation of pedestrians would provide more information about their distraction state. In our future work, an in-depth analysis of the eye tracking data integrating AOI information will be performed to explore pedestrians' visual attention allocation on key AOIs, such as the flashing beacon button, the smartphone, and the vehicles. Another limitation of our study was the low frequency of the HR data: because HR was collected using off-the-shelf smartwatches, we did not have access to higher-frequency physiological sensing. Future work should consider adding other physiological sensing modalities, such as skin temperature and skin conductance, to enhance the physiological sensing module and inference. However, it should also be considered that more devices might degrade the feeling of realism of the study; more advanced devices that can collect multiple physiological signals simultaneously can be integrated into such studies to keep the realism while recording a higher number of data modalities. Other limitations, as also mentioned by the participants, were the need for (1) realistic vehicle actions, (2) feedback from the smartphone app, and (3) traffic simulation. Currently, we are improving the logic of the vehicles by refactoring the vehicle speed controller so the response will be more realistic. More interaction and feedback modalities are being developed, such as audio warnings, tactile feedback from the controller, the vehicle's flashing lights, and projections on the crosswalk; these will likewise be evaluated through users' stated preferences and objective responses. Moreover, based on our framework, it is possible to include multiple agents in the IVE, so other road users such as bicyclists and drivers can be studied together with pedestrians. We are developing a multi-agent simulator for different road users (more pedestrians, cyclists, work zone workers, and drivers) based on the current system framework, aiming to study pedestrian platoons simultaneously in the same VR environment. More results are expected in our follow-up papers.

This paper presents the evaluation of three pedestrian crossing infrastructure designs (the as-built painted crosswalk, the flashing beacon, and a connected vehicle phone application) in an IVE-based experiment. With the system framework, stated preferences, crossing behavior, and physiological responses are collected from 51 participants. Several advantages of this study over an observational study can be identified. First, it is possible to collect physiological responses, such as eye tracking and heart rate. Second, this type of study guarantees experimental control over other factors that may affect the response, such as weather conditions, traffic volumes, and other infrastructure conditions. Third, designs that are currently unavailable in the real world, such as the connected vehicle technology, can be evaluated in the IVE.
Lastly, the IVE-based study offers a risk-free and low-cost platform, especially for underrepresented road users such as women, disabled people, and the elderly. The results indicate that the two alternative designs have higher safety ratings than the as-built scenario, with the flashing beacon scenario rated as the safest. Pedestrians in the as-built scenario have a lower waiting time but lose more time during crossing by stopping in the middle of the crosswalk to wait for the vehicle; in addition, a larger head movement variation is observed in the as-built scenario. The crossing behavior in the flashing beacon and smartphone app scenarios is similar. For the eye tracking data, pedestrians had a shorter fixation duration, larger fixation rate, smaller stationary gaze entropy, and smaller gaze transition entropy in the smartphone app scenario than in the as-built scenario, which may result from a lower cognitive workload; the difference between the flashing beacon and as-built scenarios is not as pronounced as for the smartphone app. A marginally significant lower mean heart rate is found in the smartphone app scenario. Overall, both the flashing beacon and smartphone app show a better physiological performance than the as-built scenario, with the smartphone app scenario appearing slightly better. Qualitative feedback is collected from the participants to explore the reasons for the differences between stated preferences and objective measurements, and discussions and suggestions are provided. In conclusion, public education is required before the implementation of new technologies such as connected vehicles, which can help to increase users' acceptance and safety.

The study was reviewed and approved by the Institutional Review Board for the Social and Behavioral Sciences of the University of Virginia (IRB-2148). All experiments were performed in accordance with relevant named guidelines and regulations. Informed consent was obtained from all participants and/or their legal guardians.

Study design

This research designs a within-subject experiment to study pedestrians' stated preferences, crossing behavior, and physiological responses to three different mid-block crossing designs, experienced in an immersive virtual environment in random order: painted crosswalk (as-built), rapid flashing beacons (flashing beacon), and a connected vehicle smartphone application (smartphone app). The selected location for this study is the intersection of Water Street and 1st Street South in Charlottesville, Virginia, which has been identified as a hotspot for pedestrian-vehicle accidents in the Virginia Department of Transportation's Pedestrian Safety Action Plan. The north side of the intersection is a dead-end road (utilized only for deliveries). The south side of the road is a one-way street onto which vehicles cannot turn from Water Street. At the beginning of the experiment, each participant is asked to sign the consent form approved by the IRB office and to put on two smartwatches, one on each wrist, before completing the pre-experiment survey. After finishing the pre-experiment survey, instructions are given on how to use the VR headset, controllers, and pedestrian simulator, as well as on how the three scenarios are designed and how to interact with the infrastructure in VR.
After the IVE system setup, the participant is placed into a familiarization scenario without any vehicle traffic to become familiar with interacting with the IVE. In this environment, the participant is free to walk around in the given area until they feel comfortable. The participant then experiences the three scenarios in random order. In each scenario, pedestrians are placed at the starting location, facing the crosswalk heading southbound along 1st Street, crossing Water Street from the north side of the road. The independent variables are the crossing infrastructure designs and demographic information (i.e., age, gender). The dependent variables are the stated preferences for the three scenarios, crossing behavior (crossing time, waiting time before crossing, waiting time after the crossing decision, stopping or not during crossing, and head movement variation), and physiological responses (i.e., eye tracking and HR features) during crossing.

Virtual reality system setup

A one-to-one road environment is built in the Unity software with the SteamVR platform. HTC Vive Pro Eye headsets with controllers are utilized for all interactions in the IVE. More detailed information on the IVE setup is available in our previous studies. Vehicle traffic within the IVE is generated from empirical gap acceptance data observed at the real-world location; the gaps between vehicles are generated to fit the empirical distribution of accepted gap sizes. These gaps are randomized before each scenario, so each participant's exposure to any gap is randomized. All vehicles travel at a speed of 25 mph, following the speed limit. Vehicle type is also randomized among the four vehicle models used in the IVE.

As-built scenario

The as-built environment models the existing painted crosswalk along the Water Street corridor and serves as the base case against the other two alternative designs. In the IVE, the pedestrian's task is to cross the street when they feel safe to do so after the first vehicle passes the crosswalk. The vehicle will stop right before the crosswalk to wait for the pedestrian to cross if a conflict is expected to happen.

Flashing beacon scenario

In the flashing beacon scenario, the pedestrian is allowed to cross the road whenever they feel it appropriate. Pedestrians are able to interact with the flashing beacon by pressing the button located on the sign pole to initiate the flashers on the beacon. Figure shows how a pedestrian interacts with the RFB in VR prior to crossing, as well as an image of the RFB in VR when in use.

Smartphone application

In the smartphone app scenario, pedestrians hold a cellphone in their right hand (a controller in real life) once they are placed in the IVE. As shown in Fig. , two interfaces appear on the phone during testing. The first interface of the mobile phone application (initial state) asks the pedestrian if they wish to cross the crosswalk. Should the pedestrian answer "Yes" and press the button on the controller's central pad, a new interface pops up indicating "Your request is being broadcast". Once the system detects the pressed button, the program sends the request to the next approaching vehicle, which then brakes and stops in front of the crosswalk to yield to the pedestrian; all follow-up vehicles stop as well. The pedestrian is then free to cross the crosswalk, and vehicles yield before the crosswalk for the pedestrian.
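As a concrete illustration of the traffic generation described above, the following is a minimal R sketch of drawing inter-vehicle gaps so that they follow the empirical distribution of accepted gap sizes; the vector name `observed_gaps` and its example values are hypothetical, and resampling with replacement is one simple way (an assumption, not necessarily the exact procedure used) to match an empirical distribution.

```r
# Minimal sketch (assumption: gaps are resampled with replacement from the
# field observations so simulated gaps match the empirical distribution).
generate_gaps <- function(observed_gaps, n_vehicles) {
  # Draw one inter-vehicle gap (in seconds) per simulated vehicle
  sample(observed_gaps, size = n_vehicles, replace = TRUE)
}

observed_gaps <- c(3.2, 4.5, 5.1, 6.8, 8.0)  # hypothetical field measurements (s)
gaps <- generate_gaps(observed_gaps, n_vehicles = 20)  # re-drawn before each scenario
```

Re-running the draw before each scenario gives the per-scenario gap randomization described above, so no two participants face the same gap sequence.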
Data collection

The data collection follows the framework of our previous study. Different types of behavioral and physiological data are collected: stated preferences from the pre- and post-experiment surveys, crossing behavior data from Unity, eye tracking data from the Tobii Pro Eye headset, and heart rate data from smartwatches.

Survey response

In addition to demographic information, the pre-experiment survey asks participants to report their familiarity with VR devices. After the experiment, the participants are asked for their safety ratings and preferences over the three scenarios. For each scenario, they provide an answer on a 1–5 Likert scale to the question "How safe do you feel in the scenario", where 1 indicates "not safe at all" and 5 indicates "very safe". Furthermore, they are asked to rank the three environments from the safest to the least safe.

Crossing behavior

Five response variables are recorded to represent the pedestrians' crossing behavior: crossing time, waiting time before crossing, waiting time after the crossing decision, stopping or not during crossing, and head movement variation. The crossing time is defined as the time interval from the moment the pedestrian starts crossing to the moment the pedestrian reaches the other side of the crosswalk. Waiting time before crossing is defined as the time between the start of the experiment and the moment the pedestrian starts crossing. Waiting time after the crossing decision is defined as the waiting time after the pedestrian's decision to cross the street (after pressing the button on either the flashing beacon or the smartphone), which is only available in the flashing beacon and smartphone app scenarios. Stopping or not during crossing is a binary response indicating whether the pedestrian makes an obvious stop during crossing to wait for the vehicle's behavior. The head movement is defined as the variation in the 3-D head movement direction expressed as a unit vector.

Fixation

Fixation is defined as the moments when the eyes stop scanning the scene and hold the central foveal vision in place to gather detailed information about the target object. Similar to previous studies, we define a fixation with a 25 ms minimum duration and a 100 pixel maximum dispersion threshold to extract fixation information from the original eye tracking data and videos. Two fixation measurements are calculated: (1) the mean fixation duration, defined as the average length of all fixation events during the crossing; and (2) the fixation rate, defined as the number of fixations per second during the crossing.

Gaze entropy

There are two types of gaze entropy measures: stationary gaze entropy (SGE) and gaze transition entropy (GTE). SGE provides a measure of the overall predictability of fixation locations, which indicates the level of gaze dispersion during a given viewing period. The SGE is calculated using Eq. (1):

$$H(x) = -\sum_{i=1}^{n} p_i \log_2 (p_i) \qquad (1)$$

where H(x) is the SGE value for a sequence of data x with length n, i is the index of each individual state, and p_i is the proportion of each state within x. To calculate the SGE, the visual field is divided into spatial bins of discrete state spaces to generate probability distributions; specifically, the gaze coordinates are divided into spatial bins of 100 × 100 pixels, and the states i to n cover all gaze data during the crossing.
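To make Eq. (1) concrete, the following is a minimal R sketch of the SGE computation under the binning just described; the input vectors (`gaze_x`, `gaze_y`, in pixels) are hypothetical, and the samples are assumed to be restricted to the crossing period. GTE, introduced next, follows analogously from the transition counts between the same binned states.

```r
# Minimal sketch of Eq. (1): stationary gaze entropy over 100 x 100 px bins.
# gaze_x, gaze_y: hypothetical vectors of gaze coordinates during the crossing.
stationary_gaze_entropy <- function(gaze_x, gaze_y, bin_size = 100) {
  # Map each gaze sample to a discrete spatial state (its bin)
  state <- paste(floor(gaze_x / bin_size), floor(gaze_y / bin_size))
  p <- as.numeric(table(state)) / length(state)  # p_i: proportion per visited bin
  -sum(p * log2(p))                              # H(x) = -sum_i p_i log2(p_i)
}
```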
GTE is obtained by applying the conditional entropy equation to the first-order Markov transitions of fixations, as in Eq. (2):

$$H_c(x) = -\sum_{i=1}^{n} p_i \sum_{j=1}^{n} p(i,j) \log_2 p(i,j) \qquad (2)$$

where H_c(x) is the GTE value and p(i,j) is the probability of transitioning from state i to state j; the other variables have the same definitions as in the SGE equation. More details on calculating SGE and GTE can be found in the cited references.

Heart rate

An Android smartwatch running the "SWEAR" app records the HR data at a frequency of 1 Hz. The watch is connected to a smartphone via Bluetooth, and its time is synchronized with the experiment computer before each experiment. All data from the smartwatch are temporarily stored on the local device and then uploaded to Amazon S3 cloud storage for download and further analysis.

Participants

51 participants were recruited for the experiment. Most of the participants are local residents, university students, and faculty members who are familiar with the study corridor. All participants are 18 or older and without color blindness. Two participants' data are removed due to malfunctions in the data collection. For the remaining 49 participants (22 female and 27 male), the mean age is 33.92 with a standard deviation of 12.95 (one participant did not report their age).

Statistical modeling

A Linear Mixed Effects Model (LMM) was chosen to model the different response variables as functions of the independent variables across participants. The LMM framework was chosen specifically for its ability to address random and fixed effects simultaneously within the same modeling scheme. This type of modeling allows us to investigate the effect of each independent variable while accounting for the fact that each participant might have a different baseline for their psychophysiological responses. An LMM is defined as in Eq. (3):

$$y = X\beta + bz + \varepsilon \qquad (3)$$

In Eq. (3), y is the dependent variable, X is the matrix of predictors, β is the vector of fixed-effect regression coefficients, b is the matrix of random effects, z is the vector of coefficients associated with each random effect, and ε is the unexplained error term. The b and ε matrices are defined as:

$$b_{ij} \sim N(0, \psi_k^{2}), \quad \mathrm{Cov}(b_k, b_{k'}) \qquad (4)$$

$$\varepsilon_{ij} \sim N(0, \sigma^{2} \lambda_{ijj}), \quad \mathrm{Cov}(\varepsilon_{ij}, \varepsilon_{ij'}) \qquad (5)$$

In our modeling, we applied the LMM using the lme4 package in the R programming language. The independent variables are the demographic information (age, gender), prior experience with VR devices (categorized as high/low according to whether the participant had used any VR device before), and the three different pedestrian crossing designs. The dependent variables are all the behavioral responses, including crossing behaviors (crossing time, wait time before crossing, wait time after the crossing decision, head movement, and stopping during crossing), eye tracking (fixation and gaze entropy), and heart rate. All statistical analyses were performed at a 95% confidence level (α = 0.05).
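As a minimal sketch of this setup, the call below fits one such model for crossing time; the data frame `dat` and its column names are hypothetical, and lmerTest is added on top of lme4 as an assumption, since lme4 alone does not report the p-values given in the Results.

```r
library(lme4)
library(lmerTest)  # assumption: supplies p-values for fixed effects

# One LMM per response variable; shown here for crossing time.
# Fixed effects: scenario, age, gender, VR experience; the random intercept
# per participant captures individual baselines (the b term in Eq. (3)).
m <- lmer(crossing_time ~ scenario + age + gender + vr_experience +
            (1 | participant),
          data = dat)
summary(m)  # beta, SE, and p for each fixed effect, as reported in the Results
```

If `scenario` is coded as a factor with the as-built condition as the reference level, the scenario coefficients correspond directly to the pairwise contrasts against the as-built environment reported in the Results.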
This research designs a within-subject experiment to study pedestrians’ stated preferences, crossing behavior, and physiological responses to three different mid-walk crossing designs in an immersive virtual environment with a random order: painted crosswalk (as-built), rapid flashing beacons (flashing beacon), and a connected vehicle smartphone application (smartphone app). The selected location for this study is the intersection of Water St and 1st Street South in Charlottesville, Virginia. This place has been identified as a hotspot for pedestrian-vehicle accidents in the Virginia Department of Transportation’s Pedestrian Safety Action Plan . The intersection of Water Street and 1st Street South is chosen as the study site. The north side of the intersection is a dead-end road (utilized only for deliveries). The south side of the road is a one-way street, which vehicles cannot turn onto from Water Street. At the beginning of the experiment, each participant is asked to sign the consent form approved by the IRB office and put on two smartwatches on both wrists, before completing the pre-experiment survey. After finishing the pre-experiment survey, instructions are given on how to use the VR headset, controllers, and pedestrian simulator, as well as how the three scenarios are designed and how to interact with the infrastructures in the VR. After the IVE system setup, the participant is placed into a familiarization scenario without any vehicle traffic to become familiar with interacting with the IVE. In this environment, the participant is free to walk around in the given area until the participant feels comfortable. Then the participant will experience the three scenarios in random order. In each scenario, pedestrians will be placed into the beginning location, facing the crosswalk heading southbound along 1st Street, crossing Water Street from the north side of the road. The independent variables are the crossing infrastructure designs and demographic information (i.e., age, gender). The dependent variables are stated preferences of the three scenarios, crossing behavior (crossing time, waiting time before crossing, waiting time after crossing decision, stop or not during crossing, and head movement variation) and physiological responses (i.e., eye tracking and HR features) during crossing. A one-to-one road environment is built in the Unity software with SteamVR platform. HTC Vive Pro Eye headsets with the controllers are utilized for any interactions in the IVE. More detailed information of the IVE setup is available in our previous studies , , . Vehicle traffic within the IVEs is generated from empirical gap acceptance data observed at the real-world location. The gaps between vehicles are generated to fit the empirical distribution of accepted gap sizes . These gaps are randomized before each scenario so each participant’s exposure to any gap is randomized. All the vehicles has a speed of 25 mph, followed the speed limit. Vehicle type is also randomized from the four vehicle models used in the IVE. As-built scenario The as-built environment is built to model the existing painted crosswalk along the Water Street corridor to serve as the base case against the other two alternative designs. In the IVE, the pedestrian’s task is to crossing the street when they feel safe to do so after the first vehicle passes the crosswalk. The vehicle will stop right before the crosswalk to wait for the pedestrian to cross if a conflict is expected to happen. 
Flashing beacon scenario In the flashing beacon scenario, the pedestrian is allowed to cross the road whenever they feel appropriate. Pedestrians are able to interact with the flashing beacon by pressing the button located on the sign pole to initiate the flashers on the beacon. Figure shows how a pedestrian interacts with the RFB while in VR prior to crossing, as well as an image of the RFB in VR when used. Smartphone application In the smartphone app scenario, pedestrians will have a cellphone (a controller in their right hand in real life) in their right hand once they are placed in the IVE. As shown in Fig. , there are two interfaces that will show up on the phone during testing. The first interface of the mobile phone application (initial state) asks the pedestrian if they wish to cross the crosswalk. Should the pedestrian answer “Yes” and press the button on the controller’s central pad, a new interface will pop up indicating “Your request is being broadcast”. Once the system detects the pressed button, the program will send the request to the next approaching vehicle, and then it will brake and stop in front of the crosswalk to yield to the pedestrian, all the follow-up vehicles will stop as well. The pedestrian is then free to cross the crosswalk and vehicles will yield before the crosswalk for the pedestrian. The as-built environment is built to model the existing painted crosswalk along the Water Street corridor to serve as the base case against the other two alternative designs. In the IVE, the pedestrian’s task is to crossing the street when they feel safe to do so after the first vehicle passes the crosswalk. The vehicle will stop right before the crosswalk to wait for the pedestrian to cross if a conflict is expected to happen. In the flashing beacon scenario, the pedestrian is allowed to cross the road whenever they feel appropriate. Pedestrians are able to interact with the flashing beacon by pressing the button located on the sign pole to initiate the flashers on the beacon. Figure shows how a pedestrian interacts with the RFB while in VR prior to crossing, as well as an image of the RFB in VR when used. In the smartphone app scenario, pedestrians will have a cellphone (a controller in their right hand in real life) in their right hand once they are placed in the IVE. As shown in Fig. , there are two interfaces that will show up on the phone during testing. The first interface of the mobile phone application (initial state) asks the pedestrian if they wish to cross the crosswalk. Should the pedestrian answer “Yes” and press the button on the controller’s central pad, a new interface will pop up indicating “Your request is being broadcast”. Once the system detects the pressed button, the program will send the request to the next approaching vehicle, and then it will brake and stop in front of the crosswalk to yield to the pedestrian, all the follow-up vehicles will stop as well. The pedestrian is then free to cross the crosswalk and vehicles will yield before the crosswalk for the pedestrian. The data collection method of this study follows the framework of our previous study , different types of behavioral and physiological data are collected: stated preferences from pre and post experiment survey, crossing behavior data from Unity, eye tracking data from Tobii Pro Eye headset, heart rate data from smartwatches. Survey response In addition to demographic information, in the pre-experiment survey, the participants are also asked to provide their familiarity with VR devices. 
After the experiment, the participants are asked for their safety ratings and preferences over the three scenarios. For each scenario, they will be asked to provide their answer with a Likert Scale 1–5 to the question “How safe do you feel in the scenario”, where 1 indicates “not safe at all” and 5 indicates “very safe”. Furthermore, they are asked to rank the safest to the least safe scenario from the three environments. Crossing behavior Five response variables are recorded to represent the pedestrians’ crossing behavior: crossing time, waiting time before crossing, waiting time after crossing decision, stop or not during crossing, and head movement variation. The crossing time is defined as the time interval from the moment when the pedestrian start crossing to the moment when the pedestrian reaches the other side of the crosswalk. Waiting time before crossing is defined as the time between the start of the experiment and the moment when the pedestrian start crossing. Waiting time after crossing decision are defined as the waiting time after pedestrian’s decision to cross the street (after pressing the button either on flashing beacon or smartphone to start crossing), which is only accessible in the flashing beacon and smartphone app scenarios. Stop or not during crossing is a binary response about whether the pedestrian has a obvious stop to wait for the vehicle’s behavior during crossing. The head movement is defined as the variations in the 3-D head movement direction in the unit vector. Fixation Fixation is defined as the moments when eyes stop scanning about the scene and hold the central foveal vision in certain places to look for detailed information of the target object. Similar to previous studies , , We define a fixation with 25 ms minimum duration and 100 pixel maximum dispersion thresholds to extract the fixation information from the original eye tracking data and videos. Two measurements of fixation are calculated: (1) the mean fixation duration is defined as the average length of all fixation events during the crossing; and (2) the fixation rate is defined as the number of fixations per second during the crossing. Gaze entropy there are two types of gaze entropy measures: stationary gaze entropy (SGE) and gaze transition entropy (GTE). SGE provides a measure of overall predictability for fixation locations, which indicates the level of gaze dispersion during a given viewing period. The SGE is calculated using Eq. : 1 [12pt]{minimal} $$ H(x) = - _{i=1}^{n}(p_i)log_2(p_i) $$ H ( x ) = - ∑ i = 1 n ( p i ) l o g 2 ( p i ) H ( x ) is the value of SGE for a sequence of data x with length n , i is the index for each individual state, [12pt]{minimal} $$p_i$$ p i is the proportion of each state within x . To calculate the SGE, the visual field is divided into spatial bins of discrete state spaces to generate probability distributions. Specifically, the coordinates are divided into spatial bins of [12pt]{minimal} $$100 100$$ 100 × 100 pixel. i to n is defined as all the gaze data during crossing. GTE is retrieved by applying the conditional entropy equation to first order Markov transitions of fixations with Eq. : 2 [12pt]{minimal} $$ H_{c}(x) = - _{i=1}^{n}(p_i) _{i=1}^{n}p(i,j) log_2 p(i,j) $$ H c ( x ) = - ∑ i = 1 n ( p i ) ∑ i = 1 n p ( i , j ) l o g 2 p ( i , j ) Here [12pt]{minimal} $$H_{c}(x)$$ H c ( x ) is the value of GTE, and p ( i , j ) is the probability of transitioning from state i to state j. The other variables have the same definitions as in the SGE equation . 
More details of calculating SGE and GTE can be found in , . Heart rate An Android smartwatch with the “SWEAR” app records the HR data with a frequency of 1 Hz. The watch is connected to a smartphone via Bluetooth, and the time is synchronized with the experiment computer before each experiment. All data from the smartwatch is temporally stored on the local device and then uploaded to Amazon S3 cloud storage to download for further analysis. In addition to demographic information, in the pre-experiment survey, the participants are also asked to provide their familiarity with VR devices. After the experiment, the participants are asked for their safety ratings and preferences over the three scenarios. For each scenario, they will be asked to provide their answer with a Likert Scale 1–5 to the question “How safe do you feel in the scenario”, where 1 indicates “not safe at all” and 5 indicates “very safe”. Furthermore, they are asked to rank the safest to the least safe scenario from the three environments. Five response variables are recorded to represent the pedestrians’ crossing behavior: crossing time, waiting time before crossing, waiting time after crossing decision, stop or not during crossing, and head movement variation. The crossing time is defined as the time interval from the moment when the pedestrian start crossing to the moment when the pedestrian reaches the other side of the crosswalk. Waiting time before crossing is defined as the time between the start of the experiment and the moment when the pedestrian start crossing. Waiting time after crossing decision are defined as the waiting time after pedestrian’s decision to cross the street (after pressing the button either on flashing beacon or smartphone to start crossing), which is only accessible in the flashing beacon and smartphone app scenarios. Stop or not during crossing is a binary response about whether the pedestrian has a obvious stop to wait for the vehicle’s behavior during crossing. The head movement is defined as the variations in the 3-D head movement direction in the unit vector. Fixation is defined as the moments when eyes stop scanning about the scene and hold the central foveal vision in certain places to look for detailed information of the target object. Similar to previous studies , , We define a fixation with 25 ms minimum duration and 100 pixel maximum dispersion thresholds to extract the fixation information from the original eye tracking data and videos. Two measurements of fixation are calculated: (1) the mean fixation duration is defined as the average length of all fixation events during the crossing; and (2) the fixation rate is defined as the number of fixations per second during the crossing. there are two types of gaze entropy measures: stationary gaze entropy (SGE) and gaze transition entropy (GTE). SGE provides a measure of overall predictability for fixation locations, which indicates the level of gaze dispersion during a given viewing period. The SGE is calculated using Eq. : 1 [12pt]{minimal} $$ H(x) = - _{i=1}^{n}(p_i)log_2(p_i) $$ H ( x ) = - ∑ i = 1 n ( p i ) l o g 2 ( p i ) H ( x ) is the value of SGE for a sequence of data x with length n , i is the index for each individual state, [12pt]{minimal} $$p_i$$ p i is the proportion of each state within x . To calculate the SGE, the visual field is divided into spatial bins of discrete state spaces to generate probability distributions. 
Participants

51 participants were recruited for the experiment. Most of the participants are local residents, university students, and faculty members who are familiar with the study corridor. All participants are 18 or older and without color blindness. Two participants' data were removed due to a malfunction in the data collection. For the remaining 49 participants (22 female and 27 male), the mean age is 33.92 years with a standard deviation of 12.95 (one participant did not disclose their age).

Statistical analysis

A linear mixed effects model (LMM) was chosen to model each response variable as a function of the independent variables across participants. The LMM framework was chosen specifically for its ability to address random and fixed effects simultaneously within the same modeling scheme. This type of modeling allows us to investigate the effect of each independent variable while allowing each participant to have a different baseline for their psychophysiological responses. An LMM is defined as follows:

$$y = X\beta + Zb + \epsilon \qquad (3)$$

In Eq. (3), $y$ is the dependent variable, $X$ is the matrix of predictors, $\beta$ is the vector of fixed-effect regression coefficients, $Z$ is the model matrix of the random effects, $b$ is the vector of random-effect coefficients, and $\epsilon$ is the vector of unexplained error terms. The random effects and errors are assumed to be normally distributed:

$$b_k \sim N(0, \psi_k^2), \quad \mathrm{Cov}(b_k, b_{k'}) = \psi_{kk'} \qquad (4)$$

$$\epsilon_{ij} \sim N(0, \sigma^2 \lambda_{ijj}), \quad \mathrm{Cov}(\epsilon_{ij}, \epsilon_{ij'}) = \sigma^2 \lambda_{ijj'} \qquad (5)$$

We fitted the LMMs using the lme4 package in the R programming language. The independent variables are the demographic information (age, gender), prior experience with VR devices (categorized as high/low according to whether they had used any VR device before), and the three pedestrian crossing designs. The dependent variables are all the behavioral responses, including the crossing behaviors (crossing time, waiting time before crossing, waiting time after the crossing decision, head movement, and stop during crossing), the eye tracking measures (fixation and gaze entropy), and heart rate. All statistical analyses were performed at a 95% confidence level ($\alpha = 0.05$).
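Because the analysis names lme4 explicitly, the specification in Eq. (3) maps directly onto its formula interface. The sketch below fits one response variable with a per-participant random intercept; the simulated data frame and its column names are illustrative assumptions, not the study's actual data dictionary.

```r
library(lme4)

# Simulated stand-in data (illustrative; the real study data are not shown):
set.seed(1)
crossings <- data.frame(
  participant   = factor(rep(1:49, each = 3)),
  design        = factor(rep(c("unsignalized", "beacon", "app"), 49)),
  age           = rep(round(rnorm(49, 34, 13)), each = 3),
  gender        = factor(rep(sample(c("F", "M"), 49, TRUE), each = 3)),
  vr_experience = factor(rep(sample(c("low", "high"), 49, TRUE), each = 3))
)
crossings$crossing_time <- 12 + rnorm(nrow(crossings), sd = 2)

# One LMM per response variable (Eq. 3): fixed effects for crossing design,
# age, gender, and prior VR experience; a random intercept per participant
# captures each individual's baseline.
m <- lmer(
  crossing_time ~ design + age + gender + vr_experience + (1 | participant),
  data = crossings
)

summary(m)                 # fixed-effect estimates, random-effect variances
confint(m, level = 0.95)   # 95% intervals, matching alpha = 0.05
```

Binary responses such as stop or not during crossing would be fitted analogously with glmer() and a binomial family.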
The Role of Endoscopic Ultrasound in Hepatology

Endoscopic ultrasound (EUS) has been an indispensable and widely used diagnostic tool since its initial description in the 1980s. Its diverse therapeutic and diagnostic applications have allowed its use in various medical fields, including gastroenterology, cardiology, and urology. In recent years, EUS has also proven effective and safe in patients with liver conditions in whom conventional endoscopy or cross-sectional imaging is inefficient and in whom surgical interventions pose high risks. Growing evidence shows that the expanding therapeutic and diagnostic applications of EUS, especially in managing chronic hepatic diseases, outperform conventional imaging techniques, such as transabdominal ultrasound (US) and computed tomography (CT), in accuracy. More specifically, a major advantage of EUS is the proximity of the EUS transducer to the liver, which makes it easy to accurately identify blood vessels and other intervening structures. Owing to its superior performance and negligible adverse effects, EUS has become a highly preferred tool for identifying, characterizing, and staging primary and metastatic liver tumors. Moreover, newly emerging echoendoscopes are equipped with color, power, and pulsed Doppler, enabling them to identify blood vessels and measure portal pressure. EUS combined with real-time elastography (RTE) can effectively measure the stiffness of the liver parenchyma and of focal lesions. In addition, liver biopsy (LB) guided by EUS is safer, with lower risks, than the traditional percutaneous method. In this review, we discuss in detail previous and recent applications of EUS as a diagnostic and therapeutic tool in managing liver diseases and explain the potential future use of artificial intelligence analysis for EUS.
1. Focal liver lesions

EUS has the advantage of evaluating the appearance of focal liver lesions and sampling lesions for histological diagnosis. Focal liver lesions include benign lesions (hepatic abscess, hepatic cyst, hemangioma, and hepatocellular adenoma) and malignant lesions (hepatocellular carcinoma [HCC], cholangiocarcinoma, and liver metastasis) and are traditionally diagnosed using conventional methods, such as transabdominal imaging and percutaneous tissue sampling. In most cases, focal liver lesions are found incidentally on cross-sectional imaging with US, CT, or magnetic resonance imaging in patients at high risk for hepatic malignancies. Understanding the nature of these lesions is extremely important for the prognosis of hepatic malignancies. However, conventional screening using US, CT, and magnetic resonance imaging has limitations in accurately characterizing these lesions, especially smaller lesions (<10 mm). EUS outperforms these traditional modalities in diagnostic accuracy and can diagnose lesions smaller than 10 mm. In a prospective study of patients with gastrointestinal or pulmonary malignancies, EUS identified liver lesions in 14 patients, whereas CT detected the lesions in only three. More studies have validated the superiority of EUS over CT by accurately detecting lesions <5 mm in diameter. In fact, one of these studies showed that EUS detected an additional 28% of hepatic lesions among 14 patients with a history of suspected hepatic malignancies previously detected by CT. The diagnostic accuracies of EUS and CT for detecting hepatic lesions were 98% and 92%, respectively, with EUS detecting a significantly higher number of hepatic metastatic lesions than CT.

Elastography is a noninvasive method that uses US waves to assess liver stiffness. It correlates strongly with the degree of liver fibrosis demonstrated by LB. However, the technique is limited in people with ascites, narrow intercostal spaces, and large body habitus. EUS elastography can overcome most of these limitations and has been described as a significant tool for identifying, differentiating, and characterizing malignant and benign hepatic focal lesions, with a diagnostic accuracy, sensitivity, and specificity of 88.6%, 92.5%, and 88.8%, respectively. Malignant liver masses are stiffer than benign masses, and the ability of EUS elastography to quantify this stiffness has made it a valuable tool for characterizing liver lesions.

In addition, the hepatic microvascular architecture can be better visualized using contrast agents. Contrast enhancement (CE) is widely used to improve the diagnostic performance of US and EUS. Contrast-enhanced EUS (CE-EUS) is classified into CE-EUS with the Doppler method and CE-EUS with harmonic imaging, which allow improved detection and characterization of focal liver lesions. Like CE-US, CE-EUS can characterize different types of liver lesions through their vascular enhancement patterns, with typical patterns including: (1) arterial hyperenhancement with subsequent washout in the late contrast phase in HCC; (2) rim-like enhancement with subsequent rapid washout in metastatic hepatic cancer; (3) early, progressive, spoke-wheel arteries with an unenhanced central scar in focal nodular hyperplasia; and (4) peripheral nodular hyperenhancement with centripetal progressive fill-in in hemangioma.
Moreover, CE-US has been demonstrated to be a valuable tool for evaluating the effectiveness of HCC treatment, with a sensitivity and accuracy of 95.6% and 96.2%, respectively, and for detecting residual tumor, with a sensitivity and accuracy of 76.2% and 77.7%, respectively. Given these observations, CE-EUS could be of potential value, with superior accuracy over CE-US in detecting deep liver lesions; however, further studies are required to validate this. In a retrospective analysis, Fujii-Lau et al. developed an EUS scoring system to distinguish between benign and malignant hepatic masses by analyzing data from patients who underwent EUS-guided fine needle aspiration (FNA) of solid hepatic masses. The derived and validated EUS criteria showed a high positive predictive value of 88% and could detect radiographically occult masses <5 mm. Moreover, the scoring system could enable endosonographers to make informed decisions and avoid unnecessary FNA interventions. Multicenter studies involving multiple endosonographers are required to further validate the scoring system. The algorithm is described in .

2. Liver cirrhosis

LB is the gold-standard diagnostic tool for liver cirrhosis; however, its application is limited by sampling errors, the complications associated with an invasive procedure, inter-observer variability, and cost. Several noninvasive modalities based on noninvasive fibrosis markers, such as transient elastography (TE) and RTE to measure liver stiffness, have been developed to overcome these limitations. Nonetheless, their performance in detecting fibrosis is suboptimal, as the transabdominal approach is limited in obese patients and individuals with ascites. In such scenarios, EUS-guided liver stiffness measurements are advantageous and can overcome these barriers given the transducer's proximity to the liver, thereby accurately assessing liver fibrosis. Moreover, EUS RTE is more sensitive than transabdominal RTE in evaluating liver fibrosis because the signal passes through the thin gastric wall rather than the abdominal wall. The liver fibrosis index, calculated from EUS RTE images, correlates significantly with transabdominal imaging and can accurately stratify normal, fatty, and cirrhotic livers. Given these advantages, EUS RTE can be an effective and time-efficient modality for assessing fibrosis in patients with liver diseases, especially when patients undergo upper endoscopy for variceal screening or other indications; additional information about the liver parenchyma can be obtained in the same session. Another study evaluated the diagnostic value of EUS, Fibroscan, and acoustic radiation force impulse (used for detecting esophagogastric varices, measuring liver stiffness, and quantifying liver virtual touch tissue, respectively) in patients with chronic viral liver disease and reported a significantly higher detection rate for early-stage liver cirrhosis (Child-Pugh grade A) than for chronic hepatitis. Moreover, the combination of these three modalities had superior diagnostic value for early-stage liver cirrhosis. The regression model combining EUS, Fibroscan, and acoustic radiation force impulse achieved an area under the receiver operating characteristic curve (AUROC) of 0.947, with a sensitivity and specificity of 0.878 and 0.867, respectively. These results suggest a promising role for EUS, alongside other modalities, in the early and accurate diagnosis of early liver cirrhosis and its complications, and this combination may improve the diagnosis rate and decrease the misdiagnosis rate.
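The diagnostic performance figures quoted throughout this review (sensitivity, specificity, accuracy, and predictive values) all derive from simple confusion-matrix arithmetic; the short sketch below makes the definitions explicit using made-up counts.

```r
# Confusion-matrix arithmetic behind the quoted metrics (made-up counts).
tp <- 90; fn <- 10   # diseased cases: correctly detected / missed
tn <- 80; fp <- 20   # non-diseased cases: correctly cleared / false alarms

sensitivity <- tp / (tp + fn)                   # true-positive rate: 0.90
specificity <- tn / (tn + fp)                   # true-negative rate: 0.80
accuracy    <- (tp + tn) / (tp + tn + fp + fn)  # overall agreement: 0.85
ppv         <- tp / (tp + fp)                   # positive predictive value
```

The AUROC summarizes the same trade-off across all possible positivity thresholds rather than at a single operating point.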
3. Portal hypertension

Cirrhosis can lead to portal hypertension (PHT), defined as a major hemodynamic shift due to increased pressure in the portal vein (PV), which correlates with complications such as ascites, variceal bleeding, and encephalopathy. Thus, prevention, prompt diagnosis, and therapy are critical to improving the prognosis of patients with PHT. The severity of PHT is reflected by the hepatic venous pressure gradient, also known as the portal pressure gradient (PPG). PHT can be assessed either by TE or by directly measuring the portal pressure. In patients with chronic liver disease who exhibit no symptoms, PHT can be diagnosed during routine checkups using TE. A study involving patients with recurrent hepatitis C infection after liver transplantation reported that liver stiffness measured by TE significantly reflected PHT, with high sensitivity and specificity. In the direct approach, an interventional radiologist measures the portal pressure by assessing the PPG via access through the right jugular vein. EUS-enabled vascular intervention through PV catheterization was developed to address the limitations of this approach and to assess PV pressure accurately. Initially tested in swine models, the method appeared feasible and safe, directly measured the PPG with high accuracy, and correlated strongly with the standard transjugular approach. It was then proven effective in humans, and a later prospective study of 28 patients showed high efficacy in assessing the PPG without adverse effects. However, further studies are required to validate its application in clinical settings. Interestingly, a recent study described the technical aspects, safety, and feasibility of EUS-guided blood sampling from the portal circulation and its application in metabolomic profiling.

4. Varices

During the last decade, the application of EUS to diagnose and manage gastric and esophageal varices has expanded considerably. It can predict the risk of variceal bleeding and recurrent bleeding. Conventionally, esophagogastroduodenoscopy (EGD) was used to detect esophageal varices. Early reports showed EUS to be less effective than EGD, with several studies reporting that EUS was less accurate and that its sensitivity depended largely on the size and grade of the varices. However, recent studies have reported EUS to be comparable to EGD in detecting esophageal varices. In a study involving 66 patients diagnosed with cirrhosis, EUS detected esophageal varices in 48 patients, compared with 49 patients identified by EGD. Furthermore, EUS has been reported to have a high sensitivity of 96.4% compared with standard EGD in cirrhotic patients. Moreover, improving the EUS modality with smaller echo-endoscope tips and increased video resolution significantly increased the performance of EUS in diagnosing small esophageal varices. EUS-Doppler was shown to detect gastric and esophageal varices with high sensitivity and was also shown to be valuable in evaluating ectopic duodenal varices. In addition, EUS is beneficial in predicting the risk of variceal recurrence after sclerotherapy or band ligation. A study of 38 patients who underwent sclerotherapy for esophageal varices and were followed up for 2 years with EUS reported that EUS could predict the risk of variceal recurrence as early as 3 to 4 months in advance.
Another study evaluating EUS characteristics before and after band ligation for a first esophageal variceal bleed showed that the presence of para-esophageal veins larger than 4 mm after band ligation was an accurate predictor of variceal recurrence within a year, with a sensitivity and specificity of 70.6% and 84.6%, respectively. EUS may also be useful in predicting the risk of recurrent variceal bleeding, as shown in a retrospective study involving 306 patients who underwent endoscopic sclerotherapy for esophageal varices. The study reported that an increased number of perforating veins before therapy and the increased appearance of intramural cardiac veins, perforating veins, and the inflowing type of perforating veins 3 to 5 months after therapy were associated with recurrent bleeding within a year of therapy.
1. Liver diseases

LB remains the standard diagnostic tool for staging fibrosis in patients with chronic liver diseases, as well as for identifying the etiology of liver disease. Conventionally, LB was performed via percutaneous, surgical, or transjugular approaches, but its application was limited by its invasiveness and associated complications. Since it was first described in 2007, EUS-LB has evolved as an alternative tissue sampling technique with proven safety and efficacy and limited adverse events. Several studies have evaluated the efficacy, yield, safety, and accuracy of EUS-LB for chronic liver disease. Core biopsy needles have been used almost exclusively for EUS-LB rather than conventional fine needles. Using this approach, the liver lobes are identified by the echo-endoscope: the left lobe from the stomach and the right lobe from the duodenal bulb. Color Doppler imaging is used to navigate the needle carefully, and care is taken to avoid vascular structures along the needle path. Both EUS-FNA and fine-needle biopsy (FNB) can be used for EUS-LB. EUS-guided LB with a 19-gauge FNA needle has been shown to be safe, with a comparable or higher yield than the percutaneous or transjugular approach. In a study by Stavropoulos et al., EUS-LB using a 19-gauge fine needle was confirmed to be highly successful, with adequate tissue acquisition. Furthermore, the 19-gauge FNA needle was confirmed to be superior to the 22-gauge FNB needle in tissue adequacy, in terms of sample length and reduced tissue fragmentation. A prospective study comparing the 19-gauge FNA needle with the 19-gauge FNB needle reported that the FNB needle performed excellently in terms of biopsy length and complete portal triads (CPT). However, a recent meta-analysis reported FNA needles to be superior to FNB needles in tissue acquisition, with a 95.8% diagnostic yield and a 0.9% rate of adverse events. In terms of tissue adequacy, the 19-gauge FNA needle outperformed the 22-gauge FNB needle as well as Tru-cut and non-Tru-cut 19-gauge FNB needles. Also, FNA was sufficient for harvesting samples for cytological assessment, whereas FNB was the preferred method for harvesting samples for assessing tissue architecture, molecular analysis, and immunohistochemistry. Moreover, EUS-LB enables sampling of both the left and right liver lobes in a single session, which generally improves fibrosis assessment and management and reduces morbidity and mortality risks. Several tissue acquisition techniques have been proposed to improve the diagnostic yield of EUS-LB. The common ones include dry heparin, the dry suction technique (DRST), and the wet suction technique (WEST). A prospective study found WEST to be superior in tissue acquisition, with greater cellularity and improved yield compared with DRST. The yield was further improved by priming the needles with heparin to prevent coagulation. Another prospective study comparing the DRST, dry heparin, and wet heparin techniques for harvesting LB specimens showed wet heparin to be superior to the dry methods. Specimens harvested with the wet heparin method had less tissue fragmentation, produced more CPT, and had a greater aggregate specimen length and a longer longest piece. These techniques are useful with FNA needles, and studies of such applications with FNB needles are limited. One such method is the modified one-pass one-actuation WEST.
A retrospective study described this technique using a 19-gauge EUS-FNB (SharkCore) needle in patients with abnormal liver chemistries, reporting a median total specimen length of 6 cm and a mean CPT count of 7.5, suggesting it to be an effective method.

2. Nonalcoholic fatty liver disease

As the prevalence of obesity and metabolic syndrome increases globally, nonalcoholic fatty liver disease (NAFLD) has become the most common cause of liver disease and the leading indication for liver transplantation in many countries. Accurate and timely diagnosis is critical for NAFLD management, and despite emerging noninvasive modalities, LB remains the gold standard; EUS-LB is emerging as an alternative modality for diagnosing fibrosis and the etiology of liver disease. In a large cohort study involving 47 patients with fatty liver who underwent EUS-FNB with a 19-gauge SharkCore needle, the diagnostic yield and technical success were reported to be significantly high, with only two patients developing minor adverse effects. Compared with magnetic resonance elastography, the 19-gauge core biopsy needle used with the modified one-pass wet suction method was more accurate in diagnosing and staging NAFLD. Another study reported similar efficacy and safety using a 22-gauge SharkCore needle in 21 individuals with NAFLD, with minimal adverse events observed in six patients. There are several advantages to EUS-LB, including established safety and efficacy in delivering superior LB cores, easy access to bilobar biopsy, and cost and time efficiency when combined with other endoscopic procedures. Despite these advantages, EUS-LB has some limitations. It is a relatively new technique, and clinicians accustomed to traditional methods may find it challenging, as it requires a higher level of technical skill. Given these advantages and disadvantages, a multidisciplinary team approach may be beneficial in deciding between traditional and EUS methods for performing LB, reducing the challenges and improving cost and time efficiency.
1. Hepatic cysts

Simple hepatic cysts are mostly benign and asymptomatic and are found incidentally in 2.5% to 7% of the population during routine screening. Of these benign cysts, 10% to 16% develop symptoms, such as abdominal pain and distension, among other complications that require further treatment. Conventionally, surgical therapy was considered the treatment of choice for symptomatic cysts, but the approach was associated with increased morbidity. While percutaneous aspiration was considered in certain circumstances, the method was associated with a recurrence rate of almost 100% within 2 years. Nevertheless, percutaneous aspiration followed by ethanol lavage was effective and safe in treating hepatic cysts, with no recurrence observed over a 6- to 18-month follow-up period. In a retrospective study, Lee et al. studied the effectiveness of ethanol lavage therapy delivered via percutaneous aspiration or an EUS-guided approach in a total of 17 patients. Of the 19 hepatic cysts, with a median cyst volume of 368.9 mL, 10 were drained with the percutaneous approach and eight with the EUS-guided approach. The EUS-guided approach achieved a 100% reduction of cysts at a median 15-month follow-up, while the percutaneous approach achieved a 97.5% reduction at a median 11-month follow-up. Moreover, EUS-guided drainage was exceptionally safe and feasible for cysts in the left hepatic lobe, and the percutaneous approach for cysts in the right hepatic lobe.

2. Hepatic abscesses

Similar to hepatic cysts, hepatic abscesses are traditionally treated using surgical or percutaneous methods. Unfortunately, the percutaneous approach also has limitations due to possible organ injury and bleeding. EUS-guided hepatic abscess drainage is considered a safe and efficient alternative to traditional modalities that can overcome these barriers. It provides excellent visualization of the abscess, and the proximity aids direct needle access into the abscess cavity. In a case series of three hepatic abscesses localized to the caudate lobe and the gastro-hepatic space that were technically challenging to drain percutaneously, EUS-guided drainage effectively drained the abscesses, with complete resolution on follow-up. Later, several case studies reported successful drainage of hepatic abscesses using EUS-guided methods via trans-gastric and trans-duodenal approaches. In a retrospective analysis involving 27 patients who underwent either EUS-guided or percutaneous drainage, the EUS-guided group demonstrated a higher clinical success rate than the percutaneous group, at 100% and 82%, respectively. Further studies are required to validate its efficacy in standard practice. The procedure may be limited to abscesses localized in the left lobe; for the right lobe, percutaneous drainage remains the traditional approach.

3. Variceal bleeding and PHT

In the last two decades, there has been growing interest in using EUS not only for the early diagnosis of PHT but also for the treatment of varices. A pilot study of a modified endoscopic variceal ligation technique using EUS-Doppler to reduce variceal recurrence showed it to be superior to, and more successful in preventing variceal recurrence than, endoscopic variceal ligation performed using traditional upper endoscopy. This was mainly because the EUS-guided approach aids exact localization and helps to eradicate the varices completely.
Also, five patients in Spain were initially treated for gastric varices (GV) by EUS-guided injection of cyanoacrylate (CYA) into perforating feeding veins, which proved safe and efficient in achieving variceal obturation. Later, a multicenter retrospective study showed that EUS-guided CYA injection was marginally better than EUS-guided coil application (ECA) in achieving GV obliteration (94.7% of patients treated with CYA injection versus 90.9% of patients treated with ECA). However, ECA had significantly fewer adverse events and required fewer endoscopies than CYA injection. Almost half of the patients treated with CYA injection (47%) developed asymptomatic pulmonary embolism, whereas none did in the ECA group. None of the patients showed recurrent GV during the 6-month follow-up period. In another recent randomized controlled study, conventional endoscopic CYA injection and EUS-guided combined application of coil and CYA showed similar efficacy in variceal obliteration, with no significant differences between the two methods regarding the occurrence of embolism. However, patients treated with conventional CYA injection alone showed a greater tendency to develop embolism. A single-center randomized controlled study comparing EUS-guided coil plus CYA injection versus EUS-guided coil injection alone for GV therapy reported superior clinical outcomes, with low rates of rebleeding and reintervention, in patients treated with the coil and CYA combination compared with coil alone. Significant immediate disappearance of varices was observed in patients treated with the combination of coil and CYA versus coil alone (86.7% vs 13.3%, p<0.001). Another study comparing EUS-guided fine-needle injection (EUS-FNI) of CYA versus direct endoscopic injection of CYA showed GV rebleeding rates of 8.8% and 23.7%, respectively, and similar adverse event rates of 20.3% and 17.5% in patients treated with EUS-FNI of CYA and direct endoscopic injection of CYA, respectively. Moreover, EUS-guided coil injection with absorbable gelatin sponge was reported to be superior to conventional CYA injection, with fewer complications (10% vs 20%) and no rebleeding (0% vs 38%) at 9 months of follow-up. Interestingly, a recent meta-analysis comparing the efficacy and safety of EUS-guided therapy (coil and/or CYA) versus conventional endoscopic CYA injection for GV reported that EUS-guided therapy had better clinical efficacy in terms of recurrence and long-term rebleeding. GV obliteration was significantly better with EUS-guided therapy (84.4%; 95% confidence interval, 74.8% to 90.9%; I²=77%) than with conventional CYA injection (62.6%; 95% confidence interval, 42.6% to 79.1%; I²=97%; p=0.02). A study evaluating the long-term outcomes of EUS-guided injection of coil and CYA to treat gastric fundal varices reported superior efficacy for hemostasis in active bleeding and for primary and secondary bleeding prophylaxis. Finally, a recent study of 80 patients confirmed the safety and efficacy of EUS-guided coil and glue injection for the primary prophylaxis of gastric variceal hemorrhage. In 2021, Thiruvengadam and Sedarat published a review summarizing some of these results. In addition, esophageal varices can be eliminated entirely using EUS-guided sclerotherapy, with less frequent recurrence. EUS-guided CYA injection with or without coiling is also beneficial in eradicating duodenal varices, with far fewer adverse effects compared with endoscopy-guided CYA injection.
Although the transjugular intrahepatic portosystemic shunt has been the standard therapy for PHT complications and refractory variceal bleeding, the EUS-guided intrahepatic portosystemic shunt was introduced as a safe alternative that overcomes the challenges of the transjugular approach, as it does not require catheterization through the heart or inferior vena cava. Furthermore, it reduces radiation exposure risks to both patient and physician during stent placement.
Owing to its high performance, the use of artificial intelligence (AI) for medical image diagnosis is increasing. Deep learning, a type of AI algorithm, is an advanced machine learning technique based on neural networks that is being used for medical diagnosis. In the gastroenterological field, AI is used on EUS images to detect and distinguish anatomical features. A recent study by Marya et al. developed a novel EUS-based convolutional neural network model to identify and classify focal liver lesions. The study demonstrated the model's ability to autonomously identify focal liver lesions and accurately distinguish between benign and malignant lesions. Unique EUS images were used to train, validate, and test the model. For classifying malignant lesions, the model reported a sensitivity and specificity of 90% and 71%, respectively (AUROC, 0.861), when evaluating still images, and a sensitivity and specificity of 100% and 80%, respectively (AUROC, 0.904), when evaluating full-length videos. Using AI to evaluate EUS images for the diagnosis of liver diseases is relatively new and warrants more studies to validate its use in clinical settings.
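The published model's architecture and training details are not reproduced here; purely for orientation, the sketch below shows the generic shape of a small binary convolutional classifier of the kind described, using the keras interface for R. The input dimensions and layer sizes are arbitrary assumptions, not the study's configuration.

```r
library(keras)

# Generic sketch of a binary CNN image classifier (benign vs malignant);
# the 224 x 224 x 3 input and layer sizes are arbitrary, illustrative choices.
model <- keras_model_sequential() %>%
  layer_conv_2d(filters = 16, kernel_size = c(3, 3), activation = "relu",
                input_shape = c(224, 224, 3)) %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = "relu") %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_flatten() %>%
  layer_dense(units = 64, activation = "relu") %>%
  layer_dense(units = 1, activation = "sigmoid")   # estimated P(malignant)

model %>% compile(
  optimizer = "adam",
  loss      = "binary_crossentropy",
  metrics   = "accuracy"
)
```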
Potential limitations of EUS include higher costs, the risks associated with invasive procedures, and the lack of EUS facilities in some hospitals. The major limitation of EUS is the challenge of examining the right liver lobe. The accuracy of this modality is limited for lesions in the right liver lobe or under the dome of the diaphragm, and the accuracy of diagnosis in other regions of the liver is unclear. Despite the effectiveness of EUS-guided LB, it is difficult to perform an accurate targeted biopsy in the right liver lobe. More evidence is required to establish its efficacy even for lesions in the left liver lobe. In addition, the endosonographer's expertise and skill in carefully scrutinizing the liver are of critical diagnostic importance. Finally, most current studies analyzing the efficacy of EUS are single-center, non-randomized, retrospective analyses; therefore, adequately designed, large, multicenter randomized controlled studies are required to establish its use widely in clinical settings.
In recent years, the role of EUS has evolved significantly, with emerging applications in both diagnostic and therapeutic hepatology. Owing to its excellent, unobstructed, real-time liver imaging, EUS is a valuable tool for gastroenterologists and hepatologists in managing liver diseases and associated complications. EUS modalities have advanced in several respects, including improved visualization of focal liver lesions, tissue acquisition, and the diagnosis of gastric and esophageal varices. Moreover, EUS-guided interventional methods to assess portal pressure, drain hepatic abscesses, and ablate hepatic cysts are patient-friendly, with limited risk of complications. In addition to its diagnostic utility, EUS is also a valuable and relatively safe and effective therapeutic modality for many applications in patients with chronic liver diseases. Given the several advantages and strengths of EUS, its clinical applications are expected to grow rapidly in all aspects of diagnostic and therapeutic hepatology.
Scalable Analysis of Untargeted LC-HRMS Data by Means of SQL Database Archiving
mass spectrometry (LC-HRMS)
is widely used for comprehensive screening of complex samples in environmental, forensic, clinical, and food chemistry.
The screening is considered comprehensive because it can cover chemicals
with a wide range of physiochemical properties and allows the use
of a flexible target list since data acquisition typically occurs
in an untargeted manner. The acquired data set is rich, with each
analytical compound identified as an m/z-retention time pair, possibly supported by additional parameters
such as the fragment ions, adduct pattern, or isotopic pattern. LC-HRMS
data originally acquired for one purpose is reanalyzed to answer new
research questions using retrospective screening, metabolomics applications, and nontargeted screening. The data files are mostly queried at the batch level using vendor
or open-source software via open file formats. The data analysis strategies
for re-use of >1000 LC-HRMS data files typically involve either
significant
data reduction or are limited to the original research question, thereby
requiring process-heavy reanalysis for new questions. This article
reports a novel data analysis strategy for LC-HRMS data that involves
the additional storage of peak deconvoluted data in a structured query
language (SQL) database format, allowing for quick reanalysis to answer
new research questions. Untargeted and unannotated analytical
data from our LC-HRMS forensic
drug screening collected in 8 years was parsed to an SQL database,
referred to as ScreenDB. Forensic analyses adhere to strict quality
assurance schemes to ensure the traceability and reproducibility of
results. After a method development, validation, and implementation
phase, a screening workflow will run with only minor modifications
resulting in data that is comparable over time. ScreenDB was set up
with the objective of linking the screening data with other forensic
metadata, and the database should allow the access to all relevant
data layers including adduct, isotope, and fragment ions. Our applications
of ScreenDB show functional data workflows with >10,000 LC-HRMS
data
files. This novel
data analysis strategy for LC-HRMS data is scalable to at least 40,000
data files, although only the server hardware theoretically sets the
limit. SQL archiving is thus ideal for active storage of large amounts
of comparable LC-HRMS data sets if the data owner wishes to frequently
query data.
Instrumentation
The sample preparation, LC-HRMS settings,
and subsequent data evaluation workflow were previously described
by Mollerup et al. Briefly, chromatographic
retention was performed with reversed-phase LC in the gradient mode
with a total run time of 15 min. Data were acquired on three different
Xevo G2-S QTOF HRMS instruments (Waters, Milford, USA) with comparable
parameters and sensitivities, in the MS E data-independent
acquisition mode.
Drug Screening
Internal standards were added to all
samples prior to sample preparation. Within-run quality control (QC)
samples included blank matrices, internal-standard blank injections,
and injection of three methanolic standard mixtures, each containing
approximately 100 compounds at 0.5 mg/L. The quality
of every batch and injection was verified as part of routine forensic
data analysis. Only data from analytical runs that fulfilled set forensic
QC criteria were transferred to the database. All data was analyzed
in UNIFI instrument software (Waters, Milford, USA) and then exported
as the UNIFI export package format (.uep). Since the data was previously
used in forensic drug screening workflows, data files were reprocessed
to remove annotations and to lower the ion count threshold. The count
threshold was lowered to allow evaluation of these set thresholds.
Data Analysis
LC-HRMS measurement variables were read
directly from the uep data files and subsequently parsed to the SQL
archive. To prepare plots, data were extracted from ScreenDB with
the SQL server (Microsoft, Redmond, Washington, USA), and subsequent
analysis steps were made using Python. Data variables of internal
standards for all biological samples and selected QC analytes analyzed
between 2014 and 2020 were extracted from ScreenDB with one diagnostic
fragment ion with precursor ion limits of exact mass ± 3 mDa.
These ions were then grouped using the mean-shift clustering algorithm
from the SciKit-learn Python package. All peaks were scaled using
tolerances of 3 mDa and 0.5 min between samples. The mean-shift clustering
algorithm was subsequently applied using a bandwidth of 1. Diagnostic
fragment ions were grouped if present at 0.015 min from the precursor
ion.
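The following is a minimal sketch of the grouping step described above, assuming the ions have already been extracted from ScreenDB into a table with mass and retention time columns; the column names are illustrative, not the actual schema.

```python
# Sketch of the ion-grouping step: peaks are scaled by the stated
# tolerances (3 mDa in mass, 0.5 min in retention time) so that the
# unit bandwidth in the scaled space corresponds to those tolerances.
import numpy as np
import pandas as pd
from sklearn.cluster import MeanShift

def group_ions(peaks: pd.DataFrame,
               mz_tol: float = 0.003,  # 3 mDa, in Da
               rt_tol: float = 0.5):   # minutes
    """Cluster (m/z, retention time) pairs across samples.

    `peaks` holds one ion per row with columns 'mz' (Da) and
    'rt' (min); both column names are assumptions.
    """
    X = np.column_stack([peaks["mz"] / mz_tol, peaks["rt"] / rt_tol])
    labels = MeanShift(bandwidth=1.0).fit_predict(X)
    return peaks.assign(cluster=labels)
```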
ScreenDB Architecture and Content
Different types of
forensic cases were screened with the same LC-HRMS method, including
driving-under-the-influence-of-drugs, drug seizure, biological samples
from autopsy, and drug-facilitated crime cases. Registration and documentation
are handled through a laboratory information management system, STARLIMS
(STARLIMS Corporation, Hollywood, FL, USA). The sample name, analytical
run number, and analysis identifier nomenclature in ScreenDB refer
to the entries in the LIMS. Via this structured connectivity, data
from each LC-HRMS injection can be matched with well-curated data
consisting of quantitative results from complementary methods, the
sample age, case characteristics, and other historical data. ScreenDB
consists of two tables, with Tables S1 and S2 presenting the most important variables parsed from the uep files.
A sample table holds sample-specific information, such as the sample
identifier, file directory, run name, and a unique data identifier
(uid) assigned to the raw data using vendor software. Each analytical
injection corresponds to one line in the sample table. In the peak
table, each line corresponds to one ion signal from either low- or
high-energy channels. Each signal has an accurate mass, retention
time, and signal intensity, together with diagnostic variance parameters
calculated in the compression (componentization) process from the
profile data. The UNIFI componentization step involves peak detection
and subsequent grouping of co-eluting ions in the high- and low-energy
spectra, and thus filtering out of some
background signals that do not show chromatographic retention. The
precursor ion, in-source fragment ions, isotopes,
and adducts are available from the low-energy spectra, and the residual
precursor ion and fragment ions are available from the high-energy
spectra, along with some co-eluting interferences. Ions are decomponentized
in ScreenDB, meaning that each measured ion at a given retention time
results in a line in the peak table, and it can be queried independently
from the assigned component. Variables available from the ScreenDB
peak table are illustrated for the drug cocaine. The spectra show how isotopic
peaks, diagnostic fragment ions, and protonated molecules are available
for queries. Extracted spectra for cocaine and morphine are available
in Figures S1A and S2A, respectively, from the lock-mass corrected and centroided raw files, and from the uep files from the same injection (Figures S1B and S2B), to illustrate data retention in the componentization
step. Complex, multiparametric queries are not easy to build in vendor software but can be coded in general-purpose programming languages with extracts from ScreenDB, as in the sketch below.
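A minimal example of such an extract, assuming an ODBC connection to the SQL Server instance; the table and column names are illustrative and not the actual ScreenDB schema:

```python
# Hypothetical ScreenDB extract: return peak variables for ions whose
# accurate mass lies within +/- 3 mDa of a target exact mass.
import pyodbc  # assumes an ODBC driver for SQL Server is installed

QUERY = """
SELECT s.sample_id, p.mz, p.rt, p.intensity, p.energy_channel
FROM   peak   AS p
JOIN   sample AS s ON s.uid = p.uid
WHERE  p.mz BETWEEN ? - 0.003 AND ? + 0.003  -- exact mass +/- 3 mDa
"""

def find_ions(conn_str: str, exact_mass: float):
    with pyodbc.connect(conn_str) as conn:
        cur = conn.cursor()
        return cur.execute(QUERY, exact_mass, exact_mass).fetchall()
```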
A legacy version of ScreenDB used for earlier applications was based on the mascot generic format
files that associated each low-energy ion with the component’s
high-energy spectrum. This inflated the legacy ScreenDB and also required
conversion of the data files prior to storage. Reading the uep files
directly allows the retention of more analytical information and access
to the extra data layers arising from profile acquisition. The use
of the mascot generic format in combination with MS1-level
features is used in feature-based molecular networking, where only
representative MS2 spectra are selected to eliminate this
data inflation. Feature-based LC-HRMS
data with linked fragment ion spectra could work well in an SQL database
structure, although ions recorded in low and high energy would have
variable levels of information. The database architecture would have
to be modified accordingly and may need separate tables for features
and fragment ions. Other acquisition and processing software or peak-picking
algorithms provide different data variables available for parsing.
Without the UNIFI componentization step, another lock-mass correction
and peak-picking algorithm should be used to centroid ions in the
mass and retention time domains. It is, however, crucial for retrospective
and non-targeted data analysis workflows that the data layers remain
accessible separately for flexible queries that are not locked in
feature formats. Our purpose of making this structured digital archive
was to mirror and support the in-house screening, which uses UNIFI
peak components, and therefore, alternative approaches were not evaluated
as this would not fulfill our research goals.
Measurement Uncertainty
Reproducible measurements are
imperative for the meaningful comparison of data variables acquired
over years of analysis. A strength of ScreenDB is that stored QC sample
variables and internal standard signals are accessible for further
quality check and to set informed limits. Histograms of the protonated
molecules in the mass and retention time domains from internal standards
in around 14,000 biological samples are presented. Fragmentation reproducibility is presented with 5 standards from around 1000 methanolic QC sample injections.
When performing big data analyses with ScreenDB, the extracted ion
chromatograms and fragment ion spectra are not evaluated individually,
as opposed to forensic screening evaluation. Therefore, well-informed
and more tailored decisions need to be made, as presented in Pan et
al. A minimal sketch of deriving informed limits from archived internal-standard data follows.
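This is a hedged sketch, assuming the retention times (or mass errors) of one internal standard have been extracted into an array; the use of median and median absolute deviation (MAD) with k = 4 is one reasonable choice for setting limits, not the method prescribed by the authors.

```python
# Derive robust screening limits (median +/- k * MAD) from archived
# internal-standard measurements; k = 4 is an illustrative choice.
import numpy as np

def robust_limits(values, k: float = 4.0):
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    mad = np.median(np.abs(values - med))
    return med - k * mad, med + k * mad

# Example: tolerances for retention time (min) of one internal standard.
lo, hi = robust_limits([5.02, 5.04, 4.99, 5.01, 5.03])
```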
Applications of ScreenDB
System Monitoring
In accredited analytical laboratories, systems are regularly maintained and evaluated with system suitability
testing. Monitoring of system performance with ScreenDB variables
supports informed decisions during troubleshooting sessions, as exemplified
with data
from a single LC-HRMS instrument. These plots can help chemists distinguish
between insignificant drifts and problems needing intervention, when
combined with logged instrument maintenance events. A minimal sketch of such a drift plot is shown below.
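A hedged example, assuming the internal-standard ions for one instrument have been extracted into a pandas DataFrame with an acquisition timestamp; the column names are assumptions.

```python
# Plot weekly median retention time of one internal standard to reveal
# slow drift; 'acquired' (datetime) and 'rt' (min) are assumed columns.
import pandas as pd
import matplotlib.pyplot as plt

def plot_rt_drift(ions: pd.DataFrame):
    weekly = ions.set_index("acquired")["rt"].resample("W").median()
    ax = weekly.plot(marker=".", linestyle="none")
    ax.set_xlabel("acquisition week")
    ax.set_ylabel("retention time (min)")
    plt.show()
```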
ScreenOmics
The in-house drug screening workflow is improved by identification of new targets for analytes that cannot otherwise be detected via metabolomics-type workflows. A toxicological screening is frequently carried out with positive
electrospray ionization and therefore has lower sensitivity for neutral
and acidic compounds. Comparing the LC-HRMS data from samples with
known positive and negative quantitative results for a given drug
allows the identification of alternative targets. This is referred to as
ScreenOmics. The ScreenOmics approach does not use signal intensities
for prioritization of targets but instead takes advantage of the large
number of control samples accessible in ScreenDB. Alternative targets
may be adducts and/or metabolites that ionize better in positive electrospray
ionization than the parent compound, as described for valproate and barbiturates. These workflows are only possible because all ions associated with
a feature are accessible in ScreenDB.
Retrospective Data Analysis
When a new drug emerges, ScreenDB can be queried to disclose whether a feature with those characteristics has ever been acquired, as in the sketch below.
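A hypothetical retrospective query for an emerging drug, requiring both the protonated molecule and a co-eluting diagnostic fragment ion; the 3 mDa and 0.015 min windows mirror the description above, but all table and column names are illustrative.

```python
# Find historic injections containing the precursor ion of a new drug
# together with a diagnostic fragment within 0.015 min.
RETRO_QUERY = """
SELECT DISTINCT s.sample_id, p.rt
FROM   peak   AS p
JOIN   sample AS s    ON s.uid = p.uid
JOIN   peak   AS frag ON frag.uid = p.uid
WHERE  p.mz    BETWEEN ? - 0.003 AND ? + 0.003   -- precursor +/- 3 mDa
  AND  frag.mz BETWEEN ? - 0.003 AND ? + 0.003   -- fragment  +/- 3 mDa
  AND  ABS(frag.rt - p.rt) <= 0.015              -- co-elution window
"""

def retrospective_hits(cursor, precursor_mz: float, fragment_mz: float):
    params = (precursor_mz, precursor_mz, fragment_mz, fragment_mz)
    return cursor.execute(RETRO_QUERY, *params).fetchall()
```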
Retrospective data analysis is performed in a matter of seconds, making this approach for retrospective screening
more efficient than reprocessing data or reanalyzing samples. Recently,
13,514 data files from driving-under-the-influence-of-drugs samples
stored in ScreenDB were queried for designer drugs. A data workflow was tailored to common benzodiazepines
using quantitative results from a complementary analytical method
as a true condition. This study revealed 43 tentative positive findings that
were not detected in the screening when the case was open and only
9 false positive findings. Feature-based data analysis can answer
many of the same and more research questions than ScreenDB but can
only scale to 2000 samples by limiting the number of features for
downstream analysis. Vendor software
used for the drug screening workflow in our laboratory becomes slow
in batches of >100 samples, and ion signal data is locked in components.
ScreenDB was therefore developed as a structured library of ion signals
to enable active reuse of LC-HRMS data and flexible access to available
data layers. The scalability and flexibility of LC-HRMS E data analysis via SQL database archiving are not achievable with
other platforms. Moving data to an SQL archive is an alternative to
relying on memory upgrades to run larger batches. When the ion signals
are archived as tabular data, the data can no longer be directly imported
in the vast number of computational tools available for LC-HRMS data
analysis workflows. Consequently, data analysis workflows need to
be developed de novo, and some programming is necessary to make use
of the data. Storing data in the SQL format increases storage space
demands and most often requires a separate database server. However,
the price of the necessary hardware or cloud services is minuscule
compared to the price of the LC-HRMS hardware.
In this study, we report
a novel scalable strategy for LC-HRMS
data analysis. ScreenDB is an SQL database that currently stores data
from around 40,000 data files, acquired with a single analytical method
from 2014 onward. ScreenDB can be used as a stand-alone data source,
but its main value for forensic toxicology lies in the linking with
digitalized laboratory and case data. In our laboratory, we frequently
query the database for contaminant troubleshooting and retrospective
data analysis and to improve our drug screening method. Easy access
to historic data is a prerequisite for it to be of any value in high-throughput
laboratories. Because ScreenDB is an SQL database, we only have to connect (<1 min), and then we can query 8 years' worth
of LC-HRMS data with the same level of information available in vendor
software but with both flexibility and speed. Scalable data analysis
approaches as presented here are necessary as large-scale biomonitoring
with LC-HRMS becomes more prevalent. Active storage in SQL databases
can expand the impact of large-scale biomonitoring projects by making
the data more accessible, reusable, and interoperable. However, robust
analytical systems that are evaluated with QC systems are imperative
to compare data over long periods of time. Readily retrievable,
curated, and untargeted analytical data enable
fast and simple retrospective analyses for new targets and active
use of stored intelligence in historic LC-HRMS data. Consequently,
transferring data to a structured digital archive augments active
data reuse and, in our case, improves forensic services.
Medical physics practice guideline 4.b: Development, implementation, use and maintenance of safety checklists

INTRODUCTION
1.1 Motivation: The value of checklists
The field of medicine is characterized by highly complex and dynamic processes, where a multidisciplinary team works together using sophisticated imaging, planning, and delivery systems to provide efficient, accurate, and safe patient treatment, often under intense time pressure. As a result of such characteristics, the practice of medicine is susceptible to errors in judgment, errors in communication, and lack of compliance with standard operating procedures, as well as workflow inefficiencies. Other complex environments outside of medicine, such as aviation and product manufacturing, have successfully used simple tools to aid in reducing human errors. One of these tools is checklists. Checklists have been extensively validated in medical and non-medical fields for many years and have proven to be an effective tool in error management. They are a key instrument in reducing the risk of costly mistakes and improving overall outcomes. As Atul Gawande so eloquently presented in the "Checklist Manifesto", we must be realistic regarding the complexity and responsibility of our field, and humble enough to accept the benefits in error reduction that checklists provide. Checklists are only effective if they are used, and used as intended. Checklists should be used in conjunction with professional experience, knowledge, and inquiry. Effective implementation of checklists is dependent on the attitudes and motivation of involved staff and leadership.
1.2 Goals
The goal of this document is to provide a comprehensive strategy for designing, implementing, using, and maintaining clear and effective safety checklists. It is also intended to provide standard components of checklists that can be used in the development of procedure- and clinic-specific quality management tools. This document does not define the specific elements of a unique checklist for a specific clinical task or process. Over the past 5 years, since the original MPPG 4.a was published, interest in and use of safety checklists in medicine and medical physics has continued to grow. In this updated document we reinforce the strategies presented in the original document and further address common barriers to checklist implementation and use in the context of change management.
1.3 Scope
Given the wide variety of practices and technologies in diagnostic imaging, nuclear medicine, and radiation therapy, it is neither practical nor desirable in this document to provide a rigid set of checklists that must be adhered to. Experience from the aviation industry indicates that effective checklists are "works in progress" that evolve as techniques develop and technology advances. Additionally, effective checklists should fit the needs, workflow, and goals of a specific environment or practice. This document, therefore, focuses on guidelines for the development of checklists rather than rigid recommendations. Future AAPM Task Groups or accreditation organizations (e.g., ACRO, ACR, or ASTRO) should consider using the steps and methods presented in this document when developing standardized safety checklists as part of their documents. The scope of this MPPG is limited to:
- Checklist design and implementation recommendations.
- Providing a few example checklists and checklist components (not intended to be adopted en bloc); see the appendix.
- Identifying strategies for maximizing use of checklists in the clinical environment.
- Identifying the necessary cultural and organizational features needed to develop, implement, and maintain effective checklists.
1.4 Intended users
The intended users of this MPPG are individuals involved in quality and safety management in a clinical setting.
THE ROLE OF CHECKLISTS IN ERROR MANAGEMENT
Most tasks can be classified into two basic categories, depending on the type of behavior needed for completion: tasks requiring schematic behavior, in other words done reflexively or "on autopilot", and tasks requiring attentional behavior, which need a predefined active plan and problem-solving skills. Errors can be associated with each type of behavior. Failures of schematic behavior are called slips or omissions; they are associated with lapses of concentration, distractions, exhaustion or burnout, or natural limitations of human memory, for example when long lists of data fields need to be transmitted. Failures of attentional behavior are called mistakes, often occurring due to lack of experience or poor training, but also arising from poor judgment, misunderstanding a situation, fatigue, or a rushed process. In medicine, most errors fall in the schematic category rather than the attentional category. Checklists provide a framework to manage and reduce the risk of errors originating from slips or omissions.
The aviation industry is a prime example of the successful use of checklists. The industry has learned that when pilots and air-traffic controllers are provided with and trained in evidence-based checklists, in an environment that motivates them to follow the checklists every single time, the likelihood of errors and accidents is drastically reduced. Checklists provide a basic memory guide and backup for those tasks that are easily forgotten and ensure that the basics are not missed (e.g., wrong patient, wrong site, missed bolus, missed electron block), allowing the team to concentrate on the more difficult and complex tasks that require more time and attention. Additionally, checklists provide a communication and workflow process that allows teams or individuals to pause, ensuring they are working together. Properly structured checklists facilitate systematic and consistent care delivery, thus reducing variability and improving performance.
Checklists must have the right balance of information and structure to support clinical practice without compromising or impeding professional judgment or being overly burdensome. A risk in using checklists is the illusion that checklist compliance is sufficient and that everything is fine if the checklist is complete. Even the best designed and implemented checklists cannot account for all scenarios and circumstances. In medical physics, professional scrutiny while using checklists is imperative for safe use. In summary, checklists function as a supporting interface among individuals, and between individuals and their environment, helping to guide a particular workflow or procedure.
CHECKLIST TEAM—QUALIFICATIONS AND RESPONSIBILITIES
Staff requirements, time allocation, and resources needed to develop and implement a checklist will scale with the scope of the checklist, as well as the size of the practice where it will be used. Development efforts can range from one individual working for a day to a large team with member representation from each clinical care group (e.g., therapist, dosimetrist, physician, nurse, physicist) working for several months. Teamwork is an essential organizational component for a successful checklist when used in large multidisciplinary settings or where the scope of the checklist involves multiple clinical groups. As appropriate, a team approach should be used throughout all phases of development, implementation, revision, and maintenance of the checklist. Each professional group will have a varied perspective of the process, and obtaining broad feedback will generate buy-in toward future use of the checklist.
There are additional incidental benefits of a multidisciplinary team approach: team members gain an improved understanding of the workflow tasks and roles as well as how work in one group impacts the others. This increased understanding may reveal opportunities for decreased duplication of efforts, increased efficiency, and improved communication. Additionally, each team member that participates during the development process acquires a sense of ownership, which will have a positive impact during implementation of the checklist into practice.
Team members who will be participating in the checklist development and implementation processes should possess the technical expertise, knowledge, and experience in the area, process, or procedure where the checklist will be used. In addition, all team members should understand the benefits of safety checklists and the goals that the checklist aims to accomplish. Members should be empowered to speak directly and honestly, thus avoiding a situation where the checklist will go unused or will hamper efficiency without improving safety.
Checklists have a strong sociocultural component because they rely on individuals' motivation, commitment, and intervention to be effective as an error prevention strategy. Therefore, an individual or group embarking on the creation of a checklist will require skills in team building and collaboration, guiding participation, conducting constructive discussions, and finding and agreeing on mutual purpose, among other management, leadership, and organizational strategies. Often, these skills are underdeveloped and are not part of any of the team members' formal training. Some recommended literature on this topic can be found in Appendix A.
CHECKLIST GUIDELINES
4.1 Development and implementation processes
Based on current literature and best practices from the aviation and medical industries, the development and implementation process can be categorized into the following steps (see the accompanying figure).
4.1.1 Clinical need and evidence-based best practices
The first step in developing a checklist is to find specific clinical areas or processes that have the strongest evidence to improve quality and safety, and have the highest clinical impact and the lowest barriers for implementation and use. Literature review of best practices, empirical evidence, and regulatory, local, and community input can help with the selection process. Examples of processes that have been shown to be effective quality control checks in radiation therapy and that could benefit from checklists were presented by Ford et al. and include physics chart review, physics weekly chart check, and therapy chart review. Additionally, high-risk and complex procedures are examples where effective safety checklists may have high impact as an error mitigation strategy. A checklist may also be implemented as part of a corrective action in response to an incident or event.
When selecting processes or procedures that will potentially benefit from checklists, consider that excessive use of checklists could be detrimental to the practice, leading users and teams to experience "checklist fatigue". Excessive and uncontrolled use of manual safety tools like checklists could make processes unnecessarily inefficient, thus decreasing the reliability of the tool and adding another layer of complexity. With this in mind, the selection process should concentrate on those tasks that are critical, often missed or overlooked, and can potentially put the patient at the highest risk for harm if not done. The checklist is a tool for task completion that does not replace professional experience and knowledge. A checklist must never be used as a strategy to resolve disciplinary issues, as a replacement for properly documented policies and procedures, or as a teaching tool by itself. However, checklists can be valuable complementary tools to support a well-designed educational or onboarding process.
4.1.2 Designing phase—content and format definition
Poor selection of, or ambiguity in, the checklist goal, role, or tasks will most likely lead to failure of the checklist intervention. Therefore, each checklist intervention needs to be associated with an explicit, concise, and unambiguous behavior. Methods for determining appropriate content include literature reviews, multidisciplinary focus groups, Delphi consensus, risk analysis approaches such as failure mode and effect analysis, and causality models such as system-theoretic process analysis. The content of the checklist should be organized so it facilitates efficient workflow. The language and sentences used for the checklist items should be simple, direct, and unambiguous, yet maintain the specialized language of the field. Checklist design should incorporate the user or team context, complementing the workflow and avoiding interference with safe and efficient care delivery. The additional time and resources needed to use and perform the checklists should be optimized and factored into the workflow. When borrowing checklists from other practices, the content and format of the checklist should not be considered absolute and will need to be evaluated and modified to fit each practice environment and workflow.
Checklists should reflect up-to-date processes and procedures and the current clinical operational context. Specific recommendations for physical checklist design are provided below, and example checklists are provided in the appendix.
4.1.3 Validation and pilot phase
The validation and pilot phases are essential for the success of the checklist and will help the development team detect and identify problems, risks, and issues before clinical deployment, thus avoiding complications that could lead to resistance to using the checklist. This step is the first feedback loop back to the designing phase, as shown in the accompanying figure. In most situations, the validation of the checklist is a continuous iterative process, requiring several revisions by the development team until the checklist design is acceptable (i.e., it achieves the initial goal and maintains a satisfactory workflow). During the validation process, the development team works on reaching consensus on the usability, timing, potential risks, team interaction, format, and content of the checklist. After initial validation, the checklist should go through a thorough pilot testing process in a simulated clinical setup, conducted by a group representing the target individuals or team. Depending on the scale of the target group and the scope of the checklist, standard quality control methods like Plan-Do-Study-Act (PDSA), as well as heuristic evaluation using interviews, focus groups, and surveys, can be used during the pilot phase to collect data and improve the format and conducting method of the checklist.
4.1.4 Preclinical implementation training
Effective training on the use of the checklist must precede clinical deployment. Target users and teams must have a complete understanding of the purpose and methodology for using the checklist, as well as the goal of each item on the list. Simulation training using the checklist in the intended team and environment under a variety of possible scenarios should be conducted prior to clinical implementation. Consistent training should prevent misinterpretation of the items in the checklists and minimize erroneous answers or checks. During the initial period following clinical implementation, the development team should follow and monitor users and teams in clinical situations, provide guidance, and gather data to further enhance the tool. If it is discovered that the checklist contains faults or anomalies leading to common mistakes or confusion, it is important to correct the problems promptly and, if necessary, loop back to the designing stage for additional improvement of the checklist, as shown in the accompanying figure. The development team should seek to identify barriers to the use of the checklist. A later section identifies common barriers to checklist implementation and use, and a table in the appendix outlines strategies for overcoming common barriers.
4.1.5 Outcomes and performance evaluation
Measuring performance and specific outcomes is the only way to demonstrate that the intervention (in this case, the checklist) works. It is advisable to collect baseline measurements pre-implementation to be able to compare with post-implementation data and evaluate and quantify the success (or failure) of the checklist. Incident reporting systems provide one method to collect this information. Audits of checklist compliance provide another mechanism to evaluate performance. Ohri et al.
showed that, in clinical trials, radiation therapy protocol deviations are associated with increased risk of treatment failure and overall mortality. Checklists, as an error mitigation strategy and quality assurance tool, have the potential to affect clinical outcomes, but measuring this impact is very challenging and is outside the scope of most checklist implementation processes, particularly for rare or sentinel events. Examples of achievable outcomes and end-points that should be measured as part of a checklist implementation process include:
- Compliance with clinical protocols, procedures, and processes.
- Reduction of near-misses and incidents in critical clinical processes.
- Enhancement of communication and team dynamics.
- Practice standardization.
- Streamlined workflow.
Demonstrating the success of a specific checklist with concrete evidence will reinforce the utility of the tool to the group and may help motivate skeptical individuals to use the checklist.
4.1.6 Maintenance and continuous improvement
Checklists should evolve with practice and reflect the most current evidence-based data, published guidelines, end-user feedback, and organizational changes, as well as updates to internal institutional policies, procedures, systems, machines, and instruments. As part of the practice's overall quality assurance or safety program, routine reviews (e.g., annual or semi-annual) of the practice checklists, as well as checklist performance and compliance, should be performed. Reviews should include consideration of checklist retirement if a checklist is no longer needed to support clinical practice. Incident learning systems provide a quality control metric of checklist performance and can flag when the tool requires further development or possibly additional training. A checklist should be considered a constantly evolving document, requiring monitoring and modifications to adapt to practice changes. Roles and responsibilities for checklist maintenance, periodic review, and continuous improvement should be clearly defined to ensure continued relevance and proper use.
4.2 Checklist purpose and use
How a checklist is used depends on its purpose. Some checklists guide the user through a process, preventing the omission of steps. Other checklists verify the data that will go into some process, such as a calculation, or facilitate passing information between team members, such as in a planning directive. Procedures or processes requiring multiple team members to be present at the same time (e.g., stereotactic body radiation therapy [SBRT], high dose rate [HDR] brachytherapy, stereotactic radiosurgery [SRS], adaptive radiation therapy [ART], angiogram) might assign one person as the caller/checker of the tasks on the checklists. Upon completion of their corresponding task, the other team members will clearly state their task followed by "check" or "complete". This approach lets the person calling the task know that the person performing the task heard the call correctly and performed the task. The most suitable method depends on the specific circumstances, the individual versus team approach, and the clinical context where the checklist will be used. Many checklists are used to intercept possible errors, for example, evaluating a brachytherapy plan before delivery. These forms are often used by a single individual and are most effective without the participation of the person that originally performed the task.
For these forms, where appropriate and without interfering with the workflow, the person doing the check should enter the actual value from the task (such as the dose to the clinical target volume from the plan) and compare it with the corresponding limits (upper and lower limits should be included in close proximity to the relevant item on the checklist); a minimal sketch of such a limit check follows this section. Writing out all the values helps the checker notice if the values fall outside the limits. Additionally, performance may be enhanced if the person using the checklist knows that checklist use will be audited.
The concept of redundancy is an important factor in the checklist philosophy. In any system where the human plays a central role in the outcome of a process, humans are often the weak link in the system; therefore, it is important to establish parallel redundancy to the human intervention. Based on experience from the aviation industry, there are two types of redundancy available for the checklist use procedure. The first is between the initial configuration of a system, machine, or process and the use of the checklist as a backup only; this is called initial configuration redundancy. The second is the redundancy between team members supervising one another while conducting the checklist; this is called mutual redundancy.
Checklist conducting methods can be classified into four categories:
- Static parallel or call-do: Using this method, the checklist items are performed and completed as a series of read-do tasks. The checklist leads the process and directs the team or individual through the process step-by-step. In other words, the checklist uses the "cook book" approach. This method does not use any of the redundancy strategies.
- Static sequential with verification: This method only uses initial configuration redundancy and requires at least two individuals. One person will perform tasks from start to finish. Then, a second team member will verify each item from the checklist. This method is frequently used upon completion of a process (e.g., treatment planning) followed by the independent verification of correct completion of critical items by another team member (e.g., pretreatment plan check).
- Static sequential with verification and confirmation: This method uses a challenge and response mechanism. During processes requiring a group approach, different members of the team perform various tasks. Upon task completion or during a reasonable procedural pause, a designated team member calls the items from the checklist and each responsible group verifies the completion and accuracy of their corresponding tasks. This method uses the combination of initial configuration and mutual redundancies as a safety barrier mechanism.
- Dynamic: This method is suited for complex decision-making situations, where the team is confronted with multiple options and needs to decide the optimal course of action. Emergency situations or infrequent and unpredictable critical events are suitable for the dynamic method. This method frequently uses flow charts and workflow diagrams to aid with the decision-making process. The aviation industry uses this method for emergency and abnormal situation checklists. Arriaga et al. used this method to develop their surgical-crisis checklists. An example of an emergency-style checklist is shown in the appendix.
A summary of the four checklist approaches, with corresponding redundancy strategies and clinical examples, can be found in the accompanying table.
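As a concrete illustration of the value-entry check described above, the following is a minimal sketch in which the checker records the actual value and the form flags it against the adjacent limits; the item name and limits are hypothetical.

```python
# Flag a recorded checklist value against its predefined limits.
LIMITS = {"CTV D95 (Gy)": (38.0, 42.0)}  # (lower, upper); illustrative

def check_item(item: str, value: float) -> str:
    lo, hi = LIMITS[item]
    status = "PASS" if lo <= value <= hi else "OUT OF TOLERANCE"
    return f"{item}: recorded {value} (limits {lo}-{hi}) -> {status}"

print(check_item("CTV D95 (Gy)", 37.5))  # -> OUT OF TOLERANCE
```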
4.3 Checklist design recommendations
The field of Human Factors Engineering uses knowledge about human characteristics, both capabilities and limitations, that are relevant during any design process, and aims to optimize the interactions among people, machines, procedures, systems, and environments. There is ample evidence from both the aviation industry and the medical field showing that failing to adequately consider humans in the design and operation of systems is at best inefficient and at worst unsafe. It is important to apply Human Factors Engineering knowledge to the development of checklists because the checklist is a tool that relies completely on human intervention for effective performance. The following recommendations have been gathered from well-established aviation industry guidelines and from multiple disciplines in the medical field. These recommendations can be classified into three main areas: (1) content; (2) workflow, layout, and format; and (3) physical characteristics. Additional guidance is provided in the appendix ("A checklist for checklists").
4.3.1 Content
- A clear and unambiguous title that reflects the objective of the checklist should be defined.
- Clear guidance should be provided on the type of checklist and on what, when, and who is responsible for carrying out each of the actions and tasks in the checklist.
- Know the task and consider all task scenarios. Process mapping can facilitate understanding all the steps in the process.
- Address how the task is, or should be, performed.
- Use standard and unambiguous language and terms.
- For time-constrained clinical situations and processes, consider the minimum number of actions that need to be included on the checklist that will provide effective and safe patient care. Iterative trial use of the checklist can help determine which actions are imperative to include while minimizing length.
- Consider the physical demands of the task and the environment in which the task is being executed (e.g., subtasks to pause when hands are free, switching windows on a computer).
- Automated subtasks must be differentiated from those tasks that require attention. For an automated task, the checklist should include a check that the task is completed.
- Specific values should be recorded on the checklist, if compatible with the workflow, to ensure a task is not marked as 'complete' when the value is out of tolerance.
- The date of creation or last revision of the checklist must be clearly identified.
- All documents should identify the originator and approval route.
4.3.2 Workflow, layout, and format
- Sequencing of checklist items should follow the clinical process or procedure, thus reducing the risk of users deferring checking items and potentially forgetting or missing those items and tasks.
- When compatible with the clinical process or procedure, the most critical items in the section of the checklist corresponding to that clinical process or procedure should be placed at the beginning of the section and should be completed first.
- Checklist procedures must be compatible with the operational context, restrictions, and needs of the environment where they will be used.
- Situations or processes requiring long checklists should be divided and grouped into smaller sections. Each section can be associated with systems, functions, or subprocesses.
4.3.3 Physical characteristics

- Font types that have clear differentiation between characters (e.g., sans-serif fonts such as Helvetica, Gill Medium, or Arial) should be used. Font type should be consistent throughout the checklist.
- Lower case with initial capitals should be used. Use of upper case should be limited to checklist and section headers. Italics for comments, notes, or supporting information are acceptable, but should be used sparingly.
- A font size that is easy to read at about arm's length (60 cm) should be used (this is especially important for paper-based checklists used under dim light conditions). Font size for headings should be 14 pt (with a minimum of 12 pt). Font size for normal text should be 12 pt (with a minimum of 10 pt). For cases where a checklist needs to be contained on one page, a font size smaller than 12 pt may be appropriate, but it must never be smaller than 10 pt.
- Black text on a white or yellow background should be used, with white text on a black background as an acceptable alternative.
- Colored text should be used with caution because of difficulties in reading colors in some lighting conditions and the possibility of causing confusion among colorblind individuals. Colors can be used to differentiate tasks or personnel assignments, but only after other methods have been exhausted. Pastel shading can be used effectively to discriminate specific items on the checklist (e.g., cautions, consequences), but should be used sparingly.
- Effective highlight methods for situations or items that require special emphasis and differentiation, to be used sparingly to maximize the effect, are: bold type, larger font size, underlining, and boxing text on a white or colored background.
- Pink or red pages should not be used.

Using some of the concepts and suggestions previously described, Figure shows a visual comparison between a poor and an improved checklist. Appendix C contains examples of clinical checklist use in radiation oncology, diagnostic imaging, and other areas of medicine.
4.3.4 Technological considerations: Electronic and intelligent dynamic checklists

In addition to the items listed above, consideration should be given to the technical implementation of the checklist. Electronic systems have several potential advantages over paper-based implementations, including:

- Electronic interlocks such that a process or procedure cannot proceed if the checklist is not complete.
- Integration into the patient's electronic chart to facilitate communication between multidisciplinary team members.
- Formal documentation of checklist task completion.
- Ability to perform quick audits of checklist conformance.

Forward thinking about how any collected data will be used is critical, and the desire to collect data for evaluation should not supersede the length, usability, and functionality of the checklist for the intended process. Electronic signatures may enhance ownership and responsibility, potentially improving the accuracy of these data. However, an electronic checklist can have disadvantages when not implemented well. Electronic documents can be challenging in some electronic medical records and may tie at least one user to a computer terminal. These disadvantages are accentuated when the checklist is used in a time-critical procedure. Electronic checklist design and implementation should therefore be approached from a sociotechnical perspective along with concepts of human-computer interaction. The use of simple checks, drop-down menus, and fillable forms should follow the same design principles outlined in Section .

Intelligent dynamic checklists are a form of electronic checklist that is automatically adapted in real time based on pre-programmed rules for the specific procedure or patient-specific clinical need. Intelligent dynamic checklists use clinical context to maximize the relevance of the checks; they can decrease the number of check items, increase checklist applicability, and reduce the absolute number of checklists in a department to minimize checklist fatigue. Intelligent dynamic checklists offer significant advantages for workflow integration; however, since many different scenarios are encoded within a single checklist, they require greater resource allocation for development and maintenance compared to static checklists. As with static checklists, it is critical that dynamic checklists are routinely updated to maintain relevance to clinical practice (a rule-based selection sketch follows at the end of this section).

Automation can be used to facilitate and support checklists, for example through out-of-tolerance warnings and automatic population of values to be evaluated. When automation is incorporated in the clinical workflow, it can be an effective parallel safety tool that may reduce the number of checks that must be performed. The introduction of automation and automated checks changes the processes and associated failure modes; thus, automation may introduce new potential errors while mitigating others. When automation is introduced into a clinical process, the corresponding quality assurance and associated safety checklists should be revised, updated, and validated using formal risk analysis principles.
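To illustrate the rule-based adaptation of intelligent dynamic checklists described above, the sketch below keeps only the items whose pre-programmed rule matches the clinical context. The rule format, context keys, and item wording are hypothetical; a real implementation would draw its context from the planning or record-and-verify system.

```python
from typing import Callable, Dict, List, Tuple

# Each candidate item pairs its text with a rule: context -> applies?
Rule = Callable[[Dict[str, str]], bool]
CANDIDATE_ITEMS: List[Tuple[str, Rule]] = [
    ("Verify motion management settings",
     lambda ctx: ctx.get("technique") == "SBRT"),
    ("Confirm source strength and calibration date",
     lambda ctx: ctx.get("modality") == "HDR"),
    ("Confirm image-guidance protocol",
     lambda ctx: True),  # applies to every case
]

def build_checklist(context: Dict[str, str]) -> List[str]:
    """Keep only the checks whose rule matches this procedure/patient,
    reducing the item count and checklist fatigue."""
    return [text for text, applies in CANDIDATE_ITEMS if applies(context)]

print(build_checklist({"technique": "SBRT", "modality": "linac"}))
# -> ['Verify motion management settings', 'Confirm image-guidance protocol']
```

Encoding many scenarios in one rule set is what gives dynamic checklists their maintenance burden: every rule must be revalidated whenever the clinical process changes.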
5 STRATEGIES FOR SUCCESSFUL IMPLEMENTATION OF A SAFETY CHECKLIST

Checklists can be an exceptional safety management tool, but it is critical to recognize that checklists alone cannot provide enhancements in safety and quality. Bosk et al., in their article entitled "Reality check for Checklists", state: "The mistake of the 'simple checklist' story is in the assumption that a technical solution (checklist) can solve an adaptive (sociocultural) problem." The checklist is a supporting tool and requires the convergence of many factors for effective use and, ultimately, for the successful completion of the associated task. Figure demonstrates, using an Ishikawa (fishbone) diagram, many of the potential barriers to successful safety checklist implementation. Prospectively addressing these barriers may ease the implementation phase and increase the long-term utility of the checklist tool.

5.1 Environment

Environmental factors that can impact checklist compliance include accessibility (location), format, number of checklists, and the general work area. An inconvenient checklist location, or too many checklists to choose from or to complete for a given process, can deter users. Thorough consideration of the intended and actual workflow can help identify means for ensuring accessibility. It is important to consider checklist format (electronic vs. hard copy) with respect to accessibility and environment (e.g., lighting conditions). Clutter in the work area and unavailability of required supporting tools (e.g., pen, tablet) may also detract from effective checklist use.

5.2 Content and design

As detailed in Section , there are many considerations in the design and content of a checklist to ensure success. Importantly, the design of the checklist must not inhibit efficient completion of the task, but rather have synergy with the intended workflow by considering item order, organization, and concise language. The items on the checklist must be carefully considered so that it is not overly exhaustive but does not leave out critical or often-missed components. This balance of the number of items to be checked, format, and workflow can help ensure that the intention of the checklist is clear, that it fits easily into the process, and that its use is consistent among all team members. Thorough validation and pilot testing are critical for the development of checklists of appropriate length, content, and design for clinical use.

5.3 Organizational factors and safety culture

Checklists are a human-based intervention tool, requiring a strong organizational and social infrastructure to support them, including communication, reinforcement of training, and shared knowledge. The underlying organizational component for successful implementation and effective use of safety checklists is the commitment of the department or group to establish and practice a safety culture. A safety culture is said to have four factors:

- the public and private commitment of upper-level management to safety,
- shared attitudes towards safety and hazards,
- flexible norms and rules to deal with hazardous situations, and
- organizational learning.

A commonly cited checklist success story is the Michigan Keystone ICU (intensive care unit) program, which showed that the implementation of checklists, in combination with other key elements, led to a 70% reduction of ICU-acquired infection rates.
These key elements are:

- summarizing, simplifying, and standardizing the process,
- creating internal social networks with a shared sense of mission and mutual reinforcement mechanisms,
- gathering, measuring, and providing feedback on clearly defined outcomes, and
- developing and supporting a safety culture.

Safety cultures are "characterized by communications founded on mutual trust, by shared perceptions of the importance of safety, and by confidence in the efficacy of preventive measures". Most importantly, a safety culture is an environment where all individuals are empowered and responsible to stop a process for any safety concern without fear of consequence, ridicule, or scorn. It is the responsibility of the practice leadership, including those with influential leadership and those with authority, to develop and maintain a safety culture and the tools associated with that commitment. Leaders should view the development, maintenance, and use of a safety checklist as part of routine clinical duties, and they should demonstrate support through allocation of the needed time and resources. Providing a checklist to individuals and teams without building the right environment and organizational support will be a futile effort; management and leadership support for this process is essential.

Empowering staff to mutually reinforce the intended use of a safety checklist is a critical strategy for success. Team members can work together to set the normal behavior of effectively using the checklist. Having a mechanism to hold individuals accountable for using the checklist can be helpful, especially at initial implementation. The checklist design team, quality safety committee, or clinical leadership team should clearly identify who is expected to participate in checklist execution, who is responsible for performing the different tasks listed on the checklist, who completes the checklist, and who maintains and updates the checklist. If responsibilities are not defined, confusion and apathy can follow. If expectations, roles, and responsibilities are defined early in the implementation process, compliance issues can be corrected and coached towards less risky behavior before they solidify into habits. Celebrate early successes using the checklist, and positively recognize staff who are using the checklists successfully. These positive actions can motivate people and bring attention to the change.

5.4 User engagement

Just as organizational infrastructure influences the successful implementation of safety checklists, individual users need to be intrinsically motivated to use the checklist effectively. If staff do not perceive that they, the patient, or other members of the team gain benefits from the checklist, they may view the checklist as unnecessary work or as a distraction. User engagement barriers to checklist use include:

- awareness: staff may not be aware of the checklist or process;
- agreement: staff may not agree with items on the checklist;
- ambiguity: staff may not be aware of what the checklist is asking them to do; and
- ability: staff may not have the resources, time, or skills to comply with the checklist.

Strategies to mitigate common user engagement barriers are described in Appendix , Table B1. Experienced staff may be less keen to use a checklist than staff who are not as familiar with the procedure. Similarly, there may be skepticism about the evidence supporting the utility of checklists.
It is necessary to demonstrate the need for and value of the checklist to all users as early as possible. Include experienced staff, and those who may resist the checklist, early in the checklist design process, and ask them to champion the checklists in practice. During training, emphasize the intended use of the checklist (as a tool to aid staff, not solely as documentation), and gather data about the impact of the checklist during pilot testing and routine use. Monitor the instances in which the checklist prevented tasks from being missed and communicate them to the team (a minimal sketch of such monitoring follows this section). No checklist can account for all potential issues and scenarios that may arise. End users should be encouraged to continue to practice professional scrutiny and curiosity when using checklists to maximize safe and appropriate use. Checklists alone cannot provide enhancements in safety and quality, but in the appropriate organizational environment and with the right individual user mindsets, checklists can be an exceptional safety management tool.
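The monitoring suggested above can start as a simple tally over audit records. The following is a minimal sketch, assuming a hypothetical record format with 'completed' and 'good_catch' flags; actual audit fields would depend on the practice's incident learning and audit systems.

```python
from typing import Dict, List

def summarize_audits(records: List[Dict[str, bool]]) -> str:
    """Summarize checklist compliance and 'good catches' (instances in
    which the checklist prevented a task from being missed)."""
    total = len(records)
    completed = sum(r.get("completed", False) for r in records)
    catches = sum(r.get("good_catch", False) for r in records)
    return (f"compliance {completed}/{total} "
            f"({100.0 * completed / max(total, 1):.0f}%), good catches: {catches}")

audits = [
    {"completed": True, "good_catch": False},
    {"completed": True, "good_catch": True},
    {"completed": False},
]
print(summarize_audits(audits))  # compliance 2/3 (67%), good catches: 1
```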
CONCLUSION Effective checklists support human thinking, allow constructive team member interactions, and facilitate systematic care delivery by reducing process variability. Developing and implementing successful checklists requires a strong organizational and social infrastructure, as well as the application of well‐defined human factors engineering concepts. The guidelines presented here summarize the evidence and knowledge of the aviation industry and other medical disciplines and are intended to guide teams and individuals in our field to develop, implement, and use checklists as a robust and effective error mitigation strategy.
The members of TG344 listed below attest that they have no potential conflicts of interest related to the subject matter or materials presented in this document. Leigh Conroy (Chair), Jacqueline T. Faught, Erika Bowers, Gillian Ecclestone, Luis E. Fong de los Santos, Annie Hsu, Jennifer Lynn Johnson, Grace Gwe‐Ya Kim, Naomi Schechter, Leah K. Schubert, and David A. Sterling.
Quality management, quality assurance, and quality control in medical physics

INTRODUCTION A fundamental tenet in the practice of medicine involves maximizing clinical benefit to the patient, while minimizing associated risks to the patient and their caregivers. In the diverse applications spanning radiology, radiation oncology, nuclear medicine and molecular imaging, medical physics, and various imaging‐guided medical practices, implementing this tenet may involve optimizing diagnostic image quality, therapeutic gain, and/or image guidance accuracy (relevant to planning treatments or delivering medical procedures). These goals directly emphasize the critical importance of quality and safety programs, which are cornerstones for the American College of Radiology (ACR). To this aim, the ACR Commission on Quality and Safety provides oversight and management for all radiology quality and safety programs and initiatives, including Practice Parameters and Technical Standards, Appropriateness Criteria®, accreditation programs, centers of excellence, quality measurements, National Radiology Data Registry, RADPEER™, and Imaging‐RADS. The historic and ongoing evolution of the practice, technology, terminology, and implementation of programs related to Quality in the radiological sciences has given rise to the interchangeable use of the terms Quality Management (QM), Quality Assurance (QA), and Quality Control (QC) in the vernacular. AAPM Report 283 (Task Group 100) presented a well‐organized and in‐depth discussion on QM, QA, and QC, coupled this discussion with risk analysis methods, and applied it with impressive detail for Intensity Modulated Radiation Therapy (IMRT). The two primary objectives of this White Paper are (a) to re‐present an overview of the use of these three terms, and (b) to provide examples of how QM, QA, and QC may be applied in medical physics broadly, and particularly in ACR's Practice Parameters and Technical Standards. This White Paper is a work product of ACR's Committee on Practice Parameters – Medical Physics, which is under the auspices of ACR's Commission on Medical Physics. This White Paper represents a position of the ACR. Technical Standards describe technical procedures or practices that are quantitative or measurable. These often include recommendations for equipment specifications or settings, and are intended to set a minimum level of acceptable technical proficiencies and equipment performance. Practice Parameters describe recommended conduct in specific areas of clinical practice. ACR Technical Standards and Practice Parameters are based on the analysis of current literature, expert opinion, open forum commentary, and formal consensus. It is important to emphasize that the recommended standards and parameters are not intended to be legal standards of care or conduct, and may be modified as determined by individual circumstances and available resources. Notably, there are over 20 Medical Physics Technical Standards and Practice Parameters to date.
Summary of terms and definitions:
QM: An overall management system that includes establishing quality policies and quality objectives, and processes to achieve quality objectives through quality planning, QA, QC, and quality improvement.
QA: A component of QM focused on providing confidence that quality requirements will be fulfilled; it includes all activities (planned, systematic, and practice‐based activities) that demonstrate the level of quality achieved by the output of a process.
QC: A component of QM focused on the fulfillment of quality requirements; it includes activities that impose specific quality on a process; and entails the evaluation of actual operating performance characteristics of a device or system, comparing it to desired goals, and acting on the difference; QC works on the input to a process to ensure that important elements or parameters specific to the process are correct.
Quality management The broad description of QM (according to ISO 9000) is that it can include establishing quality policies and quality objectives, and processes to achieve these quality objectives through quality planning, QA, QC, and quality improvement. A QM program comprises many components that may include: radiation monitoring, management of radioactive sources, incident learning, treatment process QA, equipment and system QA, and equipment QC. Ideally, a QM program should be established for each planned, systematic, practice‐based activity and should include hazard analysis, QC, QA, training and documentation, and ongoing quality improvement efforts. Reactive QM is commonly performed in response to a QA‐detected failure or an adverse event, often taking the form of a “root‐cause‐analysis” as part of an incident investigation. Prospective QM is ideal, where timely interventions on process inputs (QC) and/or process outputs (QA) are implemented before the pre‐determined criteria for quality are exceeded. Risk analysis methods such as process mapping, Failure Modes and Effects Analysis (FMEA), and Fault Tree Analysis are well‐developed approaches for prospective QM. A typical data‐driven strategy for Quality Improvement is the DMAIC algorithm (Define, Measure, Analyze, Improve, and Control) often incorporated in a “six‐sigma” initiative. According to the American Society for Quality, QM involves managing activities and resources of an organization to achieve objectives and prevent nonconformances; while a Quality Management System (QMS) is a formal system that documents the structure, processes, roles, responsibilities and procedures required to achieve effective QM. Effective QM and QMS require active collaboration among all members of a multi‐disciplinary team, including physicians, technologists, nurses, dosimetrists, medical physicists, administrators, and service engineers, among others. This is commonly achieved through the structure of a Quality and Safety (Q&S) Committee, such as a Radiation Safety Committee, MRI Q&S Committee, or Patient Q&S Committee.
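To illustrate the risk-analysis methods named above: in FMEA as applied by AAPM TG-100, each failure mode identified on a process map is scored for occurrence (O), severity (S), and lack of detectability (D), and failure modes are prioritized by the risk priority number RPN = O × S × D. The failure modes and 1–10 scores in this minimal sketch are hypothetical placeholders.

```python
# Minimal FMEA ranking sketch; failure modes and 1-10 scores are hypothetical.
failure_modes = [
    # (description,                         O, S, D)  D: 10 = hardest to detect
    ("Wrong patient plan loaded",           2, 9, 4),
    ("Stale QA device calibration",         4, 5, 6),
    ("Transcription error in prescription", 3, 8, 3),
]

ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for description, o, s, d in ranked:
    print(f"RPN = {o * s * d:3d}  {description}")
```

Ranking by RPN directs prospective QM effort (additional QC steps, interlocks, or checklists) toward the failure modes that pose the greatest overall risk.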
Quality assurance The broad description of QA (from ISO 9000) is that it is a component of QM focused on providing confidence that quality requirements will be fulfilled. QA includes all activities (planned, systematic, practice‐based) that demonstrate the level of quality achieved by the output of a process. As defined by the International Electrotechnical Commission, a process is a set of inter‐related resources and activities that transform inputs into outputs. QA assesses the correctness of the process output, after taking relevant inputs into account, including equipment, systems, and procedures. Although the various ACR Technical Standards and Practice Parameters are in different stages of explicitly identifying QA, relevant parameters quantifying the output of a process are regularly used in clinical practice. Examples of the relevant parameters related to various ACR Technical Standards and Practice Parameters are included in Table . As discussed later for QC, an important criterion of QA is the establishment of corrective actions to be taken should a process demonstrate a failure of a quality metric. The Qualified Medical Physicist (QMP) should advise on when corrective actions should occur, who should provide corrective actions or services, and how to document these actions based on the individual facility's policies, accreditation, or regulatory requirements.
Quality control The broad description of QC (from ISO 9000) is a component of QM focused on the fulfillment of quality requirements. It includes activities that impose specific quality on a process, and entails the evaluation of actual operating performance characteristics of a device or system, comparing them to desired goals, and acting on the difference. Generally, QC works on the input to a process to ensure that important elements or parameters specific to the process are correct. The use of relevant QC techniques (e.g., checklists, run‐charts, time‐outs, etc.) completed prior to performing the procedure is an established and effective practice for maintaining quality and safety. Overseeing and performing QC is a critical part of a Medical Physicist's role, and guidance on modality‐specific QC is described in the appropriate Medical Physics Technical Standard, Practice Parameter, AAPM Medical Physics Practice Guideline, AAPM Task Group Report, or Accreditation/Technical Reference. Examples of QC and the relevant performance criteria related to various ACR Technical Standards and Practice Parameters are included in Table . Another aspect of QC is the establishment of corrective actions to be taken should the device or system fail to meet the desired performance goals or QC criteria. The Qualified Medical Physicist should advise on when corrective actions should occur, who should provide corrective actions or services, and how to document these actions based on the facility's policies, accreditation, or regulatory requirements. See NCRP Report No. 99, QA for Diagnostic Imaging Equipment, for guidance on setting up a corrective action plan.
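The QC pattern described above (measure an operating characteristic, compare it with the desired goal, and act on the difference) can be sketched in a few lines. The parameter, baseline, tolerance, and action level below are illustrative placeholders, not values drawn from any Technical Standard.

```python
def evaluate_qc(measured, baseline, tolerance, action_level):
    """Classify a QC measurement by its deviation from baseline."""
    deviation = abs(measured - baseline)
    if deviation <= tolerance:
        return "pass"
    if deviation <= action_level:
        return "investigate"        # trend and schedule corrective review
    return "corrective action"      # suspend clinical use and notify the QMP

# Hypothetical example: source positioning accuracy in millimeters.
print(evaluate_qc(measured=1.2, baseline=0.0, tolerance=1.0, action_level=2.0))
# -> "investigate"
```

Documenting the tolerance and action levels, together with the corrective response to each, is what turns a simple measurement into QC.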
CONCLUSIONS The framework of QM, QA, and QC may be implemented in a variety of ways, across a vast spectrum of applications spanning Radiology, Radiation Oncology, Nuclear Medicine and Molecular Imaging, Medical Physics, and various imaging‐guided medical practices. Examples of QC and QA, as described in various Medical Physics Technical Standards and Practice Parameters were discussed to provide the reader with a sense of where QC and QA fits in the overall structure of QM and QMS. While specific applications were presented for ACR Practice Parameters and Technical Standards, the concepts of QM, QA, and QC classification are generalizable to other guidance initiatives in medical radiological environments.
All authors made substantial contributions to the design of the work, drafted and revised the content critically, approved the final version, and agree to be accountable for all aspects of the work, meeting ICMJE requirements for authorship. The author(s) declare(s) that they had full access to all of the data in this study and the author(s) take(s) complete responsibility for the integrity of the data and the accuracy of the data analysis.
M. Mahesh is Associate Editor and Medical Physics Editor of JACR. M. Mahesh and M. Amurao receive honoraria from ACR as accreditation program reviewers. All other authors declare no relevant conflicts of interest. This project was unfunded; it is a work product of an ACR committee, and the result of a White Paper application approved by the Executive Committee of the ACR to be a position of the ACR.
AAPM medical physics practice guideline 13.a: HDR brachytherapy, part A

DEFINITIONS AND ACRONYMS
ABR: American Board of Radiology.
ABS: American Brachytherapy Society.
ACR: American College of Radiology.
AMP: authorized medical physicist—an individual who meets the requirements listed in 10 CFR § 35.
ASTRO: American Society for Radiation Oncology.
AU: authorized user—a physician who meets the requirements listed in 10 CFR § 35 or is identified as an AU on a license or permit regarding medical use of byproduct material.
CBCT: Cone Beam Computed Tomography.
CFR: Code of Federal Regulations.
COMP: Canadian Organization of Medical Physicists.
CPQR: Canadian Partnership for Quality Radiotherapy.
Dosimetrist: a qualified medical dosimetrist as defined by the Association of Medical Dosimetrists as “an individual who is competent to practice under the supervision of a qualified physician and qualified medical physicist.”
ESTRO: European Society for Therapeutic Radiology and Oncology.
GEC: The Groupe Européen de Curiethérapie.
HDR: high dose‐rate brachytherapy—refers to dose rates higher than 12 Gy/h (ICRU 38).
IAEA: International Atomic Energy Agency.
ICRP: International Commission on Radiological Protection.
IFU: instructions for use—instructions provided by the manufacturer of applicators or devices.
IORT: Intraoperative Radiotherapy.
IPEM: Institute of Physics and Engineering in Medicine.
NCRP: National Council on Radiation Protection and Measurements.
NRC: Nuclear Regulatory Commission.
PMI: preventative maintenance inspection.
QA: quality assurance—as defined in the AAPM Task Group 100 report: “QA confirms the desired level of quality by demonstrating that the quality goals for a task or parameter are met.”
QC: quality control—as defined in the AAPM Task Group 100 report: “QC encompasses procedures that force the desirable level of quality by evaluating the current status of a treatment parameter, comparing the parameter with the desired value, and acting on the difference to achieve the goal.”
QM: quality management—as defined in the AAPM Task Group 100 report: “QM consists of all the activities designed to achieve the desired quality goals.”
QMP: qualified medical physicist—as defined by AAPM Professional Policy 1.
RAM: radioactive material.
TGT: transfer guide tube.
INTRODUCTION The goal of this report is to assist the clinical medical physicist in assuring that key quality metrics and practice considerations are met to ensure the safe, reliable, and reproducible application of high‐dose rate (HDR) brachytherapy. This guideline has been developed to provide appropriate minimum standards for such services. The secondary goal is to provide recommendations to the regulatory community from the experts on this practice guideline to guide the adoption of regulations in the future. This MPPG is limited to iridium‐192‐based HDR brachytherapy and will not discuss electronic, low‐dose rate, pulsed dose rate brachytherapy, or any alternative radionuclides.
Scope This report has been divided into two parts. Part A describes the infrastructure and program design in the creation of an afterloader‐based HDR brachytherapy program. Part B (a separate, subsequent report) describes the clinical treatment processes including imaging, planning, and treatment delivery.
Disclaimer It is the responsibility of all healthcare staff to be familiar with state and federal guidelines that may take precedence over American Association of Physicists in Medicine (AAPM) recommendations that are provided in this report. Each health care facility may have site‐specific or state‐mandated needs and requirements that may modify their usage of these recommendations.
Background Brachytherapy enjoys a long and rich history that transcends the practice of radiation therapy. Shortly after the first observations of self‐inflicted biological effects by Henri Becquerel and Pierre Curie, the first encapsulated radium source was provided by Pierre and Marie Curie to Henri‐Alexandre Danlos in Paris (1903) for dermatological therapies. Over a century of advances and development followed this first implementation of radiation therapy. The modern nuclear era, including human‐induced radioactivity, and the advent of the computer age allowed brachytherapy to transform from a manually delivered qualitative practice to an automated, quantitative one. Mechanical advances in remote source afterloading provided significant radiation dose reduction to providers. Additionally, the preference to reduce in‐patient stays, which had a concomitant need for expensive, shielded medical units, led to the advent of HDR brachytherapy. Similar to how intensity‐modulated radiation therapy (IMRT) advanced external beam radiation therapy in the 1990s, HDR brachytherapy was the high‐tech treatment modality that advanced the field of brachytherapy in the 1980s. However, many of the reported drivers of HDR brachytherapy at the time were socioeconomic. Similar to IMRT, HDR brachytherapy lacked prospective clinical trials to demonstrate the clinical benefits, and questions regarding dose, fractionation, and their related radiobiological considerations were expected to take years to answer. Today, HDR brachytherapy is a commonly used therapeutic technique. It is a resource‐intensive modality with oversight by applicable government regulations, recommended practices of professional societies, accreditation standards, and many others. The following section provides an overview of guiding regulations, clinical practice recommendations, and manufacturers’ responsibilities that are applicable to the practice of HDR brachytherapy.
REGULATORY REQUIREMENTS The Code of Federal Regulations (CFRs) are general rules applied nationally and are organized under the United States president through the executive branch. Regulatory responsibility for radioactive material rests with the Nuclear Regulatory Commission (NRC) and is listed as Title 10 in the CFRs. Federal law allows states to administer their own regulatory programs so long as they meet or exceed the requirements of the CFRs. These state agencies are subject to periodic review by the NRC to maintain Agreement State status. Medical physicists practicing in agreement states must review their state regulations as they may differ from the federal ones. The NRC oversees the licensing of all naturally occurring or accelerator‐produced materials (NARM) including nuclear reactor‐produced materials. This latter material is known as byproduct material. Title 10 CFR §37 describes physical protection of Category 1 and Category 2 quantities of radioactive material (RAM). NRC defines these sources as “risk‐significant sources” and they are listed in an IAEA publication. Most users will not trigger Category 2 requirements of greater than 21.6 Ci or 799.2 GBq (for Ir‐192) of contained activity unless they have multiple afterloaders. However, newer afterloaders have better shielding and higher activity sources (10–15 Ci), so each facility is responsible for evaluating its total on‐site activity with regard to the security of its sources and licensing requirements. Due to potential variations in specific rules for the current agreement states, the regulations in the federal register (i.e., the CFRs), which represent minimum compliance expectations, will be discussed in the subsequent sections (I–VIII) where appropriate. Table summarizes the legal references and topics discussed. When readily available, international publications have also been listed. All CFR reports can be accessed at nrc.gov. Beyond the borders of the United States, most sovereign nations have implemented regulations to guide the use of radioactive materials. In support of the peaceful use of atomic energy, the International Atomic Energy Agency (IAEA) was founded by the United Nations in 1957. It provides guidance and technical cooperation for 172 member nations and partners worldwide. Its primary mission is to promote safe, secure, and peaceful nuclear technologies. In this light, the IAEA assists in defining technical standards for the use of radioactive and byproduct material, some of which are listed in Table .
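As a sketch of the aggregate-activity evaluation described above, the following compares total on-site Ir-192 activity with the Ir-192 Category 2 threshold of 21.6 Ci (799.2 GBq, using 1 Ci = 37 GBq exactly). The inventory values are hypothetical.

```python
CI_TO_GBQ = 37.0                # 1 Ci = 37 GBq exactly
CATEGORY_2_THRESHOLD_CI = 21.6  # Ir-192 Category 2 threshold (799.2 GBq)

# Hypothetical inventory: two afterloaders plus a source awaiting exchange.
sources_ci = [10.5, 8.2, 9.9]

total_ci = sum(sources_ci)
print(f"Total on-site activity: {total_ci:.1f} Ci "
      f"({total_ci * CI_TO_GBQ:.1f} GBq)")
if total_ci >= CATEGORY_2_THRESHOLD_CI:
    print("Category 2 physical protection requirements of 10 CFR § 37 apply.")
else:
    print("Below the Category 2 threshold; continue to track inventory.")
```

In this hypothetical case the aggregate activity (28.6 Ci) exceeds the threshold even though no individual source does, which is exactly the multiple-afterloader situation flagged above.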
RAM licensing Any healthcare facility offering brachytherapy services must have a radioactive materials license that allows them to receive, possess, utilize, and transport such sources. This license becomes a mechanism by which the regulatory agency can supervise the use of radioactive source and byproduct material and ensure that licensees comply with applicable regulations.
Personnel monitoring Personnel monitoring for radiation workers is only required if there is an expectation that staff will receive greater than 10% of the regulatory limits; however, brachytherapy providers should be actively monitored given their potential role in an emergency response.
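For context, 10 CFR § 20.1201 sets the annual occupational total effective dose equivalent limit at 5 rem (50 mSv), so the 10% monitoring trigger corresponds to

$$0.10 \times 50\ \text{mSv/yr} = 5\ \text{mSv/yr} \quad (0.5\ \text{rem/yr}).$$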
Shielding There are no specific US regulations regarding shielding design or requirements, with the exception that shielding must be installed to ensure that the requirements for radiation exposure to personnel and members of the public in 10 CFR § 20 are met. National Council on Radiation Protection and Measurements (NCRP) report No. 49 offers guidance on structural shielding design for gamma rays up to 10 MeV, which would include iridium‐based HDR. A regulatory agency may require review and approval of shielding plans prior to the construction of a new facility or modification of an existing one.
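As a minimal sketch of the NCRP-style approach to gamma-ray barrier design: the required barrier transmission B is the ratio of the weekly shielding design goal P at the protected point (weighted by the occupancy factor T) to the weekly unshielded air kerma there, and the barrier thickness follows from the number of tenth-value layers, n = -log10(B). Every input below, especially the TVL, is a placeholder; an actual design must use published data (e.g., NCRP Report No. 49) and a qualified expert's own analysis.

```python
import math

# Hypothetical inputs -- placeholders, not design recommendations.
P_uGy_per_wk = 20.0         # weekly shielding design goal at the point
T_occupancy = 1.0           # occupancy factor of the protected area
W_uGy_m2_per_wk = 80_000.0  # workload: weekly air kerma at 1 m, e.g., a
                            # ~10 Ci Ir-192 source (~40,000 uGy*m^2/h)
                            # with ~2 h of source-out time per week
d_m = 4.0                   # distance from source to protected point
tvl_mm = 150.0              # placeholder TVL; use published values

B = P_uGy_per_wk * d_m**2 / (W_uGy_m2_per_wk * T_occupancy)
n_tvl = max(0.0, -math.log10(B))
print(f"Required transmission B = {B:.2e}; "
      f"barrier thickness ~ {n_tvl * tvl_mm:.0f} mm")
```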
Security Source security is defined as a set of measures required to prevent unauthorized access, damage, loss, theft, or unauthorized transfer of radioactive sources. NRC and Agreement States established “a multilayered, comprehensive security program” to protect these sources. The licensee must generate policies and procedures that govern the storage and transfer of radioactive material to ensure compliance with this standard as per 10 CFR § 37. For example, designated storage areas and enhanced security measures may be helpful for compliance. Regarding source receipt, the licensee is expected to expeditiously take possession of the package. This typically requires a lockable storage area where packages are received before they can be surveyed and transferred to a secure storage area.
Transportation regulations Shippers and transporters must receive specialty training in these regulations every 3 years to assure compliance (see 10 CFR §172). Licensees must establish processes to comply with specific requirements around the receipt of RAM. Licensees must perform a wipe test within 3 h of the receipt of normal form RAM during business hours (or by the beginning of the next business day if delivered after hours) to assure that there were no leaks or spills of the material in transit by examining the packing for contamination. Some HDR sources are sent as special form RAM and may be exempted from wipe testing requirements (see 10 CFR § 20.1906). The AAPM virtual library contains two excellent overviews of this training.
Records The current regulations regarding source or by‐product material state that records of the receipt, duration of possession, and transfer or disposition of the material must be maintained for 3 years. Additionally, records for spot checks and surveys must be kept for 3 years. More details can be found in 10 CFR § 30.51 and 10 CFR § 40.61.
Periodic spot checks This section will be addressed in Section .
Training Training of staff will be addressed in Section .
Patient treatment Due to the rate of dose delivery in HDR brachytherapy, an authorized medical physicist (AMP) and authorized user (AU) must be physically present for the initiation of all patient treatments. The AMP must remain immediately available for the entire duration of treatments; a physician with training in emergency procedures may replace the AU for the remaining duration of the treatments. The interpretation of physical presence was later clarified as being within hearing distance of normally spoken voice. Additionally, AMPs and other involved personnel must participate initially and annually in an emergency drill. While most HDR brachytherapy systems do not merit the enhanced physical security requirements listed in 10 CFR § 37, if the licensee chooses to implement enhanced security practices (e.g., electronic door locks with biometric access) then the medical physicist must evaluate their potential role in an emergency response to assure that the patient can be quickly reached in the event of a system failure (e.g., power loss) or medical emergency (e.g., a cardiac arrest). More information regarding these topics will be given in Part B of this practice guidance report.
CLINICAL PRACTICE RECOMMENDATIONS
Accreditation standards There are multiple groups that provide practice accreditation to hospitals and free‐standing radiation therapy clinics. The accreditation may be used as a demonstration of the ability to meet specified standards and may be used in advertising efforts. Accreditation may be obtained by the American College of Radiology (ACR), the American Society for Radiation Oncology (ASTRO) through the Accreditation Program for Excellence (APEx), or through the American College of Radiation Oncology (ACRO). To receive accreditation, sites must demonstrate compliance with the accreditation standards set by the various organizations. At present, there is no regulatory requirement or implication for accreditation.
Professional societies As a service to their members and to protect and benefit members of the public, professional societies may prepare guidance documents. The AAPM has produced over 100 guidance reports on a variety of topics including HDR brachytherapy, and other aligned societies have also published guidelines and recommendations for HDR brachytherapy including ASTRO, ACR, American Brachytherapy Society (ABS), and European Society for Therapeutic Radiology and Oncology (ESTRO). A wide variety of international entities have also generated guidelines that may be useful. A summary of documents that may be of use to HDR practitioners is found in Table . The AAPM has published numerous reports that address HDR brachytherapy in a variety of ways. Several reports include quality assurance (QA) recommendations for remote afterloaders, sources, applicators, and treatment planning systems (TPS). These form the basis for most institution‐specific QA programs. Of note, the most recent of these reports was published in 1998, showing that it has been nearly 20 years since quantitative QA performance benchmark recommendations were defined by the AAPM for brachytherapy. Report 283 (known as the report of TG‐100) introduced the concept of risk‐analysis methods in the formation of quality management (QM) protocols. There are also educational resources from the AAPM Brachytherapy Summer School publications from 1994, 2005, 2013, and 2017, which cover a wide variety of topics. Clinicians may also refer to other national professional societies to draw from their experience and benefit from their recommendations such as ESTRO and Canadian Organization of Medical Physicists (COMP). These groups have sponsored a large number of publications that may be of interest to HDR brachytherapy physicists. COMP has published recent quality control (QC) guidelines for remote afterloaders, among other pertinent recommendations. Helpful guidance documents may also be found by other national organizations, such as the Netherlands Commission on Radiation Dosimetry group and the Australasian College of Physical Scientists and Engineers in Medicine, among others. While hundreds of guidance documents may inform readers, health care facilities should follow their own internally defined and approved practices. Internal policy should outline key rules and requirements, while an associated procedure should describe the steps to ensure that policy goals are met.
Manufacturers Vendors that market and sell medical equipment in the United States must comply with Food and Drug Administration (FDA) regulations. The FDA is organized under the Department of Health and Human Services in the executive branch. Regulations governing the lifecycle of medical devices are located in Title 21 of the CFR, from preclinical use to labeling requirements. Medical device vendors have a responsibility to inform users of issues identified with a specific medical device. These typically take place as a Notice to Users or a Field Change Order in the event that service is required, and the notices may require acknowledgment of receipt by the end user. Users may be required to provide information and access to a vendor in the event of a medical device malfunction. Maintaining contact with the vendor, for example, through a service contract, ensures that users receive critical notifications and safety upgrades and that preventative maintenance is performed as recommended. Users should also ensure they have current copies of manufacturers’ instructions for use (IFU) that define proper use, sterilization requirements (if applicable), and the product lifecycle.
FACILITY
6 STAFFING

6.1 Participants
The brachytherapy treatment team may consist of many different members, including the AU, resident physician, AMP, physics resident, dosimetrist, nurse, therapist, and interdepartmental members such as a breast surgeon or anesthesiologist. Together, this team should be informed about the particular patient and work in a collaborative manner toward the ideal patient treatment. This requires good communication and standardization of policies and procedures. The radiation oncologist is generally present, and the physicist or dosimetrist should be present during the placement of the treatment applicator. The presence of interdepartmental personnel for applicator placement will depend upon the policies and procedures at each health care facility and the complexity of the patient treatment. Staffing needs may vary based on the type of sedation used, such as full or conscious sedation. Members outside of the radiation oncology team who may be needed for certain procedures include, but are not limited to, anesthesiologists, scrub technicians, circulator nurses, or other medical doctors such as gynecologic oncologists, breast surgeons, or urologists. An AU and an AMP must be present for the initiation of HDR brachytherapy. Members of the treatment team who may be present during the treatment procedure, but whose attendance is not mandatory, include dosimetrists, nurses, and therapists. Additional members of the treatment team who may be present for applicator placement and/or treatment delivery, but who are not required, include trainees such as radiation oncology or medical physics residents, and dosimetry or radiation therapy technology students. Treatment should be delivered in compliance with local regulations and facility policies.

6.2 Training and competency
Federal and state regulations outline the specific education and training requirements for individuals holding the titles of AMP, AU, and RSO (radiation safety officer). These requirements must be followed even if not articulated in this report, as they are beyond the scope of this MPPG. Additionally, individuals involved in an HDR brachytherapy program must hold the appropriate (advanced) degree for their specialty. With a few exceptions, individuals must be board certified by the appropriate specialty board, which may include the American Board of Radiology (ABR), American Board of Medical Physics (ABMP), Medical Dosimetry Certification Board (MDCB), or American Registry of Radiologic Technologists (ARRT). Team members must be licensed if employed in a state that requires licensure (FL, HI, NY, or TX). Regulations also outline the minimum expected initial and continuing education requirements for participants involved in an HDR brachytherapy program. Additionally, such participants (not previously described) should participate in emergency training, an emergency response drill, HDR-specific radiation safety training, and in-service training on an annual basis. Annual training should be completed on all relevant equipment, including the remote afterloader, applicators, transfer tubes, and the treatment planning software. The workflow for each procedure should be reviewed annually. Vendor-supplied or vendor-supported training on the treatment unit and treatment planning system should be performed for all relevant staff involved in the HDR brachytherapy program. This is particularly true for new programs. Additionally, on-site training for the first few cases of each treatment site should be attended by both the vendor and the treatment team. This applies both to a new afterloader facility and to new complex applicators such as multicatheter breast brachytherapy or interstitial brachytherapy. Training must be documented, and the documentation should include the training scope as well as a list of the individuals present.

6.3 Credentialing
Credentialing of staff can be complex and involve different departments and agencies. Medical staff are often credentialed when newly hired. Training and licenses are verified by the local hospital credentialing office in order to grant hospital privileges. To use radioactive material and be identified as an AU or AMP on a RAM license, credentialing is commonly granted by the Radiation Safety Committee, the NRC, the Agreement State, or a combination of these entities. The radiation oncology department may also have its own workflows to credential individuals or deem them competent to participate in or independently perform an HDR brachytherapy procedure. In some instances, this may involve proctoring and supervision of a defined number of cases and may be site-specific. Each health care facility should develop an on-boarding procedure and associated documentation that includes how an individual will demonstrate knowledge of the different types of procedures performed locally. Each individual should be responsible for reading the policies and procedures of the brachytherapy program, observing and performing a predetermined number of cases under supervision, and demonstrating competency. This on-boarding process should be documented and maintained by the health care facility. Government-run health care facilities, such as the Veterans Affairs health system, may have other applicable rules that must be understood and followed by the AMP. Annual refreshers or in-services as well as annual competency evaluations may be helpful in maintaining proficiency.
7 HARDWARE

7.1 Treatment Delivery System QA
Broadly defined as "afterloader QA," the following sections describe the minimum frequency and tolerances of a variety of tests required to ensure the ongoing functionality of the console area and the afterloader, as well as specific tests to be performed during commissioning. Commissioning tests must be performed before beginning clinical treatments, and all tests in the Table must be performed at this point. All ancillary equipment and accessories, such as printers, barometers, clamps, and so forth, must be tested prior to use. In cases where the afterloader console is integrated with patient record and verify systems, that communication must be validated at commissioning as well. An alternative plan transfer method should also be in place so that, in case of a network disruption, patients can still be treated correctly and without delay. Vendors may perform preventative maintenance inspections (PMIs) on an annual or biannual basis, depending on the manufacturer and service contract. Evidence of the PMI should be maintained. Daily QA must be performed after any repair service to the afterloader. A discussion of appropriate source strength measurement methods follows in Section 7.2. Items marked with a Roman numeral in the Table are further explained in the sections that follow.

7.1.1 Source positioning accuracy
The NRC-required tolerance value of 1 mm may be difficult to achieve in a variety of applications but should be verifiable under a fixed test geometry that is used during source exchange. Since the source must be measured within a TGT that itself can only be measured to an accuracy of 1 mm, a more practical tolerance value may be 2 mm, as adopted by the report of TG-56 and COMP. The overall source position accuracy should be within 1 mm and must be within 2 mm.

7.1.2 Timer accuracy
The timer on the console computer must be accurate to deliver the intended radiation dose to the patient. The minimum dwell time threshold for various afterloaders may be as low as 0.1 s, which cannot be verified via conventional means. One method to check for gross errors is to use an independent stopwatch or timer and deliver a fixed treatment time (using a reasonable clinical time where disparities due to human reflexes are negligible). The accuracy must be within 1 s or 1% (whichever is greater) under these fixed test conditions.

7.1.3 Timer linearity
The dwell time linearity must be validated over at least three treatment times where the transit time (typically 1–2 s) is insignificant. For example, the well chamber reading with a 60-s dwell must be twice the reading of a 30-s dwell (to within 3%). If one accounts for and removes the reading due to transit time, the agreement may be closer to 1%. The linearity should not change over time unless the afterloader motor is adjusted, for example, at a PMI.
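To make the linearity test concrete, the following minimal sketch (not part of this guideline; the dwell times and well chamber readings are hypothetical) estimates the transit-time contribution as the intercept of a least-squares line through the readings and evaluates the deviation from linearity:

```python
# Hypothetical timer-linearity check over three dwell times.
# Readings are illustrative well chamber charges (nC); the intercept of a
# least-squares fit of reading vs. dwell time estimates the transit-time charge.
dwell_times = [30.0, 60.0, 120.0]     # s
readings = [15.12, 30.05, 59.91]      # nC (example values only)

n = len(dwell_times)
mean_t = sum(dwell_times) / n
mean_r = sum(readings) / n
slope = (sum((t - mean_t) * (r - mean_r) for t, r in zip(dwell_times, readings))
         / sum((t - mean_t) ** 2 for t in dwell_times))
intercept = mean_r - slope * mean_t   # ~charge accumulated during source transit

for t, r in zip(dwell_times, readings):
    corrected = r - intercept         # remove the transit contribution
    deviation = corrected / (slope * t) - 1.0
    print(f"{t:6.1f} s dwell: deviation from linearity {deviation:+.2%}")
# Uncorrected dwell-to-dwell ratios must agree within 3%; after the transit
# correction, agreement is typically closer to 1%.
```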
7.2 Source strength
The air-kerma strength of each 192Ir source used for HDR treatments must be accurately determined and properly accounted for in each treatment. Upon receipt and installation of a new 192Ir HDR source, the air-kerma strength value will be provided by the manufacturer in a calibration certificate with a specific reference date and time. It is the responsibility of the user to verify this value upon receipt of the source by performing measurements using a calibrated well-type ionization chamber and electrometer. The value determined with the well chamber must agree with the value on the manufacturer's source certificate (both decay corrected to a reference date and time) to within 5%, although typical agreement is closer to 3%. If the measurement is outside of this agreement criterion, the possible reasons must immediately be investigated. It is recommended to check the reference date, the recorded ambient air conditions (temperature and pressure), and the most recent well chamber calibration coefficient before contacting the manufacturer. It is uncommon for the difference to be greater than 5%, so treatments must not proceed until such a discrepancy is resolved. Either the vendor value or the institutional value may be used, provided it is applied consistently. In the United States, the well-type ionization chamber and electrometer should be calibrated by an Accredited Dosimetry Calibration Laboratory (ADCL) at least once every 2 years with traceability to the National Institute of Standards and Technology (NIST). The well chamber must have a holder specific to an HDR source catheter, and the same holder must be used for the ADCL calibration as well as the end user's clinical source strength measurement. The maximum reading of a source dwell position inside the well chamber should be determined by stepping the source in small increments through the well chamber holder to find the position where the highest ionization current is produced. This is commonly referred to as the well chamber sweet spot and is unique to each well chamber and source holder.
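The decay correction underlying the certificate comparison is simple exponential decay with the 192Ir half-life of 73.83 days. The sketch below (all numerical values hypothetical, for illustration only) decays the certificate value to the measurement time and applies the 5% agreement criterion:

```python
import math
from datetime import datetime

T_HALF_IR192 = 73.83  # days; half-life of 192Ir

def decay_corrected(sk_ref: float, ref_time: datetime, when: datetime) -> float:
    """Air-kerma strength decayed from the certificate reference time to 'when'."""
    dt_days = (when - ref_time).total_seconds() / 86400.0
    return sk_ref * math.exp(-math.log(2.0) * dt_days / T_HALF_IR192)

# Hypothetical values for illustration only
cert_sk = 40820.0                       # U (1 U = 1 uGy m^2/h), from the certificate
cert_time = datetime(2024, 3, 1, 12, 0)
meas_time = datetime(2024, 3, 8, 9, 30)
measured_sk = 38150.0                   # U, from the calibrated well chamber reading

expected = decay_corrected(cert_sk, cert_time, meas_time)
diff = (measured_sk - expected) / expected
print(f"expected {expected:.0f} U, measured {measured_sk:.0f} U, difference {diff:+.2%}")
# Must agree within 5% (typically within 3%); investigate before treating otherwise.
assert abs(diff) <= 0.05, "source strength discrepancy exceeds 5%"
```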
7.3 Applicators and TGTs
Applicator commissioning and QA rely on a variety of physical, imaging, and radiological tests to ensure positional, temporal, and dose delivery accuracy. In general, these tests are described in AAPM Report 59. The Table describes the QA tests that must be performed and refers to multi-use (i.e., not sterile single-use) applicators and TGTs. Items marked with a Roman numeral in the Table are further explained in the sections that follow.

7.3.1 Autoradiography
Autoradiography used to be the standard method of confirming the active source positioning within the applicator and validating any planning offsets. However, because many clinics have become "film free," this has become more challenging. Alternatives may still be possible using either C-arms or linacs (particularly electron beams) and radiochromic film. Care should be taken to properly identify the source path and locations within the applicator, and any offsets should be characterized. With the advent of solid applicator libraries, users may have more confidence in vendor-provided offsets.

7.3.2 Applicator and TGT length
In general, if applicators are solid metal or plastic and the TGTs are stored properly, the lengths of the applicator and TGT combination will rarely change by more than 1 mm. All applicators and tubes that are in clinical rotation should be checked on at least an annual basis and compared to the commissioning baseline. The applicator + TGT length should be checked prior to treatment initiation, or at least once prior to a fractionated treatment where the applicators are not removed between fractions. As this is one of the most common HDR errors, site-specific recommendations will be given in Part B of this guideline.

7.3.3 Source positioning
Certain applicators may be highly sensitive to the positioning of the HDR source, which may change slightly over time and with repeated active runs and/or source exchanges. Examples may include tandem and ring, complex gynecological applicators, conical skin applicators, and some shielded applicators. This may affect the output (for conical applicators) or the dose distribution surrounding the applicators when a PMI or source exchange occurs. For these applicators, the IFU regarding QA should be followed and tests should be performed to ensure consistent dose delivery. If determined to be a reasonable approximation, offsets over multiple source exchanges and afterloaders can be averaged and used clinically. A good discussion of source positional accuracy may be found in Kirisits et al.

Applicator and TGT combination length measurements must be performed annually while in routine use. A failure mode and effects analysis or similar review could be performed to inform a more practical periodicity for those devices that are found not to change with time. Part B of this report will discuss patient treatment aspects regarding treatment length for planning purposes. Applicator and transfer tube combinations that have not been used in the past year should be tested prior to clinical use. It is also good practice to annually verify the accuracy of the adjustable length gauge and/or length measurement devices, if applicable.

Single-use or one-time-use devices are considerably different in that they are often supplied sterile by the manufacturer and may already be placed in a patient by the time the patient presents for treatment in the facility. The specific patient handling aspects for these devices will be covered in Part B of this report. It is recommended that the AMP perform QA and testing with a nonsterile test device prior to clinical implementation. Some applicators come nonsterilized and can be tested prior to sterilization. For patient treatments, the combined length of the applicator and transfer tube must be measured and documented at least once per device. Manufacturer specifications for end of life should be followed as articulated in the IFU. However, using an applicator beyond its stated end of life may be considered under some circumstances if care is taken to ensure the integrity of the applicator and its mechanical functioning. Vendors disclaim liability when equipment is used beyond end-of-life recommendations. If the applicator exceeds its rated number of sterilization cycles, material fatigue and infection control may become issues.

Homemade applicators (machined or 3D printed) add flexibility and the possibility of customizing applicator geometry to the patient. The burden of establishing biocompatibility of the materials used (especially if used interstitially or surgically) and of defining cleaning and sterilization procedures rests with the hospital team. Because of the high cost of validating repeated cleaning and sterilization cycles between patients, these applicators are typically single-use. Applicator design and material selection should also reflect the imaging modality intended to be used for planning and treatment verification. Usually made of a plastic material, these applicators are often MR-safe. Commissioning and validation of applicator geometry and function must be performed and documented for each applicator, as described above. Further guidance may be provided in the forthcoming report of TG-336 or other published works on 3D printing applications.

Geometric accuracy of shielded applicators must be verified after applicator assembly.
A CT scan should be performed at commissioning to understand applicator geometry with and without shields in place. Dynamic shields must be tested for functionality and reproducibility. If shielding orientation is marked on the applicator, it should be checked for correctness. Solid applicators and the solid applicator library comparison will be discussed in the treatment planning QA section and in the associated Table.
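As a trivial illustration of the annual length-consistency check described in Section 7.3.2, the following sketch (device names, baselines, and measurements are all hypothetical) compares measured applicator + TGT lengths against the commissioning baseline:

```python
# Hypothetical applicator + TGT length log compared against the commissioning baseline.
BASELINE_MM = {"tandem+TGT-3": 1300.0, "ring+TGT-1": 1300.0}   # commissioning values
measured_mm = {"tandem+TGT-3": 1300.4, "ring+TGT-1": 1298.6}   # this year's measurements

TOLERANCE_MM = 1.0  # lengths rarely change by more than 1 mm when stored properly
for device, baseline in BASELINE_MM.items():
    delta = measured_mm[device] - baseline
    status = "OK" if abs(delta) <= TOLERANCE_MM else "INVESTIGATE before clinical use"
    print(f"{device}: {delta:+.1f} mm -> {status}")
```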
8 SOFTWARE

8.1 Treatment planning imaging and tool QA
Treatment planning software commissioning tasks are designed to ensure that the new software package correctly handles clinical tasks such as image manipulation, structure delineation, and dose calculation. Some tests will also need to be conducted to provide a baseline for periodic checks such as annual QA. Software used for HDR brachytherapy treatment planning may be dedicated to a specific HDR afterloader. In addition, various software packages exist to accommodate specific brachytherapy procedures. Interfaces with ancillary devices (such as an ultrasound stepper), configuration, networking, and workflow performance should be tested prior to the clinical use of the software. New commissioning must be performed for each new release of the software, in addition to vendor-required testing. Routine clinical use of the software in an active brachytherapy program will reduce the need for repeated testing, as loss of functionality or network connections would be noticed with normal use. Imaging systems and treatment applicators used in HDR brachytherapy may produce imaging artifacts or distortion, which may lead to incorrect patient dose. While a full discussion of artifacts is beyond the scope of this report, care should be taken to minimize and understand various imaging limitations. The imaging tests that must be performed (required) for TPS commissioning include the recommendations in the Table and are discussed below.

8.1.1 Image transfer
Usability of images and image sets imported into and exported from the software, including DICOM format and live video acquisition. Images should maintain quality and be free of distortion or degradation.

8.1.2 Orientation
Patient orientation is correctly displayed on images acquired using fixed imagers, mobile imagers, and non-DICOM image acquisition methods where patient orientation is not included in the image data.

8.1.3 Labeling
Transfer of image data including image identifiers, acquisition parameters, and imager information.

8.1.4 Geometric accuracy
The required accuracy of the image set depends on the imaging modality. CT image accuracy should be within 1 mm in-plane and 2 mm elsewhere, while MR should be within 2 mm, and ultrasound should be within 2 mm or 2%.

8.1.5 Image registration
Rigid registration is most widely used. Multiple scans of the same phantom in different orientations can be aligned and evaluated. Quantitative errors can be measured in some systems using point-to-point matching between image sets and evaluating the target registration error. Achievable target registration errors should be in the 2–3 mm range. Deformable image registration for brachytherapy is currently an active area of research, and the registration and dosimetric errors may be large, for example, when registering an image set without the applicator in situ to an image set containing an applicator.

8.1.6 Source, point, and line delineation
Point delineation should be accurate to within 1 mm when compared with DICOM coordinates. Both 2D and 3D structure interpolations and expansions should be checked. Reference lines and reference points can be used as surrogates for structure contours and may be used for dose optimization and evaluation. The 3D definition of line and point coordinates should be verified. Structures may be contoured with the use of Boolean operators, which should be verified to be performing correctly.
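A minimal sketch of the kind of quantitative comparison that can support the geometric accuracy, registration, and delineation checks above; the phantom marker coordinates and digitized points below are hypothetical:

```python
import math

# Known phantom marker positions (mm, DICOM coordinates) vs. TPS-digitized points.
known = [(0.0, 0.0, 0.0), (50.0, 0.0, 0.0), (0.0, 50.0, 100.0)]
digitized = [(0.3, -0.2, 0.1), (50.4, 0.5, -0.3), (-0.6, 49.8, 100.7)]

def dist(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

errors = [dist(k, d) for k, d in zip(known, digitized)]
print("point errors (mm):", [f"{e:.2f}" for e in errors])
# Point delineation should agree within 1 mm; a rigid-registration check against
# a 2-3 mm target registration error could reuse the same distance metric.
assert max(errors) <= 1.0, "point delineation outside 1 mm"
```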
8.1.7 External device interface
Some dedicated planning and delivery systems offer an option for interfacing with external devices. New devices are continually being developed to enhance the safety and consistency of treatments. Examples include electronic and robotic steppers for prostate implants, navigational devices for spine brachytherapy, and electromagnetic tracking. Functional and operational checks of these devices should be performed, but specific QA tests are beyond the scope of this guidance report.

8.2 Treatment planning source validation and dose calculation
Prior to TPS commissioning, a qualified medical physicist (QMP) must select the dose computation algorithm(s) to be used clinically. The QMP should have a clear understanding of the algorithm(s) chosen, the source model parameters, and how each option affects the resulting dose distributions. There are a variety of commercial and noncommercial brachytherapy treatment planning systems, and a given TPS may include multiple dose calculation algorithms. The AAPM currently recommends using a modified AAPM report of TG-43 dosimetry formalism for clinical dose calculation, as defined in AAPM Report 229 (subsequently referred to as Report 229), which uses tabulated data to allow calculation of point doses and 3D dose distributions. The tests for source validation and dose calculation accuracy are provided in the Table. Model-based dose calculation algorithms (MBDCAs) are also commercially available, and the AAPM report of TG-186 provides recommendations for commissioning these algorithms. Practice guidelines for MBDCA commissioning are beyond the scope of this report. Due to possible dosimetric implications on the treatment prescription, MBDCAs should not be used clinically without rigorous validation and substantial brachytherapy experience.

8.2.1 Source model data
Source reference data used by the brachytherapy TPS must be appropriate for the source type used for treatment delivery. It is recommended that the consensus datasets from Report 229 be used for dose calculations. When checking source parameters in a TPS, the input data must correspond exactly with the published consensus dataset for that source.

8.2.2 Source decay
Some treatment planning systems allow the user to account for radioactive decay. This should be checked with an independent calculation or other validation method.

8.2.3 Plan normalization, weighting, and scaling
Treatment plans are often improved by adjusting isodose distributions globally or locally. Changing the number of fractions or the prescribed dose can also scale the dwell times. Treatment times should be cross-checked to validate the correct scaling of the planned time.

8.2.4 Dose calculation grid
Brachytherapy treatments often involve small calculation volumes, and dose accuracy can depend on the calculation grid size used. A large calculation grid may influence DVH calculations, particularly maximum point doses within a contour. Typically, the dose grid resolution may be set at 0.1–0.3 cm per dimension.

8.2.5 Point dose calculations
Either the source model data or a point dose calculation may be verified on an annual basis, as these two tests investigate the same process. Users may wish to create a fixed-geometry test plan and compare dosimetry annually. If MBDCAs are to be used, dose consistency with calculations based on Report 229 should be verified, as well as the accuracy of inhomogeneity and scatter modeling.
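For orientation, the TG-43 formalism referenced above computes the dose rate at a point (r, θ) around the source from tabulated data. Its standard 2D line-source form is reproduced here for context, not as a substitute for the Report 229 consensus data:

```latex
\dot{D}(r,\theta) = S_K \,\Lambda\,
  \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta),
\qquad r_0 = 1\ \mathrm{cm},\ \theta_0 = 90^\circ
```

where S_K is the air-kerma strength, Λ the dose-rate constant, G_L the line-source geometry function, g_L the radial dose function, and F the 2D anisotropy function. An annual point dose check at fixed geometry amounts to evaluating this product with the consensus tabulated values and comparing the result against the TPS.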
8.2.1 Source model data

Source reference data used by the brachytherapy TPS must be appropriate for the source type used for treatment delivery. It is recommended that consensus datasets from Report 229 are used for dose calculations. When checking source parameters in a TPS, the input data must correspond exactly with the published consensus dataset for that source.

8.2.2 Source decay

Some treatment planning systems allow the user to account for radioactive decay. This should be checked with an independent calculation or other validation method.

8.2.3 Plan normalization, weighting, and scaling

Treatment plans are often improved by adjusting isodose distributions globally or locally. Changing the number of fractions or the prescribed dose can also scale the dwell times. Treatment times should be cross-checked to validate the correct scaling of the planned time.
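The decay check in 8.2.2 and the time-scaling check in 8.2.3 both reduce to one-line arithmetic that is easy to verify by hand or script. The sketch below assumes an Ir-192 source (published half-life of approximately 73.83 days); the source strength, elapsed time, and doses are illustrative numbers only.

```python
import math

IR192_HALF_LIFE_DAYS = 73.83  # published Ir-192 half-life

def decayed_strength(s0, elapsed_days):
    """Air-kerma strength after decay: S = S0 * exp(-ln(2) * t / T_half)."""
    return s0 * math.exp(-math.log(2) * elapsed_days / IR192_HALF_LIFE_DAYS)

def scaled_dwell_time(t_planned, dose_new, dose_planned):
    """Dwell times scale linearly with the prescribed dose (8.2.3)."""
    return t_planned * (dose_new / dose_planned)

# Illustrative numbers only: a 40700 U source calibrated 10 days ago,
# and a 300 s total dwell time rescaled from 6 Gy to 7 Gy per fraction.
print(f"decayed strength: {decayed_strength(40700.0, 10.0):.0f} U")
print(f"rescaled dwell time: {scaled_dwell_time(300.0, 7.0, 6.0):.1f} s")
```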
8.2.4 Dose calculation grid

Brachytherapy treatments often involve small calculation volumes, and dose accuracy can depend on the calculation grid size used. A large calculation grid may influence DVH calculations, particularly maximum point doses within a contour. Typically, the dose grid resolution may be set at 0.1-0.3 cm per dimension.

8.2.5 Point dose calculations

Either the source model data or a point dose calculation may be verified on an annual basis, as these two tests investigate the same process. Users may wish to create a fixed-geometry test plan and compare dosimetry annually. If MBDCAs are to be used, dose consistency with AAPM Report 229 based calculations should be verified, as well as inhomogeneity and scatter modeling accuracy.

8.2.6 Dose display

The dose should display in both absolute (Gy) and relative doses. If applicators with shields are to be used, a methodology for documentation and isodose line reduction should be incorporated into the planning guidelines if using the Report 229 formalism.

8.2.7 DVH calculation

According to the report of TG-53, DVH analysis should be performed at least annually. However, for a brachytherapy TPS, the user is required to validate functionality rather than accuracy. Interested users may use the methodology of Gossman et al.

8.3 Miscellaneous commissioning tests

Additional tests for software commissioning that should or must be performed are included in Table . Some tests may be vendor-specific and may not apply to all brachytherapy TPS, in which case the requirement is waived.

8.3.1 Optimization validation

Optimization of HDR brachytherapy treatment plans can occur in several ways, including, but not limited to, manual dwell time or weight adjustments, dose shaper or graphical optimization, geometric optimization, and inverse planning algorithms. Assessment of optimization should occur for each available optimization method and should be completed for each treatment or applicator type in clinical use in the department where appropriate.

8.3.2 TPS output

Treatment plan document verification, as well as integrity testing of data transfer from the TPS to the treatment unit, must be completed.

8.3.3 Applicators and catheters

For applicators with known geometry or applicators with template/solid applicator libraries, visualization and digitization/reconstruction must be verified and should agree with the known geometry within ±1 mm by superimposing a CT image of the applicator onto the geometrical representation of the applicator. A "solid" applicator refers to a vendor-provided geometric and dwell position representation of a particular applicator that may be imported into the planning system. Freehand needle and catheter reconstruction may require image interpolation and rotations. The TPS may have tools to assist with auto-segmentation of the source path. These tools should be checked, and their limitations documented. For example, noisy images, crossing of catheters/needles, use of dummy wires, high curvature of the catheters, or use of non-CT images may impair correct detection of the source path. CT range finders may be used for applicator delineation and should also be evaluated for functionality. Digitized/reconstructed source positions within the applicator should be within ±2 mm of true source positions. However, this limit may not be appropriate depending on the applicator and modality type used.

8.3.4 Independent dose calculation

In HDR brachytherapy, an independent treatment time calculation has historically been performed to verify that the total dwell times and/or dose distribution is consistent with the specified arrangement, including source positions, strength, and dwell times. There are a number of commercial checking programs available; however, these secondary programs rely on DICOM input from the TPS for secondary calculations and typically cannot find planning errors. Software that performs independent dose calculations based on independent implant reconstruction has been reported, as have script-based algorithms and other software packages that check for consistency of the plan with the prescription, as well as other electronic medical record parameters and quality indices. The recommendation of this practice guideline is that any secondary dose calculations should be optional, as they lack true independence or are otherwise not readily available or practical. Any adopted independent system can be verified using the TG-53 methodology (Appendix 5). They should be used with care to ensure that dose calculation has not been corrupted within the TPS, and only after implant geometry and plan parameters have been independently verified. Other independent treatment time calculations (e.g., nomograms, Manchester and Quimby tables) may also be valuable tools for HDR plan QA. Depending on the type of implant, these methods can typically predict a plan's total dwell time with an accuracy of 5-10%. A secondary dose calculation is separate from an independent plan check, which will be addressed in more detail in Part B.
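As one concrete illustration of the kind of independent treatment time estimate discussed in 8.3.4, a 1D point-source TG-43 approximation can be coded in a few lines. The dose-rate constant, radial dose, and anisotropy values below are rounded, illustrative placeholders rather than consensus data; a clinical check must use the Report 229 dataset for the specific source model.

```python
import math

# Illustrative (rounded) 1D TG-43 parameters for a generic HDR Ir-192
# source: NOT consensus data; substitute Report 229 values clinically.
LAMBDA = 1.11                                   # dose-rate constant, cGy/(h*U)
g_table = {1.0: 1.000, 2.0: 1.002, 3.0: 1.000}  # radial dose function g(r)
phi_an  = {1.0: 0.97,  2.0: 0.97,  3.0: 0.97}   # 1D anisotropy factor

def point_dose_rate(sk, r_cm):
    """1D approximation: D(r) = S_K * Lambda * g(r) * phi_an(r) / r^2 (cGy/h)."""
    return sk * LAMBDA * g_table[r_cm] * phi_an[r_cm] / r_cm ** 2

sk = 40700.0                          # air-kerma strength (U), illustrative
rate = point_dose_rate(sk, 2.0)       # dose rate (cGy/h) at 2 cm
time_s = 700.0 / (rate / 3600.0)      # seconds to deliver 7 Gy (700 cGy)
print(f"dose rate at 2 cm: {rate:.0f} cGy/h; time for 7 Gy: {time_s:.0f} s")
```

Within the 5-10% accuracy quoted above, such a back-of-the-envelope time can flag gross errors in source strength or prescription entry.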
8.3.5 Dry run testing

The brachytherapy team must conduct at least one "dry run" functionality test of the entire brachytherapy process, from imaging to dose delivery, for each treatment technique. This testing should be performed prior to the implementation of a new treatment type and when a key aspect of any process has been modified. Each step in the process should be performed by the staff member who will perform the step when the program is clinically implemented. The dry run test should involve imaging of the applicator through the anticipated mechanism, practical treatment planning, connection to the tubes and afterloader, and delivery of the planned treatment.

CONCLUSIONS

Part A of this MPPG provides recommendations for designing the infrastructure of an HDR brachytherapy program and minimum standards for QA tests for the required equipment. The recommendations from the experts on this practice guideline are intended to guide the adoption of regulations in the future.
All authors contributed to the writing of the manuscript.
The Chair of TG 348: MPPG 13.1: HDR Brachytherapy has reviewed the required Conflict of Interest statement on file for each member of TG 348 and determined that disclosure of potential Conflicts of Interest is an adequate management plan. Disclosures of potential Conflicts of Interest for each member of TG348 are found at the close of this document. The members of TG348 listed below attest that they have no potential Conflicts of Interest related to the subject matter or materials presented in this document. Susan Richardson, PhD, Chair; Ivan Buzurovic, PhD; Wesley Culberson, PhD; Claire Dempsey, PhD; Bruce Libby, PhD; Christopher Melhus, PhD; Robin Miller, MS; Samantha Simiele, PhD. The members of TG348 listed below disclose the following potential Conflict(s) of Interest related to subject matter or materials presented in this document. Daniel Scanderbeg, PhD––Varian Medical––speaker/consultant Merit Medical––speaker/consultant; Gil'ad Cohen, MS––Varian Medical––speaker.
Comparable clinical outcomes of culture-negative and culture-positive periprosthetic joint infections: a systematic review and meta-analysis

Periprosthetic joint infection (PJI) is a catastrophic complication after total joint arthroplasty (TJA), with an incidence of approximately 1% and 2% after total hip arthroplasty (THA) and total knee arthroplasty (TKA), respectively. Surgical treatment options for PJI include debridement, antibiotics and implant retention (DAIR), one-stage or two-stage revision, arthrodesis and amputation. The prevention, detection and treatment of PJI following TJA remain a great challenge, particularly when cultures are negative. Culture-negative periprosthetic joint infection (CN PJI) is defined as the presence of purulence surrounding the prosthesis, a sinus tract communicating with the joint or positive histopathologic findings, together with no growth on aerobic and anaerobic cultures submitted to the clinical microbiology laboratory. It is difficult to deliver targeted and effective antibiotic treatment for CN PJI owing to the lack of microbiological evidence. The reported incidence of CN PJI ranges from 7 to 42%, with a pooled estimate of 11% in one systematic review. In recent years, numerous studies have compared the clinical outcomes of CN PJI and culture-positive PJI (CP PJI) treated with DAIR, one-stage revision and two-stage revision, but the conclusions are controversial. van Eck et al. reported that the failure rate in the CN PJI group was significantly lower than that in the CP PJI group, whereas Mortazavi et al. reported that the failure rate in the CN PJI group was significantly higher, and Xu et al. and Mulpur et al. reported that the success rate of treatment for the CN PJI group was similar to that for the CP PJI group. The present study aims to give an overview of the current literature on CN PJI and to evaluate whether CN PJI has better or worse clinical outcomes than CP PJI.
Data and literature sources

This systematic review and meta-analysis adhered to the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines. We performed a systematic search of several electronic databases (Embase, Web of Science and EBSCO) in November 2022 with the following search terms: (total joint arthroplasty OR TJA OR total joint replacement OR TJR OR total knee arthroplasty OR TKA OR total knee replacement OR TKR OR total hip arthroplasty OR THA OR total hip replacement OR THR) AND (infection OR infections OR infected OR "periprosthetic joint infection" OR "prosthetic joint infection" OR PJI) AND (single-stage OR one-stage OR two-stage OR 2-stage OR revision OR revisions OR "irrigation and debridement" OR "I&D" OR "debridement, antibiotics, and implant retention" OR DAIR) AND (culture negative OR negative) AND (culture positive OR positive). All titles and abstracts retrieved by the search were carefully evaluated, and full texts were then screened to determine the included articles.

Study selection

Inclusion and exclusion criteria

Two authors independently selected titles and abstracts as well as full-text articles from the databases listed above using the aforementioned search strategies, and a third author adjudicated discrepancies. The inclusion criteria were as follows: (1) retrospective or prospective studies comparing the clinical outcomes of CN PJI versus CP PJI; (2) at least one of the following outcome measures was reported: success rate, failure rate, survival rate, infection control rate or reinfection rate; (3) no restrictions on age or sex; and (4) no restrictions on race. The exclusion criteria were as follows: (1) non-peer-reviewed publications; (2) certain study designs (non-human trials, observational studies, case reports, case series, review articles and letters to the editor); (3) studies whose inclusion and exclusion criteria were unclear or unreasonable; and (4) studies for which the full text could not be obtained or the original data were incomplete.

Data extraction and quality assessment

The following data were extracted: (1) demographic and clinical information of the studies (including first author, year of publication, country, study type, study period, follow-up period, diagnostic criteria for PJI, sample sizes of CN PJI and CP PJI, joint involved, surgical strategies and antibiotic regimen); and (2) outcome measures, including success rate, failure rate, infection control rate or reinfection rate. Pertinent data were extracted independently by two reviewers from all eligible studies, and any disagreement was resolved by a third reviewer. Using the prior Delphi-based definition of success after treatment of PJI, failure was defined as (1) failed infection eradication, characterized by a wound with fistula, drainage or pain, and reinfection by the same organism strain; (2) subsequent surgical intervention for infection after reimplantation surgery; or (3) occurrence of PJI-related mortality. Definitions of the terms used are provided in Additional file . For each included study, methodological quality was evaluated with the Newcastle-Ottawa scale (NOS) by two independent reviewers. The scale consists of eight items with a maximum score of 9. Studies scoring more than 6 were considered high quality in our meta-analysis.
Statistical analysis

All analyses were conducted using Review Manager (version 5.3, The Nordic Cochrane Centre, The Cochrane Collaboration 2014, Copenhagen, Denmark) and STATA (version 12.0). The Mantel-Haenszel model and odds ratios (ORs) with 95% confidence intervals (CIs) were used to compare dichotomous outcomes of interest. A P-value less than 0.05 was considered statistically significant. We calculated the I² coefficient to assess heterogeneity with the following predetermined limits: low < 50%, moderate 50-74% and high > 75%, with P ≥ 0.05 and I² < 50% indicating no statistical heterogeneity between studies. A random-effects model was applied in circumstances of moderate or high heterogeneity; otherwise, a fixed-effects model was employed. If there was significant heterogeneity among the included studies, subgroup analysis was performed to explain the heterogeneity. Begg's funnel plots were used to evaluate publication bias; we judged that there was no publication bias if the P-value for Begg's test was more than 0.05. Sensitivity analysis was performed to assess the stability of the pooled results.
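For readers who wish to reproduce the pooling outside Review Manager or STATA, the Mantel-Haenszel odds ratio and the I² statistic can be computed directly from each study's 2x2 table. The sketch below uses the standard Mantel-Haenszel estimator and an inverse-variance Cochran's Q (a common approximation that can differ slightly from Review Manager's output); the three input tables are hypothetical.

```python
import math

def mh_or_and_i2(tables):
    """Mantel-Haenszel pooled odds ratio and I^2 heterogeneity.

    tables: list of (a, b, c, d) 2x2 counts per study, where
      a = failures in the CN PJI arm, b = successes in the CN PJI arm,
      c = failures in the CP PJI arm, d = successes in the CP PJI arm.
    A 0.5 continuity correction is applied to tables with zero cells.
    """
    num = den = 0.0
    logs, weights = [], []
    for a, b, c, d in tables:
        if 0 in (a, b, c, d):                    # continuity correction
            a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
        n = a + b + c + d
        num += a * d / n                         # MH numerator term
        den += b * c / n                         # MH denominator term
        logs.append(math.log((a * d) / (b * c)))
        weights.append(1.0 / (1 / a + 1 / b + 1 / c + 1 / d))  # inverse variance
    or_mh = num / den
    # Cochran's Q around the fixed-effect (inverse-variance) pooled log OR
    pooled = sum(w * y for w, y in zip(weights, logs)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, logs))
    df = len(tables) - 1
    i2 = max(0.0, (q - df) / q * 100.0) if q > 0 else 0.0
    return or_mh, i2

# Toy usage with three hypothetical studies (a, b, c, d):
print(mh_or_and_i2([(5, 20, 12, 30), (3, 25, 9, 40), (7, 18, 15, 35)]))
```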
Search strategy results

The search strategy described above produced 1455 results (411 in Embase, 510 in Web of Science and 534 in EBSCO). Eight hundred and thirty-eight duplicates were excluded. After two independent authors reviewed the titles and abstracts, 536 irrelevant citations were removed. We then assessed the remaining 58 full-text articles and excluded 28 articles based on the inclusion and exclusion criteria. Finally, 30 studies that could be quantitatively synthesized were included in our study, and the remaining two were qualitatively analyzed. The article selection process is illustrated in Fig. .

Study characteristics and quality assessment

In total, 28 retrospective studies [ – , – ] and 2 prospective studies from 12 countries, containing 4207 PJI cases, were included in this meta-analysis. All studies were published between 2010 and 2022. The follow-up period ranged from 12 to 120 months. The diagnosis of PJI was based on the Musculoskeletal Infection Society (MSIS) criteria in 22 studies, the International Consensus Meeting (ICM) criteria in 3 studies, the Infectious Diseases Society of America (IDSA) criteria in 1 study and the Swiss Society of Infectious Diseases (SSID) criteria in 1 study; the diagnostic criteria were not available for 3 studies. The surgical strategies included DAIR, one-stage or two-stage revision and others. Furthermore, we assessed the quality of each included study; the NOS scores ranged from 6 to 9, indicating that the included studies were of high quality. The main characteristics and quality assessment results of the included studies are given in Table .

Meta-analysis for overall failure rate

The overall treatment failure rate was 21.7% (913/4207), with failure rates of 19.0% (309/1630) and 23.4% (604/2577) for CN PJI and CP PJI, respectively. Since there was moderate heterogeneity among all included studies [ – , – ] (I² = 53%, P = 0.0004), we applied a random-effects model to pool the OR and 95% CI. As shown in Fig. , the pooled results showed a lower treatment failure rate among patients with negative cultures than among those with positive cultures (OR 0.63, 95% CI 0.47-0.84, P = 0.002).

Subgroup analysis

Because moderate heterogeneity existed in the overall failure rate results, subgroup analyses were performed to estimate failure rates for the different surgical strategies. In the nine included studies [ , , , , , , , , ] of patients who underwent DAIR, a fixed-effects model was applied because there was no significant heterogeneity (I² = 11%, P = 0.34). The pooled results revealed that CN PJI had a lower treatment failure rate than CP PJI (22.2% (53/239) vs 29.3% (227/775), OR 0.62, 95% CI 0.43-0.90, P = 0.01; Fig. A). In the four included studies [ , , , ] of patients who underwent one-stage revision, a fixed-effects model was applied because there was no significant heterogeneity (I² = 2%, P = 0.38). The pooled results showed a similar treatment failure rate between CN PJI and CP PJI (11.5% (11/96) vs 7.6% (27/355), OR 1.57, 95% CI 0.75-3.26, P = 0.23; Fig. B). In the 19 included studies [ , , , , – , , – ] of patients who underwent two-stage revision, a random-effects model was applied because of moderate heterogeneity (I² = 52%, P = 0.005); the pooled results revealed that CN PJI had a lower treatment failure rate than CP PJI (16.1% (171/1062) vs 20.4% (206/1010), OR 0.52, 95% CI 0.34-0.79, P = 0.002; Fig. C).

Publication bias and sensitivity analysis

As shown in Fig. , there was no obvious asymmetry in the Begg's funnel plot, and the P-value for Begg's test was 0.318, which was greater than 0.05. Thus, there was no significant publication bias among the included studies. Sensitivity analysis was applied to test the stability of the pooled results. As shown in Fig. , the sensitivity analysis showed no significant changes when each included study was removed sequentially.
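As a rough plausibility check on the pooled estimates, the DAIR subgroup counts can be collapsed into a single 2x2 table: $a = 53$ CN failures, $b = 239 - 53 = 186$ CN successes, $c = 227$ CP failures and $d = 775 - 227 = 548$ CP successes, giving a crude odds ratio of

$$\mathrm{OR}_{\text{crude}} = \frac{a\,d}{b\,c} = \frac{53 \times 548}{186 \times 227} = \frac{29\,044}{42\,222} \approx 0.69.$$

This crude aggregate ignores study-level stratification, so it does not match the Mantel-Haenszel estimate exactly, but it lies in the same direction as, and close to, the reported pooled OR of 0.62.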
The diagnosis, risk factors, treatment options and clinical outcomes of PJI have been widely discussed over the past two decades. However, data on CN PJI are relatively infrequent in the literature. Our study reports the failure rates in 1630 CN PJI cases and 2577 CP PJI cases treated with DAIR or one-stage or two-stage revision. We systematically collected relevant clinical trials of patients with CN PJI and CP PJI who underwent DAIR or one-stage or two-stage arthroplasty and then performed a meta-analysis and systematic review. In this review of 30 studies including 4207 joints, comparing the outcomes of CN PJI with those of CP PJI after DAIR and one-stage or two-stage revision suggested that negative culture may not be a negative prognostic factor for PJI. On the contrary, we conclude that CN PJI patients have better outcomes than CP PJI patients after DAIR and two-stage arthroplasty, and that patients undergoing one-stage revision for acute CN PJI have results comparable to those with acute CP PJI.

Treatment results of CN PJI and CP PJI patients following DAIR procedures

To date, the value of culture results after DAIR for acute PJI as risk indicators for prosthesis retention remains controversial, and there is a paucity of data comparing the outcomes of DAIR between acute CN PJI and acute CP PJI. The results of this study are in accordance with those of van Eck et al. and Malekzadeh et al.: we found that the reinfection rate of CN PJI patients was lower than that of CP PJI patients after DAIR procedures (OR 0.62, 95% CI 0.43-0.90, P = 0.01), suggesting that negative culture may not be a contraindication to DAIR in patients with acute PJI. Kim et al., who retrospectively reviewed 140 patients with CP PJI and 102 patients with CN PJI, also showed that infection was controlled and a functional TKA with a firm level of fixation was maintained in most patients in both groups, and that repeated debridement further improved the infection control rate after initial treatment and increased the likelihood of maintaining a functional TKA. Similarly, a systematic review and meta-analysis of eight studies concluded that CN PJI has the same or even better results than CP PJI. Both the aforementioned studies and our results showed that patients with CN PJI had the same or a lower reinfection rate after DAIR compared with patients with CP PJI. These results may be explained by low-virulence microorganisms, more thorough intraoperative debridement, stricter perioperative management and longer antibiotic use. Furthermore, previous studies assessing DAIR procedures have shown that the success rate is influenced by comorbidity, symptomatology, type of microorganism and, especially, the timing of the DAIR procedure [ , – ]. The success rate depends strongly on the interval between the surgical intervention and the onset of symptoms. Löwik et al. recommended DAIR as a feasible option in patients with early PJI presenting more than 4 weeks after surgery, as long as DAIR is performed within 1 week after the onset of symptoms and modular components can be exchanged. Similarly, a retrospective study by Shao et al. reported a success rate of 67.3% at a median follow-up of 38.6 months in patients who underwent early surgery within ten days of the presentation of symptoms.
Furthermore, a negative culture may reflect the suboptimal diagnostic properties of cultures, meaning that some of these joints were never infected in the first place, which may be one reason for the higher success rate. Another factor contributing to the higher failure rate of CP PJI is over-reliance on antibiotics by surgeons, which can result in the development of bacterial resistance. Therefore, an extensive review of local microbiological data and a multidisciplinary approach should be used to optimize treatment protocols and improve the outcome for CP PJI patients. In conclusion, DAIR offers surgeons the possibility of curing both acute CN PJI and acute CP PJI when surgery is performed at an appropriate time and a standard protocol is followed during surgery and postoperatively, while at the same time retaining the implants; this is thought to be associated with lower morbidity, less tissue fibrosis and better functional outcomes than the more invasive option of two-stage revision.

Treatment results of CN PJI and CP PJI patients following one-stage arthroplasty

Of the previous studies, only a few recommended one-stage arthroplasty as the first treatment option. When the infecting microorganism has been identified, treatment results are well documented in the literature; however, the treatment results of CN PJI are reported in only a few studies. Although two-stage arthroplasty has traditionally been considered the gold standard of treatment for PJI, growing evidence is emerging in support of one-stage arthroplasty for selected patients. Our present study revealed no significant difference in the success rate between the CN PJI group and the CP PJI group after one-stage revision (OR 1.57, 95% CI 0.75-3.26, P = 0.23). In the earliest one-stage arthroplasty series, reported by von Foerster et al., 76 of 104 patients were cured as a result of this single operation at a follow-up of 5-15 years. Buechel et al. treated 22 infected TKAs by one-stage revision and followed them for an average of 10.2 years, achieving a success rate of 90.9%. A retrospective study of 70 patients who underwent one-stage arthroplasty with a rotating hinge, with a minimum 9-year follow-up, revealed an infection-free survival of 93%. However, the above-mentioned studies included culture-positive cases only and used antibiotic-loaded polymethylmethacrylate (PMMA) cement in each case, which can lead to poor joint function, and this bias may have affected the results. In recent years, one-stage arthroplasty for CN PJI and CP PJI has gradually attracted surgeons' interest and achieved good results. Ji et al. reported 111 patients who underwent routine one-stage revision with cementless reconstruction, with powdered vancomycin or imipenem poured into the medullary cavity and reimplantation of cementless components, at a mean follow-up of 58 months; recurrent infection was observed in four of the 23 patients (17.4%) with culture-negative infected hips. Not long after, the same medical center retrospectively analyzed 51 patients with CN PJI who underwent one-stage revision using intravenous and intra-articular antibiotic infusion compared with 192 patients with CP PJI; at a mean follow-up of 53.2 months, no significant difference in the infection control rate was observed between CN PJI and CP PJI (90.2% (46/51) versus 94.3% (181/192); P = 0.297).
In addition, van den Kieboom et al. found that one-stage revision demonstrated outcomes similar to those of two-stage revision for the treatment of CN PJI after TKA and THA, including reinfection, re-revision and readmission rates. As PJI has significant physical, psychological and economic impacts, one-stage revision offers obvious advantages, including reduced costs, lower mortality, shorter hospital stay, decreased morbidity and higher patient satisfaction. However, the criteria for one-stage revision in PJI have, in principle, been very strict, reflecting the complexity of these cases and the need for favorable conditions in which to implant a new prosthesis. Contraindications to one-stage arthroplasty include negative cultures, significant tissue compromise, significant bone loss, systemic sepsis, immunosuppression, reinfection, multi-resistant organisms, polymicrobial infection, extensor mechanism failure, and cases in which primary wound closure is unlikely to be achievable. Therefore, this technique is still not widely used throughout the world owing to these restrictive inclusion criteria. The primary factor in revision for CN PJI, whether one-stage or two-stage, after a thorough debridement, is the adequate and rational use of antibiotics. The spectrum of pathogens in published reports is broadly similar; hence, antibiotics with the broadest possible spectrum, active against both gram-negative and gram-positive organisms, will cover almost all microorganisms, even in CN PJI. Moreover, to obtain better or comparable outcomes for CN PJI patients, surgeons are more careful while performing debridement, employ combinations of vancomycin with imipenem or meropenem, and prescribe antibiotics for longer durations than for CP PJI patients. In summary, provided that indications and contraindications are strictly controlled, one-stage revision can be effective in the treatment of CN PJI and can achieve an infection control rate similar to that in CP PJI. Nonetheless, patients with CP PJI may require further medical optimization prior to one-stage revision to enhance their immune status, and a standardized diagnostic protocol and evidence-based treatment strategies for CN PJI should be implemented in further studies.

Treatment results of CN PJI and CP PJI patients following two-stage arthroplasty

Although two-stage arthroplasty is today considered the gold standard for treating chronic PJI, the reported success rate is highly variable, ranging from 64 to 100% [ , , – ]. In addition, CN PJI complicates the diagnosis and management of PJI, and the lack of an identified infecting organism preoperatively is considered an unfavorable factor for reimplantation. However, our meta-analysis demonstrated that negative culture at two-stage reimplantation, instead of increasing the risk of reinfection, was associated with a greater success rate than positive culture (OR 0.52, 95% CI 0.34-0.79, P = 0.002). In agreement with our results, most previous studies concluded that CN PJI has the same or even better results than culture-positive infection. Choi et al. retrospectively reviewed 40 culture-negative patients and 135 culture-positive patients and demonstrated that the infection control rate was higher in the culture-negative group (P = 0.006) after two-stage reimplantation.
Another retrospective cohort study also showed that data from 77 patients who underwent two-stage revision to PJI after hip and knee arthroplasty were followed regularly with an average of 29.2 months; the infection control rate for the CN PJI group was similar to that for the CP PJI group . On the contrary, Mortazavi et al. identified a prospective database contained 117 patients who underwent two-stage arthroplasty, the multivariate analysis provided culture-negative (OR 4.5; 95% CI 1.3–15.7), methicillin-resistant organisms (OR 2.8; 95% CI 0.8–10.3) and increased reimplantation operative time (OR 1.01; 95% CI 1.0–1.03) as predictors of failure, and CN PJI increases the risk of failure over fourfold; however, this study was early and the bacterial culture technique was poor, so many positives may not have been cultured. There are many other studies that show that CN PJI has the better infection control rate, but there was no significant difference in the success rate between the CN PJI group and the CP PJI group during two-stage revision [ , , , ]; this may explain the superior cure rate of CN PJI in the pooled results. Many factors may influence the outcomes of two-stage arthroplasties theoretically, including timing of reimplantation, serum markers, history of surgeries, the patient’s comorbidities, medical conditions, bone stock, soft tissue integrity and organism virulence; patients with these conditions are poor hosts and may thus be vulnerable to a new infection. Determining the appropriate timing of when reimplantation should be performed is often challenging for the treating surgeon. Khury et al. and Stambough et al. indicated that no association could be determined between the delta change in serum WBC, CRP and ESR before and after two-stage revision for PJI and reinfection risk, although a return to normal serology infrequently occurs before reimplantation, and Ackmann et al. considered plasma D-dimer does not help to guide the timing of reimplantation in two-stage exchange for PJI; these serum markers provide no additional diagnostic accuracy to determine the timing of reimplantation. Another retrospective study by Fu et al. proved that the proper timing of reimplantation should be combined with disappearance of clinical symptoms and negative intraoperative frozen section with spacer detention time at 12 to 16 weeks. As far as we know, the optimal timing of when reimplantation in two-stage revision remains unknown, so further studies are needed to resolve these questions. Furthermore, PJI with biofilm-forming organisms is a leading cause of failure and reinfection after two-stage reimplantation , because it is often difficult to detect such infections, particularly in patients who have received antibiotic treatment before surgery. Finally, methicillin-resistant or high-virulence microorganisms are positive culture, more comorbidities and increased reimplantation operative time as predictors of failure and reinfection after two-stage reimplantation [ , , ]. In conclusion, our present study revealed that better results were obtained with negative culture than with positive culture. Therefore, appropriate timing of surgery, well-managed comorbidities, thorough debridement and effective antibiotic use are all beneficial to success rate and the CN PJI is not contraindications of two-stage revision. 
Moreover, there are several possible reasons that an infective organism might not be confirmed preoperatively, including pre-operative use of antibiotics, an insufficient period without antibiotics before sampling, inadequate culture times or culture medium, low-virulence organisms, bacterial biofilms, limitations of sampling techniques or the lack of diagnostic facilities for rare organisms . As PJI is frequently caused by low-virulence organisms that might require prolonged incubation periods, to increase the detection rate of the low-virulence microorganisms multiple samples (minimum 3) should be taken, and an adequate growth time of at least 14 days . Sonication of explanted components is a new and more sensitive method for diagnosing infection that has proved to be effective, particularly in patients who had received antimicrobial treatment within 14 days before surgery . Sonication has been also reported to be a reliable tool for the diagnosis of an infected arthroplasty and subsequent biofilm-related infections , and it is crucial for the second-stage arthroplasty because spacers can act as a foreign body on to which bacteria may adhere . In addition, arthroscopic sampling and polymerase chain reaction are necessary, as these patients were considered as having a complex CN PJI . Even though recent most studies concluded that CN PJI has the same or even better results than CP PJI following DAIR, one-stage and two-stage revision, selection of antibiotics is challenging in the absence of information about the causative organism. Empirical antibiotic use for CN PJI patients who underwent DAIR, one-stage and two-stage revision was comparable to antibiotic use in CP PJI patients according to a reliable antimicrobial susceptibility test, but the duration of antibiotic medication may be longer, which will increase economic burden, drug toxicity, damage liver and kidney function and psychological impacts. Limitations Some limitations must be taken into consideration when interpreting the results of this study. Firstly, most of the included studies in our meta-analysis were mainly retrospective case–control studies and cohort studies with limitations inherent to such a study design, with no randomized controlled trials studies, so more prospective studies and confounders controlled are warranted to evaluate the clinical outcomes of CN PJI and CP PJI patients. Secondly, there were no single standard on other potential confounders, such as length of surgery time, blood loss, follow-up time, duration of antibiotic use, antibiotic treatment regimen and other non-measurable factors (e.g., the types of implants, surgical technique, surgical approach, etc.). Further research is necessary to elucidate for these findings. Thirdly, the diagnostic criteria for PJI are different, the different definitions of reinfection or cure is a potential criticism of every study assessing PJI in some studies; due to the lack of advanced culture techniques, infections caused by slow-growing pathogens such as mycobacteria or fungi were classified as CN PJI in previous studies, such a high rate of misclassification may threaten our study and we cannot analyze these risk factors or outcomes in this study. Perhaps we might be able to solve this issue by increasing the diagnostic accuracy of CN PJI using next-generation sequencing or a special culture medium, both of which have shown to be highly accurate in CN PJI diagnosis in subsequent studies. 
Fourth, several of the included studies had an identical author with overlap study period, and we confirmed that there is also partial overlap of reported population. However, despite using strict inclusion and exclusion criteria, we were unable to eliminate the overlap population. The pooled results may be impacted. So, sensitivity analysis was used to test the stability of pooled results. However, the sensitivity analysis showed no significant changes when each of the studies included were removed sequentially. Fifth, the majority of the included studies reflect the survival rates of CN PJI and CP PJI in the short- and medium-term. To demonstrate that the one-stage revision of CN PJI and CP PJI can result in the same or a higher survival rate, more long-term follow-up studies are required; Finally, the included studies used a mixed cohort of hips and knees and we thus were unable to investigate the independent results for hips in our meta-analysis, the possibility of not having retrieved all relevant information published on CN PJI should also be considered as one of the limitations of our study. These recognized limitations are inherent to all studies using this database design and could potentially be improved through prospective data collection.
Treatment results of CN PJI and CP PJI patients following DAIR
To date, the value of culture results after DAIR for acute PJI as a risk indicator for prosthesis retention remains controversial, and there is a paucity of data comparing DAIR outcomes between acute CN PJI and acute CP PJI. In accordance with the findings of van Eck et al. and Malekzadeh et al., we found that the reinfection rate of CN PJI patients was lower than that of CP PJI patients after DAIR procedures (OR 0.62, 95% CI 0.43–0.90, P = 0.01), suggesting that a negative culture may not be a contraindication to DAIR in patients with acute PJI. Kim et al., who retrospectively reviewed 140 patients with CP PJI and 102 patients with CN PJI, likewise showed that infection was controlled and a functional TKA with a firm level of fixation was maintained in most patients in both groups, and that repeated debridement after the initial treatment further improved the infection control rate and the likelihood of retaining a functional TKA. Similarly, a systematic review and meta-analysis of eight studies concluded that CN PJI has the same or even better results than CP PJI. Both the aforementioned studies and our results indicate that patients with CN PJI have the same or a lower reinfection rate after DAIR than patients with CP PJI. Possible explanations include infection with low-virulence microorganisms, more thorough debridement during surgery, stricter perioperative management and longer antibiotic use. Furthermore, previous studies of DAIR have shown that the success rate is influenced by comorbidity, symptomatology, type of microorganism and, especially, the timing of the procedure. The success rate depends strongly on the interval between the index surgery and the onset of symptoms. Löwik et al. recommended that DAIR is a feasible option in patients with early PJI presenting more than 4 weeks after surgery, provided DAIR is performed within 1 week after the onset of symptoms and modular components can be exchanged. Similarly, a retrospective study by Shao et al. reported a success rate of 67.3% at a median follow-up of 38.6 months in patients who underwent early surgery within ten days of the presentation of symptoms. There is also the possibility that a culture-negative joint, given the suboptimal diagnostic properties of cultures, was never infected in the first place, which may be one reason for the higher success rate. Another factor contributing to the high failure rate of CP PJI is over-reliance on antibiotics by surgeons, which can promote the development of bacterial resistance; extensive review of local microbiological data within a multidisciplinary approach can therefore help to optimize treatment protocols and improve outcomes for CP PJI patients. In conclusion, DAIR offers surgeons the possibility of curing both acute CN PJI and acute CP PJI while retaining the implants, provided surgery is performed within an appropriate time window and a standard intraoperative and postoperative protocol is followed; it is thought to be associated with lower morbidity, less tissue fibrosis and better functional outcomes than the more invasive option of two-stage revision.
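For readers less familiar with the statistics quoted here, the sketch below (Python, with illustrative counts rather than the actual study data) shows how an odds ratio such as the OR 0.62 (95% CI 0.43–0.90) above is derived from a 2×2 reinfection table, using the standard Woolf (log) method for the confidence interval.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (Woolf logit method) for a 2x2 table:
    a = CN PJI reinfected,   b = CN PJI infection-free,
    c = CP PJI reinfected,   d = CP PJI infection-free."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Illustrative counts only, not the pooled data of this meta-analysis:
print(odds_ratio_ci(12, 88, 30, 120))  # OR < 1 favours the CN PJI group
```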
Treatment results of CN PJI and CP PJI patients following one-stage arthroplasty
Of the previous studies, only a few have recommended one-stage arthroplasty as the first treatment option. When the microorganism is identified, treatment results are well documented in the literature; for CN PJI, however, results have been reported in only a few studies. Although two-stage arthroplasty has traditionally been considered the gold standard of treatment for PJI, growing evidence supports one-stage arthroplasty in selected patients. Our present study revealed no significant difference in the success rate between the CN PJI and CP PJI groups after one-stage revision (OR 1.57, 95% CI 0.75–3.26, P = 0.23). In the earliest one-stage arthroplasty series, reported by von Foerster et al., 76 of 104 patients were cured by this single operation at a follow-up of 5–15 years. Buechel et al. treated 22 infected TKAs by one-stage revision and, at a mean follow-up of 10.2 years, found a success rate of 90.9%. A retrospective study of 70 patients who underwent one-stage arthroplasty with a rotating hinge, with a minimum 9-year follow-up, reported an infection-free survival of 93%. However, the above-mentioned studies included culture-positive cases only and used antibiotic-loaded polymethylmethacrylate (PMMA) cement in every case, which can compromise joint function, and this bias may have affected the results. In recent years, one-stage arthroplasty for both CN PJI and CP PJI has attracted growing interest from surgeons and has achieved good results. Ji et al. reported 111 patients who underwent routine one-stage revision with cementless reconstruction, with powdered vancomycin or imipenem poured into the medullary cavity before reimplantation of cementless components; at a mean follow-up of 58 months, recurrent infection was observed in four of the 23 patients (17.4%) with a culture-negative infected hip. Subsequently, the same center retrospectively compared 51 patients with CN PJI who underwent one-stage revision using intravenous and intra-articular antibiotic infusion with 192 patients with CP PJI; at a mean follow-up of 53.2 months, no significant difference in the infection control rate was observed between CN PJI and CP PJI (90.2% (46/51) versus 94.3% (181/192); P = 0.297). In addition, van den Kieboom et al. found that one-stage revision produced outcomes similar to two-stage revision for the treatment of CN PJI after TKA and THA, including reinfection, re-revision and readmission rates. Given the significant physical, psychological and economic impacts of PJI, one-stage revision has obvious advantages, including reduced costs, lower mortality, shorter hospital stays, decreased morbidity and higher patient satisfaction. The criteria for one-stage revision in PJI have, however, been very strict in principle, reflecting the complexity of these cases and the need for favorable conditions in which to implant a new prosthesis. Contraindications to one-stage arthroplasty have included negative cultures, significant tissue compromise, significant bone loss, systemic sepsis, immunosuppression, reinfection, multi-resistant organisms, polymicrobial infection, extensor mechanism failure, and situations in which primary wound closure is unlikely to be achievable. Consequently, this technique is still not widely used worldwide, owing to its restrictive inclusion criteria.
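As a quick plausibility check on the comparison quoted above (90.2% (46/51) versus 94.3% (181/192); P = 0.297), a contingency-table test can be run in a few lines. The sketch below uses SciPy's Fisher exact test, which gives a P value in the same non-significant range; the exact figure depends on which test the original authors used.

```python
from scipy.stats import fisher_exact

# Infection control after one-stage revision (counts quoted in the text):
#                controlled, recurred
table = [[46, 5],     # CN PJI: 46/51
         [181, 11]]   # CP PJI: 181/192
odds_ratio, p = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, two-sided P = {p:.3f}")  # non-significant
```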
The primary factor in revision for CN PJI, whether one-stage or two-stage, is adequate and rational antibiotic use following a thorough debridement. The spectrum of pathogens reported in the literature is broadly similar across studies; hence, antibiotics with the broadest possible range, active against both gram-negative and gram-positive organisms, will cover almost all microorganisms even in CN PJI. Moreover, to obtain comparable or better outcomes in CN PJI patients, surgeons tend to debride more carefully, to combine agents such as vancomycin with imipenem or meropenem, and to prescribe antibiotics for longer durations than in CP PJI patients. In summary, provided the surgeon adheres strictly to the indications and contraindications, one-stage revision can be effective in the treatment of CN PJI and can achieve an infection control rate similar to that in CP PJI. Nonetheless, patients with CP PJI may require further medical optimization prior to one-stage revision to enhance their immune status, and a standardized diagnostic protocol and evidence-based treatment strategies for CN PJI should be established in further studies.
Treatment results of CN PJI and CP PJI patients following two-stage arthroplasty
Although two-stage arthroplasty is today considered the gold standard for treating chronic PJI, the reported success rate varies widely, from 64 to 100%. A negative culture complicates the diagnosis and management of PJI, and failure to identify the infecting organism preoperatively is considered an unfavorable factor for reimplantation. However, our meta-analysis demonstrated that negative culture at two-stage reimplantation, far from increasing the risk of reinfection, was associated with a markedly better success rate than positive culture (OR 0.52, 95% CI 0.34–0.79, P = 0.002). In agreement with our results, most previous studies concluded that CN PJI has the same or even better results than culture-positive infection. Choi et al. retrospectively reviewed 40 culture-negative and 135 culture-positive patients undergoing two-stage reimplantation and found that the infection control rate was higher in the culture-negative group (P = 0.006). Another retrospective cohort study of 77 patients who underwent two-stage revision for PJI after hip and knee arthroplasty, followed for an average of 29.2 months, showed an infection control rate in the CN PJI group similar to that in the CP PJI group. On the contrary, Mortazavi et al., analyzing a prospective database of 117 patients who underwent two-stage arthroplasty, identified negative culture (OR 4.5; 95% CI 1.3–15.7), methicillin-resistant organisms (OR 2.8; 95% CI 0.8–10.3) and longer reimplantation operative time (OR 1.01; 95% CI 1.0–1.03) as predictors of failure in multivariate analysis, with CN PJI increasing the risk of failure more than fourfold; however, this was an early study with relatively poor bacterial culture techniques, so many true positives may never have been cultured. Several other studies found that CN PJI had a better infection control rate without a statistically significant difference between the CN PJI and CP PJI groups after two-stage revision, which may explain the superior cure rate of CN PJI in the pooled results. In theory, many factors may influence the outcome of two-stage arthroplasty, including the timing of reimplantation, serum markers, surgical history, the patient's comorbidities and medical condition, bone stock, soft-tissue integrity and organism virulence; patients with unfavorable profiles are poor hosts and may thus be vulnerable to a new infection. Determining the appropriate timing of reimplantation is often challenging for the treating surgeon. Khury et al. and Stambough et al. found no association between the change in serum WBC, CRP and ESR before and after two-stage revision for PJI and the risk of reinfection, even though serology infrequently returns to normal before reimplantation, and Ackmann et al. concluded that plasma D-dimer does not help to guide the timing of reimplantation in two-stage exchange; these serum markers therefore provide no additional diagnostic accuracy for timing reimplantation. A retrospective study by Fu et al. suggested that the proper timing of reimplantation should combine the disappearance of clinical symptoms with a negative intraoperative frozen section, with a spacer retention time of 12 to 16 weeks. To our knowledge, the optimal timing of reimplantation in two-stage revision remains unknown, and further studies are needed to resolve this question.
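The pooled estimate quoted above (OR 0.52, 95% CI 0.34–0.79) is obtained by combining per-study 2×2 tables on the log-odds scale. Below is a minimal fixed-effect, inverse-variance sketch; the input tables are placeholders, not the studies included in this meta-analysis, and a real analysis would also assess heterogeneity and possibly use a random-effects model.

```python
import math

def pooled_or(tables, z=1.96):
    """Fixed-effect inverse-variance pooling of log odds ratios.
    Each table is (a, b, c, d): events/non-events in the two groups."""
    num = den = 0.0
    for a, b, c, d in tables:
        log_or = math.log((a * d) / (b * c))
        weight = 1.0 / (1 / a + 1 / b + 1 / c + 1 / d)  # 1 / Var(log OR)
        num += weight * log_or
        den += weight
    mean, se = num / den, math.sqrt(1.0 / den)
    return math.exp(mean), math.exp(mean - z * se), math.exp(mean + z * se)

# Placeholder per-study tables:
print(pooled_or([(5, 35, 20, 115), (4, 36, 18, 100), (6, 44, 25, 140)]))
```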
Furthermore, PJI with biofilm-forming organisms is a leading cause of failure and reinfection after two-stage reimplantation, because such infections are often difficult to detect, particularly in patients who received antibiotic treatment before surgery. Finally, methicillin-resistant or high-virulence organisms on culture, a greater burden of comorbidities and longer reimplantation operative time are predictors of failure and reinfection after two-stage reimplantation. In conclusion, our present study revealed better results with negative than with positive cultures. Appropriate timing of surgery, well-managed comorbidities, thorough debridement and effective antibiotic use all contribute to the success rate, and CN PJI is not a contraindication to two-stage revision. There are several possible reasons why an infecting organism might not be confirmed preoperatively, including preoperative antibiotic use, an insufficient antibiotic-free interval before sampling, inadequate culture times or culture media, low-virulence organisms, bacterial biofilms, limitations of sampling techniques and a lack of diagnostic facilities for rare organisms. As PJI is frequently caused by low-virulence organisms that may require prolonged incubation, multiple samples (a minimum of three) should be taken and cultures grown for at least 14 days to increase the detection rate of such organisms. Sonication of explanted components is a newer and more sensitive method for diagnosing infection that has proved effective, particularly in patients who received antimicrobial treatment within 14 days before surgery. Sonication has also been reported to be a reliable tool for diagnosing infected arthroplasties and subsequent biofilm-related infections, and it is crucial at second-stage arthroplasty because spacers can act as a foreign body onto which bacteria may adhere. In addition, arthroscopic sampling and polymerase chain reaction testing are warranted in patients considered to have complex CN PJI. Even though most recent studies have concluded that CN PJI has the same or even better results than CP PJI following DAIR, one-stage and two-stage revision, the selection of antibiotics remains challenging in the absence of information about the causative organism. Empirical antibiotic use in CN PJI patients undergoing DAIR, one-stage or two-stage revision achieved results comparable to susceptibility-guided antibiotic use in CP PJI patients, but the duration of antibiotic treatment may be longer, increasing the economic burden, drug toxicity, liver and kidney injury and psychological impact.
Limitations
Some limitations must be taken into consideration when interpreting the results of this study. First, most of the included studies were retrospective case–control or cohort studies, with the limitations inherent to such designs and no randomized controlled trials; more prospective studies with better control of confounders are therefore warranted to evaluate the clinical outcomes of CN PJI and CP PJI patients. Second, there was no single standard for other potential confounders, such as operative time, blood loss, follow-up time, duration of antibiotic use and antibiotic regimen, or for other non-measurable factors (e.g., implant type, surgical technique and surgical approach); further research is needed to clarify their influence. Third, the diagnostic criteria for PJI differed among studies, and the varying definitions of reinfection or cure are a potential criticism of every study assessing PJI; moreover, owing to the lack of advanced culture techniques, infections caused by slow-growing pathogens such as mycobacteria or fungi were classified as CN PJI in previous studies, and this potential misclassification may threaten our findings, as we could not analyze these risk factors or outcomes separately. This issue might be addressed by improving the diagnostic accuracy for CN PJI with next-generation sequencing or special culture media, both of which have shown high accuracy for CN PJI diagnosis in subsequent studies. Fourth, several of the included studies shared an author and overlapping study periods, and we confirmed partial overlap of the reported populations; despite strict inclusion and exclusion criteria, we were unable to eliminate this overlap, which may have affected the pooled results. A sensitivity analysis was therefore used to test the stability of the pooled estimates and showed no significant changes when each included study was removed in turn. Fifth, the majority of the included studies reflect short- and medium-term survival rates of CN PJI and CP PJI; more long-term follow-up studies are required to demonstrate that one-stage revision of CN PJI and CP PJI can achieve the same or a higher survival rate. Finally, the included studies used mixed cohorts of hips and knees, so we were unable to investigate outcomes for hips independently in our meta-analysis, and the possibility of not having retrieved all relevant publications on CN PJI should also be considered a limitation. These limitations are inherent to all studies of this design and could be mitigated by prospective data collection.
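The sensitivity analysis described above is simply leave-one-out re-pooling: the meta-analysis is repeated with each study omitted in turn, and the pooled estimate is checked for stability. A minimal sketch, again with placeholder tables:

```python
import math

def pooled_log_or(tables):
    """Inverse-variance weighted mean of per-study log odds ratios."""
    weights = [1 / (1/a + 1/b + 1/c + 1/d) for a, b, c, d in tables]
    log_ors = [math.log(a * d / (b * c)) for a, b, c, d in tables]
    return sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)

def leave_one_out(tables):
    """Re-pool the odds ratio with each included study removed in turn."""
    full = math.exp(pooled_log_or(tables))
    for i in range(len(tables)):
        subset = tables[:i] + tables[i + 1:]
        print(f"without study {i + 1}: OR = {math.exp(pooled_log_or(subset)):.2f}"
              f" (all studies: {full:.2f})")

leave_one_out([(5, 35, 20, 115), (4, 36, 18, 100), (6, 44, 25, 140)])
```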
Conclusion
To our knowledge, this is the first study to compare the clinical outcomes of CN PJI and CP PJI patients who underwent DAIR, one-stage or two-stage revision. Our study demonstrated that CN PJI patients had a better survival rate than CP PJI patients after DAIR and two-stage revision, and a similar survival rate after one-stage revision. Although CN PJI remains challenging with respect to exact diagnosis, suitable treatment and the choice of appropriate antibiotics, and although thorough debridement was considered imperative in every case, the results of DAIR, one-stage and two-stage revision arthroplasty suggest that a negative culture is not a worse prognostic factor for PJI.
Future directions for medicinal chemistry in the field of oligonucleotide therapeutics
Medicinal chemistry milestones in oligonucleotide therapeutics

Looking back over 30 yr, a handful of milestones in the chemistry of oligonucleotides stand out. Oligonucleotide drugs are large, chemically synthesized structures, and therefore optimization of their pharmacodynamics (PD) and pharmacokinetics (PK) properties fell under the responsibility of medicinal chemists. Pioneering work was carried out by chemists through the 1980s and the 1990s, during which the ribonucleotide structure was systematically modified in efforts: (i) to protect single-stranded antisense oligonucleotides (ASOs) against metabolic degradation, while retaining their ability to hybridize with their targets and to recruit cellular effector enzymes; and (ii) to remain accessible via solid-phase synthesis. The experience gained in these areas streamlined efforts a decade later with a second emerging class of oligonucleotide drugs, the double-stranded small interfering RNAs (siRNAs). In parallel with this work, major advances were made with oligonucleotide synthesizers, both in terms of synthesis throughput and synthesis scale. The introduction of 96-well machines, such as the Mermade 192, allowed researchers to synthesize oligonucleotides in "high-throughput." This meant that instead of struggling to predict possible binding sites for potent oligonucleotides on a target mRNA with the help of RNA folding programs, or by assessing GC-content, it became routine in industry to synthesize and screen hundreds of reagents in a brute-force approach to identify experimentally and unambiguously the "best" oligonucleotide. In turn, access to large screening datasets powered the use of machine learning methods that revealed some of the sequence-dependent properties of potent oligonucleotides, as described in 2005 with siRNAs. Meanwhile, at the opposite end of the synthesis spectrum, large capacity synthesizers were introduced, providing gram quantities of oligonucleotide reagents for routine testing in animal disease models, including nonhuman primates. Today, the OligoProcess synthesizer produces up to 15 kg of oligonucleotide in single batches.
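To make the brute-force screening approach described above concrete, the sketch below enumerates every antisense 20-mer tiled along a target mRNA. The sequence is a made-up placeholder, and in a real campaign each candidate would be synthesized and ranked in a knockdown assay rather than by any property computed here.

```python
COMPLEMENT = str.maketrans("AUGC", "UACG")

def tile_antisense(mrna, length=20):
    """Enumerate antisense oligonucleotides of a given length tiled
    along an mRNA (each one is the reverse complement of its site)."""
    mrna = mrna.upper().replace("T", "U")
    for i in range(len(mrna) - length + 1):
        site = mrna[i:i + length]
        yield i, site.translate(COMPLEMENT)[::-1]

# Placeholder target sequence, not a real transcript:
for start, aso in tile_antisense("AUGGCUACGGAUUCCAAGGCUUACGGAUCCAAGGUUACG"):
    print(start, aso)
```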
With these developments ongoing, the field had momentum.

Ribose modifications in single-stranded RNA drugs

The phosphodiesters of a native DNA or RNA oligonucleotide are quickly degraded by ubiquitous nucleases in vivo. Hence, medicinal chemists were tasked with modifying oligonucleotide structures to render them resistant to metabolism. However, researchers were alarmed to find that even minor modifications to the ribonucleotide unit of an ASO could severely reduce its affinity for a complementary RNA. Consequently, over a period of two decades, hundreds of nucleoside modifications were designed, synthesized and tested in academia and industry, in search of the "perfect" modification. The synthetic chemistry was resource-intensive, monotonous, and demanding. In most cases, it necessitated the synthesis of the four nucleosides as stable but reactive phosphoramidites (A), with protecting groups on the exocyclic amino groups of the nucleobases, and good solubility in acetonitrile solvent. These building blocks were subjected to solid-phase synthesis, then harsh ammonia treatment, followed by purification and characterization. The resultant oligonucleotide was then evaluated for its binding affinity and selectivity toward a complementary RNA in in vitro assays. Not surprisingly, the rate of attrition was high and most of these modifications fell by the wayside; very few reached clinical evaluation and drug approval. Among the successful modifications, one of the most unusual was the phosphorodiamidate morpholino oligonucleotide (PMO) (B). Its elegant synthesis involves oxidation-mediated ring opening of the ribonucleoside, followed by ring closure with reductive amination, to produce a nucleobase-substituted morpholine cycle. The morpholines are linked by a phosphorodiamidate backbone.
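The solid-phase workflow mentioned above repeats a fixed cycle of reactions for each added residue. The sketch below lays out that cycle schematically (synthesis proceeds 3′ to 5′, with the first nucleoside preloaded on the support); it is a didactic outline, not a synthesizer protocol.

```python
CYCLE = ("detritylation (remove 5'-DMT)",
         "coupling (tetrazole-activated phosphoramidite)",
         "capping (acetylate unreacted 5'-OH)",
         "oxidation (P=O) or sulfurization (P=S)")

def synthesis_plan(sequence):
    """List the phosphoramidite cycle steps needed to assemble an oligo."""
    steps = []
    for n, base in enumerate(reversed(sequence[:-1]), start=2):
        steps += [f"residue {n} ({base}): {step}" for step in CYCLE]
    steps.append("cleavage and deprotection (ammonia), then purification")
    return steps

for line in synthesis_plan("GCGTTA"):  # placeholder sequence
    print(line)
```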
Substitution of the hydroxyl group at the 2′-position of the ribose was an obvious avenue of investigation for chemists. A variety of substituents were studied, ranging from small alkyl groups to alkyl chains containing aromatic, halogenated and amino groups. The most significant breakthrough came with the introduction of the 2′-O-methoxyethyl (MOE) group, described in a 1995 Helvetic publication by P. Martin (D). The MOE group imposes a C3′-endo conformation on the riboses of an oligonucleotide, which enhances hybridization affinity and selectivity for target RNAs. Furthermore, in combination with the PS linkage, an MOE substituent renders an oligonucleotide highly stable to endo- and exo-nucleases. The MOE modification is today the most widely used chemical modification of single-stranded oligonucleotide drugs. The modification was clinically validated with the approval of mipomersen, a 20-mer "gapmer" PS oligonucleotide bearing five MOE-modified riboses flanking a 10-mer DNA "window." The DNA segment recruits RNase H1 to the target mRNA, thereby mediating its cleavage and terminating synthesis of the target protein. Mipomersen targets the liver as a treatment for familial hypercholesterolaemia (FH), a rare disorder of low-density lipoprotein cholesterol (LDL-C) metabolism. Although mipomersen was not a commercial success, it generated spectacular data and was celebrated by the field as the first of the new-generation oligonucleotide drugs, able to suppress selectively the expression of a deleterious protein. The approval of mipomersen in the USA (2013) was quickly followed by that of nusinersen (2016), a breakthrough treatment for spinal muscular atrophy (SMA). Nusinersen is a fully PS-MOE-modified, 18-mer ASO that binds to SMN2 pre-mRNA and alters its splicing to switch on production of a functional SMN protein. It was the first oligonucleotide drug to work in the nervous system, confirming findings from the late 1990s that intrathecal delivery into the cerebrospinal fluid is a viable means to administer MOE oligonucleotides into the CNS. It is also the only oligonucleotide to date to achieve "blockbuster drug" status. A number of alternative ribose modifications for single-stranded RNA drugs are also worthy of mention. They include the structurally complex bicyclic "locked" nucleic acid (LNA, cEt) modifications and tricyclic deoxyribose (TCA) derivatives that endow oligonucleotides with very high RNA-binding affinities (E,F). However, for a variety of reasons, they have either fallen at (e.g., miravirsen), or not yet cleared (e.g., danvatirsen), the last hurdles before regulatory approval. Intuitively, it seems likely that some of these structures will eventually achieve success in the clinic.

SiRNAs and oligonucleotide conjugates

The gapmer design of ASOs provided a workable solution for chemists aiming for a compromise between stability, affinity, and RNase H-compatibility. For siRNAs, the main difficulty with the PD properties was to achieve nuclease stability of the double-stranded RNA (dsRNA), in view of the sensitivity of the RNAi mechanism to structural modifications in the two strands (passenger and guide). Furthermore, the mainstay substituents of antisense oligonucleotides, such as MOE, are poorly accepted by the RISC (RNA-induced silencing complex) machinery in many (but not all) positions of the siRNA duplex.
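As a concrete picture of the gapmer architecture discussed above (the 5-10-5 design of mipomersen-class ASOs), the sketch below annotates each position of a 20-mer with its sugar chemistry over a uniform PS backbone; the sequence is a placeholder, not an approved drug.

```python
def annotate_gapmer(aso_20mer, wing=5):
    """Annotate a 20-mer '5-10-5' gapmer: 2'-MOE wings flanking a DNA gap
    that recruits RNase H1; all internucleotide linkages are PS."""
    assert len(aso_20mer) == 20
    rows = []
    for i, base in enumerate(aso_20mer):
        sugar = "2'-MOE" if i < wing or i >= len(aso_20mer) - wing else "DNA"
        rows.append((i + 1, base, sugar, "PS"))
    return rows

for pos, base, sugar, backbone in annotate_gapmer("ACGTTGCAACGTTGCAACGT"):
    print(pos, base, sugar, backbone)  # placeholder sequence
```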
Eventually researchers from Sirna Therapeutics and Alnylam Pharmaceuticals converged on the replacement of all ribonucleotides in an siRNA with intricate arrangements of 2′-O-methyl (OMe) and 2′-fluoro (F) nucleotides (G). These fully modified siRNAs are then capped with a few terminal PS groups to top up nuclease stability. This structural format was not effective in in vivo applications since, in contrast to single-stranded oligonucleotides, dsRNAs do not bind serum proteins and are quickly excreted from the body. Furthermore, they do not undergo gymnosis—unaided uptake into cells—in contrast to their single-stranded counterparts. This hurdle was countered by their formulation with multicomponent lipid nanoparticles (LNPs), which were used for the first siRNA drug, patisiran, in the treatment of hereditary transthyretin-mediated amyloidosis. However, LNPs have mostly fallen out of favor for siRNA formulations, because of the complexity of their composition and their perceived potential for long-term toxicity. Instead, the RNA field adopted a different strategy for siRNA delivery in vivo—the oligonucleotide conjugate. The idea of conjugating functional groups to oligonucleotides to improve their PD and PK properties is decades old. A variety of innovative functional groups have been explored by chemists, ranging from intercalators and peptides to stable metal complexes and polyamines. Today, for improved siRNA (and ASO) delivery, oligonucleotide conjugates can be grouped into those that improve systemic circulation, those that aid cellular uptake, or those that do both. Hydrophobic groups such as cholesterol and other lipids were seen to shift the distribution of siRNAs away from the liver, producing measurable target suppression in kidney, heart, lung, fat, and muscle. Higher hydrophobicity of the conjugate group led to greater tissue retention of the siRNA, although, as reported by several groups, higher siRNA accumulation in a tissue does not correlate with higher gene silencing in the cells of that tissue. The conjugation of receptor-targeting ligands to ASOs and siRNAs has been championed by researchers at Ionis Pharmaceuticals and Alnylam Pharmaceuticals. The conjugation of a targeting ligand composed of three N-acetylgalactosamine (GalNAc) moieties to the 3′-end of the siRNA passenger strand stands alone as a breakthrough for the RNAi field (A). GalNAc ligands show high affinity for asialoglycoprotein receptors (ASGPRs), expressed on hepatic cells. Upon ligand binding, ASGPRs undergo endocytosis, transporting the conjugated siRNA into cells. There, the conjugate is metabolically cleaved, releasing its siRNA into the cytosol. The GalNAc group is so effective for hepatocyte delivery that drug potency is improved by 10- to 30-fold compared to nonconjugated ASOs. This conjugation strategy was clinically validated for siRNAs with the approval of givosiran, targeting delta-aminolevulinic acid synthase 1 (ALAS1), for the treatment of the rare metabolic disorder acute hepatic porphyria, as well as inclisiran, the first RNA drug to treat a common disease—atherosclerotic cardiovascular disease. The consequences of GalNAc targeting for the oligonucleotide therapeutics field cannot be overstated, and leading siRNA and antisense companies have stacked their clinical pipelines with GalNAc conjugates. Researchers are now racing to identify the "next GalNAc". On paper, the approach appears daunting.
For a given target cell type, one needs to identify a highly expressed surface receptor that is internalized upon ligand binding, for which a ligand is available and can be attached at an appropriate position of the oligonucleotide or the siRNA. In addition, the site of conjugation and the composition of the linker should not attenuate the ability of the ligand to interact with its receptor or prevent the receptor from internalizing. Three main classes of conjugate ligands have been investigated for targeting through specific receptors: carbohydrates, peptides, and antibodies. Following the GalNAc example, a tetravalent mannose ligand was conjugated to siRNAs for selective delivery to CD206-expressing macrophages and dendritic cells in vitro and in vivo (B). Similar to the ASGPR, the CD206 receptor is expressed selectively on the cell surface and undergoes fast recycling. Also, as for the GalNAc group, the multivalent ligand showed superior potency over a monovalent ligand, demanding the design and synthesis of a long, structurally complex linker group (B). In mice, these conjugates accumulated and elicited gene silencing in CD206-expressing cells. The most advanced example of receptor-mediated targeting is that of the glucagon-like peptide-1 (GLP-1) agonist, which was developed to target pancreatic beta cells specifically, where GLP1R expression is restricted. Gapmer ASOs conjugated to the 37-amino acid peptide GLP-1 inhibited their targets in the pancreatic cells (C). Astonishingly, these ASOs are devoid of effects at low doses in the liver after systemic administration to ob/ob mice. An analogous approach is being pursued for targets in the brain, using ASOs or siRNAs conjugated to a short 13-amino acid neurotensin peptide that binds with high affinity to the sortilin receptor (D). The neuropeptide was conjugated to morpholino oligonucleotides; however, the reagents have exhibited relatively modest improvements in splice-modulating activity in the cortex and striatum of mice after intracerebroventricular injection. New ways to use oligonucleotide conjugation to improve drug trafficking are underway, as our understanding of how oligonucleotides traffic in cells and in vivo increases. For example, ancillary groups that aid endosomal escape of oligonucleotides in cells, or that help traffic an oligonucleotide to the nucleus, would be of potentially high value. Such initiatives are supported by the development of new, highly sensitive hybridization-based analytical techniques that can quantify oligonucleotides in individual protein complexes or compartments of the cell, or in distinct tissues and organs of the body.

Phosphorothioate linkages—the Dr. Jekyll and Mr. Hyde of oligonucleotide therapeutics

The PS-linkage is an indispensable part of many oligonucleotide drugs and is likely to remain so for the foreseeable future. In the early phases of the field, it powered advances in the technology, thanks to its favorable PK properties, its metabolic stability, its ease of synthesis and its compatibility with the RNase H mechanism. However, the PS-linkage is often maligned for its toxicity, its metabolic instability in some sequence contexts and the hidden secrets of its isomeric composition.
For some applications in vivo, efforts have been made to reduce the number of PS groups in an oligonucleotide, for example, by substituting selected linkages with stable PO groups, with alkyl phosphonates, with phosphoryl guanidine (PN) groups (H) or with mesyl phosphoramidate (MsPA) groups (I). The PN and MsPA groups represent relatively new chemistries that are highly resistant to nucleases and are easily incorporated into the solid-phase synthesis cycle by substituting an azide synthon for sulfur during P(III) to P(V) conversion. During conventional solid-phase PS-oligonucleotide synthesis, the coupling of phosphoramidite building blocks (A) occurs with epimerization, mediated by nucleophilic tetrazole activators. Thus, PS stereochemistry is not controlled, and each linkage in the oligonucleotide exists as an approximate 1:1 ratio of Rp and Sp diastereoisomers (A). The siRNA inclisiran, with six PS groups, therefore comprises up to 64 (2⁶) isomers, whereas the 20-mer ASO pelacarsen has 524,288 (2¹⁹) possible diastereoisomers (B). Ravikumar and Cole studied the influence of various parameters on the Rp/Sp ratios produced during the coupling of conventional MOE phosphoramidites (A), including synthesis scale, solid supports, machines, reagent concentrations, tetrazole activators, and phosphodiester protecting groups. They concluded that the activators and the phosphate protecting groups had the greatest influence during solid-phase synthesis. These findings were consistent with later work by T. Wada on RNAs, and with subsequent work on (si)RNAs demonstrating that subtle changes in the Rp/Sp composition of PS RNAs in siRNAs significantly affect their properties in cells. The pharmacological properties of an oligonucleotide are the sum activity of its component isomers, and each diastereoisomer exhibits its own distinct PD and PK properties. In modern conventional drug development, the use of diastereoisomeric mixtures of drugs is avoided wherever possible. However, due to the strict requirement for quantitative coupling reactions during oligonucleotide synthesis, the field of oligonucleotide therapeutics has been exempt from this condition. Recent developments, however, suggest this aspect should be reexamined, since: (a) it is now possible (though challenging) to synthesize antisense PS-oligonucleotides stereospecifically; (b) a loss of stereochemical reproducibility during manufacturing may have contributed to the failure of the first-generation antisense drug mongersen; and (c) innovative new P(III) (C) and P(V) chemistry (D) has stirred chemists to revisit methods of oligonucleotide synthesis. The main premise of stereopure PS-oligonucleotides—besides the obvious benefits of working with a single molecular entity—is that one may be able to influence (improve) distinct PD and PK properties via control of PS stereochemistry, if methods are available to synthesize and test all possible diastereoisomers. For example, it has been demonstrated that some PS diastereoisomers in a stereorandom population of a PS gapmer show exaggerated toxicity, arising from the chiral interaction of selected PS groups with proteins. This toxicity was attenuated by a switch in the stereochemistry at specific PS centers. Moreover, there is the tantalizing prospect that, through interactions with certain proteins, a distinct PS stereochemistry may, for example, improve potency, aid target cell uptake or mediate allele-specific targeting.
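The isomer counts quoted above follow directly from treating each stereorandom PS linkage as an approximately 1:1 Rp/Sp mixture; a two-line check:

```python
def ps_diastereoisomers(n_ps_linkages: int) -> int:
    """Each uncontrolled PS linkage doubles the number of diastereoisomers."""
    return 2 ** n_ps_linkages

print(ps_diastereoisomers(6))   # inclisiran, 6 PS groups    -> 64
print(ps_diastereoisomers(19))  # pelacarsen, 19 PS linkages -> 524288
```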
Indeed, it has long been known that a DNA segment composed of Rp centers in a PS gapmer oligonucleotide is RNase H-compatible but is quickly degraded by nucleases, whereas the Sp diastereoisomers have better stability but show poor RNase H-compatibility. The major breakthrough in the chemistry of stereopure PS-oligonucleotides was the introduction by the Wada group of new P(III) nucleoside building blocks and activators that enable stereospecific coupling on solid phase. Initially, coupling yields with this chemistry were not sufficient to produce 20-mer oligonucleotides. However, by tinkering with the substituents on the chiral ancillary and the reaction conditions, chemists from Wave Life Sciences prevailed with the first chemical synthesis of "full-length" stereopure PS oligonucleotides. A second outcome of this seminal work was the discovery that a trivalent stereo-motif, 3′-SpSpRp-5′, in the DNA segment of a stereopure PS gapmer provides both nuclease stability and RNase H compatibility, circumventing the longstanding challenge of how to exploit stereopurity in the DNA window of a gapmer. The authors demonstrated that the motif functions in gapmer oligonucleotides with different chemistries in the wings, although its benefit was not observed in some oligonucleotides with stereorandom PS-wings. To date, the properties and applications of fully stereopure PS oligonucleotides have been described in a handful of prominent papers. Critical analysis of the data in these works confirms that a stereodefined PS-backbone shows superior potency and duration of action to its stereorandom counterpart, in vitro and in vivo. Nevertheless, it cannot be forgotten that many factors other than potency play roles on the road to regulatory approval. Pleasingly, the phosphoryl guanidine and mesyl phosphoramidate groups (H,I) can also be combined with Wada phosphoramidites to yield stereopure amidate linkages, thereby reducing the PS content of an oligonucleotide without compromising either metabolic stability or stereopurity. The uncharged stereopure PN modification was incorporated into the wings of gapmer oligonucleotides, as well as splice-switching oligonucleotides, which showed superior activity to their PS counterparts in the CNS. The authors suggested that these enhanced effects occurred through improved oligonucleotide delivery. From the first data with this chemistry, it seems likely that the stereopure PN modification has a bright future in the field. Stereopure PS-gapmer oligonucleotides were "validated" in clinical trials of suvodirsen (a splice-switching oligonucleotide comprising 2′-OMe and 2′-F ribose modifications to treat DMD), as well as rovanersen and lexanersen (for Huntington's disease). However, all three front-runner drugs failed to progress in these challenging disease indications, possibly for reasons of insufficient target exposure (DMD) or mechanism-related toxicity (Huntington's disease). However, the next wave of stereopure PS-oligonucleotides is already in clinical trials, and therefore it seems likely that the approval of the first such drug is only a matter of time.
Pioneering work was carried out by chemists through the 1980s and the 1990s, during which the ribonucleotide structure was systematically modified in efforts: (i) to protect single-stranded antisense oligonucleotides (ASOs) against metabolic degradation, while retaining their ability to hybridize with their targets and to recruit cellular effector enzymes; and (ii) to remain accessible via solid-phase synthesis. The experience gained in these areas streamlined efforts a decade later with a second emerging class of oligonucleotide drugs, the double-stranded small interfering RNAs (siRNAs) . In parallel with this work, major advances were made with oligonucleotide synthesizers, both in terms of synthesis throughput and synthesis scale. The introduction of 96-well machines, such as the Mermade 192, allowed researchers to synthesize oligonucleotides in “high-throughput.” This meant that instead of struggling to predict possible binding sites for potent oligonucleotides on a target mRNA with the help of RNA folding programs, or by assessing GC-content, it became routine in industry to synthesize and screen hundreds of reagents in a brute-force approach to identify experimentally and unambiguously the “best” oligonucleotide. In turn, access to large screening datasets powered the use of machine learning methods that revealed some of the sequence-dependent properties of potent oligonucleotides, as described in 2005 with siRNAs . Meanwhile, at the opposite end of the synthesis spectrum, large capacity synthesizers were introduced, providing gram quantities of oligonucleotide reagents for routine testing in animal disease models, including nonhuman primates. Today, the OligoProcess synthesizer produces up to 15 kg of oligonucleotide in single batches. With these developments ongoing, the field had momentum. The phosphodiesters of a native DNA or RNA oligonucleotide are quickly degraded by ubiquitous nucleases in vivo. Hence, medicinal chemists were tasked with modifying oligonucleotide structures to render them resistant to metabolism. However, researchers were alarmed to find that even minor modifications to the ribonucleotide unit of an ASO could severely reduce its affinity for a complementary RNA. Hence, over a period of two decades, hundreds of nucleoside modifications were designed, synthesized and tested in academia and industry, in search of the “perfect” modification . The synthetic chemistry was resource-intensive, monotonous, and demanding. In most cases, it necessitated the synthesis of the four nucleosides as stable but reactive phosphoramidites ( A), with protecting groups on the exocyclic amino groups of the nucleobases, and good solubility in acetonitrile solvent. These building blocks were subjected to solid phase synthesis, then harsh ammonia treatment, followed by purification and characterization. The resultant oligonucleotide was then evaluated for its binding affinity and selectivity toward a complementary RNA in in vitro assays. Not surprisingly, the rate of attrition was high and most of these modifications fell by the wayside; very few reached clinical evaluation and drug approval. Among the successful modifications, one of the most unusual was the phosphorodiamidate morpholino oligonucleotide (PMO) ( B). Its elegant synthesis involves oxidative-mediated ring opening of the ribonucleoside, followed by ring closure with reductive amination, to produce a nucleobase-substituted morpholine cycle. The morpholines are linked by a phosphorodiamidate backbone . 
This chemistry was tested in the clinic with the splice-switching oligonucleotide eteplirsen. The target of eteplirsen is the pre-mRNA of dystrophin in skeletal muscle cells, to which it binds and alters splicing so as to exclude a deleterious exon. The approval of eteplirsen (2016) for the treatment of Duchenne muscular dystrophy (DMD) was controversial, due to the low level of correction that the drug reportedly achieves in the skeletal muscles of patients . Nevertheless, its approval paved the way for three subsequent PMO drugs (golodirsen, vitolarsen; , casimersen) to address other disease-causing mutations in two other exons of dystrophin for DMD treatment . The morpholino drugs were notable as one of the earliest demonstrations that an antisense drug could rescue a genetically derived, loss-of-function phenotype by altering the splicing of an mRNA. Without doubt, the most successful means to modify DNA and RNA for therapeutic applications comprised two concomitant changes to the structure: exchange of the phosphodiester (PO) for the phosphorothioate (PS) group, as well as substitution of the ribose 2′- O -position ( C; ; ). The pioneering work of F. Eckstein had shown that incorporation of PS linkages into the backbone of an oligonucleotide greatly improves its hydrophobicity and nuclease stability . Fortunately, the PS group was easily adapted to solid-phase synthesis protocols and the modification was found—unexpectedly—to facilitate entry of PS oligonucleotides into cells . Furthermore, PS linkages in an ASO result in its weak binding to serum proteins, such as human albumin that retards its renal clearance and permits a wide distribution of a drug in vivo . Substitution of the hydroxyl group at the 2′-position of the ribose was an obvious avenue of investigation for chemists . A variety of different substituents were studied, ranging from small alkyl groups to alkyl chains containing aromatic, halogenated and amino groups. The most significant breakthrough came with the introduction of the 2′- O -methoxyethyl (MOE) group, described in a 1995 Helvetic publication by P. Martin ( D; ). The MOE group imposes a C3′- endo conformation on the riboses of an oligonucleotide, which enhances hybridization affinity and selectivity for target RNAs . Furthermore, in combination with the PS linkage, an MOE substituent renders an oligonucleotide highly stable to endo - and exo -nucleases. The MOE modification is today the most widely used chemical modification of single-stranded oligonucleotide drugs (for review, see ). The modification was clinically validated with the approval of mipomersen, a 20-mer “gapmer” PS oligonucleotide bearing five MOE-modified riboses flanking a 10-mer DNA “window.” The DNA segment recruits RNase H1 to the target mRNA, thereby mediating its cleavage and terminating synthesis of the target protein . Mipomersen targets the liver as a treatment for familial hypercholesterolaemia (FH), a rare disorder of low-density lipoprotein cholesterol (LDL-C) metabolism . Despite mipomersen not being a commercial success, it generated spectacular data and was celebrated by the field as the first of the new-generation oligonucleotide drugs, able to suppress selectively the expression of a deleterious protein . The approval of mipomersen in the USA (2013) was quickly followed by that of nusinersen (2016), a breakthrough treatment for spinal muscular atrophy (SMA). 
Nusinersen is a fully PS-MOE-modified, 18-mer ASO that binds to SMN2 pre-mRNA and alters its splicing, to switch on production of a functional SMN protein. It was the first oligonucleotide drug to work in the nervous system, confirming findings in the late 1990s that intrathecal delivery into the cerebrospinal fluid was a viable means to administer MOE oligonucleotides into the CNS. Also, it is the only oligonucleotide to date to achieve "blockbuster drug" status. A number of alternative ribose modifications for single-stranded RNA drugs are also worthy of mention. They include the structurally complex bicyclic "locked" nucleic acid (LNA, cEt) modifications and tricyclic deoxyribose (TCA) derivatives that endow oligonucleotides with very high RNA-binding affinities ( E,F). However, for a variety of reasons, they have either fallen at (e.g., miravirsen), or not yet cleared (e.g., danvatirsen), the last hurdles before regulatory approval. Intuitively, it seems likely that some of these structures will eventually achieve success in the clinic. The gapmer design of ASOs provided a workable solution for chemists aiming for a compromise between stability, affinity, and RNase H-compatibility. For siRNAs, the main difficulty with the PD properties was to achieve nuclease stability of the double-stranded RNA (dsRNA) in view of the sensitivity of the RNAi mechanism to structural modifications in the two strands (passenger and guide). Furthermore, the mainstay substituents of antisense oligonucleotides, such as MOE, are poorly accepted by the RISC (RNA-induced silencing complex) machinery in many (but not all) positions of the siRNA duplex. Eventually researchers from Sirna Therapeutics and Alnylam Pharmaceuticals converged on the replacement of all ribonucleotides in an siRNA with intricate arrangements of 2′-O-methyl (OMe) and 2′-fluoro (F) nucleotides ( G). These fully modified siRNAs are then capped with a few terminal PS groups to top up nuclease stability. This structural format was not effective in in vivo applications, since in contrast to single-stranded oligonucleotides, dsRNAs do not bind serum proteins and are quickly excreted from the body. Furthermore, they do not undergo gymnosis—unaided uptake into cells—in contrast to their single-stranded counterparts. This hurdle was countered by their formulation with multicomponent lipid nanoparticles (LNPs), which were used for the first siRNA drug patisiran in the treatment of hereditary transthyretin-mediated amyloidosis. However, LNPs have mostly fallen out of favor for siRNA formulations, because of the complexity of their composition and their perceived potential for long-term toxicity. Instead, the RNA field adopted a different strategy for siRNA delivery in vivo—the oligonucleotide conjugate. The idea of conjugating functional groups to oligonucleotides to improve their PD and PK properties is decades old (for an excellent early review, see ). A variety of innovative functional groups have been explored by chemists, ranging from intercalators and peptides to stable metal complexes and polyamines. Today, for improved siRNA (and ASO) delivery, oligonucleotide conjugates can be grouped into those that improve systemic circulation, those that aid cellular uptake, or those that do both. Hydrophobic groups such as cholesterol and other lipids were seen to alter the distribution of siRNAs from liver, producing measurable target suppression in kidney, heart, lung, fat, and muscle.
Higher hydrophobicity of the conjugate group led to greater tissue retention of the siRNA, although higher siRNA accumulation in a tissue does not correlate with higher gene silencing in cells of the tissue, as reported by several groups. The conjugation of receptor-targeting ligands to ASOs and siRNAs has been championed by researchers at Ionis Pharmaceuticals and Alnylam Pharmaceuticals. The conjugation of a targeting ligand composed of three N-acetylgalactosamine (GalNAc) moieties to the 3′-end of the siRNA passenger strand stands alone as a breakthrough for the RNAi field ( A). GalNAc ligands show high affinity for asialoglycoprotein receptors (ASGPRs), expressed in hepatic cells. Upon ligand binding, ASGPRs undergo endocytosis, transporting the conjugated siRNA into cells. There, the conjugate is metabolically cleaved, releasing its siRNA into the cytosol. The GalNAc group is so effective for hepatocyte delivery that drug potency is improved by 10- to 30-fold, compared to nonconjugated ASOs. This conjugation strategy was clinically validated for siRNAs with the approval of givosiran, targeting delta-aminolevulinic acid synthase 1 (ALAS1), for treating the rare metabolic disorder acute hepatic porphyria, as well as inclisiran, the first RNA drug to treat a common disease—atherosclerotic cardiovascular disease. The consequences of GalNAc-targeting for the oligonucleotide therapeutics field cannot be overstated, and leading siRNA and antisense companies have stacked their clinical pipelines with GalNAc conjugates. Researchers are now racing to identify the "next GalNAc." On paper, the approach appears daunting. For a given target cell type, one needs to identify a highly expressed surface receptor that is internalized upon ligand binding, for which a ligand is available and can be attached at an appropriate position of the oligonucleotide or the siRNA. In addition, the site of conjugation and the composition of the linker should not attenuate the ability of the ligand to interact with its receptor or prevent the receptor from internalizing. Three main classes of conjugate ligands have been investigated for targeting through specific receptors: carbohydrates, peptides, and antibodies. Following the GalNAc example, a tetravalent mannose ligand was conjugated to siRNAs for selective delivery to CD206-expressing macrophages and dendritic cells in vitro and in vivo ( B). Similar to the ASGPR, the CD206 receptor is expressed selectively on the cell surface and undergoes fast recycling. Also, as for the GalNAc group, the multivalent ligand showed superior potency over a monovalent ligand, demanding the design and synthesis of a long, structurally complex linker group ( B). In mice, these conjugates accumulated and elicited gene silencing in CD206-expressing cells. The most advanced example of receptor-mediated targeting is that of glucagon-like peptide-1 (GLP-1), an agonist peptide developed to target specifically pancreatic beta cells, where GLP1R expression is restricted. Gapmer ASOs conjugated to the 37-amino acid peptide GLP-1 inhibited their targets in the pancreatic cells ( C). Astonishingly, these ASOs are devoid of effects at low doses in liver, after systemic administration to ob/ob mice. An analogous approach is being pursued for targets in the brain, using ASOs or siRNAs conjugated to a short 13-amino acid neurotensin peptide that binds with high affinity at the sortilin receptor ( D). The neuropeptide was conjugated to morpholino oligonucleotides.
However, the reagents have exhibited relatively modest improvements in splice-modulating activity in the cortex and striatum of mice after intracerebroventricular injection. New ways to use oligonucleotide conjugation as a means to improve drug trafficking are underway, as our understanding of how oligonucleotides traffic in cells and in vivo increases. For example, ancillary groups that aid endosomal escape of the oligonucleotides in cells or that help traffic an oligonucleotide to the nucleus would be of potentially high value. Such initiatives are supported by the development of new highly sensitive hybridization-based analytical techniques that can quantify oligonucleotides in individual protein complexes or compartments of the cell or in distinct tissues/organs of the body. The PS linkage is an indispensable part of many oligonucleotide drugs and is likely to remain so for the foreseeable future. In the early phases of the field, it powered advances in the technology, thanks to its favorable PK properties, its metabolic stability, its ease of synthesis and its compatibility with the RNase H mechanism. However, the PS linkage is often maligned for its toxicity, its metabolic instability in some sequence contexts and the hidden secrets of its isomeric composition. For some applications in vivo, efforts have been made to reduce the number of PS groups in an oligonucleotide, for example, by substituting selected linkages with stable PO groups, with alkyl phosphonates, with phosphoryl guanidine (PN) groups ( H) or with mesyl phosphoramidate (MsPA) groups ( I). The PN and MsPA groups represent relatively new chemistries that are highly resistant to nucleases and are easily incorporated into the solid-phase synthesis cycle by substituting an azide synthon for sulfur during P(III) to P(V) conversion. During conventional solid-phase PS-oligonucleotide synthesis, the coupling of phosphoramidite building blocks ( A) occurs with epimerization, mediated by nucleophilic tetrazole activators. Thus, PS stereochemistry is not controlled, and therefore each linkage in the oligonucleotide exists as an approximate 1:1 ratio of Rp and Sp diastereoisomers ( A). Thus, the siRNA inclisiran with six PS groups comprises up to 64 (2^6) isomers, whereas the 20-mer ASO pelacarsen has 524,288 (2^19) possible diastereoisomers ( B). Ravikumar and Cole studied the influence of various parameters on the Rp/Sp ratios produced during the coupling of conventional MOE phosphoramidites ( A), including synthesis scale, solid supports, machines, reagent concentrations, tetrazole activators, and phosphodiester protecting groups. They concluded that activators and the phosphate protecting groups had the greatest influence during solid-phase synthesis. These findings were consistent with later work by T. Wada on RNAs, and with subsequent studies on (si)RNAs demonstrating that subtle changes in the Rp/Sp composition of PS RNAs in siRNAs significantly affect their properties in cells. The pharmacological properties of an oligonucleotide are the sum activity of its component isomers. Each diastereoisomer exhibits its own distinct PD and PK properties. In modern conventional drug development, the use of diastereoisomeric mixtures of drugs is avoided wherever possible. However, due to the strict requirement for quantitative coupling reactions during oligonucleotide synthesis, the field of oligonucleotide therapeutics has been exempt from this condition.
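The isomer counts quoted above follow from simple combinatorics: each uncontrolled PS center can adopt two configurations, so the number of species doubles with every PS linkage.

$$N_{\text{isomers}} = 2^{\,n_{\text{PS}}}, \qquad \text{inclisiran: } 2^{6} = 64, \qquad \text{pelacarsen (19 linkages in a 20-mer): } 2^{19} = 524{,}288.$$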
Recent developments, however, suggest this aspect should be reexamined, since: (a) it is now possible (though challenging) to synthesize antisense PS-oligonucleotides stereospecifically; (b) a loss of stereochemical reproducibility during manufacturing may have contributed to the failure of the first-generation antisense drug mongersen; and (c) innovative new P(III) ( C) and P(V) chemistry ( D) has stirred chemists to revisit methods of oligonucleotide synthesis. The main premise of stereopure PS-oligonucleotides—besides the obvious benefits of working with a single molecular entity—is that one may be able to influence (improve) distinct PD and PK properties via control of PS stereochemistry, if methods are available to test/synthesize all possible diastereoisomers. For example, it has been demonstrated that some PS diastereoisomers in a stereorandom population of a PS gapmer show exaggerated toxicity, arising from the chiral interaction of selected PS groups with proteins. This toxicity was attenuated by a switch in the stereochemistry at specific PS centers. Moreover, there is the tantalizing prospect that, through interactions with certain proteins, a distinct PS stereochemistry may, for example, improve potency, aid target-cell uptake or mediate allele-specific targeting. Indeed, it has long been known that a DNA segment composed of Rp centers in a PS gapmer oligonucleotide is RNase H-compatible, but is quickly degraded by nucleases, whereas the Sp diastereoisomers have better stability but show poor RNase H-compatibility. The major breakthrough in the chemistry of stereopure PS-oligonucleotides was the introduction by the Wada group of new P(III) nucleoside building blocks and activators that enable stereospecific coupling on solid phase. Initially, coupling yields with this chemistry were not sufficient to produce 20-mer oligonucleotides. However, by tinkering with the substituents on the chiral auxiliary and the reaction conditions, chemists from Wave Life Sciences prevailed with the first chemical synthesis of "full-length" stereopure PS oligonucleotides. A second outcome of this seminal work was the discovery that a trivalent stereo-motif 3′-SpSpRp-5′ in the DNA segment of a stereopure PS gapmer provides both nuclease stability and RNase H compatibility, circumventing the longstanding challenge of how to exploit stereopurity in the DNA window of a gapmer. The authors demonstrated that the motif functions in gapmer oligonucleotides with different chemistries in the wings, although its benefit was not observed in some oligonucleotides with stereorandom PS-wings. To date, the properties and applications of fully stereopure PS oligonucleotides have been described in a handful of prominent papers. Critical analysis of the data in these works confirms that a stereodefined PS backbone shows superior potency and duration of action to its stereorandom counterpart, in vitro and in vivo. Nevertheless, it cannot be forgotten that many factors other than potency play roles on the road to regulatory approval. Pleasingly, the phosphoryl guanidine and mesyl phosphoramidate groups ( H,I) can also be combined with Wada phosphoramidites to yield stereopure amidate linkages, thereby reducing the PS content of an oligonucleotide without compromising either metabolic stability or stereopurity.
The uncharged stereopure PN modification was incorporated into the wings of gapmer oligonucleotides, as well as splice-switching oligonucleotides, which showed superior activity to their PS counterparts in the CNS. The authors suggested that these enhanced effects occurred through improved oligonucleotide delivery. From the first data with this chemistry, it seems likely that the stereopure PN modification has a bright future in the field. Stereopure PS-oligonucleotides were "validated" in clinical trials of suvodirsen (a splice-switching oligonucleotide comprising 2′-OMe and 2′-F ribose modifications to treat DMD), as well as rovanersen and lexanersen (for Huntington's disease). However, all three front-runner drugs failed to progress in these challenging disease indications, possibly for reasons of insufficient target exposure (DMD) or mechanism-related toxicity (Huntington's disease). However, the next wave of stereopure PS-oligonucleotides is already in clinical trials and therefore it seems likely that the approval of the first such drug is only a matter of time. The examples of medicinal chemistry discussed in this Perspective—chemical modifications, oligonucleotide conjugates, PS stereochemistry—were selected to highlight three areas of future challenges for medicinal chemists in the oligonucleotide therapeutics field. Arguably, the need for new ribonucleoside modifications in the field has receded in recent times. This is due to the ready accessibility of MOE and LNA chemistries. When combined with "routine" high-throughput synthesis/screening methods, potent oligonucleotides can be produced against any target for which the sequence is known, as originally envisioned by Zamecnik and Stephenson. Furthermore, barriers of intellectual property related to these ribose chemistries have mostly ebbed away, leaving freedom to operate in the field. Reassured by the success of nusinersen that the technology can deliver, many large pharma companies have initiated oligonucleotide (antisense or siRNA) programs. For example, dozens of MOE oligonucleotides are at various stages of clinical testing, sponsored by a variety of companies. Many of these clinical candidates are intended for use in rare diseases, where targets are clinically validated and competition with conventional drug classes is sparse. However, a growing number of programs are directed to the treatment of common diseases, with large patient populations. If only a small fraction of these new programs is clinically successful, it will likely create a strain on contract research organizations for oligonucleotide manufacture. On the other hand, it will also motivate chemists to seek out new methods of oligonucleotide synthesis that are better scalable and "greener" than current methods. Such initiatives may range from the development of new solid supports with higher loadings (similar to peptide solid supports), through solution-phase synthesis, to even enzymatic synthesis. Recently, many research groups have turned to the area of oligonucleotide conjugates for enhanced oligonucleotide delivery. The delivery problem has been described as having two parts: first, how to transport the oligonucleotide to the target organ of interest, and then, how to deliver it into the right cellular compartments. Oligonucleotide conjugates offer excellent possibilities to address both objectives, possibly with dedicated conjugate groups for each.
However, current oligonucleotide conjugates have high structural complexity for chemical synthesis (see structures drawn in full in ). This complicates their development in the areas of synthesis/manufacture and companion analytics, as well as their metabolism and toxicity. These factors can be underestimated by chemists engaged in exploratory research. However, process chemists responsible for preclinical and clinical development of the drugs are sensitive to their large structures, where the conjugated group represents a significant part of the overall structure. Indeed, the manufacturing of inclisiran ( A), containing the tri-antennary GalNAc ligand, is a formidable achievement. One means to simplify these structures would be to replace carbohydrate- and peptide-targeting ligands with small-molecule ligands that are equally capable of binding selectively and potently to internalizing cell-surface receptors. A few reports describe targeting with small-molecule ligands, for example, anisamide or anandamide, but as yet this appears to be a largely unexplored area. Over the years, oligonucleotide chemists have reveled in the "which is the best" arguments: Is an LNA superior to an MOE modification? Is the siRNA better than an ASO? Is a lipid nanoparticle formulation better than a conjugate? This banter extends to the merits of stereopure PS-oligonucleotides, and discussions between those with opposing views will continue, at least until the approval of the first stereopure PS drug settles the question. Based on emerging work, it appears that PS stereochemistry has much to offer in terms of improving the PK and PD properties of oligonucleotides. The challenge here is to design experiments that can link a particular fingerprint of PS stereochemistry to the desired property of interest. In conclusion, young chemists can rest assured: there is still a need for innovation in the oligonucleotide therapeutics field.
The role of brain radiotherapy for EGFR- and ALK-positive non-small-cell lung cancer with brain metastases: a review | 114549cc-0239-489c-9d42-ad2afc2ded3e | 10020247 | Internal Medicine[mh] | Non-small cell lung cancer (NSCLC) is the first and second most frequent cause of death from cancer in men and women, respectively. Adenocarcinoma is the most represented histology with increasing incidence in western countries (> 50%) . Patients diagnosed in advanced or metastatic stage (mNSCLC) have poor prognosis with less than 5% of them surviving more than 5 years . The increased incidence of brain metastases (BMs) is likely resulting from longer patient survival due to more effective systemic therapies for the primary cancer and increased use of neuroimaging in neurologically asymptomatic patients that has allowed prompter treatments of this subset of patients . Before molecular targeted therapy and immune-checkpoint inhibitors monoclonal Antibodies (ICI moAbs), standard treatment was chemotherapy doublet with platinum (either cisplatin or carboplatin) and a second chemotherapeutic drug arbitrarily chosen among gemcitabine, paclitaxel, vinorelbine or pemetrexed eventually combined with anti VEGF mAbs (bevacizumab) (the latter two options restricted to non-squamous histology) . Thanks to the detection of EGFR gene alterations and ALK-rearrangements (10–30 and 3–7% of patients, respectively) and other driver mutations critical for lung cancer tumorigenesis and promotion, we have entered a new era of personalized therapy in the treatment of lung cancer patients driven by genotyping . Despite these breakthroughs in the treatment of advanced mNSCLC, several points still remain open, in particular for patients who present “ab initio” or develop late BMs . It is noteworthy the BMs are detected in 24.4% in EGFR-mutation patients and 23.8% in ALK-rearrangements patients at the time of diagnosis and respectively 46.7% and 58.4% within 3 years from the diagnosis . Therefore, the present review aims to describe the multidisciplinary strategies in patients with mNSCLC adenocarcinoma with CNS involvement and EGFR activating mutations or ALK rearrangement.
Frequency of BMs in EGFR/ALK mutant NSCLC
The detection of synchronous BM during the staging of NSCLC is a challenging event for the clinical management of these patients. A recent epidemiological study conducted by Suresh K. et al. suggests a greater incidence of synchronous BM in NSCLC patients bearing EGFR/ALK driver mutations/translocations compared to other patient subsets (62% vs 57%, respectively; P < 0.05), with median survival not exceeding 14.6 months. EGFR-activating mutations mainly occur in younger women and never-smokers with adenocarcinoma histology. These patients have a high (50–70%) risk of BMs, and about one third of them develop CNS progression during the course of treatment. Additionally, the risk of CNS relapse appears to be higher in patients bearing the L858R point mutation. Interestingly, the type of EGFR mutation seems to be related to specific patterns of BMs, as suggested by a recent retrospective radiologic analysis of 57 NSCLC patients that recorded a multinodular BM pattern in patients bearing an exon 19 deletion. On the other hand, ALK rearrangement is rare and is detected in approximately 3–7% of patients with a diagnosis of NSCLC. As with EGFR mutations, ALK rearrangement is recorded in young, non-smoking men with non-squamous histology, who are amenable to treatment with crizotinib, an ATP-competitive, orally bioavailable ALK inhibitor first employed for the treatment of EML4-ALK-positive NSCLC. Unfortunately, nearly one third of patients bearing an ALK rearrangement and receiving crizotinib develop CNS metastases within one year of therapy, sometimes as the only extra-thoracic site of tumor progression. In this context, the development of second- and third-generation ALK inhibitors, such as alectinib in the front line and lorlatinib in subsequent treatment lines, has achieved greater effectiveness in terms of intracranial response and better outcomes for these patients, overcoming the mechanisms of resistance to crizotinib.
The management of BM with systemic anticancer drugs presents great limitations due to the presence of a functional blood–brain barrier (BBB), while loco-regional interventions (surgery and radiation therapy) can also damage the adjacent healthy tissue. Treatments with the first- and second-generation EGFR TKIs, including erlotinib/gefitinib and afatinib, surpass the response rate, PFS and survival obtained with doublet chemotherapy. More recently, osimertinib has emerged as an active third-generation EGFR TKI in the front-line setting, as well as in patients with the T790M mutation responsible for acquired resistance to the other EGFR TKIs or with CNS lesions. Selected studies reported very promising activity of EGFR TKIs in fit patients with BMs (intracranial response rates of 75–88%, and median intracranial PFS and OS of 6.6–14.5 and 15.9–21.8 months, respectively). The progressively better understanding of EGFR mutations in mNSCLC has allowed the set-up of the Lung Cancer Molecular Markers Graded Prognostic Assessment (Lung-molGPA), based on EGFR status as the main target combined with other clinical parameters, to guide clinical decisions in patients newly diagnosed with BM (see Table ). It is noteworthy that the efficacy of EGFR TKIs is not clearly ascertained in patients with symptomatic or uncontrolled BM, because this patient subset was mostly excluded from pivotal, randomized controlled trials. The data concerning a potential efficacy of EGFR-TKI therapy in patients with mNSCLC, mutated EGFR and BMs have mostly been derived from retrospective studies or indirect evidence. First-generation TKIs (gefitinib, erlotinib) reversibly block the EGFR receptor and achieve a mean survival time of 33.1 months. This implies a more likely onset of CNS disease, cutting life expectancy to 5.1 months from the diagnosis of BM. Despite their low molecular weight, incomplete penetration through the BBB is responsible for the low CNS concentrations of gefitinib and erlotinib and the worse prognosis of these patients. Afatinib is a second-generation TKI that irreversibly binds to the EGFR receptor with higher affinity compared to first-generation TKIs. Two studies, LUX-Lung 3 and LUX-Lung 6, demonstrated the superiority of afatinib over platinum-based doublets also in patients with asymptomatic BMs. The LUX-Lung 7 trial compared gefitinib to afatinib, including patients with BMs. Despite its promise as a second-generation irreversible EGFR-targeted agent, afatinib showed no superiority over the first-generation agents (except in some of the less common EGFR mutations) and a less manageable toxicity profile. Osimertinib is a further EGFR TKI that proved very active in mNSCLC/EGFR-mutant patients who developed the EGFR T790M mutation, known to be the most common mechanism of resistance to first- and second-generation TKIs, detected in 50–60% of patients who show progression. Osimertinib also showed superiority over chemotherapy in this subset of patients with BMs. The efficacy of osimertinib in EGFR-mutant mNSCLC was demonstrated by the results of the AURA 3 trial and subsequently confirmed in the FLAURA trial, where it also proved superior to first-generation EGFR TKIs in terms of PFS and OS. In particular, the median duration of CNS response reported in the AURA 3 trial was 8.9 months with osimertinib versus 5.7 months with chemotherapy. Moreover, the FLAURA clinical trial similarly showed the efficacy of osimertinib in patients with CNS metastases.
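As a practical note on the Lung-molGPA mentioned above, the score is a simple weighted sum of clinical parameters. The Python sketch below illustrates the general shape of such a calculator; the point values and cut-offs are transcribed from memory of the published Lung-molGPA table and must be treated as assumptions to be verified against the original publication (and the Table referenced in the text) before any use.

```python
# Illustrative Lung-molGPA calculator (adenocarcinoma). IMPORTANT: the weights
# below are assumed from memory of the published Lung-molGPA table; verify
# every value against the original publication before relying on the score.

def lung_mol_gpa(age: int, kps: int, ecm_present: bool,
                 n_brain_mets: int, egfr_or_alk_positive: bool) -> float:
    score = 0.0
    score += 0.5 if age < 70 else 0.0             # age (assumed weight)
    if kps >= 90:                                  # Karnofsky performance status
        score += 1.0
    elif kps >= 70:
        score += 0.5
    score += 0.0 if ecm_present else 1.0           # extracranial metastases
    score += 0.5 if n_brain_mets <= 4 else 0.0     # number of brain metastases
    score += 1.0 if egfr_or_alk_positive else 0.0  # EGFR/ALK gene status
    return score  # ranges from 0.0 (worst) to 4.0 (best prognosis)

# Example: 62-year-old, KPS 90, no ECM, 2 BMs, EGFR-mutant -> score 4.0
print(lung_mol_gpa(62, 90, False, 2, True))
```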
Interestingly, within this trial it was shown that the presence of the uncommon C797S EGFR mutation was strongly predictive of resistance to osimertinib, prompting the search for further drugs able to overcome this mechanism of resistance. Nevertheless, the antitumor effects of single-agent osimertinib on CNS metastases are unclear, because these studies included patients treated with RT, whose effects can be delayed. The OCEAN study was a two-cohort trial showing the efficacy of osimertinib in achieving a BM response rate (BMRR) in RT-naïve patients with T790M EGFR-mutated NSCLC, especially in the presence of an exon 19 deletion. Another interesting drug in this setting is AZD3759, an oral EGFR TKI specifically designed for CNS penetration that caused tumor regression in leptomeningeal and BM mouse models. Preliminary results of the phase I BLOOM study of 38 EGFR-mutant NSCLC patients with BM or leptomeningeal metastasis (LM) treated with AZD3759 showed an intracranial ORR of 63%. Table summarizes the prospective trials of three generations of EGFR TKIs in EGFR-mutant NSCLC with BM.
The EML4-ALK fusion gene is a rare alteration, occurring in 3–7% of mNSCLC, that induces the constitutive activation of the ALK tyrosine kinase and downstream pathways. This subset of patients with CNS involvement is highly responsive to frontline treatment with ALK TKIs. Crizotinib was the first ALK TKI approved in these patients, based on the successful results of the phase 3 PROFILE 1014 study. Notwithstanding, CNS relapse was approximately 30% more frequent with crizotinib than with chemotherapy within the first year of treatment. In the ALEX phase 3 clinical trial, alectinib, a second-generation ALK TKI, was compared to crizotinib in first-line treatment of metastatic ALK-positive NSCLC, showing longer PFS and better brain control. During the first 12 months, the incidence of CNS progression with alectinib or crizotinib treatment was 9.4% versus 41.4%, respectively. Alectinib showed better intracerebral disease control, with a median PFS of 25.7 months versus the 10.4 months recorded for crizotinib. Further studies detected multiple resistance mutations responsible for treatment failure with ALK TKIs, including I1171N, which confers tumor resistance to alectinib. This resistance, however, may be overcome by the use of ceritinib. When evaluated in the phase 3 clinical trial ASCEND-4 versus doublet chemotherapy as frontline therapy in patients with BMs bearing an ALK rearrangement, ceritinib achieved a better reduction of measurable CNS lesions (72.7% vs. 27.3%). Additionally, the ASCEND-1 trial in patients with ALK rearrangement recorded a total intracerebral ORR of 63% in naïve patients and 36% in mNSCLC patients who had received ceritinib as salvage therapy after previous treatment lines with other ALK TKIs. These results were mostly confirmed in the ASCEND-2 trial, where the use of ceritinib resulted in an intracerebral ORR of 85% in chemo-naïve patients and 40% in those who had received previous ALK-TKI lines. G1202R is another well-known ALK mutation, conferring resistance to either first- or second-generation ALK TKIs and potentially overcome using the newest TKIs brigatinib and lorlatinib. Both drugs have in fact been designed for their ability to penetrate the BBB and to overcome the resistance to TKIs approved for frontline treatment. Naito T. and colleagues have recently reviewed the substantial activity of brigatinib in controlling CNS metastases in crizotinib-treated (ALTA trial) and crizotinib-naïve (ALTA-1L trial) patients with ALK rearrangement, with or without specific resistance mutations. They also reported an analogous activity of lorlatinib in NSCLC patients with intracranial lesions bearing ALK or c-ros oncogene 1 (ROS1)-positive rearrangements/mutations. Thanks to its activity against the ALK G1202R mutation (responsible for resistance to first- and second-generation ALK inhibitors), lorlatinib is a valid therapeutic option. Updated results from the phase 3 CROWN trial, which evaluated lorlatinib versus crizotinib in people with previously untreated ALK-positive advanced NSCLC, reported that after a median follow-up of three years lorlatinib continues to demonstrate meaningful improvement in PFS compared to crizotinib (HR, 0.27; 95% CI, 0.18–0.39), corresponding to a 73% reduction in the rate of progression or death. Moreover, lorlatinib treatment resulted in a 92% reduction in the rate of intracranial progression (HR, 0.08; 95% CI, 0.04–0.17).
The intracranial objective response rate (IC-ORR) for people with measurable BM at baseline was 83% (95% CI, 59–96; n = 15) with lorlatinib and 23% (95% CI, 5–54; n = 3) with crizotinib, with intracranial complete response rates of 72% and 8%, respectively. In people without BMs at baseline, lorlatinib demonstrated a 98% reduction in the rate of intracranial progression (HR, 0.02; 95% CI, 0.002–0.136). Finally, the long-term results from the CROWN trial confirm lorlatinib's compelling safety and efficacy profile in the first-line setting and a sustained benefit for up to three years for this patient population. Table summarizes the prospective trials of three generations of ALK inhibitors in ALK-rearranged NSCLC with BMs.
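As a side note on the CROWN figures quoted above, the percentage reductions in the rate of progression follow directly from the reported hazard ratios:

$$\text{rate reduction} = 1 - \text{HR}: \quad 1 - 0.27 = 0.73 \;(73\%), \qquad 1 - 0.08 = 0.92 \;(92\%), \qquad 1 - 0.02 = 0.98 \;(98\%).$$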
The use of radiation therapy/radiosurgery and/or surgery remains the backbone of BM management in mNSCLC patients, due to the low permeability of the BBB to most conventional anticancer drugs. Nevertheless, this statement has been partially challenged for patients with oncogene-driven NSCLC. Currently, whole-brain radiotherapy (WBRT) and focal radiotherapy are integrated with either surgery or systemic therapies within a multimodal approach. WBRT has been the standard approach to the treatment of BMs from NSCLC, thanks to an improvement of symptoms and distant BM control in 70–93% and 60–80% of patients, respectively. The neurocognitive toxicity and the lack of impact on the survival of mNSCLC patients with BMs have determined a progressive decline of WBRT in favor of less invasive strategies, including stereotactic radiosurgery (SRS). In a phase III study, WBRT and SRS equally affected OS, but SRS caused less decline in neurocognitive function (53% with WBRT plus SRS vs. 20% with SRS alone), at the cost of an increased risk of further intracranial relapse. This risk, however, could theoretically be counterbalanced by strict follow-up and new salvage SRS on recurrent BMs. Furthermore, appropriate systemic therapy may delay further intracranial progression, as more recently observed in patients with mNSCLC receiving multimodal treatment with SRS and immunotherapy. Therefore, mNSCLC patients with BM should be evaluated within a competent multidisciplinary team. Surgery may be offered to patients with solitary large brain metastases to counteract the expanding mass effect in the CNS, whereas in patients with a single BM, SRS and surgery are equally effective in terms of local recurrence (LR) and OS (despite the impact of multiple significant covariables). It is noteworthy that patients with BMs require supportive care to prevent and treat the frequent complications (e.g., cerebral edema, epilepsy, pain), and this should drive the decision making prior to combining ablative therapy and EGFR TKIs. A major argument against the use of brain RT favors the newest anticancer drugs in mNSCLC, which, on the one hand, cross the BBB with no damage to the healthy CNS (i.e., no radionecrosis) and, on the other hand, obtain satisfactory intracranial disease control. However, it cannot be ruled out that upfront locoregional treatment of BM could prevent intracranial progression in selected patients on TKIs with expected long survival.
Treatment strategies based on BM numbers and dimension
Brain oligometastatic disease is a common scenario in which the number of brain lesions becomes a "moving target" whose management is still far from established. Patients with a single metastatic brain lesion experience significantly longer survival, with minimal cognitive impairment and CNS symptoms (other than seizures), compared to patients with multiple metastases. Moreover, it has been shown that postoperative radiotherapy may significantly reduce the risk of local recurrence, whereas combined use of the two locoregional treatments improves the neurologic control of disease and the survival of these patients. Although WBRT has long been recognized as the standard adjuvant procedure after BM resection, a large phase III trial revealed a longer cognitive-deterioration-free survival of patients on SRS compared to WBRT, with comparable effects of the two treatments in terms of OS.
A further study evaluated SRS focused on the surgical cavity in patients with radical resection of 1–3 BMs and revealed that this prophylactic radiotherapy reduced the local recurrence rate at 12 months, with no effect on OS. The results of the two trials prompted the adoption of SRS as the new standard after surgical resection of BMs. The BM scenario is still more complex in mNSCLC patients with specific oncogene addiction. The results of recent studies in mNSCLC in fact suggest a significant heterogeneity in the expression (about 20%) of EGFR mutations, with great discordance recorded between the primary tumor and brain lesions. Therefore, a further brain biopsy to confirm the presence of EGFR mutations also in the brain lesions should be recommended to define a personalized treatment strategy including SRS. A rising number of recent studies focus on the comparison of WBRT vs. SRS and indicate that SRS is an important alternative to WBRT in fit patients. Japanese researchers reported the results of the prospective JLGK0901 trial, indicating that SRS is still relevant in the presence of more than three CNS lesions. The use of SRS was associated with a median OS of 13.9 months (455 cases), 10.8 months (531 cases) and 10.8 months (208 cases) in patients with a single BM, 2–4 treated BMs, and 5–10 treated BMs, respectively. However, a retrospective study conducted by Balasubramanian et al. showed that the use of targeted therapy along with surgery and/or radiation may improve OS in EGFR-mutant mNSCLC patients regardless of the number of BMs. SRS and more conservative strategies are also gaining ground in large brain metastases with a diameter of > 2 cm. Patients with large BMs commonly present severe, invalidating neurologic symptoms and/or significant vasogenic edema or mass effect requiring rapid upfront surgical resection when feasible. Subsequent post-operative SRS (median dose 15 Gy) after GTR, with average volumes of 8.7 and 9.6 mL, can improve both LC and OS. Drawbacks to this treatment are always possible, such as neurological complications from extensive resection and the risk of symptomatic radionecrosis associated with a large planning target volume margin size (> 1.0 mm) for SRS. Jhaveri et al. carried out a multivariate analysis in mNSCLC, whose results showed that a GTV > 15 cc is the main risk factor predictive of local recurrence. Additionally, volumes of healthy brain tissue larger than 10 mL receiving 12 Gy (V12 Gy) are directly correlated with radionecrosis (between 15 and 55%); hence, when V12 exceeds 8.5 mL, the use of fractionated SRS (fSRS; e.g., 30 Gy/5 fx or 27 Gy/3 fx) is advised in order to reduce this risk while still maintaining an improvement of LC, especially when the BM lesions are located in or near eloquent areas. To this purpose, the A071801 phase III trial, aimed at evaluating the efficacy of SRS compared with fSRS for resected BMs in mNSCLC patients, is ongoing [NCT04114981], with results expected by the end of 2022.
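As a concrete illustration of the V12 metric discussed above, the sketch below shows how the volume of healthy brain receiving at least 12 Gy could be computed from a 3D dose grid. The dose array, voxel spacing and brain mask are hypothetical toy inputs, since a clinical treatment planning system reports this metric directly; the 8.5 mL and 10 mL thresholds mirror the values quoted in the text.

```python
import numpy as np

def v12_ml(dose_gy: np.ndarray, brain_mask: np.ndarray,
           voxel_mm=(2.0, 2.0, 2.0), threshold_gy: float = 12.0) -> float:
    """Volume (mL) of masked healthy brain receiving >= threshold_gy."""
    voxel_volume_mm3 = voxel_mm[0] * voxel_mm[1] * voxel_mm[2]
    n_hot = np.count_nonzero((dose_gy >= threshold_gy) & brain_mask)
    return n_hot * voxel_volume_mm3 / 1000.0  # convert mm^3 to mL

# Toy example: a random dose grid and an all-brain mask (illustrative only).
rng = np.random.default_rng(0)
dose = rng.uniform(0.0, 18.0, size=(60, 60, 60))  # hypothetical dose grid (Gy)
mask = np.ones_like(dose, dtype=bool)             # hypothetical healthy-brain mask
v12 = v12_ml(dose, mask)

# Thresholds from the text: radionecrosis risk rises with V12 > 10 mL, and
# fractionated SRS is advised when V12 > 8.5 mL.
print(f"V12 = {v12:.1f} mL" + (" -> consider fSRS" if v12 > 8.5 else ""))
```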
Radiotherapy techniques
SRS for < 10 mm BM is a high-precision treatment that requires a high level of technology. SRS can be delivered using different machines, with invasive (frame-based) or frameless immobilization, and with X-ray photons or gamma rays. Several decades ago, in 1968, the Gamma Knife (GK) was introduced as a new treatment modality for SRS. The GK is a frame-based SRS system that uses 60Co sources for irradiating a tumor volume with a diameter of approximately 4, 8, or 14 mm. GK is mainly characterized by non-homogeneous dose distributions within the target due to the effect of overlapping shots. The CyberKnife (CK) was invented at Stanford Health Care and debuted in 1994. CK is an image-guided, frameless robotic technology designed to deliver non-isocentric, non-coplanar beams, and the entire treatment procedure is completely non-invasive. Despite the differences in treatment planning and dose delivery, no significant differences were found in the quality of clinical outcome between GK and CK after SRS. Linear accelerator (LINAC)-based radiosurgery was developed as an alternative to GK SRS, using a standard LINAC modified for stereotactic purposes. Recent technical advances have made LINAC-based SRS (using multiple non-coplanar intersecting arcs) a patient-friendly, non-invasive technique, allowing for accurate patient positioning and a short treatment time. Following the technical improvements in treatment planning systems, LINAC-based SRS was marketed as having acceptably similar precision, accuracy, and mechanical stability for the treatment of numerous and small BMs. Accordingly, LINAC-based SRS has been rapidly disseminating in the community in the last decades and, despite the lack of systematic comparisons with GK-SRS, clinical results appear to be similar. LINAC-based SRS is considered a changing practice pattern in the treatment of BM in NSCLC, considering also the benefit in cost-effectiveness analyses compared to GK or CK SRS.
Treatment strategies for critical areas
Additional comments are needed for BM in critical CNS areas, including the brainstem and optic pathway. Brainstem lesions are rare (3–5% of all BMs) and are not amenable to surgery because of the high risk of mortality or further functional impairment. Brainstem metastases come with a poor prognosis, and estimated survival without treatment is dramatically poor (from one to six months). SRS is recommended for the treatment of brainstem metastases, with a median dose of 16 Gy (range 11–39 Gy) delivered in a median of 1 fraction (range 1–13). In a recent large meta-analysis including 15,900 brainstem metastases treated with SRS, the 1-year LC was 86%, with an objective response rate of 59% and symptom improvement in 55%. Grade 3–5 toxicity was 2.4%, and deaths from progression after SRS are rare. Isolated optic nerve metastases are similarly rare but result in a unilateral or bilateral loss of the visual field. Based on the experience with gliomas and perioptic tumors, prompt fractionation or multi-session radiosurgery is an option for treating this subset of patients, with a 1–2% risk of visual complications. These favorable results suggest the feasibility of local treatment in patients with NSCLC metastases in critical areas, regardless of molecular status and systemic therapy.
Combined treatment strategies and choice of the optimal timing
The combination of RT and TKI for BMs is still controversial. Results of the prospective study by Jiang et al. showed no advantage of adding early WBRT to TKI over TKI alone. The results of a recent retrospective study showed a trend toward a significant advantage (although no difference in OS) of the RT-plus-TKI combination vs. TKI alone in terms of median intracranial PFS (27.6 vs. 16.1 months; p = 0.053). A large meta-analysis including 1,041 unselected NSCLC patients with BMs from 9 retrospective studies and 1 randomized controlled trial, aimed at investigating the combination of WBRT with EGFR TKI vs.
WBRT alone or EGFR-TKI therapy alone, showed the best hazard ratios for intracranial PFS in patients who received EGFR TKI alone. More recently, a retrospective analysis comparing SRS + TKI vs. WBRT + TKI vs. TKI alone reported a significant advantage in terms of iPFS (23 vs. 24 vs. 17 months, respectively; p = 0.025) and OS (46 vs. 30 vs. 25 months, respectively; p = 0.001) in the first group. A retrospective cohort of patients harboring EGFR-activating mutations treated with consolidative local ablative treatment showed improved OS after first-line TKI. Interestingly, the BM site significantly affected the improved survival achieved with additional local treatment vs. patients receiving exclusive systemic treatment (38.2 versus 29.2 months, HR = 0.48, 95% CI 0.30–0.76, p = 0.002). Another meta-analysis provided evidence that early RT in these patients offers a significant iPFS and OS advantage that is strictly correlated with the number of BMs, with the best results achieved in those with fewer than three brain lesions. On the contrary, no advantage was recorded in the other patients or in those showing massive disease. On these bases, no conclusive therapeutic statements may be defined, and early radiotherapy continues to have a fundamental role in the treatment of NSCLC patients with BMs harboring EGFR-activating mutations. As for the possible prognostic advantage of upfront RT followed by TKI therapy, out of a multicentric series of 351 patients with BM from EGFR-mutated NSCLC, the 100 patients treated with SRS followed by TKI therapy achieved the best therapeutic results (median survival 46, 30, and 25 months, respectively; p < 0.001), compared to 120 treated with WBRT followed by TKI and 131 treated with TKI followed by SRS or WBRT at progression. At multivariate analysis, prognostic features did not significantly differ between the upfront SRS and EGFR-TKI cohorts, whereas the WBRT cohort was more likely to have a less favorable prognosis (p = 0.001). Despite the risk of selection biases, because SRS is usually adopted for a limited number of BMs, this study shows the safety and effectiveness of an elective RT procedure within a multidisciplinary therapeutic approach and warrants further investigation. The efficacy of concurrent radiotherapy and EGFR TKIs is still unclear. A retrospective study involving 44 EGFR-mutant NSCLC patients who received concurrent radiotherapy and TKI recorded frequent and severe AEs, with two patients having to discontinue treatment due to grade ≥ 3 cutaneous toxicity. Additionally, the reported radiation-related AEs included hydrocephalus (2 patients), pneumonitis (3 patients, one grade ≥ 3), myocarditis (1 patient), radiodermatitis (3 patients), laryngo-pharyngitis (2 patients), esophagitis (2 patients), and enteritis (1 patient). Preliminary reports suggested improved survival of NSCLC patients bearing an ALK rearrangement and treated with radiotherapy for BM. The introduction of targeted treatment has improved the response of these patients, although the intrinsic radiosensitivity of ALK-rearranged cells seems to play a prevalent role. Johung et al. reported a median life expectancy of 49.5 months in BM patients receiving both ALK-targeted therapy and radiotherapy. The addition of radiotherapy to the first-generation ALK TKI crizotinib significantly improved response rate and progression-free survival in patients with BMs in multiple studies.
However, the therapeutic landscape is rapidly changing following the development of new generations of ALK TKIs with an enhanced capability to diffuse through the BBB. Although a benefit of radiotherapy in association with second-generation drugs such as ceritinib or alectinib, or third-generation drugs such as lorlatinib (as upfront therapy or following progression after crizotinib), has not been shown, it should be pointed out that, because of the small study populations and heterogeneous treatments with SRS and/or WBRT, these studies were not conclusive and did not support the deferral of local treatment. Radiotherapy and lorlatinib may act cooperatively by targeting different intracranial compartments, and case reports suggest that lorlatinib might be effective in intracranial sites that are traditionally considered unfit for radiotherapy, such as symptomatic leptomeningeal dissemination, leading to impressive disease response ("Lazarus effect").
Brain oligometastatic disease is a common scenario in which the number of brain lesions becomes a “moving target” whose management is still far to be established. Patients with a single metastatic brain lesion experience significantly longer survival with minimal cognitive impairment and CNS symptoms (other than seizures) compared to patients with multiple metastases. Moreover, it has been shown that postoperative radiotherapy may significantly reduce the risk of local recurrence, whereas combined use of the two locoregional treatments improves the neurologic control of disease and the survival of these patients . Although WBRT has been long recognized as the standard adjuvant procedure after BM resection, a large phase III trial revealed a longer cognitive-deterioration-free survival of patients on SRS compared to WBRT with comparable effects of the two treatments in term of OS . A further study compared the effects of SRS focused on the surgical cavity in patients with radical resection of 1–3 BMs and revealed that the prophylactic radiotherapy reduced the local recurrence rate at 12 months with no effects on OS . The results of the two trials prompted the adoption of SRS as the new standard after surgical resection of BMs . The BM scenario is still more complex in mNSCLC patients with specific oncogene addiction. The results of recent studies in mNSCLC in fact suggest a significant heterogenicity in the expression (about 20%) of EGFR mutations with great discordance recorded between primary tumor and brain lesions . Therefore a further brain biopsy to confirm the presence EGFR mutations also in the brain lesions should be recommended to define a personalized treatment strategy including SRS. A rising number of recent studies focus on the comparison of WBRT vs. SRS and indicate that SRS is an important alternative to WBRT in fit patients. Japanese researchers reported the results of the prospective JLGK0901 trial indicating that SRS is still relevant in the presence of more than three CNS lesions . The use of SRS was associated to a median OS of 13.9 months (455 cases) 10.8 months (531 cases) and 10.8 months (208 cases) in patients with single BM, 2–4 treated BMs, and of 5–10 treated BMs, respectively. However, a retrospective study conducted by Balasubramanian et al. showed that the use of target therapy along with surgery and/or radiation may improve the OS on EGFR mut mNSCLC patients regardless the number of BMs. SRS and more conservative strategies are gaining further field of application also in large brain metastases with a diameter of > 2 cm. Patients with large BMs, commonly present severe neurologic invalidating symptoms and/or significant vasogenic edema or mass effect requiring fast upfront surgical resection when feasible. The subsequent post-operative SRS (median dose 15 Gy) after GTR, with average volume of 8.7 e 9.6 mL, can improve both LC and OS . Drawbacks to this treatment are always possible as neurological complication because of extensive resection and risk of symptomatic radionecrosis associated to ample planning target volume margin size (> 1.0 mm) for SRS . Jhaveri et al. carried out a multivariate analysis in mNSCLC, whose results showed that a GTV > 15 cc is the main risk factor predictive of local recurrence . Additionally, volumes of healthy brain tissue larger than > 10 mL receiving 12 Gy (V12 Gy) are directly correlated with radionecrosis (between 15 and 55%) ; hence the use of fractionated SRS (fSRS: i.e. 
V12 > 8.5 ml (30 Gy/5 fx; 27 Gy/3 fx) is advised in order to reduce this risk still maintaining an improvement of LC especially when the BM lesions are located in or near eloquent areas . At this purpose, the A071801 phase III trial aimed to evaluate the efficacy of SRS compared with fSRS for resected BMs in mNSCLC patients is ongoing [NCT04114981] with results expected by the end of 2022.
SRS for < 10 mm BM is a high-precision treatment that requires a high level of technology. SRS can be delivered using different machines, with invasive contention or frameless, photons X or gamma. Several decades ago, in 1968, the Gamma Knife (GK) was introduced as the new treatment modality for SRS. The GK is a frame-based SRS that uses 60Co sources for irradiating a tumor volume with a diameter of approximately 4, 8, or 14 mm . GK is mainly characterized by non-homogeneous dose distributions within the target due to the effect of overlapping shots. The Cyberknife (CK) was invented at Stanford Health Care and first debuted in 1994. CK is an image-guided frameless robotic technology designed to deliver non-isocenter non-coplanar beam, and the entire treatment procedure is completely non-invasive . Despite the differences in treatment planning and dose delivery significant differences were not found in the quality of clinical outcome between GK and CK after SRS . Linear accelerator (LINAC)-based radiosurgery was developed as an alternative to GK SRS, using a standard LINAC modified for stereotactic purposes. Recent technical advances have made LINAC-based SRS (using multiple non-coplanar intersecting arc) a patient friendly technique, non-invasive, allowing for accurate patient positioning and a short treatment time . Following the technical improvements in treatment planning systems, LINAC-based SRS was marketed as having acceptably similar precision, accuracy, and mechanical stability for the treatment of numerous and small BM. Accordingly, LINAC-based SRS has been rapidly disseminating in the community in the last decades and despite the lack of systematic comparisons with GK-SRS, clinical results appear to be similar . LINAC-based SRS is considered a changing practice pattern in the treatment of BM NSCLC , considering also the benefit in the cost-effectiveness analysis compared to GK o CK SRS.
Additional comments are needed for BM in critical CNS areas including the brainstem and optic pathway. Brainstem lesions are rare (3–5% of all BM ) and surgery is not amenable for high-risk mortality or further functional impairment. Brainstem metastases come with a poor prognosis and estimated survival without treatment is dramatically poor (from one to six months) . SRS is recommended for the treatment of brainstem metastases with a median dosage of 16 Gy (range 11–39) and median fractions 1 (1–13) . In a recent large metanalysis including 15,900 brainstem metastases treated with SRS the 1-year LC was 86% with an objective response rate of 59% and symptoms improvement of 55%. The grade 3–5 toxicity was 2.4% and deaths from progression after SRS are rare . Isolated optic nerve metastases are similarly rare but result in a unilateral or bilateral loss of the visual field. Thanks to the experience on gliomas and perioptic tumors prompt fractionation or multi-session radiosurgery is an option for treating this subset of patients with the risk of 1–2% of visual complication . These favorable results suggest the feasibility of a local treatment in patients with NSCLC critical areas metastases regardless molecular status and systemic therapy.
The combination of RT and TKI for BMs is still controversial. Results of the perspective study by Jiang et al. showed no advantage of early WBRT to TKI over TKI alone . The results of a recent retrospective study showed a trend to significant advantage (although no difference in OS) of RT and TKI combo vs. TKI alone in terms of median intracranial PFS (27.6 vs. 16.1 months; p = 0.053) . A large meta-analysis including 1,041 unselected NSCLC with BMs from 9 retrospective studies and 1 randomized controlled trial and aimed to investigate the combination of WBRT with EGFR TKI vs. WBRT alone or EGFR TKI therapy alone showed the best hazard ratios for intracranial PFS in patients who received EGFR-TKI alone . More recently, a retrospective analysis aimed to compare SRS + TKI vs. WBRT + TKI vs. TKI alone reported a significant advantage in term of iPFS and OS in the first group (23 vs. 24 vs. 17 months, respectively; p = 0.025) (46 vs. 30 vs. 25 months, respectively; p = 0.001) . A retrospective cohort of patients harboring EGFR-activating mutation treated with consolidative local ablative treatment yielded improved OS after first-line TKI. Interestingly, the BM site significantly affected the improved survival achieved with additional local treatment vs patients receiving exclusive systemic treatment (38.2 versus 29.2 months, HR = 0.48, 95% CI 0.30–0.76, p = 0.002) . Another meta-analysis provided the evidence that early RT in these patients offers a significant iPFS and OS advantage that is strictly correlated with the number of BMs, being the best results achieved in those with less than three brain lesions. On the contrary, no advantage was recorded in the other patients and those showing massive disease . On these bases no conclusive therapeutic statements may be defined and early radiotherapy continues to have a fundament role in the treatment of NSCLC patients with BMs harboring EGFR activating mutations. As for the possible prognostic advantage of an upfront RT treatment followed by TKI therapy, out of a multicentric series of 351 patients with BM from EGFR mutated NSCLC, 100 patients were treated with SRS followed by TKI therapy achieved the best therapeutic results (median survival, respectively, 46, 30, and 25 months; p < 0.001), compared to 120 with WBRT followed by TKI, and 131 with TKI followed by SRS or WBRT at progression. At multivariate analysis, prognostic features didn’t significantly differ between the upfront SRS and EGFR-TKI cohorts, whereas the WBRT cohort was more likely to have a less favorable prognosis ( p = 0.001). Despite the risk of selection biases because SRS is usually adopted for a limited number of BMs, this study shows the safety and effectiveness of elective RT procedure within a multidisciplinary therapeutic approach and warrants further investigation. The efficacy of concurrent radiotherapy and EGFR TKIs is still unclear. The results of a retrospective study involving 44 EGFR-mutant NSCLC who received concurrent radiotherapy and TKI , recorded frequent and severe AEs with two patients that had to discontinue the treatment due to grade ≥ 3 cutaneous toxicity . Additionally, they also reported radiation-related AEs including included hydrocephalus (2 patients), pneumonitis (3 patients, one grade ≥ 3), myocarditis (1 patient), radiodermatitis (3 patients), laryngo-pharyngitis (2 patients), esophagitis (2 patients), and enteritis (1 patient) . 
Preliminary reports suggested improved survival of NSCLC patients harboring ALK rearrangements and treated with radiotherapy for BM. The introduction of targeted treatment has improved the response of these patients, although the intrinsic radiosensitivity of ALK-rearranged cells seems to play a prevalent role . Johung et al. reported a median life expectancy of 49.5 months in BM patients receiving both ALK-targeted therapy and radiotherapy . The addition of radiotherapy to the first-generation ALK-TKI crizotinib significantly improved response rate and progression-free survival in patients with BMs in multiple studies . However, the therapeutic landscape is rapidly changing following the development of newer generations of ALK-TKIs with an enhanced capability to diffuse through the BBB. Although a benefit of radiotherapy in association with second-generation drugs such as ceritinib or alectinib, or with the third-generation drug lorlatinib (as upfront therapy or following progression after crizotinib), has not been shown, it should be pointed out that, because of the small study populations and heterogeneous treatments with SRS and/or WBRT, these studies were not conclusive and did not underpin the deferral of local treatment. Radiotherapy and lorlatinib may act cooperatively by targeting different intracranial compartments , and case reports suggest that lorlatinib might be effective in intracranial sites that are traditionally considered unfit for radiotherapy, such as symptomatic leptomeningeal dissemination, leading to impressive disease response (“Lazarus effect”) .
Three to six months after radiotherapy and/or systemic therapy, BMs should be carefully followed up with MRI and assessed by applying the Response Evaluation Criteria in Solid Tumors (RECIST) . For treatment-naïve patients, according to ASCO guidelines for stage I–III NSCLC, brain MRI should not be used for routine surveillance in patients who have undergone curative-intent treatment . Conversely, for patients with clinical stage III–IV disease, surveillance brain MRI performed 12 months after the initial evaluation may be warranted . The same recommendation is extended to EGFR mutation-positive NSCLC, which has a higher incidence of BMs than EGFR mutation-negative adenocarcinoma . A retrospective study on BMs after SRS showed that lesions less than 100 mm³ in volume or 6 mm in diameter reach 100% LC; thus, routine surveillance with brain imaging to diagnose new out-of-field lesions should be considered part of standard care in lung cancer of all stages . The Ontario Cancer Registry demonstrated that patients with NSCLC and higher socioeconomic status showed improved 5-year OS because they more frequently underwent MRI, lung resection, adjuvant or intravenous chemotherapy, and palliative radiotherapy . On the other hand, Vernon et al. created a model of comprehensive clinical staging in resectable lung cancer and evaluated the role of brain MRI: additional staging information was found in only four of 274 cases (1.5%). The results of comprehensive clinical staging with and without MRI were identical in 98.8% of cases, and if brain MRI were removed from the staging algorithm, the total cost of staging in this population would have been 31.9% lower .
In light of the evidence reviewed here, the treatment of BMs in patients with mNSCLC, with or without druggable driver mutations, requires a personalized workflow and the involvement of multiple professionals with proven experience. Figure summarizes the proposed workflow for the clinical management of BMs in patients with druggable mutation-driven mNSCLC. The current tumultuous development in this field precludes guidelines set in stone. The large amount of scientific information and the definition of specific clinical objectives must be discussed case by case by a multidisciplinary team including the pathologist, neurosurgeon, neurologist, radiotherapist, oncologist, and palliative care specialist, taking into full consideration that in the majority of cases quality of life must be the main target of the treatment strategy. Notwithstanding, in the modern era of precision medicine, the opinion of all the authors is that brain MRI is fundamental: (a) for clinical staging in advanced or systemically metastatic NSCLC; (b) for all patients with EGFR driver mutations, who have a higher risk of developing BMs; (c) to estimate intracranial progression and assess the need for timely treatment of new BMs after local treatment (surgery and/or radiotherapy); and (d) to follow up and monitor all patients with BMs and driver mutations in whom it is considered safe and feasible to defer local treatment (i.e., surgery and/or RT).
|
Evidence Regarding Pharmacogenetics in Pain Management and Cancer | fdee7606-0492-464a-9e38-d8dc93c891d9 | 10020807 | Pharmacology[mh] | |
Financial Disclosures Reported by Industry Among Authors of the American Academy of Ophthalmology Clinical Practice Guidelines | 9334e6e0-f8e5-478f-800a-fd2ec8772c51 | 10020930 | Ophthalmology[mh] | Rooted in evidence-based medicine, clinical practice guidelines (CPGs) use available scientific data to provide the latest recommendations with the goal of optimizing patient care. Guidelines can be subject to bias and conflicts of interest (COIs), which impact recommendations. Since the Physician Payments Sunshine Provision of the Affordable Care Act in 2013, medical supply manufacturers (ie, industry) must report specific payments to physicians and medical teaching institutions. Some companies also need to report their physicians’ ownership and investment interests. To ensure transparency, the information is centralized on the US Centers for Medicare & Medicaid Open Payments database, a public website. Financial compensation can compromise medical research by skewing study outcomes and physicians’ clinical decisions. , Readers should thus be aware of physician authors’ potential COIs. To our knowledge, the extent to which COIs systematically bias the field of ophthalmology by influencing society guidelines, and the accuracy of COIs reported by authors of guidelines compared with those reported by industry to have been given to those authors, are unknown. Considering this gap in the ophthalmology literature, we assessed payments reported by authors of the American Academy of Ophthalmology (AAO) Practice Pattern Guidelines against payments reported by industry to have been given to these authors, to evaluate the disclosures’ accuracy and authors’ compliance with the Council of Medical Specialty Societies’ Code for Interactions with Companies. We further determined author gender (social construct, assigned using an application program interface) to evaluate potential differences in COIs. AAO Clinical Guideline Selection All authors reviewed all clinical guidelines in the Preferred Practice Patterns (PPP) section of the AAO website and reviewed available industry payment data listed on the Open Payments database on May 1, 2022. The study period was determined based on publicly available industry payment data only being reported as of 2013. Indeed, the national policies mandating Open Payments were originally enacted in 2013 with the goal of collecting and publishing information about payments that reporting entities made to covered recipients. This study was exempt from research ethics board review per Article 2.2 of the Tri-Council Policy Statement. The described research adhered to the tenets of the Declaration of Helsinki. This study followed the Strengthening the Reporting of Observational Studies in Epidemiology ( STROBE ) reporting guideline. Data Extraction Two authors (A. X.-L. N. and M. J.-C.) retrieved guideline authors’ names and their reported COI disclosures from the guideline publications. Guideline authors only included guideline writers. Nonphysician guideline authors were excluded. We further documented whether the authors were chairs or cochairs of the AAO subspecialty guidelines committee. The exact dates on which physician authors joined AAO’s guidelines committees were unknown. Physician authors were counted multiple times if they authored more than 1 AAO guideline. Three authors (A. X.-L. N., M. J.-C., and D.-D. N.)
entered eligible authors’ names into the Open Payments search tool and cross-matched their full name, role (allopathic and osteopathic physicians), medical specialty (ophthalmology), and location to identify the correct individuals and extract their payment data matching the disclosure period indicated on the AAO guideline. Indeed, the AAO guidelines provide specific disclosure periods by indicating months and years. For example, if a guideline has a disclosure period from January to October 2019, we matched the same time period on Open Payments. Payment data were categorized into general payments, research payments, research funding, and ownerships. General payments are payments not associated with research studies, in contrast to research payments. Examples of general payments include consulting and speaking fees, honoraria, gifts, and royalties not related to research studies. Associated research funding is defined as funding received for a research project where the physician is named as principal investigator. Ownership payments consist of the actual dollar amount invested and the value of the ownership or investment interest. One of us (A. X.-L. N.) determined physician authors’ gender (social construct) by inputting their first names into Gender API, an application program interface that assigns gender with 98% accuracy. Statistical Analysis Descriptive statistics were calculated using Stata/IC version 16.1 (StataCorp). The Kruskal-Wallis test by ranks was performed to test whether there was a significant difference in median total payments between men and women. P values were not adjusted for multiple analyses. P values less than .05 were considered statistically significant, and all P values were 2-tailed.
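The cross-matching and testing workflow described above can be illustrated with a short script. The sketch below is a hypothetical reconstruction in Python rather than the study's actual Stata/IC 16.1 code; the file names (authors.csv, payments.csv), column names, and gender codes are assumptions introduced purely for illustration.

```python
# Minimal sketch of the cross-matching and testing logic described above.
# Assumptions: "payments.csv" is a hypothetical Open Payments extract with
# columns physician_name, specialty, payment_date, amount; "authors.csv"
# lists each guideline author with a gender code and the guideline's
# disclosure window. Illustrative only; the study itself used Stata.
import pandas as pd
from scipy.stats import kruskal

payments = pd.read_csv("payments.csv", parse_dates=["payment_date"])
authors = pd.read_csv("authors.csv",
                      parse_dates=["disclosure_start", "disclosure_end"])

# Keep only ophthalmology records, mirroring the specialty cross-match.
payments = payments[payments["specialty"] == "Ophthalmology"]

rows = []
for a in authors.itertuples():
    # Restrict each author's payments to the guideline disclosure period.
    window = payments[
        (payments["physician_name"] == a.author)
        & payments["payment_date"].between(a.disclosure_start, a.disclosure_end)
    ]
    rows.append({"author": a.author, "gender": a.gender,
                 "n_payments": len(window), "total": window["amount"].sum()})
totals = pd.DataFrame(rows)

# Kruskal-Wallis test by ranks on total payments between genders.
women = totals.loc[totals["gender"] == "woman", "total"]
men = totals.loc[totals["gender"] == "man", "total"]
stat, p = kruskal(women, men)
print(f"Kruskal-Wallis H = {stat:.2f}, two-tailed p = {p:.4f}")
```

With only two groups, the Kruskal-Wallis test is equivalent to a Wilcoxon rank-sum (Mann-Whitney) comparison of the two payment distributions.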
Guideline Characteristics and Guideline Author Demographic Characteristics A total of 24 guidelines released between 2016 and 2020 by the AAO were included. PPPs were divided by subspecialty, including 7 focused on retinal pathologies, 6 on corneal and external diseases, 4 on adult strabismus and pediatrics, 3 on glaucoma, 1 on low vision, 1 on optics, 1 on cataract, and 1 on comprehensive ophthalmology. Per guideline, there was a mean (SD) of 7.83 (2.24) physician authors. There were 14 nonphysician author names, including 2 assigned as women and 12 assigned as men. These nonphysician authors wrote between 1 and 7 guidelines each and held one of the following degrees: CO, COMT, JD, MHS, MPH, PhD, or ScM. After excluding 14 nonphysician author names, 188 author names remained, including 83 assigned as women (44.1%) and 105 assigned as men (55.9%). These author names represented 66 different authors, as authors wrote between 1 and 8 guidelines each. According to the AAO guidelines, 149 guideline authors (79.3%) had no financial disclosures while serving on the AAO guideline committee. Industry Payments Reported for Guideline Authors Among the 149 guideline authors who reported having no financial disclosures, 81 (54.4%) had payments reported by industry on the Open Payments database while serving on the AAO guideline committee. More specifically, guideline authors with no financial disclosures reported on the guidelines had the following payments reported by industry on the Open Payments database during the AAO disclosure period: a median (IQR; range) of 5 (3-8; 1-115) payments and a mean (SD) of 8.2 (13.8) payments, totaling between $9.78 and $79 602.07 (median [IQR] total, $332.17 [$199.03-$14 615.09]; mean [SD] total, $10 768.05 [$28 676.47]). A total of 16 guideline authors (19.8%) had payments of less than $100 reported by industry on the Open Payments database. According to the Open Payments database, 112 of 188 physician authors (59.6%) had been reported by industry to have received at least 1 payment while serving on the AAO guideline committee .
Among them, there were 61 assigned as women (54.5%) and 51 assigned as men (45.5%). Physician authors had been reported by industry to have received a total of $3 343 127.48 in general payments and associated research fundings, including $2 541 227.78 to women physician authors and $801 899.70 to men. None of the physician authors had been reported by industry to have received research payments nor ownership payments. Physician authors had been reported by industry to have received a mean (SD) of $29 849.35 ($54 131.56), with total payments ranging from $9.78 to $225 958. The 98 physician authors with general payments had been reported by industry to have received a mean (SD) of $22 770.49 ($51 732.14), with total general payments ranging from $9.78 to $207 658. The 42 physician authors with associated research fundings had been reported by industry to have received a mean (SD) of $26 467.12 ($19 328.85), with total associated research fundings ranging from $265.20 to $57 572.20. As indicated in , physician authors had been reported by industry to have received a mean (SD) of $29 849.35 ($54 131.56) and a median (IQR) of $691.17 ($218.85-$41 104.67) in total payments. Women physician authors had been reported by industry to have received a mean (SD) of $41 659.47 ($66 364.94) and a median (IQR) of $15 265 ($598.47-$41 104.67) in payments. Men physician authors had been reported by industry to have received a mean (SD) of $15 723.52 ($29 090.22) and a median (IQR) of $301.48 ($218.85-$14 615.09) in payments. Women were therefore reported by industry to have been paid more than men, with a difference in medians of $14 963.52 (95% CI, 0.059-0.281; P = .003). Guidelines Chairs and Cochairs There was a total of 30 physician authors serving as chairs and cochairs in the AAO guidelines reviewed, with 6 guidelines cochaired by 2 physicians. A total of 6 of 30 chairs and cochairs (20%) reported financial disclosures in the AAO guidelines. Of these, 21 chairs and cochairs had financial disclosures reported by industry on the Open Payments database, with 1 to 115 payments (median [IQR] payments, 5 [3-115]; mean [SD] payments, 35.76 [51.38]) totaling $61.56 to $79 602.07 (median [IQR] of $15 265 [$301.48-$79 602.07]; mean [SD] of $27 689.74 [$34 222.31]).
Some ophthalmologists are reported to receive substantial payments from industry on the Open Payments database. The payments reported by industry to have been received by physician authors of AAO guidelines were substantial, with a median (IQR) of $691.17 ($218.85-$41 104.67) and a mean (SD) of $29 849.35 ($54 131.56), higher than industry payments received by clinical guideline physician authors in other medical specialties. , , All authors reviewed all AAO guidelines published from 2013 until May 2022, and 2 authors (A.
X.-L. N. and M. J.-C.) extracted physician guideline authors’ COIs. Three authors (A. X.-L. N., M. J.-C., and D.-D. N.) retrieved physician payments reported by industry using the Open Payments database. AAO’s policy statement stipulated that committee members must disclose all financial relationships with companies. However, more than half of physician authors (81 [54.4%]) who declared having no financial disclosures while serving on the AAO guideline committee had payments reported by industry on the Open Payments database. Studies in other medical specialties that examined physicians who authored clinical guidelines of leading specialty organizations similarly reported a disconnect between the COIs reported in the guidelines and those reported by industry in Open Payment systems. , , , If truly representing errors, this disconnect between COIs reported in guidelines and in Open Payments may contribute to potential biases in guidelines of national medical organizations, including the AAO. Furthermore, the Institute of Medicine guidelines for trustworthy CPGs recommend that 50% or more of authors on CPG committees have no COIs. While nominally all the AAO guidelines fit this criterion, our analyses show that 16 of 24 guidelines now fail and may be considered untrustworthy. Additionally, these best practice guidelines for CPGs recommend that committee chairs have no COIs, which is not the case in 6 of 30 chairs and cochairs based on the AAO guidelines’ own disclosures and in 21 of these same chairs and cochairs based on the information reported by industry on the Open Payments database. Among the 112 physician authors who were reported by industry to have received at least 1 industry payment, women (54.5%) were more represented than men. Women physicians were reported by industry to have been paid significantly more than men for total payments (difference, $14 963.52; 95% CI, 0.059-0.281; P = .003). Our findings differ from prior studies reporting that women physicians were underrepresented in industry compensation and paid less in industry partnerships compared with men. , Extended research should be conducted to understand these financial differences. Limitations Our study presents limitations. The first limitation is that we are using the Open Payments database. Therefore, we are relying on payments reported by industry, which do not necessarily indicate payments truly received by physicians. Indeed, Open Payments could contain reporting errors from a company; some listings in Open Payments may have gone to a physician’s employer, or a payment declined by a physician could still be reported in the Open Payments system. Second, the SDs calculated for the payments reported to have been received indicate that the data are quite skewed, as they are 2-fold greater than the means. It is also not possible to fully assess ophthalmologists’ financial disclosures due to the limits of the Physician Payment Sunshine Act. The Sunshine Act only requires payment reporting of companies that sell products covered by government programs and that offer compensation worth more than $10. However, even small compensations as low as $10 may influence physicians. It was reported that there was an association between pharmaceutical company-sponsored meals and increased prescribing of those companies’ drugs. Our findings may underestimate the effect of COIs on guideline authors. Additionally, some genders may be inaccurate since we assigned gender (social construct) using Gender API.
Gender API does not capture the spectrum of gender identities, which limits the genders included in this study. Moving forward, AAO policies could be strengthened by requiring authors to reconcile their disclosures with those reported by industry on the Open Payments database. Increasing authors’ awareness and understanding of the AAO policy statement may also contribute to better declaration of COIs in clinical guidelines. It remains unclear if COI reporting alone is sufficient to mitigate industry influence on evidence-based guidelines. |
Impact of Socio-demographic Characteristics on Time in Outpatient
Cardiology Clinics: A Retrospective Analysis | 45fcc136-56b2-4e94-9c9d-e89c2d99b02c | 10021097 | Internal Medicine[mh] | Time spent accessing healthcare is a key measure of service quality and
strain. , Elective surgery waiting times are the focus of most analyses and have
increased in recent years. However, patients wait in multiple settings—in the community
for primary care, specialist, and allied health appointments,
and in waiting rooms in emergency and ambulatory
clinics. Compared to elective surgery, these other waiting times are
poorly characterized, providing clinicians and policy makers with an incomplete view
of patient time burden across healthcare systems. This burden is greatest for patients with multiple comorbid conditions, such as
cardiovascular diseases, who require increased healthcare contact. There is
international evidence that elective surgery waiting times are greater for patients
of lower socio-economic status (SES). - This is particularly
concerning in single payer health systems where waiting time should be allocated
according to clinical acuity, rather than ability to pay. However, there are few
studies on patient time burden in other settings. Particularly, there are a lack of data on time spent accessing ambulatory care and in
outpatient clinic waiting rooms. Such time may seem less significant as an absolute,
but cumulates with increasing healthcare contact and has an associated opportunity
cost secondary to missed work hours, estimated at 15 cents per dollar spent on
healthcare. The largest reports on waiting room time are from the USA
and indicate a likely time of 20 to 40 min. , , Some studies suggest patients
from lower socio-economic backgrounds wait longer in this setting as well. An
analysis of 3787 responses to the American Time Use Survey by Ray et al found time
accessing outpatient care was 123 min on average and significantly longer for Black
and Hispanic patients, those with less education, and the unemployed. Oostrom et
al analyzed 21 million outpatient office visits in the USA, finding publicly insured
(Medicaid) patients were 20% more likely than privately insured patients to wait
longer than 20 min. A small 2022 analysis of 423 attendees to a public outpatient
clinic in Ethiopia found those with lower educational attainment were more likely to
have long waiting times than tertiary-educated participants (odds ratio 2.25 [95% CI
1.11, 4.58]). A study of 96 patients in a Nigerian outpatient department
found women were more likely to experience waiting times of ≥180 min than men (31.6%
vs 6.3%, respectively). While these data suggest a relationship may exist, to our
knowledge, there are no studies comparing clinic time with SES in single-payer
healthcare systems such as the UK, Canada, or Australia. In this study, we present data from consecutive patients attending outpatient
cardiology appointments across 3 public hospitals in Sydney, Australia between 2014
and 2019. We aim to describe the “clinic time” (difference between time arrived and
time departed) and assess whether this is impacted by socio-demographic
characteristics including SES, age, gender, number of comorbidities, country of
birth, and language spoken at home.
Setting and Study Population We examined a consecutive patient-level data set of all public outpatient
cardiology encounters across 3 hospitals within Western Sydney Local Health
District (WSLHD) between July 2014 and December 2019. Clinics are consultant-led
and staffed by junior doctors, training cardiologists, and nursing staff.
Patients are referred by general practitioners, emergency departments, or other
doctors and generally do not pay to access these clinics. WSLHD comprises 5
hospitals, 7 community health centers, and serves 946 000 residents in the
western suburbs of Sydney. The population is diverse
with 46.8% of residents born overseas and 50.3% speaking a language other than
English. WSLHD also houses the largest Aboriginal and Torres Strait Islander
population in Australia (approximately 13 000 persons). Inclusion and Exclusion Criteria All adult (>18) patients who accessed outpatient cardiology services in-person
across WSLHD between July 2014 and December 2019 were included in the
analysis. Patients were excluded if their clinic time could not be reliably assessed, defined as
clinic time data that were missing, equal to 0, or cases in which all patients within a clinic
were allocated a pre-specified time (eg, 30, 45, or 60 min). Extreme values
were excluded with cut-offs of ≤20 min (the presumed time of a consultation
only), or ≥240 min (the entire duration of a morning or afternoon clinic
session) as these times were likely due to data entry error or unreliable
clerical processes. Audio and inpatient consultations were excluded. Data Collection, Handling, and Definitions The data were cleaned, de-identified and processed by the Business Analytics
Service (BAS) at Westmead Hospital and passed to the Westmead Applied Research
Center, University of Sydney, via a secure server. The data contained
patient-level variables on age, gender, Indigenous status, country of birth,
language spoken at home, number of comorbidities, and postcode. Data on country
of birth, Indigenous status, and language spoken are obtained from all patients
via self-report on presentation to hospital. Patient postcode was correlated
with the 2016 socio-economic indexes for areas (SEIFA) Index of relative
socio-economic disadvantage (IRSD) score. This score is derived from 2016
Australian census data and summarizes variables that indicate relative
disadvantage. The lower the score, the higher the proportion of disadvantaged people
residing within the postcode of interest. IRSD deciles were applied
to each patient for the final analysis. In addition, the data contained
appointment-level information on time of day, visit type (new or follow up),
referrer (emergency department or other), clinic type (arbitrarily categorized
A-R for consultant and hospital anonymity), arrived time, and departed time. Total clinic time was calculated as the difference between time arrived
and time departed. This is a convenience measure taken by administration staff as part of the normal clinic workflow.
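As a concrete illustration of how the two derived variables (IRSD decile and total clinic time) and the exclusion rules above could be implemented, the following Python sketch is an assumption-laden reconstruction, not the actual pipeline: the file names (encounters.csv, postcode_irsd.csv) and column names (arrived, departed, postcode, clinic) are hypothetical, and the study's own processing was performed by the Business Analytics Service and analyzed in R.

```python
# Illustrative reconstruction of the data handling described above.
# The study itself was analyzed in R; file and column names here are
# hypothetical placeholders.
import pandas as pd

enc = pd.read_csv("encounters.csv", parse_dates=["arrived", "departed"])

# Map each postcode to its 2016 SEIFA IRSD decile via a lookup table
# (e.g., built from Australian Bureau of Statistics census products).
irsd = pd.read_csv("postcode_irsd.csv")  # columns: postcode, irsd_decile
enc = enc.merge(irsd, on="postcode", how="left")

# Total clinic time: difference between departed and arrived, in minutes.
enc["clinic_min"] = (enc["departed"] - enc["arrived"]).dt.total_seconds() / 60

# Exclusions: missing or zero times, then extreme values (<=20 min or
# >=240 min) presumed to reflect data entry error or unreliable processes.
enc = enc[enc["clinic_min"].notna() & (enc["clinic_min"] != 0)]
enc = enc[(enc["clinic_min"] > 20) & (enc["clinic_min"] < 240)]

# Drop clinics in which every encounter was allocated the same
# pre-specified time (eg, 30, 45, or 60 min).
uniform = enc.groupby("clinic")["clinic_min"].transform("nunique")
enc = enc[uniform > 1]
```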
Statistical Analysis Statistical analysis was undertaken using R statistical software (V3.6.1). All
variables of interest were first interrogated visually to assess for normality
of distribution. Means were calculated for normally distributed continuous
variables, and medians for non-normal continuous variables. Categorical
variables were presented as frequencies and percentages. Initially, the proportion of patients waiting longer than the median clinic time
in different demographic groups (Age ≥75 vs <75, IRSD ≤5 vs >5, ≥4
comorbidities vs <4, female vs male, Indigenous vs non-Indigenous, born in
Australia vs born Overseas, and English vs other language spoken at home) was
compared with a chi-squared test. A univariate unadjusted linear regression was
then conducted on the above patient characteristics and clinic process measures
(clinic, visit type (new/follow up), referrer, appointment year, and time of
day) to determine variables associated with increased clinic time. A Cox proportional hazards model was then applied to identify patient-level
predictors of increased time in clinic. The model outcome was the time the
patient left clinic. A higher hazard ratio (HR) described greater chance of
leaving clinic earlier and hence shorter total time in clinic. This analytic
approach was selected due to the non-normal distribution of the time data and is
similar to Cox proportional hazards models applied to assess time to wound
healing, where a higher HR corresponds to a better outcome. Multivariate models controlled for clinic, visit type, referral source, and the
above demographic characteristics. Results of these models are presented as HRs
with 95% confidence intervals (CIs). Further analysis was conducted to identify
interactions between patient and clinic-level variables of interest. Finally,
within-hospital and within-clinic (shorter wait versus longer wait) analysis was
conducted to determine whether discrepancies could be accounted for by
between-hospital and clinic differences.
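To make the modeling strategy concrete, the sketch below reproduces the chi-squared comparison and the Cox proportional hazards formulation, continuing the hypothetical enc data frame from the previous sketch. It uses Python with scipy and the lifelines package purely for illustration (the published models were fitted in R); the covariate names are assumptions, and the event indicator is set to 1 for every encounter because all patients eventually leave the clinic, so there is no censoring.

```python
# Illustrative Python version of the statistical approach described above
# (the study used R v3.6.1). Covariate names are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency
from lifelines import CoxPHFitter

# Chi-squared test: proportion waiting longer than the median clinic time
# in low (IRSD decile <= 5) vs high (IRSD decile > 5) SES groups.
enc["long_wait"] = enc["clinic_min"] > enc["clinic_min"].median()
enc["low_ses"] = enc["irsd_decile"] <= 5
chi2, p, dof, _ = chi2_contingency(pd.crosstab(enc["low_ses"], enc["long_wait"]))

# Cox proportional hazards model: the "event" is leaving the clinic, so it
# occurs for every encounter; a higher HR therefore means leaving earlier
# (shorter total clinic time), mirroring the interpretation in the text.
enc["event"] = 1
model_df = pd.get_dummies(
    enc[["clinic_min", "event", "low_ses", "age_over_75", "female",
         "new_visit", "ed_referral", "clinic"]],
    columns=["clinic"], drop_first=True,  # clinic entered as indicator terms
).dropna().astype(float)
cph = CoxPHFitter()
cph.fit(model_df, duration_col="clinic_min", event_col="event")
cph.print_summary()  # HRs with 95% CIs for each covariate
```

Because every encounter experiences the event, the model effectively ranks encounters by time in clinic, which is why an HR above 1 corresponds to leaving earlier and hence a shorter total clinic time.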
Of 37 456 patients assessed for eligibility, 14 823 were excluded and 22 367 were
included in the final analysis . Of these, 14 925 (65.9%) were male and the mean age was 61.4
(SD 15.2) years. Only 7823 (35.0%) were born in Australia, and 8452 (37.8%) were in
the lowest IRSD decile, indicating they resided in a postcode with a greater
proportion of disadvantaged residents than 90% of postcodes in Australia. A
significant proportion of patients had >4 comorbidities (40.4%). Cardiac risk
factors and comorbid cardiac conditions were also relatively common . Time Spent in Clinic The median total time in clinic was 84 min (interquartile range 58-130). The
distribution was flat across the years of observation, ranging from 69 min in
2014 to 101 min in 2017 . Process Measures as Predictors of Longer Time in Clinic Clinic process measures were analyzed for their association with clinic time. New
patients and those referred from the emergency department were the most likely
to spend longer in clinic (median 120 and 125 min, respectively, ). There was
significant variance between clinics . Linear regression
demonstrated low to moderate association between all process measures and clinic
time besides year of appointment and time of day . Visit type, clinic, and
referral source accounted for 23.0%, 35.0%, and 20.0% of the variance (R²) in clinic
time, respectively. Patient-Level Predictors of Time in Clinic All patient-level variables were assessed for their correlation with clinic time
in a multivariate Cox proportional hazards model controlling for clinic,
referral source and visit type. In the unadjusted model, low (IRSD ≤ 5th decile)
SES patients spent less time in clinic than those of high (IRSD > 5th decile)
SES (median 66 min vs 109 min, ). After adjustment, this was no longer significant (HR 1.02
[0.99-1.06]). Those older than 75 were less likely to leave the clinic (HR 0.94
[0.90-0.97]). The relationship between all other sociodemographic characteristics
did not reach significance after adjustment . Interaction Analysis of Demographic, Process Measures, and Socio-Economic
Status Further analysis was performed assessing the interaction between SES, patient
characteristics and clinic process measures. Those of lower SES spent less time
in clinic irrespective of their age, gender, number of comorbidities, country of
birth or language spoken at home. However, after adjustment for visit type,
clinic, and referral source, there was no interaction between SES and any of the
identified demographic variables ( Supplemental Table 1 ). Patients of lower SES were more likely to
attend follow-up appointments (77.2% vs 57.6%), clinics with short clinic time
(66.8% vs 21.1%) and be referred from sources other than the emergency
department, compared to patients of higher SES ( Supplemental Table 1 ). Clinic and Hospital Sub Analysis To assess for discrimination within hospitals and clinics, the association
between socio-economic status and time in clinic was analyzed in a further Cox
proportional hazards model adjusted for clinic, referral source and visit type.
Those of lower SES spent slightly less time in clinics in hospital C (57 min vs
60 min, HR 1.24 [1.13-1.37]), though there were no differences within other
hospitals. Within short wait clinics, lower SES spent less time in clinic
(59 min vs 71 min, HR 1.10 [1.05-1.17]). There was no difference according to
SES in longer wait clinics ( Supplemental Table 2 ).
This analysis of over 20 000 consecutive outpatient cardiology clinic encounters
aimed to determine whether those of low SES were more likely to spend longer in
clinic. After adjusting for visit type, clinic, and referral source, there was no
difference in clinic time according to SES. Overall, 75% of patients spent at least
1 hour in clinic. One quarter spent more than 2 hours. Potential implications of
these findings include consideration of a more productive use of this time in
ambulatory clinics, such as delivering interventions during this time that can
improve health literacy and may improve health outcomes and satisfaction with health
services. , The interaction between SES and time to accessing health services has been debated
for over 20 years. Most data are derived from elective surgery waiting
lists, , and there is some evidence discrimination is reversing as new
policies are introduced. Cooper et al analyzed elective surgery
wait lists in 1997 to 2000, 2001 to 2004, and 2005 to 2007, finding the effect of
SES on waiting time reduced over the period of observation and reversed for knee
replacement and cataract repair in 2005 to 2007, such that the most deprived fifth
waited less than the least deprived fifth. There are fewer studies of the Australian
system, but most reports suggest discrimination. Johar et al studied
90 162 patients in New South Wales public hospitals, finding that more advantaged
patients waited less for elective surgery at all quintiles of waiting time. Data
from developing countries are also suggestive of discrimination in this setting. A
2017 analysis of 219 surgeries within an Indian teaching hospital found those living
below the poverty line had threefold higher waiting times than those above the
poverty line. However, data are very limited within developing countries,
largely due to a lack of systematic reporting. For example, a recent international
collaboration for systematic reporting of waiting times is limited to organization
for economic co-operation and development (OECD) countries, which are almost
exclusively high-income. The finding of no relation to SES for patients accessing public clinics in our study
is reassuring and may be explained by several factors. There are likely fewer
opportunities for preferential treatment within waiting rooms (where patients are
seen in the order they arrive) than elective surgery (where waiting time is
determined by clinician priority allocation), which may explain the lack of
association between SES and clinic time in our study. The Australian system is a private-public mix, where patients with insurance who
anticipate a long wait time can opt in for private hospital care. There is evidence
this preferential service selection model explains elective surgery waiting time
inequity in Australia, though more studies of waiting room time are needed. Many
hospitals in Australia run large public outpatient services where patients generally
do not pay out-of-pocket for services, which are the services analyzed here.
However, higher SES patients are more likely to access privately billed clinics in
the community and findings here may have limited applicability to these care
settings. They do, however, suggest that the absence of a relationship between SES and time spent in
public clinics found here may reflect the absence of per-patient payment and
of classification based on public/private status. Patients with cardiovascular disease are more likely to be older, Indigenous, of
lower SES, live in rural areas and have comorbidities than the general
population. Analysis of time in cardiology clinics provides an
opportunity to assess for poorer outcomes among these patient populations. In our
study, we found patients older than 75 were more likely to spend longer in
cardiology clinics. This may be due to these patients having more complex care needs
requiring a longer consultation with additional time to see other health
professionals, for example, nurses, allied health workers, and social workers. Older
patients may also be more likely to arrive early to clinic appointments, increasing
the overall appointment time. Faiz and Kristoffersen collected data from 1353
outpatient neurology clinic appointments and found older patients were less likely
to arrive late than younger patients (OR 0.74 [0.63-0.88]). In our study, lower SES patients were more likely to attend follow-up appointments
and clinics with shorter waits overall, both strong predictors of reduced total
clinic time. Sub-analysis of these clinics found lower SES patients spent less time
after adjusting for process measures. Importantly, our analysis did not delineate
between consultation and waiting room time. It is possible that lower SES patients
had shorter consult times, which was the primary driver for a shorter total clinic
time. This is supported by an analysis of 70 758 GP consultations in Australia in
2001 to 2002, which found older patients of higher SES had longer consultation
times. A 2020 qualitative analysis of 36 head and neck cancer
appointments found lower SES patients were more passive in their care, engaging in
less agenda setting and information seeking, potentially explaining shorter
consultation times within this group. Further studies are needed to
better distinguish patient time burden while waiting, an indicator of poor care, from
time spent with clinicians, likely an indicator of quality care. The implications of “in-clinic” waiting times are different to those for elective
surgery, specialist and primary care visits, where longer waiting time has been
associated with poorer clinical outcomes. - Increased time in ambulatory
care has been linked to reduced care satisfaction, however the consequences are
primarily economic – the opportunity cost of accessing healthcare. Increasing
workforce casualization, where employees do not have access to sick leave, further
compounds the economic cost of increased clinic time. These implications are
greater for patients that require more contact with healthcare services. Addressing Patient Waiting Time—What Approaches Are Needed? Several methods have been trialed to reduce the time patients spend accessing
healthcare. In the emergency department, the introduction of 4-hour targets in
the UK, Australia and other countries has seen significant reductions in waiting
times. However, there may be diminishing returns from further
reductions. Sullivan et al present an analysis of
12.5 million emergency department episodes of care, finding compliance with
waiting time targets reduced in-hospital mortality. However, as compliance
increased past a critical point of 83%, the relationship was lost. Countries
that lack a benchmark likely have even longer waiting times. A 2006 analysis of
675 patients at a public hospital in Barbados revealed a median 377 min length
of stay, over 2 hours longer than targets in Australia and the UK. Despite
some small studies in China, Singapore, and
Korea, there is a paucity of research about interventions to
address in-clinic waiting time. To our knowledge, there are no examples of such
interventions within cardiology outpatient clinics. Irrespective of between-group differences, this study underscores that time spent
accessing healthcare is significant. This time could be better utilized to
deliver health interventions that convert it from wasted to productive time.
There is some literature suggesting waiting room interventions can improve
patient knowledge, but a paucity of robustly designed studies to assess the
efficacy of waiting room interventions on clinical outcomes. , Though a
focus on health outcomes is desirable, waiting room interventions could also
target process outcomes such as patient satisfaction with care, total time in
clinic or consultation time. Integrated delivery of tech-enabled interventions
that begin in the waiting room, continue through the consultation and into the
post-consultation period could contribute to a new paradigm of healthcare that
values patient time whilst also increasing provider efficiency. There are several strengths and weaknesses to this study. We considered a
consecutive sample of patients attending a single specialty within one local
health district. This limited between-hospital and specialty heterogeneity,
however provided limited view on waiting times in rural locations, other cities
and specialties. Data were collected over 5 years, providing insight into
longitudinal waiting time trends within our sample and were convenience based
and likely less prone to bias than data collected by self-report or specifically
measured for the monitoring of waiting time. The convenience nature of these
data also limits generalizability. Approximately 40% of encounters, in which data
were incomplete or unreliable, were excluded to minimize the impact on findings
. We did
not have differential data on time spent with clinicians versus in waiting rooms
and could not identify patients who left clinic without being seen by a doctor.
We were unable to characterize the urgency of each patient’s clinic visit and
cannot rule out an effect due to preferential treatment of higher acuity
patients. A sample size calculation was also not performed in this study. All
available data in the sample were analyzed. Finally, data were at the level of
the encounter, not the patient. It is possible there are duplicate patients who
attended clinics multiple times within the data set.
Accessing healthcare presents a significant time burden for patients at all levels of
the health system. In this analysis of 22 367 patients attending publicly funded
outpatient cardiology clinic appointments over 6 years, older patients spent longer
in clinic, but no difference for low SES or other demographically disadvantaged
patients was identified. This is reassuring, however does not exclude the
possibility of disparities. Further prospective studies, diverse in geographical
setting, health service funding, and economic advantage at the country level, are
required. Ongoing monitoring of the health system with respect to performance and
inequities is also important. Consideration should be given to the opportunistic
delivery of interventions during this time to improve health engagement and
outcomes.
Supplemental material for this article (sj-docx-1-inq-10.1177_00469580231159491 and sj-docx-2-inq-10.1177_00469580231159491) is available online.
|
Tap water as the source of a Legionnaires’ disease outbreak spread to several residential buildings and one hospital, Finland, 2020 to 2021 | 01fa1d31-7232-4107-b65b-6f8a7b44e1a2 | 10021472 | Microbiology[mh] | Legionnaires’ disease (LD) is an important cause of atypical pneumonia and can be community-acquired, travel-associated or nosocomial [ - ]. Besides age and having a weak immune system, or a chronic lung disease, former and current smokers are at increased risk for LD . Another risk factor is a stay at a hotel or similar accommodation . Case fatality is 8–12%, being higher in elderly people, those with underlying diseases and nosocomial cases . Legionnaires’ disease is caused by Gram-negative aerobic Legionella bacteria that are frequent in fresh water and soil. Legionella can enrich in man-made water systems, especially in stagnant water in temperatures between 20 °C and 45 °C. Legionella is not an exceptional finding in residential water systems and can cause sporadic infections associated with non-hospital facilities . However, large LD outbreaks have often been caused by single or multiple cooling towers . Transmission occurs mainly by inhalation of aerosols or aspiration of water containing Legionella . Among the 30 pathogenic Legionella species, Legionella pneumophila serogroup 1 (Lp 1) is responsible for the majority of LD cases in Europe . In nosocomial cases, other serogroups and species are common as well. The diagnosis of LD is based on urinary antigen test (UAG), PCR and/or culture from respiratory specimens or serology. Most UAG tests are specific only for Lp 1, thus the detection of other serogroups or species requires PCR and/or culture. Diagnosis of LD should prompt to identify the source of infection and trace other cases, as there is the potential of an outbreak . In Finland, the annual number of LD cases ranged between five and 44 in the period from 2010 to 2020, which corresponds to an incidence of 0.8 per 100,000 population, lower than the average incidence in European countries of 2.2 per 100,000 in 2019 (range by country: 0.1–9.4/100,000) . More than half of the Finnish cases were linked to travelling abroad. No major LD outbreaks occurred, except some small clusters, including one nosocomial outbreak and two industrial wastewater-associated cases . Since 2014, enhanced surveillance has been conducted by interviewing all LD cases to identify the potential places of exposure, collecting environmental samples and, since 2016, comparing human and environmental isolates by whole genome sequencing (WGS) in the reference laboratory in the Finnish Institute for Health and Welfare (THL).
The outbreak investigation was initiated in March 2021 when we detected five LD cases within one month in a Finnish city with 120,000 inhabitants in the Northern Savonia healthcare district (247,000 inhabitants), where previously between one and three LD cases had been detected annually. Four more cases appeared during April and May 2021, and further cases from January 2020, October 2020 and February 2021 linked to this outbreak were detected retrospectively ( ). Five of the cases were from the same residential area, living in different buildings on different streets. There were also cases living in three neighbouring residential areas (n = 4) and in one local hospital (n = 3). The residential areas and the hospital were located within a maximum distance of 8 km from each other and 4–8 km from the water plant which provided water for the area of the community. The residential buildings were low-rise apartments or terraced houses built in the late 1980s and early 1990s. The hospital buildings were built between 1914 and 1990. All residential buildings and the hospital had a central heating system. The objectives of the outbreak investigation were to detect the source of the outbreak, to control the outbreak and to report this unusual LD outbreak in one Finnish city, with 12 cases extending over more than a year, from January 2020 to May 2021.
Surveillance and epidemiological investigation
Finnish microbiology laboratories notify all Legionella sp. findings, and physicians notify LD cases to the National Infectious Disease Register. Notifications include demographic data, date and type of specimen, laboratory method and preceding travel history. Legionnaires’ disease cases are also under enhanced surveillance: all cases are routinely interviewed using a structured form (underlying diseases, smoking, travel/hospital history, aerosol exposures at work and leisure time) to identify the possible places of exposure within the incubation period (2–10 days) and, based on these data, water or soil samples are collected from homes and/or other potential places of exposure. Human and environmental isolates are compared by WGS at THL.
Case definitions
We defined an LD case as a patient with pneumonia and a specimen positive for Lp 1 in UAG, PCR or culture from a respiratory specimen. An LD case was classified as residential when there was no hospital history or as nosocomial when a patient had stayed in the hospital during the incubation period. The classification was supported by environmental findings, and inclusion in the outbreak was confirmed by detection of the Lp 1 outbreak genotype from a human and/or water isolate in the city in Northern Savonia healthcare district in 2020 or 2021.
Environmental and microbiological investigation
As part of the enhanced surveillance, tap water samples were collected in a similar manner from the cases’ homes, including hot water samples from shower heads, cold water samples from shower taps and hot water from kitchen taps, mostly within 2–3 weeks of the LD diagnosis ( ). In two residential buildings, water samples were also collected from the shared sauna facilities. In the hospital, sampling was first conducted on the wards where the nosocomial LD cases had stayed, and later on other wards as a precaution. Samples were also taken from a hotel and an industrial site where two residential LD cases had worked, and in one nursing home where one nosocomial LD case had stayed before their hospitalisation. Samples were taken first without running the water, and water temperatures were measured at three time points: at the start, 1 min and 2−3 min after opening the tap. Approximately 2–4 weeks after the implementation of control measures, control samples were collected from all sites and sampling points where Legionella concentrations exceeded 1,000 colony forming units per litre (cfu/L) . One municipal water company provided water for the area of the city from a single water plant which is located close to the city centre. We took water samples from the regional storage tanks.
Human samples
Patient specimens were examined in the local clinical microbiology laboratory. The choice of diagnostic method (UAG, PCR and/or culture) was made on clinical grounds.
Water samples
Water samples were collected by local health inspectors or by THL water microbiology laboratory personnel as part of the enhanced surveillance. Samples were analysed at THL by culture according to the SFS-EN ISO 11731:2017 standard .
Genotyping
Serotyping of human isolates and all genotyping was performed in the reference laboratory at THL. Human and water isolates were compared by WGS core genome multilocus sequence typing (cgMLST). The WGS was performed on a MiSeq instrument (Illumina, San Diego, United States (US)). Library preparation was done with the NexteraXT V2 DNA sample preparation kit (Illumina). The cgMLST was performed using the SeqSphere cgMLST tool version 8.2.0 (Ridom GmbH). The Sequence Based Typing (SBT) profile for Legionella was checked with the legsta tool from GitHub .
Statistical analysis
We compared proportions for categorical variables by chi-squared test and Legionella concentrations by quantile regression with fixed effects . The analyses were performed using Stata version 17.0 (StataCorp LLC, College Station, US).
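To illustrate the cgMLST comparison described under Genotyping, the sketch below counts pairwise allele differences between isolate profiles. The profiles are toy data truncated to a handful of targets; the real scheme compares 1,443 core-genome targets, and the production analysis was done in SeqSphere, not in custom code.

```python
# Minimal sketch of a cgMLST-style comparison, assuming each isolate's
# profile is a list of allele numbers over the same ordered set of
# core-genome targets (None marks a missing call). Toy data only.
from itertools import combinations

def allele_distance(a, b):
    """Count targets where both isolates have a call and the alleles differ."""
    return sum(1 for x, y in zip(a, b) if x is not None and y is not None and x != y)

profiles = {
    "human_case": [3, 4, 1, 14, 9],      # truncated, hypothetical profiles
    "water_home": [3, 4, 1, 14, 9],
    "water_hospital": [3, 4, 1, 14, None],
}

for (name1, p1), (name2, p2) in combinations(profiles.items(), 2):
    print(f"{name1} vs {name2}: {allele_distance(p1, p2)} differing targets")
```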
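For readers who want to reproduce the flavour of these analyses outside Stata, the sketch below shows the two tests in Python on toy data: a chi-squared test of proportions and a median (quantile) regression. The site fixed effects used in the actual analysis are omitted here for brevity, and the numbers are stand-ins, not the study data.

```python
# Toy re-implementation of the two analyses named above (data invented).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

# Chi-squared test comparing two proportions, laid out as a 2x2 table
# of positive/negative counts per group.
table = np.array([[34, 12],   # group 1: 34 positive of 46
                  [22, 11]])  # group 2: 22 positive of 33
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p = {p:.3f}")

# Median (0.5-quantile) regression of log10 concentration on serogroup;
# a site fixed effect could be added as "+ C(site)" given a site column.
df = pd.DataFrame({
    "log_cfu": np.log10([50, 620, 1500, 100, 250, 3500, 640000, 900]),
    "lp1": [1, 1, 1, 1, 1, 0, 0, 0],  # 1 = Lp 1, 0 = other Legionella
})
fit = smf.quantreg("log_cfu ~ lp1", df).fit(q=0.5)
print(fit.params)
```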
We identified 12 LD cases; nine were residential and three nosocomial. Four were in working life and none had travelled. The median age was 65 years (range: 52–85), seven were female and five were male, and 10 of the 12 were taking immunosuppressive medication and/or had underlying disease. One case was known to be a smoker. All residential cases were hospitalised for LD, and one of the nosocomial cases died. The LD was diagnosed by UAG in 10 cases, PCR in nine and bacterial culture in five; all PCR tests and cultures were positive for Lp 1.
Typing
Genotyping was performed for all five available human isolates and for selected water isolates, representing both hot and cold water from eight of nine homes (16 isolates) and from the hospital (six isolates) ( ). One stored water isolate of Lp 1 obtained from a home was no longer viable, but the human isolate for the corresponding case was available (Case 12). The first and third cases were identified when we compared the outbreak strains with earlier L. pneumophila isolates in the THL cgMLST library and discovered one identical human and one identical water isolate from the city in January 2020 (Case 1) and February 2021 (Case 3). We then sequenced one water isolate connected to an LD case in October 2020 diagnosed by UAG and found it to be identical (Case 2). In 2020, there had been altogether three LD cases in the city, two of them caused by Lp 1 and the third by L. longbeachae . All 27 isolates were identical or showed only minor differences (maximum 2/1,443 targets) ( ). The mean percentage of good targets was 99.4% (range: 97.8−99.9%) and the mean average coverage was 151 (range: 90−262). The isolates were complex type 100. The legsta SBT tool generated a similar profile for all isolates for genes flaA , pilE , asd , mip , mompS , proA , neuA as 3, 4, 1, ND, 14, 9, 11. The sequence type could not be named because the mip allele number was missing. The LD cases’ link to the outbreak was confirmed by water isolate (seven cases), human isolate (five cases) or both (four cases).
Environmental investigation
Water samples
We collected 90 water samples (35 cold and 55 hot water) from 20 sites during the initial sampling round ( ). Legionella genus was found in 56 of 90 samples from 13 of 20 sites, including nine homes, several wards in the hospital and the hotel. Serogroup Lp 1 was found in nine of 12 homes of LD cases (residential Cases 1–2, 4–6, 8, 10–12) and in the hospital (nosocomial Cases 3, 7, 9) ( ). In three homes, we detected only Lp 1, in six homes Lp 1 and other Legionella species, in the hospital Lp 1, Lp 2−14 and other Legionella , and in the hotel Lp 2−14 and other Legionella . Water samples from the nosocomial cases’ homes were negative for Legionella . Water samples from the hotel did not grow Lp 1, and samples from the industrial site and nursing home were negative for Legionella . In five of nine homes (residential Cases 1, 2, 4, 5 and 10), the Lp 1 concentration was ≥ 1,000 cfu/L and in four of nine homes (residential Cases 6, 8, 11 and 12) it was ≤ 500 cfu/L. In the hospital, an Lp 1 concentration > 1,000 cfu/L was detected three times. Among all samples positive for Legionella genus, the concentrations ranged between 5 and 640,000 cfu/L (median: 1,500 cfu/L). The concentration was > 1,000 cfu/L in 30 of 56 (54%) samples and ≤ 500 cfu/L in 21 of 56 (38%). The concentrations did not differ significantly between Lp 1 (n = 42) and non-Lp 1 Legionella (n = 21) (median: 620 vs 3,500 cfu/L; p = 0.931). Hot water samples were not more often Legionella -positive than cold water samples (34/46 vs 22/33; p = 0.484), and Legionella concentrations did not differ between hot and cold water (median: 1,650 vs 570 cfu/L; p = 0.285). Hot water temperatures (1 min) were below 50 °C at one or two of the sample points in four homes (Cases 1, 5, 9 and 10), at one sample point in the hospital and at two sample points in the hotel ( ). Cold water temperature was above 20 °C at a single sample point in one home (Case 9). The mean hot water temperatures were slightly higher at Legionella -negative than Legionella -positive sample points (53.1 °C vs 51.2 °C, 1 min). The mean cold water temperature was higher at Legionella -negative than -positive sites (13.7 °C vs 10.7 °C, 1 min). The lowest hot water temperature (44.6 °C, 1 min) and highest cold water temperature (22.2 °C, 1 min) were both measured from a Legionella -negative site.
Water company
One municipal water company provided water for the city from a single water plant which was located close to the city centre. Water was transferred via local storage tanks and pumping stations. Tanks were washed and disinfected every 4 to 5 years. There was no regular Legionella monitoring, but samples were taken from the regional pumping station and mains as part of a research project in April 2021 and were negative for Legionella . Samples collected in connection with the outbreak in May 2021 from regional storage tanks were also negative for Legionella .
We implemented control measures immediately after the confirmation of Legionella at each site. Immediate control measures included flushing each tap, heat shock treatment and/or increasing hot water temperatures, and changing of water mixers, shower heads and hoses if they were in poor condition. Point-of-use microbiological grade filters were installed in the hospital. Altogether 76 control samples were collected from sites where Legionella concentrations exceeded 1,000 cfu/L: six of nine Legionella -positive homes (34 samples), the hospital (34 samples) and the hotel (eight samples). The hot water temperatures at Legionella -positive sites increased with each control round from a mean of 52.8 to 59.6 °C ( ). Legionella concentrations decreased below the detection limit in three homes by the first control round, and in two homes and in the hotel by the second. In one building, the counts in the apartment dropped by the first control round but the shared sauna remained positive until disinfection measures and four rounds of control sampling. Disinfection (chlorination) was performed in two residential buildings where the Legionella concentration remained > 1,000 cfu/L in cold water control samples. The water company used surface water with free chlorine treatment; however, it was known that the chlorine levels were low at the distal sites in residential areas several kilometres away from the plant. A change to UV treatment combined with monochloramine treatment had already been planned and was carried out in summer 2021.
To monitor the level of contamination and the effects of control measures in the hospital, we conducted five additional sampling rounds to cover the three wards with cases and also other wards located in different buildings, including 41 sample points with altogether 60 samples and two to four control rounds per ward. Hot water temperatures varied widely between the different sample points in the first sampling round (mean: 54 °C; range: 47.2−60.2 °C; 1 min), after which temperatures increased (mean: 57.3 °C; range: 49.4−62.4 °C; 1 min), and finally no Legionella was found. The decision to install a chlorination system was made to prevent future contamination. Permanent recommendations included a request for regular flushing, elevating hot water temperatures above 55 °C, and overall enhanced maintenance and monitoring of the water systems, especially in high-risk buildings such as healthcare settings.
We monitored the effect of flushing on temperatures by using all obtained water temperature measurements, including 305 hot water measurements (103 sample points) and 158 cold water measurements (54 sample points). The effect of flushing on temperatures was clear: the mean temperature dropped 6.1 °C (from 15.4 to 9.3 °C) for cold water and increased 5.8 °C (from 50.7 to 56.5 °C) for hot water after flushing for 2−3 min.
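The flushing effect reported above is simply a mean of paired before/after readings; a toy sketch of that summary (with invented temperatures, not the measured data) is shown below.

```python
# Toy summary of paired temperature readings taken at tap opening and
# after 2-3 min of flushing (values invented for illustration).
hot_pairs = [(49.5, 55.8), (51.0, 57.1), (52.2, 56.9)]    # (start, flushed) in °C
cold_pairs = [(16.0, 9.5), (14.8, 9.0), (15.5, 9.6)]

def mean_change(pairs):
    """Mean of (flushed - start) across all sample points."""
    return sum(after - before for before, after in pairs) / len(pairs)

print(f"hot water mean change:  {mean_change(hot_pairs):+.1f} °C")
print(f"cold water mean change: {mean_change(cold_pairs):+.1f} °C")
```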
We described an LD outbreak in a Finnish city during 2020 and 2021 with few community-acquired cases per year in the past. All LD cases were exposed in different buildings or hospital wards but were linked to each other by time and the common water system. The outbreak was confirmed by WGS: all human and water Lp 1 isolates were identical or very closely related. Enhanced surveillance by interviewing all LD cases and environmental sampling of potential places of exposure was crucial in the detection of this outbreak. As most LD cases were diagnosed by UAG or PCR instead of culture, human isolates were not available for all cases for typing and comparison. However, water isolates from the places of exposure were used to link the cases to the outbreak. The outbreak extended to nine residential buildings and one hospital located in five different parts of the city, served by a common water network. Even though the bacteria were most probably initially spreading through the city water network, the cases were limited to certain buildings where the same Legionella strain was enriched. The most important contributing factors were probably inadequate maintenance measures and low water consumption by users in the buildings involved, as well as the vulnerability of the LD cases. We only investigated sites associated with the LD cases, and as the source of infection was found to be the water system (showers) associated with each case, there was no immediate cause for further sampling. Thus, the extent of contamination in other buildings of the residential areas with cases, in cooling towers and in other parts of the city remains unknown. However, more cases were sought by alerting healthcare professionals and giving public announcements of the outbreak. Typically for Legionella outbreaks , most of those exposed did not become infected. The cases shared risk factors including age and underlying medical conditions. Most also had some physical limitations and reported a relatively low water consumption, enabling stagnant water in the taps, hoses and pipes in the apartments. Showering was the most likely route of infection for all cases. Similar findings pointing out the relevance of host risk factors, with only single or few cases among many exposed residents, have been reported previously . However, our outbreak is exceptional as we had a matching isolate among seemingly sporadic cases and in the hospital cluster. Interestingly, all residential buildings were equally old. Four of them were owned by a single municipal public housing company and all were maintained by private maintenance and service companies. The buildings were typical low-rise buildings in that area with no special features explaining the outbreak. Control measures were implemented rapidly in each building. The maintenance service in the affected hospital received support from another hospital with a history and experience of controlling L. pneumophila serogroup 5 . By implementing the new European Union Drinking Water Directive (2020/2184), which mandates regular Legionella monitoring in high-risk buildings, defined by municipalities in Finland, healthcare-associated infections could be prevented . Legionella concentrations were < 1,000 cfu/L in three homes (Lp 1: 50, 100 and 250 cfu/L), suggesting that lower Legionella concentrations can also be infectious. However, infectivity may also be affected by other factors, such as exposure time, strain virulence and host susceptibility. The concentrations may vary over time.
Here, the interval between the time of diagnosis and sampling varied from 1 to 5 weeks. The majority of the contaminated water systems yielded multiple Legionella species; among them, the concentrations of Lp 1 were often lower than those of other Legionella . Still, all human isolates were Lp 1, which is in line with L. pneumophila being the most virulent Legionella species . It is also noteworthy that not all samples from the identified sources of infection yielded Lp 1. Thus, it was essential that multiple sample points were initially tested and multiple isolates serotyped. In an Italian study, 22% of the hot domestic water samples were Legionella -positive with a 75% rate of positivity for L. pneumophila . In Canada, 33% of domestic water systems among community-acquired LD cases were Legionella -positive but only 14% of the findings matched with human isolates by genotyping . In Finland, the most likely source of infection was identified by environmental sampling for 50–60% of domestic Legionella cases during the last decade, but the patient isolate was available for only around 20% of the cases. The mean water temperatures were largely similar in Legionella -positive and -negative sites. Only a few samples were outside the temperature recommendations for hot and cold water, suggesting that the technical settings, house pipelines, thermal insulation and the adjusted level of heating were not the main cause of the outbreak. This is further supported by the fact that there were only single cases in each building. We think that the regular use of plumbing fixtures and any water-containing equipment is as important as maintaining the recommended temperatures to control Legionella. Recommendations for hot water in Finland are > 50 °C for buildings built before 2007 and > 55 °C for those built or renovated after 2007. We isolated Legionella despite temperatures > 50 °C, thus higher temperatures (even 60–65 °C) may be needed, especially knowing that Legionella is able to grow inside thermotolerant amoebae . Legionella was also found in lower concentrations in cold water taps at temperatures where Legionella should not grow but can remain viable. The initial control measures were effective only in three homes; at all other sites, more control measures and further sampling were needed. Two sites required disinfection using biocides because Legionella persisted in cold water. In Germany, there was no clear correlation between cold water temperature and Legionella contamination rate in healthcare facilities . A high positivity rate (35%) among cold water samples < 20 °C suggested that no temperature threshold can be defined below which cold water would be considered free of Legionella . We found some exceptionally high Legionella concentrations. When a water system is heavily contaminated, a total eradication of Legionella is seldom possible, but lowering the concentration can be achieved by continuous technical control measures . A system may be colonised by a long-term predominant strain causing LD cases infrequently, while the sporadic strains are of less pathogenetic relevance . In France, a long-term Lp 1 colonisation of the city water network was reported, but unlike in our situation, the particular clone was not connected to LD cases . Long-term colonisation has been reported especially for hospital buildings but also for municipal water systems [ - ]. The water temperatures would have favoured the growth of Legionella in the apartment of one of the nosocomial cases.
However, all samples were negative, confirming that the water mains had not been heavily contaminated. There was also no known incident in the water plant explaining any sudden increase in nutrients, decrease in disinfection efficiency or any other change in the process that would cause the increase in Legionella . As reported earlier, a change in primary disinfectant can cause an interruption in the corrosion control, a decrease in disinfectant residuals and an increase in lead in the distribution system, and create favourable conditions for Legionella . Similarly, Legionella clusters have been connected to a switch in raw water source from non-corrosive water to corrosive river water . Coronavirus disease (COVID-19) pandemic closures have been shown to affect water consumption . In the affected city, water usage decreased by 2% in the city centre and increased by 2% in the residential areas between March and December 2020. Legionella concentrations did not always decrease immediately after the control measures, highlighting the difficulties in defining the appropriate number and level of actions and the need for control samples. The mean hot water temperatures rose with the sequential sampling rounds. If taps are not used regularly, the water gradually reaches room temperature. As shown in our study, water temperature had already changed by 4 °C after 1 min of flushing and by 6 °C after 2–3 min of flushing, thereby improving water quality by removing residual water from the pipes. Actions need to be tailored to a site’s specific situation, while considering safety issues such as the risk of skin burns during heat shock. To ensure proper management, guidance and supervision of the investigation and control measures, the outbreak was handled by a multi-professional working group consisting of technical, environmental health and clinical experts from all parties involved, including THL, the city environmental and health authorities, the water company, the regional university hospital, support services and the housing company. The city published two press releases, one on the cases and the environmental findings and the other on prevention methods and recommendations for all residential buildings. Local healthcare professionals were informed about the Legionella outbreak and advised to test for Legionella with a low threshold.
Enhanced surveillance including water samples for single cases combined with WGS for isolates was crucial in detecting and defining this unusual LD outbreak related to the city water network. Inadequate maintenance measures and probably low water consumption, together with the vulnerability of the cases, contributed to the outbreak.
|
Paediatric emergency medicine practice in Nigeria: a narrative review | 46c839e9-160b-4edb-9001-e51cad725f7f | 10022062 | Pediatrics[mh] | The practice of paediatric emergency in low-middle income countries (LMICs), particularly in sub-Saharan Africa (SSA) has remained daunting owing to a lack of skilled manpower, infrastructure, and equipment . Although the special needs of children in SSA are well recognized, adequate response to the needs has not been properly established. This is partly due to inadequacies in the entire health care system attributable to poor healthcare financing, particularly in emergency care. The problem is compounded by substantial gaps in the availability of morbidity and mortality data especially from the rural areas ; which has made it difficult to establish the magnitude of the problem and convince policymakers to make major new investments in paediatric emergency care . For these reasons, providing timely, high-quality care for the initial management of critically ill children in African hospitals remains a challenge . The overall quality of care differs between countries and among hospitals. Services are generally better in tertiary facilities than in secondary or primary care facilities because of better working conditions for health workers; and relatively better availability of basic utilities and equipment. Despite the variations in availability and quality of services among the three levels of healthcare, the mortality rates in the children’s emergency room remain high . More than 50% of deaths recorded in children’s emergency room in resource-limited settings occur within the first 24 h of admission . These deaths are mostly from treatable conditions, and occur partly because of late presentation and inadequate hospital service provision on arrival . Childhood morbidity and mortality could be reduced by provision of standardised emergency care for paediatric emergencies. The standards would ensure that necessary human resource, infrastructure, and equipment are clear to all those that are tasked with the provision of paediatric emergent care. The standards can detail the training and re-training of practitioners in the requisite knowledge and skills; infrastructural or facility development, provision of appropriate equipment in adequate numbers; as well as policy prioritization and adequate funding for sustainability by all stakeholders . In addition, audit of the current standards can highlight areas of deficiency, identify potential targets for process improvement and ultimately lead to improved patient outcomes . Studies in Malawi, Kenya, and Tanzania have shown that improving the systems and training in paediatric emergency care can significantly reduce in-hospital mortality . Recent studies have shown that Nigeria’s health service is far from being optimally designed and prepared to deliver optimal emergency care to its children . Paediatric emergency units of most hospitals in Nigeria are often understaffed and ill-equipped and are run by doctors and nurses who have no formalised training in paediatric emergency care. The situation is worse for hospitals situated in rural areas and privately owned hospitals. We aimed to review the existing structures, organization, and practice, challenges and prospects of paediatric emergency medicine practice in Nigeria; and proffer possible ways of improving this practice in order to bridge the gaps in providing the initial holistic care of children under emergency conditions. 
Despite the gap in training capacity between LMICs and high-income countries, maintaining a baseline standard of care remains an overarching problem not just in LMICs but also in high-income countries . Emergency preparedness of hospitals in most of these countries is deemed suboptimal. In the United States of America, an ED that maintains a baseline level of pediatric resources in keeping with the national guidelines developed by the American Academy of Pediatrics in collaboration with the American College of Emergency Physicians is considered pediatric-ready. Despite the efforts and resources devoted to training and monitoring in paediatric emergency medicine practice in the United States, compliance with the paediatric emergency medicine guidelines is not optimal. In England, UK, 28% of acute hospital trusts were “weak” for children’s emergency services. Compared to high-income countries, paediatric emergency care is among the weakest parts of health systems in low-income countries in both quality and accessibility . Very few facilities in LMICs have dedicated PEDs. Obermeyer and colleagues observed that only 36 (19%) of 192 emergency medicine facilities in LMICs were designated for children . Hospitals in LMICs are significantly under-resourced compared with those in the developed world, with problems such as lower staffing ratios, lack of skills and resources, lack of essential equipment for emergency care and higher acuity of patients . The majority of emergency care in most African countries is still provided by physicians and mid-level practitioners with no formal EM training and little pediatric-specific training . The problem is compounded by lack of basic equipment. Overall, the median availability of functional equipment for resuscitation in emergency settings remains below 50% in some African hospitals . In order to improve the quality of emergency medicine practice, concerted efforts need to be made to maintain a baseline standard of care in emergency settings, particularly in LMICs including Nigeria. This can only be achieved if stakeholders understand the magnitude of the problem.
Infrastructure and organisation
The Nigerian healthcare system is organised into primary, secondary and tertiary healthcare levels. Of the 40,348 operational health facilities in Nigeria, 85.1% are primary, 14.5% secondary and 0.4% tertiary . Secondary and tertiary healthcare facilities are mostly found in urban areas, whereas rural areas are predominantly served by primary health care (PHC) facilities . The imbalance in the structural and geographical distribution of hospitals and health centers between urban and rural areas gives rise to inequitable provision of health services, including emergency services. Given the inequality in facility distribution and service delivery, paediatric emergency medicine practice in the primary and secondary care centers is grossly suboptimal. This has brought undue pressure on the tertiary care centers, which are also not optimally ready for quality service. Structural facilities and equipment in the children’s emergency wards in almost all the tertiary centres in Nigeria are grossly inadequate . These range from emergency rooms not well structured for easy access and workflow, absence of triage and resuscitation areas, a side laboratory, a dedicated pharmacy and radiological facility, to absent or at best broken-down equipment. Due to limited space, emergency treatment areas in hospitals are often crowded and hamper patient flow in and out of the emergency room.
The triage areas, where present, are usually not spacious enough. Studies have noted that hospitals in less developed countries lacked an adequate system for triage, and most emergency treatment areas were poorly organized . Initial patient assessment in these circumstances is often inadequate and treatment is delayed. Molyneux made a very pertinent suggestion for planners to consider the way patients are received and moved through the department to obtain different aspects of care, and the best way to improve timely patient care and supervision without causing bottlenecks or confusion . Ideally, patients should enter through one doorway and exit through another, with services arranged in sequence of use to avoid counter flows of patients across the corridors . The problem of space in paediatric emergency rooms of hospitals in Nigeria is compounded by the absence of high dependency care areas for managing critically ill children. Dedicated paediatric intensive care units are absent in most hospitals in Nigeria. Thus, children who need intensive care admission remain in the emergency room, clogging the already limited space. A pioneer Paediatric Intensive Care Unit (PICU) project was recently initiated at the University of Nigeria Teaching Hospital, Ituku-Ozalla Enugu, south-east Nigeria, by alumni of the University of Nigeria, College of Medicine . A similar PICU project has also been established at the University College Hospital Ibadan. These are laudable projects enabling enhanced care of children who need ICU admission. It is hoped that other hospitals in Nigeria will emulate these great initiatives.
Human resource and quality of service
Capacity development of all cadres of staff across all levels of health care is a requisite for delivery of quality health service. Large disparities in the distribution of the health workforce and skills exist between rural and urban areas in Nigeria. Poor attraction and retention of health workers in the rural areas have resulted in inequitable distribution of health workers and access to quality health services at the primary health centers . The PHCs are run by staff who lack the necessary skills to resuscitate a child under emergency settings. Although the secondary and tertiary care hospitals are better staffed, a lack of the basic skills needed for efficient emergency service delivery is very apparent among healthcare professionals in these hospitals. Only 55.6% of the doctors and none of the nurses in the study by Paul and Edelu had the requisite certification in basic emergency skills . The situation has remained the same given the recent report by Enyuma et al. of an overall deficiency in emergency care preparedness amongst PEDs in tertiary care facilities in Nigeria. The authors observed that none of the paediatricians heading the PEDs had a subspecialist/fellowship qualification in emergency medicine, and only 11.8% of the nurses had any certification in emergency care skills . This buttresses the fact that Nigeria needs to take skill acquisition training for all health care providers seriously, especially those who work in emergency settings. Paediatric Emergency Medicine is still an evolving discipline in Nigeria. Hospitals in Nigeria fall short of the recommendations of the existing International Federation for Emergency Medicine (IFEM) and African Federation for Emergency Medicine (AFEM) guidelines/framework for running emergency departments.
For instance, the IFEM recommends that emergency departments be run by healthcare staff who are appropriately trained and qualified to deliver emergency care, and suggests early involvement of senior doctors with specific expertise in EM for their ability to resuscitate and stabilise critical patients and to facilitate early referral to appropriate specialties. On average, less than 50% of paediatric emergency units are headed by a dedicated Consultant Paediatrician . The rest are run by different paediatricians on a rotating roster. Despite the engagement in interactive, scenario-based courses such as the Emergency Triage Assessment and Treatment (ETAT) developed by the World Health Organisation (WHO) , and the Emergency Care Assessment Tool (ECAT) developed by AFEM , to improve paediatric emergency care, health workers in Nigeria do not have access to the training and practice that they need and desire. Paul and Edelu reported that only 55.6% of the staff (doctors and nurses) in emergency units in Nigeria had the skills for emergency triage, and < 50% of them had the skills to use either a manual or an automated external defibrillator (AED) . Poor skill in emergency resuscitation among healthcare professionals is a recurring problem that is not peculiar to Nigeria but has also been identified in other African countries. In a recent study in South Africa, about 20% of the doctors had never performed cardiopulmonary resuscitation (CPR) in paediatric patients, and up to 35% of them did not feel confident performing CPR in children. A very important contributing factor to the problem is the non-retention of trained staff in their areas of training in many public hospitals. This is often a major challenge among the nursing cadre in most facilities in Nigeria. It is difficult to retain a nurse in the emergency unit for a long period of time, even after they have acquired a satisfactory level of skills to improve practice.
Availability of emergency medicines, equipment and utilities
Emergency medicines and equipment are not readily available in many emergency units around the country. Enyuma et al. reported mean medication and equipment performance scores of 50.7% and 43.9% respectively for 34 health facilities in northern and southern Nigeria. The Southern region had a significantly higher equipment score (47.6%) than the Northern region (38.9%) . Lack of basic equipment (including defibrillators) has been reported not just in Nigeria , but also in other African countries . In terms of utilities, less than 50% of the facilities in Nigeria had regular running water . Minimal improvement in water reticulation and supply has been noted in some hospitals.
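As background to the ETAT training discussed under human resource above, the sketch below caricatures the Emergency-Priority-Queue logic that the course teaches. The sign lists are deliberately abbreviated examples, not the full WHO criteria, and a few lines of code are of course no substitute for hands-on training.

```python
# Highly abbreviated illustration of ETAT-style triage (Emergency /
# Priority / Queue). Sign lists are incomplete examples only.
EMERGENCY_SIGNS = {
    "obstructed_breathing", "severe_respiratory_distress",
    "central_cyanosis", "shock", "coma", "convulsing", "severe_dehydration",
}
PRIORITY_SIGNS = {
    "tiny_baby", "high_temperature", "trauma", "severe_pallor",
    "poisoning", "severe_pain", "lethargy_or_restlessness",
    "urgent_referral", "severe_malnutrition", "oedema", "major_burns",
}

def etat_triage(signs):
    """Return the triage category for a set of observed signs."""
    if signs & EMERGENCY_SIGNS:
        return "EMERGENCY: start treatment immediately"
    if signs & PRIORITY_SIGNS:
        return "PRIORITY: rapid assessment, front of the queue"
    return "QUEUE: non-urgent, assess in turn"

print(etat_triage({"shock"}))          # EMERGENCY
print(etat_triage({"severe_pallor"}))  # PRIORITY
print(etat_triage(set()))              # QUEUE
```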
This has brought undue pressures on the tertiary care centers which are also not optimally ready for quality service. Structural facilities and equipment in the children’s emergency wards in almost all the tertiary centres in Nigeria are grossly inadequate . These range from emergency rooms not well structured for easy access and workflow, absence of triage and resuscitation areas, side laboratory, dedicated pharmacy and radiological facility, to absence or at best broken-down equipment. Due to limited spaces, emergency treatment areas in hospitals are often crowded and hamper patient flow in and out of the emergency room. The triage areas where present are usually not spacious enough. Studies have noted that hospitals in less developed countries lacked an adequate system for triage; and most emergency treatment areas were poorly organized . Initial patient assessment in this circumstance was often inadequate and treatments are delayed. Molyneux made a very pertinent suggestion for planners to consider the way patients are received and moved through the department to obtain different aspects of care and the best way to improve timely patient care and supervision without causing bottlenecks or confusion . Ideally, patients should enter through one doorway and exit through another, with services arranged in sequence of use to avoid counter flows of patients across the corridors . The problem of space in paediatric emergency rooms of hospitals in Nigeria is compounded by the absence of high dependency care areas for managing critically ill children. Dedicated paediatric intensive care units are absent in most hospitals in Nigeria. Thus children who need intensive care admission would remain in the emergency room clogging the already limited space. A pioneer Paediatric Intensive Care Unit (PICU) project was recently initiated at the University of Nigeria Teaching Hospital, Ituku-Ozalla Enugu, south-east, Nigeria by alumnus of the University of Nigeria, College of Medicine . Similar PICU project has also been established at the University College Hospital Ibadan. These are laudable projects enabling enhanced care of children who need ICU admission. It is hoped that other hospitals in Nigeria will emulate these great initiatives. Capacity development of all cadres of staff across all levels of health care is a requisite for delivery of quality health service. Large disparities in the distribution of the health workforce and skills exist between rural and urban areas in Nigeria. Poor attraction and retention of health workers in the rural areas have resulted in inequitable distribution of health workers and access to quality health services at the primary health centers . The PHCs are run by staffs that lack the necessary skills to resuscitate a child under emergency settings. Although the secondary and tertiary care hospitals are better staffed, lack of basic skills needed for efficient emergency service delivery are very apparent among healthcare professionals in these hospitals. Only 55.6% of the doctors and none of the nurses in the study by Paul and Edelu had the requisite certification in basic emergency skills . The situation has remained the same given the recent reports by Enyuma et al., of an overall deficiency in emergency care preparedness amongst PEDs in tertiary care facilities in Nigeria. 
The authors observed that none of the paediatricians heading the PEDs had a subspecialist/fellowship qualification in emergency medicine, and only 11.8% of the nurses had any certification in emergency care skills . This buttresses the fact that Nigeria needs to take skill acquisition training for all healthcare providers seriously, especially those who work in emergency settings. Paediatric Emergency Medicine is still an evolving discipline in Nigeria, and hospitals in Nigeria fall short of the recommendations of the existing International Federation for Emergency Medicine (IFEM) and African Federation for Emergency Medicine (AFEM) guidelines and frameworks for running emergency departments. Healthcare providers' experiences with paediatric emergency medicine practice in Nigeria support the need for a total reorientation and revamping of emergency medicine practice to optimize service delivery .
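The triage gap described above is, at its core, a gap in applying a simple, teachable decision rule. As a rough illustration of the kind of logic that ETAT-style courses drill (sorting arriving children into emergency, priority and non-urgent queue categories), the Python sketch below uses abbreviated, hypothetical sign lists; it is not a clinical tool and does not reproduce the full WHO criteria.

```python
# Illustrative ETAT-style triage logic. The sign lists are abbreviated,
# hypothetical placeholders, not the full WHO ETAT criteria.

EMERGENCY_SIGNS = {  # "E": start treatment immediately
    "obstructed_breathing", "central_cyanosis", "severe_respiratory_distress",
    "signs_of_shock", "coma", "convulsing", "severe_dehydration",
}
PRIORITY_SIGNS = {   # "P": move to the front of the queue
    "tiny_infant", "very_high_temperature", "severe_pallor",
    "severe_malnutrition", "respiratory_distress", "lethargy_or_irritability",
}

def triage(observed_signs: set) -> str:
    """Return 'E' (emergency), 'P' (priority) or 'Q' (non-urgent queue)."""
    if observed_signs & EMERGENCY_SIGNS:
        return "E"
    if observed_signs & PRIORITY_SIGNS:
        return "P"
    return "Q"

print(triage({"convulsing"}))      # E: treat now
print(triage({"severe_pallor"}))   # P: seen ahead of the queue
print(triage({"mild_cough"}))      # Q: routine queue
```

The value of formalising the rule this way is that, once drilled, it takes seconds to apply on arrival, which is precisely the first-hour step that the studies cited above report as missing or delayed.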
Despite the efforts being made by concerned stakeholders (individuals, non-governmental agencies/institutions, policy-makers, and the federal government of Nigeria) to improve emergency service delivery in the country, a lot still needs to be done to achieve the desired goal. The ultimate goal is to institute a system and framework that will deliver a sustainable, high-quality emergency service to Nigerian children. This can only be achieved if the numerous challenges are promptly addressed. While poor staff training, insufficient equipment, and the lack of local disease-specific guidelines have been identified as the key challenges, other challenges are highlighted here to direct policy makers and stakeholders to specific areas that require urgent attention. First-hand accounts of the challenges of emergency service delivery in Nigeria are well captured in published studies . Numerous challenges were identified, including poor coordination and collaboration among essential stakeholders such as governmental and non-governmental agencies and institutions relevant to emergency care delivery. The challenges include: Management in the community Emergency care for the sick child starts in the community, with care-seeking by the parents and with community health workers, as first responders, being able to recognise severe illness . The pre-hospital management of sick children at home and their subsequent transportation to the appropriate health facility greatly impact treatment outcomes for such children. Mothers especially play a key role in identifying signs of illness in their children and giving the initial home treatment. While most mothers in Nigeria, irrespective of their level of education, can identify a sick child, many delay presentation to health facilities and are unable to correctly institute the necessary home treatment. Abdulraheem and Parakoyi observed that mothers of sick children in a rural Nigerian setting used home remedies in up to 69.6% of reported episodes of their children's illness. The use of health facilities was consistently low (5.7–9.9%), and appropriate care was sought by only 25.3% of these mothers . On the other hand, community health workers play vital roles in managing sick children at the community level. Their ability to detect signs of serious illness in children and promptly refer to the next level of health care substantially determines the health outcome. Community health workers are often trained to recognise danger signs in line with the Integrated Management of Childhood Illness (IMCI) guidelines, but they lack the appropriate skills to intervene; as such, they are taught to refer to the next level of healthcare. This becomes problematic where a sick child requires an immediate life-saving intervention and the healthcare worker lacks the skills to provide it. Referral and transport of sick children A well-coordinated referral and transport system is key and desirable in the Nigerian health sector. Unfortunately, referral systems and between-facility transport of patients in Nigeria are still rudimentary and vulnerable. None of the states in Nigeria has functional emergency medical services (EMS) open to the public, because of the high out-of-pocket expenditure associated with such services. Emergency medical services are virtually non-existent at the primary and secondary care levels.
It is therefore quite challenging to transport sick children from homes, health centres and district hospitals, particularly those situated in remote areas, to the nearest tertiary health facility or referral hospital for treatment. While most tertiary hospitals in Nigeria have ambulance vehicles, the majority of these ambulances are ill-equipped and non-functional, limiting the capacity to transport patients in and out of the hospitals. The situation is made worse by the poor state of the existing road networks in Nigeria, which makes road ambulance services inefficient. Air ambulance services are limited to a few corporate organisations such as oil companies. Besides the limited availability of vehicle and air ambulances, training of paramedics for efficient transfer of critically ill children between facilities is largely limited and needs to be rejuvenated. In-hospital care The factors accounting for the delivery of sub-optimal in-hospital care to paediatric patients are multi-pronged. One of the major challenges to the timely care of sick children is the disproportionate ratio of doctors and nurses to patients . The documented numbers of doctors (7 to 22) and nurses (10 to 24), with a nurse-to-bed ratio of 1:3, in children's emergency units in Nigeria are grossly inadequate . This ultimately results in longer waiting times and sometimes poorer attention from healthcare providers, increasing the risk of medical errors. Low workforce numbers and poor distribution of qualified professionals in hospitals are general problems in less developed countries . These constraints make the provision of quality health care challenging in these countries. The problem is compounded by the fact that doctors and nurses in district and teaching hospitals in these countries, including Nigeria, have inadequate knowledge of guidelines and reported practice for managing important childhood illnesses . In addition to inadequate numbers of doctors and nurses in the emergency room, the lack of a skilled workforce has led to poor quality of services in paediatric emergency units in Nigerian hospitals. Policy prioritization, implementation and funding of health service Due to poor implementation of health policies and lean healthcare funding, most paediatric emergency units in Nigeria struggle to maintain efficient services. Paediatric emergency physicians, under the umbrella bodies of the Society of Emergency Practitioners of Nigeria and the Paediatric Association of Nigeria, have continued to advocate for improved paediatric emergency services in the country. There is an obvious disconnect between the various tiers of government in health system governance in Nigeria. The health system challenges and solutions identified by doctors who work in the emergency room during a focus group discussion centered on the functions of the government and its responsibility in facilitating healthcare access and financing . Health care financing in Nigeria is poor and undermines the desire to achieve universal health coverage for the Nigerian populace, particularly children. Presently, out-of-pocket expenditure accounts for over 70% of national spending on health , due to poor implementation of Nigeria's health insurance scheme. Almost 100% of emergency services must be paid for at the time they are provided , and caregivers are required to purchase every medicine and material required for resuscitation and stabilization.
For example, caregivers are required to pay for blood transfusion services before these services are provided. Failure to pay for these services ultimately affects the timeline for the intervention and the quality of care. In order to give sick children the opportunity to receive appropriate treatment within the first hour of presentation, an effective health insurance scheme at all levels of care must be instituted. Currently, Nigeria's health insurance scheme is not functional and needs to be revamped. The identified challenges, and practical ways of surmounting them, have been summarised in Table .
Policy prioritization and funding of health service There is often a culture of ignorance of, or acceptance of, poorer standards of care by health workers, with the ardent hope that policy makers will see the urgent need to totally revamp the health systems in Nigeria in the future. Indeed, improvement in paediatric emergency services in Nigeria strongly depends on effective policy development, prioritization, and implementation. System-wide paediatric emergency care planning, preparedness, coordination, and funding are key to establishing minimum standards of care in paediatric emergency care. The pre-hospital system needs improvement, and an emergency management system must be carefully planned with the involvement of the relevant national ministries and sub-national health authorities . The health care service and referral system among the primary, secondary and tertiary levels of care in Nigeria is often uncoordinated . Thus, services in many tertiary hospitals have usurped those of the primary and secondary health facilities. Services within the hospitals are poorly coordinated, such that paediatric departments struggle for survival where services are, to a great extent, driven by adult-oriented policies and regulations. The situation can only be improved with the development and implementation of goal-oriented policies and strategic frameworks that cut across both adults and children. Relevant organisations such as the Paediatric Association of Nigeria, which are advocates of child health care in Nigeria, should influence policy formulation and implementation in paediatric emergency services. Improved health care financing is desirable. Capacity development Training of health workers in all health care facilities, including those in rural communities, is an integral part of child survival strategies globally. Regular paediatric life support training for emergency practitioners at primary, secondary and tertiary care facilities will enhance child survival at every encounter. There is a need to bolster paediatric emergency medicine practice through education and training of different cadres of hospital staff in paediatric emergency care to ensure better outcomes . Hence, establishing paediatric emergency medicine training programmes for physicians, nurses, and pre-hospital personnel becomes imperative. Introducing a well-designed, skill-based paediatric emergency medicine learning programme into the various medical curricula in Nigerian universities may be the best approach to lay a solid foundation for improved emergency service delivery in Nigeria. This will go a long way towards sharpening students' skills and preparedness for emergency medicine practice in the future. In 2011, the IFEM developed a model curriculum for emergency medicine specialist training; this document defined the basic minimum standards for specialist trainees in emergency medicine . Subsequently, the Paediatric Emergency Medicine Special Interest Group (PEMSIG) of the IFEM produced a document applicable on a global level, which delineates valuable practical standards for the care of children in emergency settings . This document recognizes the varying challenges inherent in different parts of the world, including differences in patient load, burden of disease, staffing, infrastructure, and access to education in paediatric emergency care, equipment, and medications.
Similarly, the African Federation of Emergency Medicine (AFEM) developed a curriculum tailored for paediatric emergency medicine training in Africa . Using these resourceful documents, relevant stakeholders involved in paediatric emergency care in Nigeria should, as a matter of urgency, pursue the agenda to begin a subspecialist training programme in Nigeria to bring about the long-awaited change in paediatric emergency practice in the country. The quality of training could be overseen by relevant bodies such as the postgraduate medical colleges of Nigeria to maintain the required standards for such training. While plans to start such training programmes have already been initiated, there is a need to encourage all healthcare workers to seek opportunities for training where they exist. Improving standard of care The quality of service can be enhanced by ensuring that minimum standards for emergency care are maintained in all hospitals in Nigeria. The PEMSIG consensus document assists hospitals around the world in defining minimum standards of care for children aged 0–18 years in the emergency department . This comprehensive document could serve as a guide to improve services in Nigeria within the local context. There is a need to develop standard treatment protocols and provide the basic equipment that will enhance the quality of service. To help emergency practitioners in Africa identify gaps in the quality of care delivery in their various hospitals, a group of experts working in African settings derived a simple, practice-based quality assessment tool (PBT) for resource-limited settings, aimed at improving the management of sentinel emergency presentations in children . The PBT is essentially a list of actions, including core skills in the initial assessment and management of an ill child in an emergency setting within the first hour of care. The absence of these actions in a hospital reflects a modifiable gap in the quality of care delivery. Like the PEMSIG and AFEM guidelines for maintaining quality of care in paediatric emergency settings, the PBT may be used to assess the availability of minimum expectations for care in centers where resources are very limited. The PBT also helps to identify both individual and collective needs for training and re-training, and to measure the impact of a change in practice following an education or policy intervention within a department. A useful approach to improving paediatric medical systems identified by Khan and colleagues includes the establishment of a coordinated approach to patient care and increased inter-departmental cooperation and collaboration within hospitals . In a hospital in Lilongwe, Malawi, simple, inexpensive interventions, such as posting senior doctors to supervise paediatric services in under-five clinics, instituting a formal triage process that improved patient flow, and treating and stabilizing patients before transfer to the inpatient ward, improved paediatric emergency care and decreased hospital mortality rates . Such simple but effective interventions can be instituted in paediatric departments in Nigeria. Regular training of staff in skills such as Paediatric Basic and Advanced Life Support is imperative in order to bridge the existing training gaps and improve overall practice.
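Because the PBT described above is essentially a checklist of first-hour actions, a department can audit itself by recording which actions it can reliably perform and flagging the rest as modifiable gaps. The sketch below shows this bookkeeping in Python with hypothetical, abbreviated item names; the published tool should be consulted for the actual action list.

```python
# Minimal sketch of a PBT-style self-audit: score the share of first-hour
# actions a unit can perform and list the gaps. Item names are hypothetical
# placeholders, not the published tool's wording.

FIRST_HOUR_ACTIONS = [
    "triage_on_arrival", "airway_assessment", "oxygen_available",
    "iv_or_io_access", "blood_glucose_check", "weight_based_drug_dosing",
    "fluid_bolus_protocol", "documented_referral_pathway",
]

def audit(unit_capabilities):
    """Return (% of actions covered, list of missing actions)."""
    missing = [a for a in FIRST_HOUR_ACTIONS if a not in unit_capabilities]
    covered = len(FIRST_HOUR_ACTIONS) - len(missing)
    return 100.0 * covered / len(FIRST_HOUR_ACTIONS), missing

score, gaps = audit({"triage_on_arrival", "oxygen_available", "iv_or_io_access"})
print(f"Coverage: {score:.0f}%")   # Coverage: 38%
print("Modifiable gaps:", gaps)
```

Repeating the same audit after a training or policy intervention provides the before-and-after comparison that the PBT is designed to support.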
In conclusion, paediatric emergency medicine is a critical aspect of paediatrics that impacts child survival and morbidity and mortality rates in Nigeria and other LMICs. To this end, concerted efforts by all stakeholders, including paediatricians, governmental and non-governmental organizations, and hospital administrators, are needed to drive the progress of paediatric emergency practice in Nigeria. Important aspects of intervention to improve services include capacity development, improved funding of paediatric practice, and provision of basic equipment for emergency care. There should be equitable redeployment of the available resources to the major areas of need in the hospitals to optimize service delivery.
Plasma phospho-tau in Alzheimer's disease: towards diagnostic and therapeutic trial applications
The classical AD biomarkers that characterize the disease in the brain and CSF – Aβ, p-tau and t-tau – have also been described in blood (for recent updates see ). This review provides a short update on plasma p-tau, the latest addition to the plasma biomarker toolbox. Several new plasma p-tau methods with robust technical, clinical and prognostic performances have recently been described by independent academic and pharmaceutical research laboratories. Novel p-tau biomarkers in CSF In the AD context, p-tau biomarkers that work in blood must also have high (if not better) diagnostic and predictive performances in CSF, given the close contact of the CSF with the brain parenchyma and its role as a sink for brain extracellular solutes . P-tau biomarker performance in CSF has been extremely important to the analytical and clinical validation of plasma p-tau assays. P-tau181 is the most widely characterized tau phosphorylation site in CSF, and biomarkers targeting this epitope are currently used in clinical practice . Nonetheless, several other p-tau biomarkers have been described recently. For example, while both CSF p-tau181 and p-tau231 are well-established indicators of ongoing tau pathology, pathological phosphorylation at threonine-231 appears to be observed earlier than at threonine-181 . This observation is useful for biomarker development aimed at detecting AD at very early stages, prior to symptom onset . Recent studies have also shown that p-tau217 may be more sensitive for familial and sporadic AD than p-tau181 . Furthermore, the most recent studies showed that standard immunoassays that target phosphorylated tau in its mid-region are outperformed by those that capture tau via its N-terminal-to-mid-region peptides/fragments, especially in the preclinical stage . Note that tau is truncated at several defined epitopes . N-terminal-directed p-tau181 and p-tau217 assays differentiated Aβ+ AD dementia from control groups with much greater accuracy and larger fold changes than mid-region p-tau181 . Moreover, fold changes in AD versus control groups were highest for p-tau217, suggesting a superior dynamic range over the aforementioned epitopes . Nonetheless, p-tau231 shows the strongest topographical associations with the earliest changes in Aβ-PET uptake, ahead of p-tau217 and p-tau181 , in agreement with neuropathological evidence . More recently, the novel biomarker p-tau235, which becomes abnormal mostly in those already positive for p-tau231, has been described as a potential staging biomarker . Moreover, an assay for tau truncated at amino acid 368 shows a strong correlation with tau-PET , while the concentration of tau species truncated at amino acid 224 also increases according to neuropathological stage . CSF tau fragments starting from amino acid 243 have also been shown to associate with tau-PET and could thus be a marker of soluble tau aggregates . Furthermore, brain-derived tau, an assay capturing central nervous system tau released into blood, demonstrates specificity to AD and might reflect neurodegeneration due to AD . The foregoing discussion shows that CSF p-tau biomarkers have proven highly beneficial for the prognosis, diagnosis and staging of AD. However, the limitations highlighted above for CSF markers apply to them as well, making the transition to blood-based p-tau markers much more desirable. It is important to note that it is incorrect to refer to the different p-tau forms or epitopes as "isoforms", as done in some recent publications: isoforms denote splice variants of a gene, not phosphorylation sites in the resulting protein.
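The fold-change comparisons cited in this section reduce to a simple calculation: the ratio of a group's median (or mean) biomarker concentration to that of the biomarker-negative control group. A minimal sketch with invented concentrations (not data from any cited study) shows why a larger fold change implies a wider dynamic range for separating groups:

```python
# Fold change of median biomarker levels in AD cases vs controls, the summary
# statistic behind the assay comparisons above. All values are invented
# placeholders (pg/mL), not data from any cited study.
from statistics import median

controls = {"p-tau181": [8, 10, 12], "p-tau217": [0.2, 0.3, 0.4], "p-tau231": [9, 11, 14]}
ad_cases = {"p-tau181": [25, 30, 38], "p-tau217": [1.5, 2.2, 3.0], "p-tau231": [30, 36, 45]}

for marker in controls:
    fold = median(ad_cases[marker]) / median(controls[marker])
    print(f"{marker}: {fold:.1f}-fold higher in AD than in controls")
```

On these made-up numbers, p-tau217 shows the largest fold change, mirroring the direction of the published comparisons discussed above.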
There is limited availability of cyclotrons for PET radiotracer synthesis worldwide . Similarly, the expertise and resources for CSF biomarker analyses are limited, with a recent study identifying only 40 actively involved centers, mostly in Europe and North America and a few in Australia and China . Access to, and expertise for, biomarker-supported AD diagnosis and research is therefore acutely limited, excluding most of the global population. Blood, being the most ubiquitous biospecimen for clinical chemistry purposes, provides new opportunities to expand access to, and participation in, AD biomarker research and clinical care . Blood collection procedures do not require the specialized training and facilities that lumbar puncture and PET imaging do. Furthermore, the costs of blood biomarker analyses are estimated to be a fraction of the fees charged for neuroimaging appointments .
In this section, we discuss the diagnostic and pathophysiological performances of plasma p-tau and their associations with Aβ, tau and neurodegenerative pathological changes (Fig. ). Time course of plasma p-tau changes in normal aging and across the AD continuum Plasma p-tau181, p-tau217 and p-tau231 levels show associations with age, although not as strong as those reported for other markers such as NfL . Young adults (~ 20–30 years of age) have lower concentrations of these markers compared with cognitively unimpaired (CU) older adults without biomarker evidence of disease . The levels of plasma p-tau181, p-tau217 and p-tau231 each increase with disease severity and the intensity of Aβ and tau pathologies, with higher rates of change in those with abnormal baseline p-tau concentrations . When analyzed according to diagnostic groups, these increases tend to plateau in individuals in the late AD dementia stage, presumably due to extensive degeneration, resulting in reduced or lost associations with CSF and PET biomarkers . A recent study showed that, contrary to plasma p-tau181 and p-tau231, p-tau217 demonstrated a longitudinal increase in Aβ+ compared with Aβ− individuals, making it a candidate monitoring marker in therapeutic trials . Plasma p-tau levels in individuals with genetic predisposition to AD and other tauopathies Although the vast majority (> 90%) of AD patients have sporadic/late-onset forms of the disease (albeit with strong associations with genetic risk factors such as APOE ε4 carriership), individuals with known genetic predispositions present with familial AD . In familial AD, plasma p-tau levels show increases in pre-symptomatic individuals over a decade before symptom onset . In APP and PSEN1 mutation carriers, plasma p-tau181 and p-tau217 were increased in presymptomatic and symptomatic cases compared with non-carrier controls . Plasma p-tau217 was significantly increased approximately 20 years before the estimated year of onset of MCI, while plasma p-tau181 was increased 16 years before the onset of cognitive impairment (in combined MCI and AD dementia cases) . In a study that directly compared plasma p-tau181 and p-tau217 in familial AD participants, p-tau181 only modestly discriminated symptomatic from presymptomatic carriers, and the increase was only evident when compared with non-carriers . Plasma p-tau217, on the other hand, differentiated biologically-defined AD from patients without diagnostic levels of AD histology .
In adults with Down syndrome (which can be characterized by triplication of the APP gene), plasma p-tau181 and p-tau217 discriminated asymptomatic individuals from each of the prodromal and dementia groups . Since a large proportion of people with Down syndrome develop AD symptomatology and pathology during their lives, evaluating biomarker changes in these individuals provides key insights into the biological progression and staging that are important for understanding the same processes in sporadic cases. In contrast, in participants carrying mutations in the MAPT gene that are known to cause tauopathies other than AD, blood-based p-tau181 levels remained normal, as in healthy controls, and for specific mutations the concentrations appeared to be further decreased compared with normal controls . Increased levels of CSF p-tau217 have also been found in non-AD tauopathy carriers of the MAPT mutation R406W . Plasma p-tau associations with clinical and biological evidence of AD and normal aging Plasma p-tau forms correlate with cognitive capacity assessed with a range of instruments, including the Mini-Mental State Examination, the Montreal Cognitive Assessment and the Clinical Dementia Rating-Sum of Boxes (CDR-SOB) . Baseline plasma p-tau concentrations predict future cognitive decline and progression to MCI and dementia, with performances sometimes paralleling those of CSF p-tau . Increased levels of plasma p-tau are associated with more rapid decline in cognition, cortical thickness, hippocampal volume and glucose metabolism . More recently, a comparative study that evaluated p-tau181, p-tau231, and p-tau217 in a head-to-head manner demonstrated that p-tau217 quantified by IP-MS technology discriminated patients with MCI, and those who progressed to AD dementia, with higher accuracy . Plasma p-tau levels were significantly associated with CSF Aβ42/Aβ40 as well as with Aβ-PET accumulation in early-accumulating brain regions (e.g., precuneus, temporal and superior-frontal areas) in preclinical stages; these associations became stronger and extended to late-accumulating regions (e.g., subcortical structures) later in the disease course . In neuropathology studies, similar positive associations were recorded against various Aβ staining measures such as Thal and CERAD stages and thioflavin stain scores . Furthermore, plasma p-tau concentrations were associated with tau biomarkers (i.e., NFT pathology at postmortem, CSF p-tau or tau-PET) in the AT(N) framework . Plasma p-tau was also associated with brain atrophy, FDG-PET, CSF t-tau and CSF NfL . In Down syndrome, plasma p-tau181 correlated with atrophy and hypometabolism in temporoparietal regions . When more than one p-tau form was included in a study, plasma p-tau217 generally showed stronger associations with brain Aβ deposition than p-tau181 and p-tau231 . Moreover, the IP-MS plasma p-tau217 method performed better than immunoassay-based ones in a recent comparative study . Head-to-head comparisons of plasma p-tau forms Recent studies comparing the performances of plasma p-tau217 and/or p-tau231 with p-tau181 assays from different academic and industrial sources have shown that they have equally robust analytical performances and diagnostic capacities to identify individuals with AD pathology versus biomarker-negative normal controls or non-AD tauopathies (except plasma p-tau231 from ADx NeuroSciences, which may need further improvement), signifying that these biomarkers are ready for widespread clinical and research use.
Plasma p-tau concentrations increase gradually along the sporadic AD continuum in relation to the severity of Aβ pathology and cognitive function, reaching the highest concentrations in Aβ+ participants with MCI and AD dementia . Plasma p-tau181, p-tau217 and p-tau231 each differentiate between Aβ− CU individuals and Aβ+ CU (preclinical AD), Aβ+ MCI, and Aβ+ AD dementia with good accuracy, while improving the clinical characterization of cognitive performance . The largest fold increases (compared with Aβ− CU) are observed for plasma p-tau217, followed by p-tau231 and p-tau181, in agreement with CSF data . To this end, p-tau217 is the most analytically challenging of the p-tau biomarkers to measure, since its levels are very low both in those without Aβ pathology (e.g., Aβ− CU and Aβ− non-AD dementias) and in those with emerging Aβ pathology (including preclinical stages) . From a research perspective, however, plasma p-tau217 and p-tau231 each tend to show earlier and stronger associations with Aβ and tau pathologies than p-tau181 , including correlations with Aβ accumulation in early brain regions and with tau pathology in MCI patients with temporal lobe pathology . Ashton et al. (2021) showed that plasma p-tau231 is a promising biomarker in AD due to its diagnostic accuracy in early stages and its association with incremental levels of brain Aβ pathology even before Aβ-PET abnormality thresholds are reached . Plasma p-tau231 was superior to both plasma p-tau181 and CSF p-tau217 for this purpose . Plasma p-tau217 is likewise a promising candidate biomarker for AD: it appears earlier and has a stronger association with AD pathology than plasma p-tau181 in preclinical AD . Recent data support these arguments and further demonstrate that p-tau231 is the first to increase in preclinical AD (A+T−) . However, p-tau217 becomes abnormal shortly afterwards (at the A+T+ stage), following which it shows faster longitudinal increases compared with p-tau231. Plasma p-tau181 also becomes abnormal in A+T+ individuals, but with less robust longitudinal change versus p-tau217. Therefore, p-tau181 seems to be mostly associated with changes corresponding to widespread amyloidosis. These findings also explain why plasma p-tau181, p-tau217 and p-tau231 all have excellent diagnostic performances for symptomatic AD, whereas p-tau217 and p-tau231 have improved accuracies at the preclinical stages. Together, these findings support the use of specific plasma p-tau biomarkers for staging and tracking AD progression.
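The staging logic just described (p-tau231 becoming abnormal first, with p-tau217 and p-tau181 following around the A+T+ stage) can be made concrete as a coarse decision rule: which markers exceed their cutoffs suggests how far along the continuum a sample sits. The Python sketch below encodes that ordering with entirely hypothetical cutoffs; real cutoffs are assay- and cohort-specific and must be locally validated.

```python
# Coarse, illustrative staging from the reported order of plasma p-tau
# abnormality (p-tau231 earliest; p-tau217/p-tau181 around the A+T+ stage).
# Cutoffs are invented placeholders; real cutoffs are assay-specific.

CUTOFFS = {"p-tau231": 15.0, "p-tau217": 0.6, "p-tau181": 18.0}  # pg/mL, hypothetical

def coarse_stage(levels):
    abnormal = {m for m, v in levels.items() if v > CUTOFFS[m]}
    if not abnormal:
        return "no plasma p-tau evidence of AD pathology"
    if abnormal == {"p-tau231"}:
        return "earliest change (consistent with emerging Aβ pathology)"
    return "later change (consistent with an established A+T+ profile)"

print(coarse_stage({"p-tau231": 10, "p-tau217": 0.3, "p-tau181": 12}))
print(coarse_stage({"p-tau231": 22, "p-tau217": 0.4, "p-tau181": 14}))
print(coarse_stage({"p-tau231": 30, "p-tau217": 1.8, "p-tau181": 25}))
```

This is only a schematic of the ordering reported above, not a validated staging algorithm.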
However, all these plasma p-tau forms become abnormal ahead of tau-PET, suggesting that they can predict the outcome of PET imaging . In line with this, high levels of plasma p-tau are present even in the preclinical stages of AD and can predict changes in tau-PET . Recent studies suggest that longitudinal levels of plasma p-tau217 could reflect the relationship between amyloid pathology and tau deposits, which would make it a suitable biomarker for tracking the progression of both amyloid and tau pathologies. Although plasma p-tau has mostly been validated in cohorts of individuals pre-classified according to PET or CSF biomarker results, a few studies in population-based cohorts categorized solely by clinical diagnosis give a glimpse into its potential use as a pre-screening tool. For example, Simrén et al. showed that plasma p-tau181 is increased in a subset of individuals at the MCI and AD dementia stages and correlates with cognitive impairment and gray matter atrophy. In individuals presenting to the primary-care clinic with suspected cognitive decline and given a preliminary diagnosis without biomarker testing, plasma p-tau181 and p-tau231 discriminated those with cognitive impairment from normal controls; however, the biomarkers were unable to differentiate between those given preliminary diagnoses of MCI or AD . The value of plasma p-tau to differentiate AD from other neurodegenerative diseases Plasma p-tau181, p-tau217 and p-tau231 each distinguished AD from non-AD tauopathies such as frontotemporal dementia, progressive supranuclear palsy and corticobasal degeneration . In studies with postmortem validation, the discriminatory accuracies between Aβ+ AD and Aβ− non-AD cases were as high as > 90%, with plasma p-tau being able to further distinguish between non-AD cases with or without concomitant AD pathology . Separating cognitive impairment due to AD from dementia with Lewy bodies (DLB) is clinically difficult because up to 50% of DLB patients are also thought to have concomitant AD . Plasma p-tau181 levels differentiated between autopsy-confirmed AD and DLB, and DLB patients with AD co-pathology were shown to have higher p-tau concentrations than those without . In DLB patients with a positive CSF Aβ profile, plasma p-tau181 and p-tau231 levels were higher than those of normal controls and of DLB participants with a negative Aβ profile, but lower than those of AD patients, and correlated with cognitive performance . Similarly, plasma p-tau181 and p-tau217 correlated with CSF biomarkers, Aβ-PET and tau-PET in clinically diagnosed DLB patients, suggesting that these biomarkers have the capacity to identify AD co-pathology in DLB . Plasma p-tau versus other biomarkers Plasma p-tau181, p-tau217 and p-tau231 individually performed significantly better than each of APOE ε4 carriership, plasma NfL, t-tau, and the Simoa Aβ42/Aβ40 ratio . When compared against non-phospho-tau blood biomarkers – NfL, the Aβ42/Aβ40 ratio, t-tau and glial fibrillary acidic protein – the plasma p-tau markers were significantly better at differentiating between AD and CU individuals . These results were comparable to those of predictive models incorporating Aβ-PET, age, sex and APOE ε4 carriership .
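Head-to-head statements like "performed significantly better than" are usually summarized with the area under the ROC curve (AUC), i.e., the probability that a randomly chosen case has a higher biomarker value than a randomly chosen control. The self-contained sketch below computes the rank-based AUC on invented values, purely to illustrate the metric:

```python
# Rank-based (Mann-Whitney) AUC: the probability that a random AD case scores
# higher than a random control. All values are invented for illustration.

def auc(cases, controls):
    wins = sum(1.0 if c > k else 0.5 if c == k else 0.0
               for c in cases for k in controls)
    return wins / (len(cases) * len(controls))

ad  = {"p-tau217": [1.8, 2.4, 3.1, 2.0], "NfL": [35, 28, 50, 31]}
ctl = {"p-tau217": [0.3, 0.5, 0.4, 0.9], "NfL": [22, 30, 27, 40]}

for marker in ad:
    print(f"{marker}: AUC = {auc(ad[marker], ctl[marker]):.2f}")
# On these made-up numbers: p-tau217 AUC = 1.00, NfL AUC = 0.75
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation, which is why the published comparisons report plasma p-tau markers with AUCs well above those of the other blood biomarkers.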
Diversity in plasma p-tau cohort validation studies Plasma p-tau studies have so far been performed in research cohorts in Europe and North America, with a few studies from Australia and Asia. The included volunteers in most studies identified as non-Hispanic Whites and were also mostly of high socio-economic status (e.g., highly educated, in high-earning jobs, and living in communities with a high neighborhood index). People living in other neighborhoods and those of other socioeconomic statuses are yet to be studied. Moreover, racial and ethnic diversity in research participation has been minimal. At the time of writing this manuscript, only three studies had included significant numbers of ethnoracially diverse participants : one investigated plasma p-tau181 in relation to amyloid accumulation and AD diagnosis in a Singaporean cohort with a high baseline cerebrovascular burden , while another probed plasma p-tau217 and p-tau181 in a multi-ethnic, community-based cohort in the United States . Furthermore, Schindler et al., studying non-Hispanic White and African-American pairs of older adults with the same demographic characteristics (age, sex, cognition and APOE ε4 genotype), recently demonstrated that the accuracies with which plasma p-tau231 and p-tau181 predict abnormal Aβ-PET and CSF Aβ42/Aβ40 results differ significantly between the two racial groups studied. Another point worth discussing is that most cohorts evaluated so far have been from memory clinics or are clinical research cohorts; population-based studies are missing . A recent study of community-dwelling older adults in a socioeconomically deprived region of southern Pennsylvania showed that plasma p-tau181 (the only p-tau marker assessed) levels were significantly higher in those with cognitive impairment than in those without . Another important factor that should be addressed is the effect of comorbidities; Mielke et al. found that chronic kidney disease is associated with plasma p-tau181 and p-tau217 levels, with an effect size similar to that of the difference between Aβ+ and Aβ− individuals.
Since a large proportion of people with Down syndrome develop AD symptomatology and pathology during their lives, evaluating biomarker changes in these individuals provides key insights into the biological progression and staging that are important for understanding the same processes in sporadic cases. By contrast, in participants carrying mutations in the MAPT gene that are known to cause tauopathies other than AD, blood p-tau181 levels remained similar to those of healthy controls, and for specific mutations the concentrations appeared to be decreased below those of normal controls . Increased levels of CSF p-tau217 have also been found in non-AD tauopathy carriers of the MAPT mutation R406W . Plasma p-tau forms correlate with cognitive capacity assessed with a range of instruments, including the Mini-Mental State Examination, the Montreal Cognitive Assessment and the Clinical Dementia Rating-Sum of Boxes (CDR-SOB) . Baseline plasma p-tau concentrations predict future cognitive decline and progression to MCI and dementia, with performances sometimes paralleling those of CSF p-tau . Increased levels of plasma p-tau associate with more rapid decline in cognition, cortical thickness and glucose metabolism, and with hippocampal atrophy . More recently, a comparative study that evaluated p-tau181, p-tau231 and p-tau217 in a head-to-head manner demonstrated that p-tau217 quantified by IP-MS technology discriminated with higher accuracy between patients with MCI and those who progressed to AD dementia . Plasma p-tau levels associated significantly with CSF Aβ42/Aβ40 as well as with Aβ-PET accumulation in early-accumulating brain regions (e.g., precuneus, temporal and superior-frontal areas) in preclinical stages; these associations became stronger and extended to late-accumulating regions (e.g., subcortical structures) later in the disease course . In neuropathology studies, similar positive associations were recorded against various Aβ staining measures such as Thal, CERAD, and thioflavin stain scores . Furthermore, plasma p-tau concentrations associated with tau biomarkers (i.e., NFT pathology at postmortem, CSF p-tau or tau-PET) in the AT(N) framework . Plasma p-tau also associated with brain atrophy, FDG PET, CSF t-tau and CSF NfL . In Down syndrome, plasma p-tau181 correlated with atrophy and hypometabolism in temporoparietal regions . When more than one p-tau form was included in a study, plasma p-tau217 generally showed stronger associations with brain Aβ deposition than p-tau181 and p-tau231 . Moreover, the IP-MS plasma p-tau217 method performed better than immunoassay-based ones in a recent comparative study . Recent studies comparing the performances of plasma p-tau217 and/or p-tau231 with p-tau181 assays from different academic and industrial sources have shown that they have equally robust analytical performances and diagnostic capacities to identify individuals with AD pathology versus biomarker-negative normal controls or non-AD tauopathies (except plasma p-tau231 from ADx NeuroSciences, which may need further improvement), signifying that these biomarkers are ready for widespread clinical and research use. Plasma p-tau concentrations increase gradually along the sporadic AD continuum in relation to the severity of Aβ pathology and cognitive function, reaching the highest concentrations in Aβ + participants with MCI and AD dementia .
Plasma p-tau181, p-tau217 and p-tau231 each differentiates Aβ- CU individuals from Aβ + CU (preclinical AD), Aβ + MCI, and Aβ + AD dementia with good accuracy, while improving clinical characterization of cognitive performance . The largest fold increases (compared with Aβ- CU) are observed for plasma p-tau217, followed by p-tau231 and p-tau181, in agreement with CSF data . At the same time, p-tau217 is the most analytically challenging of the p-tau biomarkers to measure, since its levels are very low both in those without Aβ pathology (e.g., Aβ- CU and Aβ- non-AD dementias) and in those with emerging Aβ pathology (including preclinical stages) . From a research perspective, however, plasma p-tau217 and p-tau231 each tends to show earlier and stronger associations with Aβ and tau pathologies than p-tau181 , including correlating with Aβ accumulation in early brain regions and with tau pathology in MCI patients with temporal lobe pathology . Ashton et al (2021) showed that plasma p-tau231 is a promising biomarker in AD owing to its diagnostic accuracy in early stages and its association with incremental levels of brain Aβ pathology even before abnormality thresholds of Aβ-PET are reached . Plasma p-tau231 was superior to both plasma p-tau181 and CSF p-tau217 for this purpose . Plasma p-tau217 is likewise a promising candidate biomarker for AD: it becomes abnormal earlier and has a stronger association with AD pathology than plasma p-tau181 in preclinical AD . Recent data support these arguments and further demonstrate that p-tau231 is the first to increase in preclinical AD (A + T-) . However, p-tau217 becomes abnormal shortly after (at the A + T + stage), following which this biomarker shows faster longitudinal increases compared with p-tau231. Plasma p-tau181 also becomes abnormal in A + T + individuals but with less robust longitudinal change versus p-tau217. Therefore, p-tau181 seems to be mostly associated with changes corresponding to widespread amyloidosis. These findings also explain why plasma p-tau181, p-tau217 and p-tau231 all have excellent diagnostic performances for symptomatic AD but p-tau217 and p-tau231 have improved accuracies at the preclinical stages. Together, these findings support the use of specific plasma p-tau biomarkers for staging and tracking AD progression. Notably, all these plasma p-tau forms become abnormal ahead of tau-PET, suggesting that they can predict the outcome of PET imaging . In line with this, high levels of plasma p-tau are present even in preclinical stages of AD and can predict changes in tau-PET . Recent studies suggest that longitudinal levels of plasma p-tau217 could reflect the relation between amyloid pathology and tau deposits, which would make it a suitable biomarker of disease progression for both amyloid and tau pathologies. Although plasma p-tau is mostly validated in cohorts of individuals pre-classified according to PET or CSF biomarker results, a few studies in population-based cohorts categorized solely by clinical diagnosis give a glimpse into potential uses as a pre-screening tool. For example, Simrén et al. showed that plasma p-tau181 is increased in a subset of individuals at the MCI and AD dementia stages and correlates with cognitive impairment and gray matter atrophy.
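The diagnostic accuracies cited throughout this section are usually summarized as the area under the receiver operating characteristic curve (AUC). As a brief illustration of the metric itself, and not a reproduction of any cohort analysis above, the sketch below computes AUC for a synthetic biomarker via its Mann-Whitney U equivalence; the group distributions, fold change and sample sizes are assumed values.

```python
# Illustrative only: synthetic data standing in for a plasma p-tau assay.
# Group labels, fold change, and noise level are assumptions, not study values.
import numpy as np

rng = np.random.default_rng(0)
neg = rng.lognormal(mean=0.0, sigma=0.4, size=200)  # Abeta-negative CU (arbitrary units)
pos = rng.lognormal(mean=0.7, sigma=0.4, size=200)  # Abeta-positive AD, ~2-fold higher median

def auc(neg, pos):
    """Probability that a random case exceeds a random control
    (equivalent to the ROC AUC, via the Mann-Whitney U statistic)."""
    neg, pos = np.asarray(neg), np.asarray(pos)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

fold = np.median(pos) / np.median(neg)
print(f"median fold change: {fold:.2f}, AUC: {auc(neg, pos):.3f}")
```

An AUC of 0.5 corresponds to chance-level discrimination, while values approaching 1.0 correspond to the "excellent" accuracies reported above for symptomatic AD.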
Plasma p-tau biomarkers can, as highlighted above, capture relevant clinico-biological information in AD, with the advantages of less invasive collection and cost-effectiveness in comparison to established CSF and PET biomarkers. These factors, alongside their AD-specific characteristics (in comparison to other biomarkers such as plasma NfL ) and analytical advantages (in comparison to plasma Aβ42/Aβ40, which presents challenges due to low disease-related fold changes and a narrow analytical detection range ), make plasma p-tau biomarkers more scalable candidates for implementation. This newly-achieved technical feasibility of large-scale in vivo detection of AD has several implications for clinical trials, epidemiologic research and public health (Fig. ).

Clinical diagnosis and prognosis

Plasma p-tau has vast potential to support AD diagnosis and prognosis (Fig. ). We propose that these biomarkers be integrated into the existing diagnostic workup at both primary and specialist care hospitals. In the primary care setting, plasma p-tau could be used to pre-screen for AD pathophysiology. When combined with the regular clinical workflow for suspected dementia, altered levels of plasma p-tau in patients with cognitive symptoms would point to potential AD (or at least AD-associated amyloidosis), while those with normal concentrations are further evaluated for non-AD causes of cognitive symptoms. In patients whose clinical profiles fit AD (e.g., those with a family history of the disease and/or a confirmed genetic predisposition for AD) but whose plasma p-tau is in normal ranges, periodic follow-up clinical and blood biomarker assessments (e.g., annually) would be ideal to monitor for longitudinal changes in p-tau and cognitive capacity. All patients showing increased plasma p-tau levels at the primary care clinic should be referred to secondary care for their plasma biomarker results to be compared with more extensive dementia assessment outcomes and, if necessary, confirmed by CSF or PET. Similarly, those with symptoms suspected to be due to non-AD causes would also be verified to be without biomarker evidence of AD by either CSF or PET ATN biomarkers. In patients whose plasma p-tau profiles are confirmed at the specialist clinic, the blood biomarkers would be further useful to follow disease progression over several years. As we continue to learn more about blood biomarkers, and as their analytical robustness and diagnostic accuracies improve, it is feasible to envisage that the need to confirm results with CSF biomarker measures will reduce over time. A future of standalone blood biomarker evaluations may not be too far away.
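The proposed workup can be summarized as a simple triage rule. The sketch below is a minimal, hypothetical encoding of that logic; the cut-point value, units and field names are placeholders, since validated assay-specific thresholds do not yet exist (see Outstanding questions below).

```python
# A minimal sketch of the triage logic proposed above. The cutoff value and the
# field names are hypothetical placeholders, not validated clinical thresholds.
from dataclasses import dataclass

@dataclass
class Patient:
    cognitive_symptoms: bool
    plasma_ptau: float      # pg/mL, assay-dependent units
    ad_risk_profile: bool   # e.g., family history or known genetic predisposition

PTAU_CUTOFF = 2.5  # hypothetical assay-specific cut-point

def primary_care_triage(p: Patient) -> str:
    if not p.cognitive_symptoms:
        return "routine care"
    if p.plasma_ptau >= PTAU_CUTOFF:
        # Elevated p-tau: refer for extended workup; confirm by CSF/PET if needed
        return "refer to secondary care for confirmation (CSF or PET)"
    if p.ad_risk_profile:
        # Normal p-tau but AD-consistent profile: periodic (e.g., annual) re-testing
        return "annual clinical and blood-biomarker follow-up"
    return "evaluate for non-AD causes of cognitive symptoms"

print(primary_care_triage(Patient(True, 3.1, False)))
```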
Clinical trials

The development of clinically effective disease-modifying therapies remains a challenge. Some anti-Aβ immunotherapy candidates have been shown to be biologically effective in clearing amyloid from the brain , while failing to robustly meet pre-specified cognitive endpoints . In 2021, the anti-Aβ drug aducanumab was approved by the United States Food and Drug Administration based on the results of two parallel phase-3 trials, ENGAGE and EMERGE, that had previously been interrupted after futility analyses. However, post-hoc analyses of the group of participants that completed the study revealed that EMERGE had achieved its primary and secondary endpoints while ENGAGE did not, with both trials showing amyloid-related imaging abnormalities as a prevalent side effect . This has generated much debate, since many consider that the statistically significant findings from EMERGE may not be of high clinical relevance . Moreover, other anti-Aβ drugs have demonstrated similar or better performance than aducanumab; for example, the phase 2 donanemab trial achieved its primary endpoint of slowing cognitive decline as measured by the Integrated Alzheimer's Disease Rating Scale . More recently, a phase III trial of the Aβ aggregate-targeting drug lecanemab met its primary endpoint, significantly reducing cognitive decline and markers of brain Aβ deposition in a large multi-center evaluation of early AD, and the drug was approved by the FDA . Plasma biomarker results are expected to follow soon. With the field rapidly moving towards a treatment-response phase, understanding how blood biomarkers can be incorporated into the drug development pipeline is highly needed, given their potential to be used in pre-screening and in monitoring treatment response and safety.

The role of plasma p-tau in trial enrolment

With the development of biomarkers and advances in diagnostic guidelines, the understanding of AD as a clinico-biological entity has directly impacted trial design, with new clinical studies progressively adopting biomarker evidence of AD as an enrollment criterion. Usually, these trials screen eligible participants with PET or CSF biomarkers and then randomize only those participants with abnormal biomarker profiles according to established thresholds. Because trials evaluating anti-Aβ and anti-tau therapies need to assess target engagement throughout the study, PET measures are often preferred as the enrollment biomarker. In this context, plasma p-tau biomarkers may not have the same hierarchical status as CSF and PET, but as they associate with and predict PET results and are relatively inexpensive, accessible and less invasive, they are ideal tools to pre-screen clinically and demographically eligible individuals (Fig. ). Several strategies have been discussed for this purpose, such as applying plasma p-tau to pre-screen individuals for the presence of Aβ pathology and to detect eligible participants who are at greater risk of tau accumulation. The plasma p-tau diagnostic accuracy for Aβ positivity has been widely reported in independent studies, and a recent review article suggested that, by adding a plasma p-tau181 pre-screen before Aβ-PET, up to ~ 60% of the original cost could be saved in comparison to pre-screening with Aβ-PET alone, one of the conventional approaches .
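The ~60% figure is from the cited review, but the arithmetic behind a two-step screening design is easy to reproduce. The sketch below uses invented costs, prevalence and test characteristics purely to show how the saving arises; none of these parameters are taken from the review.

```python
# Back-of-envelope cost model for blood-based pre-screening before Abeta-PET.
# All prices, prevalence, and test characteristics are assumed, illustrative values.
def screening_cost(n, prevalence, pet_cost, blood_cost, sens, spec, prescreen):
    """Expected cost of biomarker-confirming candidates from a pool of n people."""
    if not prescreen:                      # conventional approach: PET for everyone
        return n * pet_cost
    true_pos = n * prevalence * sens       # blood-positive cases passed on to PET
    false_pos = n * (1 - prevalence) * (1 - spec)
    return n * blood_cost + (true_pos + false_pos) * pet_cost

n = 1000
pet_only = screening_cost(n, 0.3, 4000, 100, 1.0, 1.0, prescreen=False)
two_step = screening_cost(n, 0.3, 4000, 100, sens=0.90, spec=0.85, prescreen=True)
print(f"PET-only: ${pet_only:,.0f}; blood pre-screen + PET: ${two_step:,.0f} "
      f"({100 * (1 - two_step / pet_only):.0f}% saved)")
```

Under these assumptions, only blood-positive candidates proceed to PET, which is where the saving comes from; the trade-off is the sensitivity-dependent loss of some eligible participants at the blood step.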
Regarding Aβ and tau accumulation, Moscoso and colleagues first demonstrated that plasma p-tau181 was associated with longitudinal changes in Aβ-PET in early-accumulating regions , and then showed that it was capable of identifying individuals at higher risk of longitudinal tau accumulation, performing particularly well in cognitively unimpaired individuals with a higher Aβ burden , a group of special interest for future pre-symptomatic trials. Similarly, in a recent study by Leuzy et al., the two strongest predictors of tau-PET accumulation were plasma p-tau217 and baseline tau-PET, with the former being the predictor contributing the most in Aβ-positive CU individuals and the latter in Aβ-positive MCIs . Regarding real-life clinical trial applications of such advances, the TRAILBLAZER-2 (Eli Lilly; NCT04437511) donanemab trial for early AD tested a pre-screening strategy in which plasma p-tau181 was assessed before proceeding to Aβ- and tau-PET . Among the subset of 752 candidate participants who had their plasma p-tau181 levels quantified, 63% of those with elevated p-tau181 had subsequent positive scans for both Aβ- and tau-PET. In contrast, only 37% of the 3619 candidates that had been pre-screened directly with Aβ- and tau-PET demonstrated positive scans for the two proteinopathies . Based on the success of the plasma pre-screening approach, the same company has taken a step further for its TRAILBLAZER-3 donanemab trial in a large sample of asymptomatic older adults (NCT05026866) . The study is the first to use plasma p-tau (p-tau217) as the sole enrollment criterion. Participants will have their definitive enrolment decision based on plasma p-tau217 levels "consistent with the presence of amyloid and early-tau pathology", and Aβ-PET is not included in any part of the enrollment workflow nor amongst the secondary outcomes . Given that plasma p-tau analytical standardization has not yet been achieved, and given the absence of validated strategies for interpreting plasma p-tau results, such a strategy could be susceptible to giving anti-Aβ therapy to asymptomatic individuals without Aβ pathology, a problem that a biomarker-based AD definition had been proposed to resolve . However, the higher performance of p-tau217 (in comparison to p-tau181) and the success of the TRAILBLAZER-2 strategy may indicate potential efficacy for such a bold enrollment criterion. Still, it is important to consider that, unlike the previous trials that focused on early AD dementia, TRAILBLAZER-3 is a prevention trial in asymptomatic individuals, a group that presents mild-to-moderate fold changes in plasma p-tau biomarkers – even for p-tau217 – in Aβ + individuals . In summary, plasma p-tau biomarkers demonstrate great potential to be applied in the clinical trial recruitment flowchart, with clear potential for pre-screening, while results from TRAILBLAZER-3 could indicate whether they can be used as a standalone biomarker enrollment criterion.

Monitoring drug activity

While actual target engagement for the main anti-Aβ and anti-tau trials has been determined by PET measures of the respective target, plasma p-tau biomarkers could offer a minimally-invasive option for monitoring the drug activity of new interventions, which is crucial not only for advanced phases but for the whole drug development pipeline (Fig. ).
A blood biomarker capable of monitoring drug activity would allow more frequent time-points than Aβ-PET, with the added potential of remote sampling, and would also indicate, to some extent, what types of treatment response could be seen in the future once the drugs are widely applied in clinical practice. Considering that plasma p-tau associates with both Aβ and tau pathologies , it is in theory possible that blood p-tau biomarkers are able to reflect the activity of either anti-tau or anti-amyloid therapies. In 2021, the first results evaluating plasma p-tau levels during disease-modifying trials were shared with the field. Results from both the ENGAGE and EMERGE aducanumab trials showed that 13–16% reductions in plasma p-tau181 were observed in the high- and low-dose groups in comparison to placebo at treatment week 56 . Moreover, results from the concluded TRAILBLAZER-ALZ donanemab trial, which had more frequent sampling, demonstrated that levels of plasma p-tau217 dropped 24% in comparison to placebo as early as treatment week 12 . In both cases the changes agreed with reductions in Aβ-PET uptake, suggesting that plasma p-tau is associated with brain Aβ accumulation . Interestingly, in TRAILBLAZER-3 the group-level p-tau217 reductions generally persisted even in the subgroup that had discontinued donanemab after 24 weeks due to lack of significant Aβ-PET changes . Nevertheless, it still remains unknown whether plasma p-tau levels would be affected by more effective anti-tau therapies in the clinic. This raises the question of how certain one can be that changes in soluble p-tau are solely due to intervention-mediated removal of Aβ plaques – or potentially associated with yet-undetermined clearance of peri-plaque dystrophic neurites containing tau tangles – or if they could be achieved by removing tau tangles from the brain. When such information becomes available, a better understanding of the biological meaning of soluble p-tau will be achieved, since currently it is not entirely possible to disentangle its dual association with the key neuropathological features of AD. In brief, these results indicate that plasma p-tau can be a promising biomarker to monitor the drug activity of disease-modifying treatments in AD. Further trial studies should continue to address its value in treatment response, potentially increase sampling frequency by testing remote collection, and, most importantly, carry out detailed analyses of individual-level clinical trial data to determine in which cases reductions in p-tau identify an effective clinical and biological treatment response.
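As a concrete illustration of the group-level readout discussed above, the sketch below computes a placebo-adjusted percent change from baseline on synthetic longitudinal data; the arm sizes, baseline levels and effect size are invented and do not correspond to any trial result.

```python
# Sketch of a group-level treatment-response summary, using synthetic
# longitudinal p-tau values (pg/mL); all numbers here are invented.
import numpy as np

def placebo_adjusted_change(active_t0, active_t1, placebo_t0, placebo_t1):
    """Percent change from baseline in the active arm minus the same quantity
    in the placebo arm (a simple group-level drug-activity readout)."""
    d_active = 100 * (np.mean(active_t1) - np.mean(active_t0)) / np.mean(active_t0)
    d_placebo = 100 * (np.mean(placebo_t1) - np.mean(placebo_t0)) / np.mean(placebo_t0)
    return d_active - d_placebo

rng = np.random.default_rng(1)
base_a, base_p = rng.normal(4.0, 0.8, 150), rng.normal(4.0, 0.8, 150)
week12_a = base_a * rng.normal(0.78, 0.05, 150)  # assumed ~22% on-drug drop
week12_p = base_p * rng.normal(1.02, 0.05, 150)  # slight drift on placebo
delta = placebo_adjusted_change(base_a, week12_a, base_p, week12_p)
print(f"placebo-adjusted change at week 12: {delta:.1f}%")
```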
Recent breakthrough advances in biochemistry and clinical chemistry have enabled the development of ultrasensitive and robust plasma p-tau biomarkers with the potential to lead the AD field in new directions. Accumulating evidence from multiple independent cohorts using different plasma p-tau assays shows that these biomarkers have excellent diagnostic accuracies as well as the capacity to predict post-mortem diagnosis and the outcomes of CSF and neuroimaging biomarker testing. While plasma p-tau181, p-tau231 and p-tau217 have all shown excellent diagnostic utility for the symptomatic stages of AD, plasma p-tau217 and p-tau231 have emerged as markers of incipient AD that become abnormal earlier than p-tau181, especially in the preclinical phase. Since these biomarkers associate to different degrees with amyloid and tau pathology at various stages of the AD continuum, we find it plausible that different p-tau biomarkers will be more suitable for different purposes, especially for evaluating preclinical disease. However, in the case of detecting symptomatic AD, all p-tau biomarkers perform equally well. Together, these findings show that it is prime time for plasma p-tau biomarkers to be employed to support clinical diagnosis as well as to recruit volunteers for therapeutic trials and to monitor the efficacy of drug interventions. In clinical diagnosis, abnormal levels of plasma p-tau would signal a high probability of AD pathophysiology underlying cognitive decline. This observation would be strengthened if plasma NfL is in the normal range. In clinical trials, pre-screening potential volunteers with plasma p-tau would enrich the population for individuals with a high likelihood of AD, who could then receive CSF or PET assessments for confirmation (Fig. ).

Outstanding questions

As the field moves towards widespread clinical and research implementation of blood biomarkers, it is important to identify and mitigate physiological and lifestyle factors that can inadvertently introduce measurement errors independent of analytical procedures. As biomarker availability and accessibility increase, repeated sampling for clinical assessments and longitudinal evaluations will become more common. It is essential to differentiate biomarker changes due to pathological and treatment effects from variability induced by physiological and lifestyle factors. Future research should establish whether everyday factors like sleep, circadian rhythm, exercise, medical comorbidities, fasting and diet affect the reproducibility of blood biomarker measurements. The results will be important for identifying potential sources of error, the correction of which should minimize false positivity and false negativity. Furthermore, the results will be critical to developing evidence-backed pre-analytical guidelines for blood handling. Standardization and harmonization of plasma p-tau results collected from different centers and with different assays will be essential for cross-cohort comparison of results and the generation and validation of cut-points.
Moreover, plasma p-tau must be validated in a broad range of populations that reflects the diversity of the larger community in which these blood biomarkers will be applied. This includes people of different socio-economic statuses, ethno-racial identities, age, cognitive functions, as well as those living in various countries.
An autopsy case of disseminated | 4c621e6a-e2e1-4844-bd3a-b48e06374681 | 10022292 | Forensic Medicine[mh] |
Mucormycosis is an important infection that is associated with high mortality . In recent decades, its incidence has increased in populations with underlying conditions, such as those with malignancies and recipients of bone marrow transplants; among these, pulmonary mucormycosis is the primary form at initial diagnosis . Among zygomycetes, Cunninghamella spp. are rarely isolated from samples from immunocompromised patients; however, the associated mortality is significantly higher than that associated with other zygomycetes . Only a few cases of disseminated pulmonary , cardiovascular , and aortic infections have been reported in immunocompromised patients. In immunocompetent patients, the occurrence of a disseminated Cunninghamella infection is even rarer. Therefore, its clinical and pathological features are not fully understood. We experienced a case in which an immunocompetent patient was diagnosed with a disseminated Cunninghamella bertholletiae infection; sputum cultures had indicated bronchial colonization prior to the diagnosis. Histological findings from the autopsy revealed the sites of invasion.
A 67-year-old Japanese man with emphysema visited our hospital every month for bronchodilator medication. He was hospitalized for progressive dyspnea, productive cough, and moderate fever that had developed 3 days prior to admission. He had a smoking history (33 pack-years), recurrent pneumothoraces managed with chest tube drainage, and was on home oxygen therapy (2 L/min). He did not have a history of allergy or immunodeficiency and did not habitually consume alcohol. Physical examination revealed an elevated body temperature (37.8℃) and respiratory failure, with an SpO2 of 97% on 3.5 L/min oxygen therapy; he also had bilateral coarse crackles without leg edema. Chest radiography revealed a bilateral consolidation shadow, with emphysema in the right lower lobe and pleural effusion in the left lung (Fig. A). Chest computed tomography indicated consolidation in multiple lung lobes, with left-sided pleural effusion and an adhesive collapsed lung appearance in the right upper-lung field (Fig. B). Temporary manual drainage was performed to relieve the left pleural exudative effusion; the sputum, blood, and pleural effusion all tested negative for bacterial infection. Although sputum culture had yielded Cunninghamella spp. 6 months prior to the most recent presentation, the infection seemed to represent respiratory tract colonization given his good condition. Laboratory findings revealed that the only abnormalities were anemia (hemoglobin: 10.5 g/100 mL), a low albumin level (3.0 g/100 mL), and an elevated C-reactive protein level (14.0 mg/100 mL). Tests for β-D-glucan and galactomannan were normal; thus, an Aspergillus infection was ruled out. Although there was no bacterial evidence, tazobactam-piperacillin hydrate (13.5 g/day) was administered empirically; however, it was discontinued on the ninth disease day because renal dysfunction occurred as an adverse event. The bilateral lung consolidation around the emphysema worsened gradually, and repeated sputum cultures yielded fungal agents on the 17th disease day (Fig. A). Mass spectrometry and polymerase chain reaction (PCR)-based direct sequencing identified the pathogen as Cunninghamella bertholletiae (according to the DDBJ/EMBL/GenBank [ http://blast.ncbi.nlm.nih.gov/Blast.cgi ] and MycoBank [ http://www.mycobank.org/ ] databases). Cytological analysis mainly revealed sporangiophores, with all branches swelling into vesicles producing unicellular sporangioles (Fig. B, C). These findings were consistent with a pulmonary Cunninghamella infection. Although liposomal amphotericin B (5 mg/kg/day) was administered from the 28th disease day in addition to cefepime (2 g/day), chest radiography and electrocardiography revealed cardiomegaly and atrial fibrillation, respectively, on the 29th disease day. The serum brain natriuretic peptide (BNP) level was also elevated (956 pg/mL). Respiratory function deteriorated gradually, and the plasma BNP level increased to 2,338 pg/mL. The patient died from multiorgan dysfunction on the 37th disease day (Fig. C). Macroscopically, postmortem examination (Fig. A) revealed a cavity and coagulative necrosis in the right upper lung, a small amount (50 mL) of bilateral pleural effusion, and pericardial fluid (50 mL). Histopathological examination revealed clusters of fungal hyphae within the arteries of the right cavity wall (Fig. B), the subpericardial artery (Fig. C), intramyocardial capillary blood vessels (Fig. D–G), and an esophageal subserosal vein.
Fungal cultures from the right upper bronchial secretions and the pleural and pericardial fluid were positive for Cunninghamella bertholletiae (Fig. A, B, F). These findings suggested that the Cunninghamella bertholletiae colonizing the upper bronchus had invaded the blood vessels and disseminated to other organs, and that myocardial invasion had caused the critical damage that led to death in this case.
This was a rare case of pulmonary Cunninghamella bertholletiae infection in an immunocompetent patient. Most Cunninghamella infections occur in immunocompromised hosts, including transplantation recipients and patients with hematological malignancies . Recently, Cunninghamella was reported to account for only 7% of all mucormycosis cases and to be isolated primarily from patients with pulmonary or disseminated disease. The associated mortality was significantly higher than that associated with other Mucorales (71% [23/30] vs. 44% [185/417]; p < 0.001) . However, the pathophysiology has not been fully understood because of the rarity of the infection and its rapid disease progression in immunocompromised patients. Only two cases of pulmonary Cunninghamella bertholletiae infection have been reported in immunocompetent patients . One of these occurred in a 61-year-old White man with a history of alcoholic binges who had right pleural effusion, pneumothorax, and a right upper lobe cavitary lesion. Cunninghamella bertholletiae was detected in sections of the abscess wall and surrounding lung parenchyma without vascular involvement or dissemination. The lung deformity may have promoted the growth of Cunninghamella bertholletiae, as in our case . The other case was of a 74-year-old man with exacerbated asthma–chronic obstructive pulmonary disease overlap syndrome, who was diagnosed with allergic bronchopulmonary mycosis; tests detected a persistent serum-specific immunoglobulin E antibody response against mucormycetes. This suggests that Cunninghamella bertholletiae may have colonized his organs (such as the respiratory tract) or emphysematous lesions . Both cases suggest colonization by Cunninghamella bertholletiae in the setting of pulmonary deformation, as in our case, which is sometimes misinterpreted as contamination. Distinguishing a definite mucormycete infection from colonization seems rather difficult. Therefore, the detailed morphological features were investigated in this case. Generally, temperature plays an important role in fungal growth; the optimal growth temperature for Cunninghamella elegans is known to be approximately 35℃, which is lower than the human body temperature or the temperature of the respiratory tract. Conversely, for other Cunninghamella spp. (such as Cunninghamella bertholletiae and Cunninghamella echinulata), the optimal growth temperature is higher than that of Cunninghamella elegans, and these species are thought to be thermotolerant . In our case, biological culture examination suggested that the Cunninghamella bertholletiae infection occurred prior to admission to our hospital; the pathogen may have colonized the respiratory tract for a long time and grown gradually, invading the cardiovascular system until the critical stage that led to death. In immunocompetent patients in particular, Cunninghamella bertholletiae may be more likely to colonize the respiratory tract. To detect mucormycetes at the earliest stage, a novel DNA sequencing method has been suggested . In the present case, the serum Cunninghamella bertholletiae DNA load was measured using quantitative PCR. The copy number on the day of the patient's death was higher than that on the onset day. This suggests that quantification of Cunninghamella bertholletiae DNA in serum could be useful for the diagnosis and evaluation of mucormycosis. Because the DNA is not detected in healthy or non-pathological patients, this would be a useful way of distinguishing definite infection from colonization.
In our case, we obtained biological and pathological evidence from autopsy tissues of several organs in an immunocompetent patient. These findings are indicative of the common invasion sites of Cunninghamella bertholletiae. Fungal hyphae were found within the pulmonary cavity wall, the subpericardial artery, intramyocardial capillary blood vessels, and esophageal subserosal veins (Fig. ), in line with previous reports [ , , , ]. Deformed pulmonary and cardiovascular tissues are considered common regions of invasion ; cardiovascular invasion sometimes follows a critical course in Cunninghamella infection, with arrhythmia or acute exacerbation of heart failure, as in our case. Only two cases of cardiovascular invasion with a large mass within the left inferior wall of the ventricular cavity have been reported; one involved hypokinesis and the other involved invasion of the septate hyphae of Cunninghamella spp. into the vascular wall . Both patients died, and autopsies were performed. In conclusion, we have presented a rare case of Cunninghamella bertholletiae infection that occurred in an immunocompetent patient and followed a critical course even under antifungal treatment. Because useful diagnostic markers are lacking, it is difficult to distinguish colonization from definite infection and to initiate antifungal treatment at an earlier stage. This pathogen can rapidly progress from colonizing the bronchi to infecting surrounding organs via vascular invasion, even in immunocompetent patients.
|
Post-explant profiling of subcellular-scale carbon fiber intracortical electrodes and surrounding neurons enables modeling of recorded electrophysiology | efdac400-f09b-402c-90da-c802932dbe74 | 10022369 | Physiology[mh] |

Introduction

Recording and interpreting neural activity in the mammalian cortex is paramount to understanding brain function and to controlling brain-machine interfaces (BMIs). Capturing the signals of individual neurons yields the most fundamental dynamics of neural activity (Harris et al , Hong and Lieber ), thereby providing the highest precision when decoding neural circuits (Schwartz et al ). Intracortical electrode recordings have high spatiotemporal resolution to capture individual neurons' signaling associated with fast-paced behaviors (Chorev et al ). Therefore, electrode architectures with many dense recording sites are desired for sampling large populations of neurons (Seymour et al , Hong and Lieber ). Currently, the most widely used intracortical electrodes are composed of silicon shanks fabricated using standard cleanroom techniques (HajjHassan et al ). These electrodes are becoming increasingly sophisticated, with newer designs approaching or exceeding a thousand densely-packed recording sites (Shobe et al , Scholvin et al , Jun et al , Steinmetz et al , Zardini et al ). For example, the Neuropixels probe has demonstrated simultaneous recording from hundreds of individual neurons along its length (Jun et al , Steinmetz et al , Paulk et al ). Planar silicon electrode arrays, e.g. the Utah Electrode Array (UEA), can sample wide areas of cortex (Nordhausen et al ) for use in BMIs (Serruya et al ), enabling the restoration of function lost to neurological disease (Hochberg et al , Collinger et al , Pandarinath et al ). Moreover, the recent advancements of multi-shank Michigan-style electrodes (Scholvin et al ), such as the Neuropixels 2.0 (Steinmetz et al ), and variable-length UEA-style electrodes, such as the Sea of Electrodes Array (Zardini et al ), signify that recording neuronal populations in 3D is possible. Extensive evidence, however, indicates that chronic implantation of silicon electrodes can elicit a multi-faceted foreign body response (FBR) at the implant site, which includes significant glial scarring (Turner et al ), high microglia presence (Szarowski et al ), large voids of tissue (Nolta et al , Black et al ), blood-brain-barrier disruption (Saxena et al ), and neurodegeneration (Biran et al , Winslow et al ). Recent studies have uncovered more effects, including hypoxia and progressive neurite degeneration (Welle et al ), a shift toward inhibitory activity (Salatino et al ), myelin injury and oligodendrocyte loss (Chen et al ), and mechanical distortion of neurons (Du et al , Eles et al ). RNA sequencing of implant-site tissue also yielded differential expression of more than 100 genes, signifying that the FBR results in complex biological changes (Thompson et al ). Moreover, recorded signal amplitudes degrade over chronic time periods (Chestek et al , Sponheim et al ) and can both increase and decrease during experimental sessions (Chestek et al , Perge et al ), which has been attributed to the FBR (Nolta et al ). The chronic neuron loss, particularly within the single-unit recording range (Henze et al , Buzsáki ), calls into question silicon electrodes' ability to reliably record activity attributed to individual neurons chronically.
Silicon probes' large size has been implicated as a primary reason for the FBR (Szarowski et al , Seymour and Kipke , Thompson et al ). Many newer electrode technologies have been designed to overcome the FBR (Salatino et al , Hong and Lieber , Thompson et al ). In particular, smaller electrodes with cellular to subcellular dimensions (Kozai et al , Luan et al , Deku et al , Musk , Yang et al ) generate less tissue displacement (Obaid et al ) and a reduced FBR and neuron loss (Seymour and Kipke , Thompson et al ), suggesting an ability to record more naturalistic neural populations. While computational models have been proposed to decode the relationship between recorded spikes and contributing neurons (Moffitt and McIntyre , Pettersen and Einevoll , Lempka et al , Mechler and Victor , Malaga et al ), their validity remains unresolved without empirical measurements of neuron locations and their spike timing (Marques-Smith et al ). Attempts at acquiring these 'ground truth' measurements have been performed using tetrodes or electrodes with similar geometry (Henze et al , Du et al , Mechler and Victor ), which Marques-Smith et al assert differ considerably from state-of-the-art electrodes such as Neuropixels (Jun et al ), as well as under ex vivo conditions (Anastassiou et al , Yger et al ), with simulated data (Pedreira et al , Magland et al , Buccino and Einevoll ), or acutely in vivo (Neto et al , Allen et al ) and with low resolution in cell localization (Marques-Smith et al ). As modeling suggests that biofouling at the electrode-tissue interface influences neural recording quality (Malaga et al ), 'ground truth' must be measured in cases where electrodes are implanted in vivo over long time periods. Precisely localizing chronically implanted electrodes and surrounding neurons in situ has begun to bridge this gap (Luan et al , Yang et al , Patel et al , Sharon et al ). Subcellular-scale (6.8 µm diameter) carbon fiber electrodes that elicit a minimal FBR and maintain high neuron densities after chronic implantation (Kozai et al , Patel et al , , Welle et al ) are ideal candidates for acquiring these 'ground truth' recordings. Previously, we demonstrated a 'slice-in-place' technique to retain the electrodes in brain slices for localizing the recording tips in deep brain structures (Patel et al ). However, brain curvature and the bone screws required for skull-mounted headcaps render this method incompatible with fibers implanted in shallower cortical regions. In this report, we demonstrate that explanting carbon fiber electrodes from cortical recording sites, followed by slicing thick horizontal brain sections with headcaps removed, retains the ability to localize the tips in high-resolution 3D images. We used rats chronically implanted in layer V motor cortex to assess neuronal health and glial responses and to model the relationship between recorded spikes and surrounding neurons. From 3D reconstructions of neuron somas, we found a minimal loss of 18% in mean neuron count per volume, although neurons were stretched compared to neurons in the contralateral hemisphere. The distance of the nearest neuron to implanted fibers (17.2 ± 4.6 µm, mean ± SD) was close to that of simulated electrodes positioned in the contralateral hemisphere (16.2 ± 4.8 µm, mean ± SD), and the distances were not significantly different. Given the minimal disruption in surrounding neurons, we modeled the extracellular spikes that could be recorded from the neuron population at the implant site, which suggested that the neurons' natural distribution is a fundamental limiting factor in the number of spike clusters that can be sorted.
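As an illustration of the geometry underlying these measurements (and not the authors' actual analysis code), the sketch below computes nearest-neuron distances from reconstructed soma positions with a KD-tree, and then applies a simple point-source approximation, V = I/(4πσr), to show how recordable spike amplitude falls off with that distance. The soma coordinates, tip position, spike current and tissue conductivity are all assumed values.

```python
# Illustrative sketch: nearest-soma distance per electrode tip, plus a
# point-source estimate of extracellular amplitude. All inputs are assumed.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
somas = rng.uniform(0, 200, size=(500, 3))   # neuron centroids, micrometers
tips = np.array([[100.0, 100.0, 100.0]])     # one electrode tip position

dist, idx = cKDTree(somas).query(tips, k=1)  # nearest soma for each tip
print(f"nearest neuron: {dist[0]:.1f} um (soma index {idx[0]})")

# Point-source model: V = I / (4*pi*sigma*r), a common first approximation
sigma = 0.3e-6  # tissue conductivity in S/um (i.e., 0.3 S/m)
I = 1e-9        # assumed effective spike current, A
r = dist[0]
print(f"predicted amplitude ~ {I / (4 * np.pi * sigma * r) * 1e6:.1f} uV")
```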
Methods

2.1. Carbon fiber electrode array fabrication

High density carbon fiber (HDCF) electrode arrays with 16 channels were fabricated using previously reported methods (Huan et al ). Briefly, silicon support tines were fabricated from 4" silicon wafers using silicon micromachining processes. The support tines had trenches etched into them via deep reactive ion etching to hold the fibers for facile insertion into the brain; the tines tapered to a width of 15.5 µm, had a pitch of 80 µm, and had a length of 3 mm for targeting cortex. The support tines had gold pads on them to interface with the carbon fibers. These gold pads then led to a separate set of gold bond pads to interface with a printed circuit board (PCB). Once the tines were fabricated, they were bonded to a custom PCB with Epo-Tek 301 epoxy (Epoxy Technology, Billerica, MA). Two-part epoxy (1FBG8, Grainger, Lake Forest, IL) was applied to the underside of the silicon portion to provide buttress support. The gold bond pads were then wire-bonded to pads on the PCB and the wire bonds were coated in Epo-Tek 353ND-T epoxy. Carbon fibers were then laid into the support tines by exploiting capillary action from a combination of deionized water (electrical pad end) and Norland Optical Adhesive 61 (NOA 61) (Norland Products, Inc., Cranbury, NJ) (distal end). An NLP 2000 (Advanced Creative Solutions, Carlsbad, CA) was used to apply Epo-Tek H20E silver epoxy to the gold pads and the carbon fibers to establish an electrical connection. NOA 61 was applied to the gold pads and the carbon fibers to further secure them. Fibers were then cut to a length of 1000 µm and coated with ∼800 nm of Parylene C (PDS2035CR, Specialty Coatings Systems, Indianapolis, IN). Fibers were then laser cut to a final length of 300 µm beyond the silicon support tine ends with a 532 nm Nd:YAG pulsed laser (LCS-1, New Wave Research, Fremont, CA) as described previously (Welle et al ). Carbon fibers were then plasma ashed in a Glen 1000P Plasma Cleaner (Glen Technologies Inc., Fremont, CA). Fiber tips were functionalized in one of two ways: (1) electrodeposition of PEDOT:pTS, by dipping carbon fibers in a solution of 0.01 M 3,4-ethylenedioxythiophene (483 028, MilliporeSigma, Darmstadt, Germany) and 0.1 M sodium p-toluenesulfonate (152 536, MilliporeSigma) and applying 600 pA/channel using a PGSTAT12 potentiostat (EcoChemie, Utrecht, Netherlands) (Patel et al , Welle et al ) ( N = 3 arrays), or (2) electrodeposition of platinum iridium (PtIr) with a Gamry 600+ potentiostat (Gamry Instruments, Warminster, PA) ( N = 2 arrays) using previously published methods (della Valle et al ). Silver ground and reference wires (AGT05100, World Precision Instruments, Sarasota, FL) were soldered to the PCB, completing assembly. For one electrode, a support tine was broken off prior to implant as a sham channel for a separate study (rat #1). Once electrodes were completed, electrochemical impedance spectroscopy (EIS) was performed with the electrodes immersed in 1x phosphate buffered saline (PBS) using previously published methods (Kozai et al , Patel et al ).
Impedances at 1 kHz were 129.4 ± 259.0 kΩ (mean ± SD; n = 79 fibers, five electrode arrays), where probes functionalized with PEDOT:pTS measured 24.6 ± 20.8 kΩ (n = 47 fibers, three electrode arrays) and probes functionalized with PtIr measured 283.4 ± 353.7 kΩ (n = 32 fibers, two electrode arrays). All electrodes underwent ethylene oxide gas sterilization prior to implantation.

2.2. Electrode implantation

Adult male Long-Evans rats (N = 5) weighing 393–630 g were implanted with one HDCF electrode array each. Surgical implantation closely followed previously reported procedures (Patel et al, Welle et al). Throughout the surgeries, temperature was monitored with a rectal thermometer and breathing rate was monitored with a pulse oximeter. Isoflurane (5% (v/v) induction, 1%–3% maintenance) was used as a general anesthetic and carprofen (5 mg kg−1) as a general analgesic. After opening the scalp, seven bone screws (19010-00, Fine Science Tools, Foster City, CA) were screwed into the skull. One screw at the posterior end of the skull was used for referencing. A 2 × 2 mm craniotomy was drilled in the right hemisphere, where the bottom left corner of the craniotomy was 1 mm lateral and 1 mm anterior to bregma. The probe was then lowered to the dura mater to zero its dorsal/ventral position. After durotomy with a 23 G needle, the probe was immediately inserted to a depth of 1.2–1.3 mm to reach layer V of motor cortex. The craniotomy was then filled with DOWSIL silicone gel (DOWSIL 3-4680, Dow Silicones Corporation, Midland, MI). Ground and reference wires were wrapped around the most posterior bone screw for referencing. A headcap was formed by applying methyl methacrylate (Teets denture material, 525 000 & 52 600, Co-oral-ite Dental MFG. Co., Diamond Springs, CA) onto the skull until the probe’s electrical connector was firmly in place and the bone screws were covered. The scalp was sutured around the connector to complete the surgery. It is important to note that rat #2 was one of the rats reported in Welle et al, but only up through day 63 of 92 and with a focus on electrophysiological yield over time.

2.3. Electrophysiological recording and spike sorting

Electrophysiological recordings were collected in chronically implanted rats while awake and freely moving in a Faraday cage (Welle et al). Signals were recorded using ZC16 and ZC32 headstages, RA16PA pre-amplifiers, and an RX7 Pentusa base station (Tucker-Davis Technologies, Alachua, FL) at 24 414.1 Hz. Recordings were collected at least weekly in 10 min sessions. Spike sorting was semi-automated and based upon previously reported procedures (Patel et al, Welle et al). Channels were excluded from a session if the impedance at 10 Hz was abnormally high compared to other channels and to previous sessions (∼1–2 weeks), where impedance was measured using EIS (Patel et al). This exclusion was based on criteria reported in Patel et al. However, no channels were excluded if 10 Hz impedances were not collected or were measured using different methods (rats #1 and #3). Common average referencing was performed using the remaining channels to reduce noise (Ludwig et al). The following steps were performed in Plexon Offline Sorter (version 3.3.5) (Plexon Inc., Dallas, TX) by a trained operator. Signals were high-pass filtered using a 250 Hz four-pole Butterworth filter.
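For concreteness, the referencing and filtering steps above, together with the RMS-based threshold described in the next paragraph, can be sketched in a few lines of MATLAB. This is a minimal illustration rather than the authors' actual pipeline: the input raw (a samples × channels voltage matrix) is a hypothetical variable, the zero-phase filtfilt call is our own choice (the Plexon filter may differ), and V_RMS is estimated here from the whole trace instead of from hand-selected quiet snippets.

% Minimal sketch of common average referencing, high-pass filtering, and
% thresholding; `raw` is a hypothetical [samples x channels] matrix.
fs = 24414.1;                            % sampling rate (Hz)
car = raw - mean(raw, 2);                % common average reference across channels
[b, a] = butter(4, 250/(fs/2), 'high');  % four-pole Butterworth high-pass at 250 Hz
sig = filtfilt(b, a, car);               % zero-phase filtering (illustrative choice)
vrms = sqrt(mean(sig.^2, 1));            % per-channel noise estimate (whole trace here)
thr = -3.5 * vrms;                       % detection threshold per channel
crossings = sig < thr;                   % candidate spike samples on each channel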
Five 100 ms snippets of signal with low neural activity and minimal artifact noise were manually selected from each channel and used to measure the V_RMS noise for that channel (Patel et al). The threshold for each channel was set at −3.5 × V_RMS. Cross-channel artifacts were then invalidated. Putative cluster centers were manually designated and waveforms assigned using K-means clustering. Obvious noise waveform clusters were removed. Automated clustering was performed using the Standard Expectation-Maximization Scan function in Plexon Offline Sorter (Welle et al). Persisting noise waveforms were removed, and obvious oversorting and undersorting errors were manually corrected. Clusters were also cleaned manually. Resultant waveforms were imported into and analyzed in MATLAB (version R2020b) (MathWorks, Natick, MA) using custom scripts. Electrophysiological recording capacity at the experimental endpoint for each probe is shown in figure S1.

2.4. Tissue preparation, immunohistochemistry, and imaging

At the end of the implantation period, rat brains were prepared for immunohistochemistry and histological imaging. Rats were transcardially perfused on days 88–92 (N = 3) or day 42 (N = 2) as described previously (Patel et al, Welle et al). If the perfusion fixation was successful, brains were extracted and soaked in 4% paraformaldehyde (PFA) (19 210, Electron Microscope Sciences, Hatfield, PA) in 1x PBS for 1–3 d. If more fixation was required, brains remained in the skull while soaking in PFA solution for two days before extraction, followed by an additional 24 h incubation in PFA solution. In all cases, the electrode array, headcap, and skull-mounted bone screws were removed from the brain. Brains were then incubated in 30% sucrose (S0389, MilliporeSigma) in 1x PBS with 0.02% sodium azide (S2002, MilliporeSigma) for at least 72 h until cryoprotected. Brains were then sliced to a thickness of 300 µm (Patel et al) with a cryostat. Slices were selected for staining based upon estimated depth and/or the observation of holes under a brightfield microscope. Immunohistochemistry closely followed previously reported staining techniques (Welle et al), modified to accommodate 300 µm brain slices (Patel et al). All incubation periods and washes were performed with brain slices in well plates on nutators. Chosen slices were first incubated in 4% PFA (sc-281 692, Santa Cruz Biotechnology, Dallas, TX) for 1 d at 4 °C. Slices were washed for 1 h in 1x PBS twice at room temperature and then incubated in a solution containing 1% Triton X-100 (93 443, MilliporeSigma) in StartingBlock (PBS) Blocking Buffer (37 538, ThermoFisher Scientific, Waltham, MA) overnight at room temperature to permeabilize and block the tissue, respectively. Slices were then washed in 0.1%–0.5% Triton X-100 in 1x PBS (PBST) three times for one hour each at room temperature. Slices were incubated in primary antibodies for 7 d at 4 °C, where antibodies were added to a solution containing 1% Triton X-100, 0.02% sodium azide (1% of a solution containing 2% sodium azide in 1x PBS), and StartingBlock. The primary antibody cocktail differed between rats implanted for 6 weeks (N = 2) and 12+ weeks (N = 3). In all rats, antibodies staining for neurons (Mouse anti-NeuN, MAB377, MilliporeSigma) and astrocytes (Rabbit anti-glial fibrillary acidic protein (GFAP), Z0334, Dako/Agilent, Santa Clara, CA) were used, both at dilution ratios of 1:250.
For 6-week rats, staining for axon initial segments (AISs) with Goat anti-Ankyrin-G (1:1000 dilution ratio) was added. The Ankyrin-G antibody was made and provided by the Paul Jenkins Laboratory (University of Michigan, Ann Arbor, MI) using methods published previously (He et al). For 12+ week rats, staining for microglia (Guinea Pig anti-IBA1, 234 004, Synaptic Systems, Göttingen, Germany) (1:250 dilution ratio) was added. Slices were washed in 0.1%–0.5% PBST three times at room temperature for one hour each wash before secondary antibody incubation at 4 °C for 5 d. Secondary antibodies used the same base solution as the primary cocktail. The following secondary antibodies were used: Donkey anti-Mouse Alexa Fluor 647 (715-605-150, Jackson ImmunoResearch Laboratories, Inc., West Grove, PA) for neurons; Donkey anti-Rabbit Alexa Fluor 546 (A10040, Invitrogen, Carlsbad, CA) for astrocytes in 6-week rats; Goat anti-Rabbit Alexa Fluor 532 (A11009, Invitrogen) for astrocytes in 12+ week rats; Donkey anti-Guinea Pig Alexa Fluor 488 (706-545-148, Jackson ImmunoResearch Laboratories, Inc.) for microglia; and Donkey anti-Goat Alexa Fluor 488 (705-545-003, Jackson ImmunoResearch Laboratories, Inc.) for AISs, all at dilution ratios of 1:250; plus 4ʹ,6-diamidino-2-phenylindole, dihydrochloride (DAPI) (D1306, Invitrogen) for cellular nuclei (1:250–1:500 dilution ratio). Slices were washed in 0.1%–0.5% PBST twice at room temperature for two hours each wash and then washed in 1x PBS with 0.02% azide at least overnight before imaging could commence. Outside of imaging sessions, stained slices were stored in 1x PBS with 0.02% azide at 4 °C. Prior to imaging, slices were rapidly cleared using an ultrafast optical clearing method (FOCM) (Zhu et al). FOCM was prepared as a solution of 30% (w/v) urea (BP169500, ThermoFisher Scientific), 20% (w/v) D-sorbitol (DSS23080-500, Dot Scientific, Burton, MI), and 5% (w/v) glycerol (BP229-1, ThermoFisher Scientific) in dimethyl sulfoxide (DMSO) (D128-500, ThermoFisher Scientific). Glycerol was added at least one day after the other reagents started mixing and after urea and D-sorbitol were sufficiently dissolved in DMSO. FOCM was diluted in either MilliQ water (N = 1 rat) or 1x PBS (N = 4 rats) to 25%, 50%, and 75% (v/v) solutions. During clearing, slices were titrated to 75% FOCM in 25% concentration increments, 5 min per step. 75% FOCM in water was found to expand tissue laterally (6.7%) after imaging rat #2, so slices from the other rats were cleared with FOCM in 1x PBS. This expansion was not corrected in subsequent analyses. The clearing process was repeated immediately before each imaging session. The samples remained suspended in the 75% FOCM solution during imaging. Images were collected with a Zeiss LSM 780 confocal microscope (Carl Zeiss AG, Oberkochen, Germany) with 10x and 20x objectives. The microscope recorded transmitted light and excitation from the following lasers: 405, 488, 543, and 633 nm. Z-stacks of regions of interest were collected along most, if not all, of the thickness of the sample. Images had an XY pixel size of 0.81 µm or 0.202 µm and a z-step of 3 µm or 0.5–0.6 µm, respectively. Laser power and gain were adjusted to yield the highest signal-to-noise ratio (SNR) while also minimizing photobleaching. The ‘Auto Z Brightness Correction’ feature in ZEN Black (Carl Zeiss) was used to account for differences in staining brightness along the slice thickness.
The implant site was located by overview imaging and quick scanning, where a high astrocyte staining signal in dorsal regions and a straight line of holes delimited the electrode site. Sites with approximately similar stereotaxic coordinates to those of the implant site, but in the contralateral hemisphere of the same slice, were imaged as controls. After imaging, slices were titrated back to 0.02% sodium azide in 1x PBS and stored. Clearing was repeated when further imaging was needed, as each implant site required multiple imaging sessions.

2.5. Putative tip localization

Putative carbon fiber electrode tips were localized independently by three reviewers experienced in histology to estimate the confidence and reproducibility of localization. Tips were localized using ImageJ (Fiji distribution, Schindelin et al) with additional cross-referencing in ZEN Black (2012) (Zeiss Microscopy). Fiber tips were localized by first identifying electrode tracts in more dorsal focal planes. These tracts presented as dark holes in the transmitted light and fluorescent imaging channels that were typically positioned in an approximately straight line and surrounded by high GFAP intensities. Putative electrode tracts were enhanced using a combination of basic image processing and visualization techniques, including contrast adjustment, histogram matching (Miura) to account for changes in brightness throughout the brain slice, median or Gaussian filters (including 3D versions (Ollion et al)) for reducing noise, maximum intensity projections, and toggling the imaging channels that were visualized simultaneously. The estimated tip location was determined by finding the z-plane and x–y coordinates at which the tract rapidly began to shrink and fill with surrounding background fluorescence or parenchyma when scrolling dorsoventrally through the image. The tip width, height, and center were determined by manually fitting an ellipse to the electrode tract.

2.6. Verification of tip localization

Putative tip localization was verified by measuring electrode tract pitches, tip cross-sectional areas, and the agreement between three reviewers in tip localization. Electrode tract pitches were measured in ImageJ after stitching images (Preibisch et al). The positions of electrode tracts were measured in the same z-step or in nearby z-steps to achieve an approximately planar arrangement of tracts. Ellipses were manually fit to tracts, and the Euclidean distance between the centers of neighboring tracts was measured in MATLAB to determine pitch. The cross-sectional area of each tip was calculated as the area of the ellipse fit to the tip localized as described above. The agreement between the three reviewers was determined by measuring the pairwise Euclidean distances between the three centers of the ellipses fit to each fiber. When determining the localization confidence, all comparisons for each group of fibers were grouped together. The median and interquartile range were measured from each group of comparisons. Fiber tips that were not localized by all three reviewers were excluded from further analyses and from agreement determination. Three fibers were excluded from rat #5 because one fiber was not found by one reviewer, and two fibers were found in different images than those used by the other reviewers, so the localizations were not directly comparable.
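As a concrete illustration of this agreement metric, the following minimal MATLAB sketch computes the pairwise reviewer distances and their summary statistics. The input tips3 (an nFibers × 3 × 3 array of fiber × reviewer × [x y z] coordinates in µm) is a hypothetical variable name, not one from the original scripts.

% Minimal sketch of inter-reviewer agreement in tip localization.
% tips3 is a hypothetical nFibers x 3 x 3 array: fiber x reviewer x [x y z] (µm).
pairs = nchoosek(1:3, 2);                   % reviewer pairings: (1,2), (1,3), (2,3)
d = [];
for f = 1:size(tips3, 1)
    p = squeeze(tips3(f, :, :));            % 3 reviewers x 3 coordinates
    for k = 1:size(pairs, 1)
        d(end + 1) = norm(p(pairs(k, 1), :) - p(pairs(k, 2), :)); %#ok<AGROW>
    end
end
fprintf('median = %.1f um, IQR = %.1f um\n', median(d), iqr(d));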
Tissue damage in rat #3 was so severe (figure S3) that even the number of fibers in the tissue was not consistently identified by reviewers; rat #3 was therefore excluded from quantitative analyses. For cross-sectional area measurements and subsequent analyses requiring fiber localization, the positions localized by the first author were used, and exclusions by the other reviewers were not considered.

2.7. Segmenting neuron somas into 3D scaffolds

The somas of neurons surrounding electrodes were manually segmented in 3D after putative fiber localization to measure soma morphology and to determine the Euclidean distance between neuron soma centroids and carbon fiber electrode tips, which we called the neurons’ relative positions. We intentionally segmented a larger number of neurons than required, as neuron proximity to putative fibers is difficult to determine from volumetric confocal imaging by inspection alone. Therefore, we segmented any neuron that was wholly or partially present within the volume of a 100 × 100 µm (diameter × height) cylinder centered at each electrode tip, ensuring that any neuron with a centroid within a 50 µm radius of a fiber tip would be segmented and that downstream algorithms could compute subsequent measurements (e.g. relative distance, morphometrics). Image pre-processing and segmentation were performed using ImageJ plugins. The NeuN channel was used to visualize neuron somas for segmentation. First, a z-step with a qualitatively high SNR was selected and contrast adjusted to make the image clearer. The remaining z-steps in the z-stack were histogram matched (Miura) to the selected z-step to account for variations in brightness along the slice thickness. A 3D median filter (Ollion et al) was applied to reduce noise. These pre-processing steps ensured that the somata, stained by NeuN, had high SNRs and were clearly differentiable from other tissue. The ImageJ plugin nTracer (primarily version 1.3.5) (Roossien et al) was used to segment each neuron by manually tracing along the cell outline in each z-step in which the neuron was present. Neurons were also traced in contralateral regions as a control. Figure S2 shows several z-planes of two manually traced neurons and a single plane in which many neurons were segmented, for better illustration. For rat #1, five points with coordinates similar to fiber tip centroids were selected and neurons were traced within a cylinder (100 µm diameter, 100 µm height) (N = 5 hypothetical fibers, N = 199 neurons). For rat #2, all neurons that bounded voxels in a cylindrical region (300 µm diameter, 300 µm height) were traced (N = 926 neurons). Traced neurons were imported into MATLAB using custom scripts. For some fibers (rat #1: N = 8 fibers; rat #2: N = 5 fibers), the electrode tip was within 50 µm of the top of the brain section. For these fibers, the distance between the fiber tip and the top of the brain section at that fiber’s tract, rather than 50 µm, was used as half the cylinder height for tracing and as the radius around that fiber in subsequent analyses. Additionally, neuron somas that were cut along the plane of cryosectioning were excluded if it was clear that the middle of the soma was missing from the brain slice.

2.8. Neuron morphometrics and densities

Neuron morphometrics and positions were extracted from the 3D scaffolds of neurons produced by tracing. These measurements were performed in MATLAB using custom scripts after the scaffolds had been imported into MATLAB storage formats.
To measure neuron soma volume, all voxels bounded by the scaffold were counted and the total was multiplied by the volume of a single voxel (Prakash et al). The centroids of these sets of points, which were 3D point clouds, were used to determine neuron positions and distances relative to the putative electrode tips. Centroid positions were also used for counting the number of neurons within 3D radii, such as 50 µm, and for determining neuron density. To determine the extent of neuron elongation around implanted fibers, the cell shape strain index (CSSI), as defined by Du et al, was measured. An ellipse was fit to the soma trace in the z-step that had the greatest cross-sectional area for that neuron (Ohad Gal. fit_ellipse [www.mathworks.com/matlabcentral/fileexchange/3215-fit_ellipse], MATLAB Central File Exchange). The CSSI was determined using the following equation (Du et al), where a is the minor axis and b is the major axis from the elliptical fitting:

$$\mathrm{CSSI} = \frac{b - a}{(b + a)/2}.$$

To measure the length of neurons in the dorsoventral direction, parallel to the axis of explantation, the number of z-steps in which each neuron was traced was multiplied by the image’s z-step resolution. The same measurements were repeated for neurons in contralateral sites. Comparisons of neurons within a 50 µm spherical radius of localized tips included only neurons whose centroids were within 50 µm. To determine the relationship between distance and CSSI or volume, only neurons that were ventral to all tips analyzed in the source image were included, to remove the effects of multiple nearby tips.

2.9. Nearest neuron positions

Neuron positions relative to implanted fibers were determined by calculating the Euclidean distance between the center of the ellipse manually fitted to each carbon fiber electrode tip (see Putative tip localization) and the centroid of the point cloud bounded by each traced neuron’s 3D scaffold. The neuron’s position was therefore the centroid of all points bounded by the NeuN stain. These distances were sorted from shortest to longest to determine the nearest neuron positions relative to implanted fibers. When determining the mean neuron position, if that position was not measured for one or more fibers due to a reduced tracing radius, that fiber was excluded from the mean neuron position calculation for that position. Similar measurements were performed at contralateral sites as a control. In N = 1 rat, neurons within 50 µm of five hypothetical fiber points with coordinates similar to localized fiber tips were traced, and the nearest neuron positions were determined in the same manner as for implanted fibers. For another rat, all neurons within a 300 × 300 µm (diameter × height) cylindrical volume were traced. Points were placed in a 3D grid with 12.5 µm spacing within this volume but were excluded if positioned within 50 µm of the volume’s border or within the set of points bounded by a traced neuron. These points were used as hypothetical carbon fiber tip locations in unimplanted contralateral tissue. As with implanted fibers, the Euclidean distance between each hypothetical point and the centroid of all points bounded by each traced neuron was measured to determine relative neuron distances. As described earlier (see Neuron morphometrics and densities), these neuron centroid positions were also used to determine the neuron densities and neuron counts within spherical radii from the hypothetical fiber locations.
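The sketch below illustrates, under assumed inputs, the three core measurements just described: soma volume from voxel counts, the CSSI from elliptical axes, and sorted neuron-to-tip distances. All variable names (voxels, voxVol, a, b, tip, centroids) are hypothetical placeholders, not names from the original scripts.

% Minimal sketch of the morphometric and distance measurements.
% Assumed hypothetical inputs:
%   voxels    - N x 3 voxel subscripts bounded by one neuron's scaffold
%   voxVol    - volume of a single voxel (µm^3)
%   a, b      - minor and major ellipse axes for that neuron (µm)
%   tip       - 1 x 3 localized electrode tip position (µm)
%   centroids - M x 3 neuron centroid positions (µm)
somaVolume = size(voxels, 1) * voxVol;       % voxel count x single-voxel volume
CSSI = (b - a) / ((b + a) / 2);              % cell shape strain index (Du et al)
dists = sqrt(sum((centroids - tip).^2, 2));  % Euclidean distances to the tip
sortedDists = sort(dists);                   % nearest neuron first
nearestNeuron = sortedDists(1);              % distance of the closest neuron (µm)
nWithin50 = nnz(sortedDists <= 50);          % neuron count within a 50 µm radius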
2.10. Glial responses around implanted carbon fibers

Glial responses to chronic implantation of carbon fiber electrodes were evaluated by measuring the staining intensity of GFAP and IBA1 surrounding implanted fibers, similar to measurements commonly made previously (e.g. Patel et al and Jang et al). Z-steps whose focal planes included regions of missing tissue, due to capturing either the top or bottom of the brain slice, were excluded. Fibers were excluded if their tips were localized in excluded z-steps. Stitched (Preibisch et al) confocal images were imported into MATLAB using the Bio-Formats MATLAB Toolbox (The Open Microscopy Environment, www.openmicroscopy.org/bio-formats/downloads/) for analysis. The background intensity for each z-step was determined first. The Euclidean distance of each pixel in the z-step to every tip center was determined; if the z-step was dorsal to the tip, the distance to the center of the electrode tract in that z-step was used instead. The mean intensity of pixels that were 300–310 µm away from those tip positions was taken as the background intensity for that z-step. Next, a line (the electrode axis) was fit to the coplanar fiber tip positions. The minimum pitch of the electrode tip locations along this line was used to define bounded lanes centered at each tip for measurement. The intensity of glial fluorescence at increasing distances from each electrode tip was measured by determining the mean intensity of pixels bounded by concentric 10 µm thick rings centered at each electrode tip location and bounded by these equal-width lanes to prevent overlap between fibers. Glial intensity was reported as the ratio of the mean intensity in each bin to the mean background intensity for the z-step containing the tip. Since fiber tips were localized at a range of z-steps, the measurement was performed at the appropriate z-step for each fiber. This process was performed for both GFAP and IBA1. As an additional control, the process was repeated using images of sites in the same brain sections with similar stereotaxic coordinates in the contralateral hemisphere.
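A minimal sketch of this radial intensity profile for a single z-step and a single tip is given below; img, tip, and pxSize are hypothetical inputs, and the lane masking between neighboring fibers is omitted for brevity.

% Minimal sketch of the radial glial intensity profile around one tip.
% img (2D fluorescence image, double), tip ([x y] in pixels), and pxSize
% (µm per pixel) are hypothetical inputs.
[X, Y] = meshgrid(1:size(img, 2), 1:size(img, 1));
r = hypot(X - tip(1), Y - tip(2)) * pxSize;      % pixel-to-tip distance (µm)
background = mean(img(r >= 300 & r < 310));      % background: 300-310 µm annulus
edges = 0:10:200;                                % concentric 10 µm thick rings
radProfile = zeros(1, numel(edges) - 1);
for k = 1:numel(edges) - 1
    ring = r >= edges(k) & r < edges(k + 1);
    radProfile(k) = mean(img(ring)) / background;  % normalized intensity per ring
end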
2.11. Modeling of electrophysiology

A simple point source model (Lee et al), similar to the model reported in Nason et al, was used to predict the extracellular spike waveforms recorded by carbon fiber electrodes that originated from neurons surrounding the implanted fibers. The model is defined by the following equation:

$$V_{\mathrm{pp}}(r) = \frac{I_{\mathrm{pp}}}{4 \sigma \pi r}$$

where V_pp is the extracellular potential recorded by an electrode, I_pp is the peak–peak extracellular current generated by a neuron firing an action potential, σ is the conductivity of the brain tissue between the electrode and the neuron, and r is the relative distance between the neuron and the electrode. The model operates on the following assumptions: (1) the brain is an isotropic medium with regard to frequency (Logothetis et al); (2) neurons are treated as point sources (Holt and Koch, Lee et al, Nason et al), where the point used in this study is the centroid of the NeuN stain (the soma); and (3) cell spiking output is constant over time (Jog et al) and across neurons (Lempka et al, Nason et al). Using values for I_pp and σ that were empirically fit or identified in the literature renders the equation a single-parameter model, where the relative distance of the source neuron to the electrode, r, is the required input.

The model was fit empirically using three methods. In the first, σ = 0.27 S m−1 was selected from the literature (Slutzky et al) and I_pp was determined from fitting. Electrophysiology from rat #2 was used for fitting because all of its fibers could be putatively localized with high confidence. Also, the penultimate recording session proved to be exemplary, and using recording sessions collected towards the end of the implantation period increases the likelihood of coherence between recorded electrophysiology and histological outcomes (Michelson et al). Spike clusters recorded 84 d post implant in rat #2 were sorted and ranked by mean amplitude in descending order. Spikes associated with the largest cluster recorded on each channel that recorded sortable units (N = 12 channels) were grouped, and the mean peak–peak amplitude was plotted against the mean position of the closest neuron (N = 2 rats). This was repeated for the second and third largest clusters, matched to the second and third closest neuron positions, respectively. I_pp was then fit to these three value pairs using the MATLAB function lsqcurvefit. In the second and third methods, the fit was performed using individual spike cluster and neuron distance pairs. For each channel, spike clusters were sorted in descending order by mean peak–peak waveform amplitude. These cluster amplitudes were plotted against the positions of the neurons surrounding those channels with the same rank in position (e.g. the amplitude of the second largest cluster plotted against the position of the second closest neuron). One cluster was excluded because the position of the third closest neuron was not measured due to the electrode tip’s proximity to the top of the brain slice. In the second method, both σ and I_pp were fit using lsqcurvefit. In the third method, σ = 0.27 S m−1 and I_pp was fit. To plot predicted waveforms, all spikes sorted across all clusters on day 84 for rat #2 were collected and normalized to have peak–peak amplitudes of 1. The mean positions of the nearest ten neurons were used as inputs to the equation with the first empirical fit, which yielded the scaling factor for the normalized spikes. The mean and standard deviation of the scaled spikes were evaluated for each position and plotted in figures (d) and S8.
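The following MATLAB sketch illustrates the point source model and the flavor of the first fitting method, assuming the distance–amplitude pairs (rDists in µm, amps in volts) have already been assembled as described above; the variable names, initial guess, and unit conversions are our own illustrative choices, not the authors' code.

% Minimal sketch of the point source model and a one-parameter fit.
sigma = 0.27;                                   % tissue conductivity (S/m)
model = @(Ipp, r) Ipp ./ (4 * sigma * pi * r);  % Vpp(r) = Ipp / (4*sigma*pi*r)
r_m = rDists * 1e-6;                            % µm -> m
Ipp0 = 1e-9;                                    % initial guess (A), arbitrary
IppFit = lsqcurvefit(model, Ipp0, r_m, amps);   % fit Ipp with sigma held fixed
predicted = model(IppFit, r_m);                 % predicted peak-peak amplitudes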
2.12. Statistical analyses

Comparisons of neuron soma volumes and CSSIs between neurons surrounding implanted fibers and those in control regions were performed using two-sided Kolmogorov–Smirnov tests (Du et al) with an alpha of 0.05. Comparisons of neuron densities, neuron positions, and glial intensities in radial bins surrounding the implanted fibers against controls were performed using two-sided two-sample t-tests with an alpha of 0.05. Linear regressions were determined using the fitlm function in MATLAB. All statistical tests were performed in MATLAB.
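For illustration, the sketch below shows how these tests map onto standard MATLAB calls; the input vectors (implantCSSI, controlCSSI, implantDens, controlDens, distances) are hypothetical placeholders for the measurements described above.

% Minimal sketch of the statistical comparisons.
alpha = 0.05;
% Two-sided two-sample Kolmogorov-Smirnov test on CSSI distributions
[hKS, pKS] = kstest2(implantCSSI, controlCSSI, 'Alpha', alpha);
% Two-sided two-sample t-test on neuron densities in one radial bin
[hT, pT] = ttest2(implantDens, controlDens, 'Alpha', alpha);
% Linear regression, e.g. CSSI versus distance from the tip
mdl = fitlm(distances, implantCSSI);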
2.13. Figures and graphics

All figures were generated using a combination of ImageJ, MATLAB (versions R2020b and R2022a), Inkscape (version 1.1.2), and Adobe Illustrator 2022 (version 26.1). ImageJ was used for figures showing histology. MATLAB produced the numerical plots. Inkscape was used to compile and complete the figures, with some help from Adobe Illustrator. Videos were produced using ImageJ and Adobe Media Encoder 2022 (22.61).

2.14. MATLAB code add-ons

In addition to the previously stated code add-ons for MATLAB, we used the following: shadedErrorBar (Rob Campbell. raacampbell/shadedErrorBar [https://github.com/raacampbell/shadedErrorBar], GitHub) (multiple versions) and subtightplot (Felipe G. Nievinski. subtightplot [www.mathworks.com/matlabcentral/fileexchange/39664-subtightplot], MATLAB Central File Exchange). Both were used for figure generation.
raacampbell/shadedErrorBar (https://github.com/raacampbell/shadedErrorBar), GitHub; multiple versions) and subtightplot (Felipe G. Nievinski, subtightplot (www.mathworks.com/matlabcentral/fileexchange/39664-subtightplot), MATLAB Central File Exchange). Both were used for figure generation.

Results

3.1. Assessment of FBRs induced by whole carbon fiber electrode arrays implanted chronically

We first verified that subcellular-scale (6.8 µm diameter) carbon fiber electrode arrays implanted for this study yielded minimal FBRs similar to those observed with other carbon fiber electrode designs (Patel et al, Welle et al). To assess the FBR along the arrays, we implanted one HDCF electrode array (Huan et al) targeting layer V motor cortex in each of three rats for 12+ weeks. This design included tapering silicon support tines that extended up to the last 300 µm of the fibers' length to enable direct insertion (figure (a)) (Huan et al). Similar to other implanted silicon electrodes (Turner et al, Salatino et al), we found elevated GFAP staining around the silicon supports, indicating astrocyte enrichment as part of the FBR and signifying the implant site (figure (b)). The explanted electrodes left an approximately straight row of black holes in each image plane, illustrating the electrode tracts in 3D in brain slices containing the tips. At depths near putative electrode tips and far from the silicon supports (∼300 µm), while enriched astrocytes persisted, minimal microglial responses and high neuron densities were observed (figure (c) and video 1). These microglial and neuronal responses appeared similar to those observed in previous reports (Patel et al, Welle et al) and to those located in symmetrical regions in the contralateral hemisphere (figure (d)). It is worth noting that we observed varying degrees of FBRs in histology showing the implant sites for all three rats (figures and S3). Such variation may reflect differing structural damage during implantation (Ward et al). Furthermore, we quantified glial fluorescence intensities with known carbon fiber locations for the first time. GFAP and IBA1 intensities were elevated up to 200 µm and 60 µm from putative electrode tips (figure S4), respectively, which is considerably closer than previous measurements with silicon shanks (Patel et al).

3.2. Carbon fiber recording sites can be localized in post-explant motor cortex with cellular confidence

Localizing the recording sites of carbon fiber electrodes in histology can better inform the structure, function (Yang et al), and health (Eles et al) of nearby neurons that putatively contribute the spikes detected by the electrodes. In previous work, the recording site tips of carbon fibers could be localized in deep brain regions using a 'slice-in-place' method (Patel et al). However, using this method to cryosection implants at superficial brain regions, such as motor cortex, is difficult due to the intrusion of supporting skull screws and the curvature of the brain. Here, we instead explanted the electrodes and estimated tip locations using biomarkers and visual cues captured with submicron-resolution confocal imaging. Tips could be identified by a combination of factors, including the FBR itself. Elevated GFAP immunostaining typically delimited electrode tracts as astrocytes enriched around a series of dark holes, particularly at shallower depths (figure (b)), but also at depths close to some tips.
Simultaneous transmitted-light imaging demonstrated that these holes were not immunostaining artifacts (figure (a)). Such dark holes also appeared in other fluorescent channels, contrasting with background staining. Successive confocal imaging produced high-resolution and high-contrast renditions in 3D. Collectively, we utilized the disappearance of these holes in the dorsal–ventral direction to corroborate fiber tracts and localize putative tips. For instance, as shown in figure (b) and videos 2 and 3, at the dorsal side of the brain slice, GFAP+ astrocytes wrapped around the dark hole of the electrode tract, signifying the fiber's pre-explant location. This hole could then be followed in the ventral direction to a depth where it was rapidly filled in by surrounding parenchyma and background fluorescence, signifying the electrode tip. We localized 29 putative tips of 31 fibers that were implanted for 12+ weeks (N = 2 rats), and 32 of 32 tips in an additional cohort implanted for 6 weeks (N = 2 rats). Two tips in rat #1 were not found, as their locations were likely included in a slice where some electrode tracts merged (data not shown). The distance between adjacent putative electrode tracts was measured at 82.1 ± 9.2 µm (X̄ ± S, N = 57 pitches), which was close to the array design pitch (80 µm) (figure (c)). The cross-sectional area of the tips was 50.5 ± 24.7 µm² (N = 61 fibers), comparable to the 36.3 µm² expected from a bare carbon fiber (figure (d)). These measurements indicated that we correctly identified fiber locations. To estimate the precision in localizing tips, three observers independently corroborated tip locations (table). The median absolute difference between estimates was 9.6 µm, suggesting subcellular precision. However, this difference was considerably lower in rats implanted for six weeks (5.2 µm) than for 12+ weeks (14.7 µm), likely the result of better tip positioning and elevated background staining. Furthermore, that the median difference in the horizontal plane was 2.2 µm suggests that tip depth contributed more to tip localization error than tract position.

3.3. Neuron soma morphology is geometrically altered surrounding carbon fiber tips

Having localized the recording site tips with high confidence and captured nearby neurons with submicron-resolution imaging, we sought to assess changes in neuron soma morphology near the implants, as previous studies reported that neurons surrounding silicon electrode implants were mechanically stretched compared to neurons in non-implanted regions (Du et al, Eles et al). We traced the 3D outlines of nearby neuron somas (N = 944 neurons, N = 28 fibers) and somas in symmetric contralateral sites (N = 1125 neurons) from NeuN staining images (figures (e)–(g) and S2) to measure soma shape and volume within a 50 µm radial sphere from implanted fiber tips and from hypothetical tips positioned in the contralateral hemisphere. Here, we also found that neurons close to the fiber tips appeared stretched in one direction compared to neurons in contralateral tissue (figures (a) and (b)). To quantify the degree of soma distortion, the CSSI was calculated using the equation from Du et al and Eles et al. The CSSI was defined as the ratio of the difference to the sum of the longest and shortest axes of the soma, where a high CSSI indicates a more asymmetric soma and a CSSI near zero indicates a nearly circular soma (Du et al).
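As a concrete reading of that definition, here is a minimal MATLAB sketch of the CSSI computation; the axis lengths, which in the study came from the 3D soma tracings, are hypothetical example values.

% Cellular soma shape index (CSSI): (Lmax - Lmin) / (Lmax + Lmin),
% where Lmax and Lmin are the longest and shortest soma axes.
cssi = @(Lmax, Lmin) (Lmax - Lmin) ./ (Lmax + Lmin);

% Hypothetical soma axis lengths (um) for illustration
Lmax = [18.0; 15.2; 12.4];
Lmin = [ 6.1; 11.0; 12.0];
disp(cssi(Lmax, Lmin))   % ~0.49 (stretched), ~0.16, ~0.02 (near-circular)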
In figure (c), we plot the distributions of neuron soma CSSIs within a 50 µm radius from implanted fibers (0.53 ± 0.22, X̄ ± S, N = 348 neurons), and found those were 111% larger than those at the contralateral sites (0.25 ± 0.12, X̄ ± S, N = 1125 neurons). Plotting the CSSI against the distance of each neuron from the carbon fiber tips (red dots) allowed us to fit a linear regression line (red line) to extrapolate the distance at which neurons at the implant site would have CSSIs similar to the average CSSI (blue line) measured for neurons in the contralateral hemisphere (figure (d)). The slope of the linear regression (R = 0.13, p = 1.4 × 10⁻²) predicted this distance to be 162 µm. Since there was a morphological change for neurons in the close vicinity of the electrodes, we were also curious whether there was a concomitant change in neuron soma volume, which to our knowledge has not been investigated in previous FBR studies. We found that neuron somas within 50 µm of fibers were 23% smaller (2.3 × 10³ ± 1.5 × 10³ µm³, N = 348 neurons) than those in the contralateral hemisphere (2.9 × 10³ ± 1.3 × 10³ µm³, N = 1125 neurons) (p < 0.001, Kolmogorov–Smirnov test, figure (e)). Fitting soma volumes (red points) against distance to fibers resulted in a regression (red line, R = 0.12, p = 2.1 × 10⁻²) that predicted the neuron volume would recover to that of the contralateral hemisphere (blue line) at 74 µm away from the fibers (figure (f)). We also compared neuron length in the dorsoventral direction, along the direction of explantation, to determine whether explanting the probes themselves may have influenced these morphological changes. While the distributions of neuron length were significantly different between the implant and control sites (p < 0.001, Kolmogorov–Smirnov test), neurons at the implant site were 5.3% shorter on average and showed no meaningful trend over distance from the implant (R = 0.03, p = 0.55, linear regression) (figure S5).

3.4. Neuron placement around implanted carbon fibers resembles naturalistic neuron distributions

As we observed many neurons surrounding the arrays, we sought to quantify how naturally these neurons were distributed in the fibers' immediate vicinities. Using the centroids of the aforementioned 3D reconstructions of neuron somas (figures (e)–(g)), we quantified neuron densities within a 50 µm radial sphere from implanted and hypothetical control fibers. The neuron density around implanted tips was 3.5 × 10⁴ ± 0.9 × 10⁴ neurons mm⁻³ (N = 16 fibers), which was 82 ± 22% of that measured around hypothetical tips (N = 2932 fibers). In contrast, conventional single-shank silicon electrodes retain 40% of a healthy neuron density within 50 µm (Winslow et al) and 60% within 100 µm (Biran et al). As spikes from single neurons can putatively be separated into individual clusters within 50 µm of an electrode (Henze et al, Buzsáki), the higher density around carbon fibers within this range may explain their improved recorded unit yield over silicon probes (Patel et al). Having confirmed that carbon fibers preserve most neurons within 50 µm, we examined whether the nearest neuron positions relative to the tips were altered. This is important for modeling because the recorded extracellular spike amplitude is inversely proportional to the distance between the neuron and the recording site (Jog et al, Lee et al, Seymour et al).
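For reference, the following is a minimal MATLAB sketch of how neuron density within a 50 µm sphere and ranked neighbor distances can be computed from soma centroids; the centroid and tip coordinates are random placeholders, not the measured data.

% Neuron count, density, and ranked neighbor distances around one tip.
centroids = 100 * rand(20, 3);                 % placeholder soma centroids (um)
tip       = [50 50 50];                        % placeholder tip position (um)

radius_um = 50;
d = sqrt(sum((centroids - tip).^2, 2));        % Euclidean distances to tip
sphereVol_mm3 = (4/3) * pi * (radius_um * 1e-3)^3;   % um -> mm
density = nnz(d <= radius_um) / sphereVol_mm3;       % neurons per mm^3

dSorted    = sort(d);                          % nearest-neighbor ranking
nearestSix = dSorted(1:min(6, numel(dSorted)));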
To examine these positions, we plotted histograms of the distances between the nearest six somas' centroids and each electrode tip (figure (a)) and compared them to the distances to each hypothetically positioned probe in the contralateral hemisphere (figure (b)). Importantly, the neuron nearest to implanted tips was 17.2 ± 4.6 µm (X̄ ± S, N = 28 fibers) away, which was only 1.0 µm farther, and not significantly farther (p = 0.30, two-sample t-test), than the neuron nearest to hypothetical tips in the contralateral hemisphere (16.2 ± 4.8 µm, N = 2932 simulated fibers). We summarized the nearest six neuron positions in the table. Across those six positions, the difference between neurons of matching rank around implanted and hypothetical fibers was of subcellular scale at 2.7 ± 1.0 µm (X̄ ± S). These results suggest that the neurons surrounding carbon fiber electrodes, which were mostly preserved in approximately proper positions, may have produced naturalistic physiological spiking activity.

3.5. Modeling suggests neuron distribution contributes to the number of sorted spike clusters

Previous work suggests that spikes can be sorted into clusters from individual neurons recorded as far as 50 µm away from an electrode (Henze et al, Buzsáki). It has been assumed that each spike cluster includes action potentials from a single nearby neuron (Buzsáki, Carlson and Carin), but it is possible that the electrode records similar spike waveforms generated by multiple neurons (Lewicki). At the same time, it has been widely noted that the number of clusters determined by spike sorting, reported to be between one and four (Rey et al) or between one and six (Buzsáki, Shoham et al, Pedreira et al), is at least an order of magnitude lower than the neuron count within a 50 µm radius in mammalian brain (Buzsáki, Shoham et al, Pedreira et al). The FBR is speculated to contribute to this discrepancy (Pedreira et al), as fewer viable neurons are observed surrounding electrodes after chronic implantation (Polikov et al). Particularly salient is a 60% reduction within 50 µm (Winslow et al). Other possible factors are that neurons remain inactive during recording sessions (Shoham et al, Pedreira et al) or that spike-sorting methods are currently insufficient (Pedreira et al, Carlson and Carin). Presently, this discrepancy remains unresolved. Since carbon fiber electrodes maintained a nearly natural distribution of the nearest six neuron positions, which were within 50 µm of the recording sites, we were well positioned to further consider this discrepancy by modeling electrophysiology. Also, since carbon fiber electrodes yield higher SNRs than silicon electrodes (Patel et al), and higher SNRs are expected to yield more spikes (Carlson and Carin) with higher accuracy (Magland et al), we anticipated spike sorting more clusters. Therefore, we sorted electrophysiology from an exemplary recording session towards the end of the implantation period (day 84 of 92) that yielded large spike clusters, where the mean peak–peak amplitude of the largest cluster recorded on channels that yielded clusters was 354.8 ± 237.2 µVpp (X̄ ± S, N = 12 channels) and the single largest cluster had a mean amplitude of 998.9 µVpp (figure (a)). Furthermore, 12 of 16 channels yielded spike clusters with mean amplitude >100 µVpp, signifying a high chronic signal yield consistent with previous work (Welle et al).
Signal yield at the experimental endpoints for all five implanted devices is presented in figure S1. However, our spike sorting for this exemplary recording session yielded a median of 2 and a maximum of 3 spike clusters per electrode, which is considerably lower than the 18.3 ± 4.9 neurons observed within 50 µm (N = 16 fibers), even after inducing a lower FBR with high signal yield. We hypothesized that the neuron distribution itself may contribute to the low number of sorted clusters. As expected (Henze et al) from the geometry of concentric spheres with increasing radius, the number of neurons relative to the electrode grew rapidly with increasing spherical volume. On average, we found fewer than one neuron (0.0 ± 0.2) within 10 µm, three neurons (3.3 ± 1.7) within 30 µm, and fifteen neurons (15.3 ± 4.1) 30–50 µm away from fiber tips (figure (b)). Given the inverse relationship between neuron distance and recorded spike amplitude (Jog et al, Buzsáki, Pedreira et al), the large neuron count at farther distances would likely generate similar spiking amplitudes that would be difficult to sort (Pedreira et al). We used the simplest point source model, in which neurons are treated as points (Lee et al, Nason et al) and each neuron has the same spiking output (Lempka et al). In this paper, a point source is defined as the centroid of a neuron's NeuN stain. Figure (c) illustrates this model with the relative distances of the closest ten neurons based on our measurements (table). We fit the model for I_pp using the mean spike amplitudes of the largest three sorted units (N = 12 channels) (figure (c)) and the mean positions of the nearest three neurons (table), where the conductivity σ was 0.27 S m⁻¹ (Slutzky et al). This fit yielded I_pp = 16.6 nA (figure S6(a)), which is similar to the 10 nA determined from fitting 60 µV at 50 µm, as recommended by Pedreira et al from combined intracellular and extracellular recordings (Henze et al, Buzsáki). To verify this parameter, we fit the model in two other ways by matching each sorted cluster with the correspondingly ranked neuron positioned around the same electrode (e.g. the second largest cluster associated with the second closest neuron) (figure S6(b)). Fitting for both I_pp and σ yielded similar values of 14.4 nA and 0.23 S m⁻¹, respectively, while fitting for just I_pp yielded I_pp = 15.9 nA. That all three modeling approaches produced similar results (figure S6) consistent with the literature suggests this model can reasonably predict spike amplitudes with neuron distance as input. Using the neurons' average relative positions to implanted fibers in the model, we plotted the average spike amplitudes of the nearest ten neurons to determine spike cluster discriminability (figure (d)). As expected from the neuron distribution we observed, predicted amplitudes quickly approached an asymptote as neuron rank increased. Since baseline noise may contribute to spike detection and cluster differentiability (Du et al, Pedreira et al), we hypothesized that noise may obfuscate these small differences in amplitude between neighboring neurons. Previous work comparing noise levels recorded with Michigan-style silicon probes and carbon fiber electrodes measured baseline noise levels of 10 and 15 µVrms, respectively (Patel et al). Therefore, we compared the average difference in spiking amplitude between consecutive neurons to these noise levels to estimate neuron differentiability (figure S7).
This difference remained smaller than 15 µVpp for comparisons of neuron positions beyond the third and fourth neurons, and smaller than 10 µVpp for comparisons beyond the fourth and fifth neurons. Therefore, the fourth closest neuron (30.7 ± 4.6 µm) is situated along the boundary at which spike clusters become indistinguishable, capturing activity from multiple neurons, and is consistent with the 1–4 sortable clusters typically observed in the literature (Rey et al). To further illustrate, we grouped simulated spikes according to differentiability, where the four largest clusters are more easily separable than the fifth through the tenth (figure (d)), and plotted all ten waveforms together (figure S8). When separated, the four largest units could easily merge into two or three units with variation in position and consequently amplitude. This is plausible given that, within the four closest neurons observed for 11/28 fibers, at least two neighboring neuron positions were within 1 µm of each other. Similarly, when plotted together, the clusters begin to merge starting with the fourth largest cluster.
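To make the preceding differentiability argument concrete, the following is a minimal MATLAB sketch that predicts amplitudes for ranked neuron distances under the fitted point source model and finds the rank at which consecutive amplitude differences fall below a noise-based threshold; the ranked distances are hypothetical placeholders for the measured mean positions.

% Predicted amplitudes for ranked neuron distances and the rank at which
% consecutive amplitude differences drop below a noise-based threshold.
Ipp   = 16.6e-9;                               % fitted peak-peak current (A)
sigma = 0.27;                                  % conductivity (S/m)
r_um  = [17 23 27 31 34 37 40 42 45 47];       % hypothetical ranked distances

vpp_uV = Ipp ./ (4 * sigma * pi * (r_um * 1e-6)) * 1e6;   % amplitudes (uV)
dv     = -diff(vpp_uV);                        % drop between consecutive ranks

thresh_uV = 15;                                % noise-based threshold (uV)
sortable  = find(dv >= thresh_uV);             % pairs still separable
fprintf('Clusters merge beyond neuron rank %d\n', max(sortable) + 1);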
Discussion

Here, we consistently localized the positions of carbon fiber electrode tips that had been chronically implanted in motor cortex with subcellular-scale confidence for the first time. Since our confidence measurement was based upon independent measurements made by three individuals, we consider this measurement to also be evidence of our method's reproducibility. Volumetric confocal imaging of the implant site enabled this localization and an examination of the surrounding neurons and FBR in greater 3D detail and precision (Biran et al, Nolta et al, Patel et al, Black et al, Yang et al, Welle et al, Sharon et al). In particular, we focused on the nearest 50 µm, regarded as the single-unit recording zone (Henze et al, Buzsáki), in which we observed a neuron distribution similar to that of healthy cortex. In contrast, multi-shank silicon arrays such as the UEA can remodel the nearby neuron distribution by inducing large tissue voids (Nolta et al, Black et al) and widespread necrosis (Szymanski et al). A more direct comparison can be made with Michigan-style electrodes, which can induce a 60% neuron loss within the 50 µm recording radius (Winslow et al).
Given that we observed an 18% neuron loss on average within this radius of carbon fibers, a conservative estimate suggests that carbon fibers retain at least nine more neurons per penetrating electrode on average, without accounting for the shanks' larger sizes. When scaling up to the 100+ channels used in BMI devices, several hundred neurons could be preserved by carbon fibers instead of silicon shanks. Carbon fibers have also proven to have a much higher functional probe yield and to record more units per probe with better recording SNR (Patel et al, Black et al, Welle et al). Therefore, our work provides strong evidence that subcellular-scale electrodes such as carbon fibers retain recordable neurons to a considerably greater degree. At the same time, our detailed examination revealed components of the FBR that could inform future designs. This included neuron soma stretching around carbon fibers, which has similarly been observed surrounding conventional microwire and silicon electrodes and has been attributed to chronic micromotion relative to surrounding tissue (Du et al) or to electrode insertion during surgery (Eles et al). We considered whether the removal of the electrode arrays post-fixation may have contributed to these morphological changes by determining whether neurons were stretched along the direction of explantation, as the removal likely generated considerable strain in the immediate vicinity of the probes. Given that there was no meaningful difference in neuron length in that direction, and that neurons have previously been shown to stretch along the horizontal plane surrounding soft (Du et al) and carbon fiber electrodes (Patel et al) that were sliced in place, these morphological changes must have occurred prior to euthanasia. Also, the persistence of astrocyte enrichment along the electrode tracts to depths close to the tips suggests that the array architecture may induce a greater FBR than previous carbon fiber designs, which have shown a minimal response (Patel et al). That the astrocyte response diminishes closer to the putative tips suggests that the permanent silicon shuttles, even with small feature size (15.5–40.5 µm), may have been the foremost contributor to this increased FBR. As sharpening the tips of electrodes with microwire-like geometry enables facile, unsupported insertion to deep cortical depths, such as 1.2 mm (Welle et al) and 1.5 mm (Obaid et al), future iterations could use fire-sharpening (Guitchounts et al, Welle et al) or electro-sharpening (El-Giar and Wipf, Obaid and Wu et al, Sahasrabuddhe et al) to obviate the need for shuttles and to reduce insertion force (Obaid and Wu et al). Additionally, although Massey et al reported a denser carbon fiber electrode pitch of 38 µm, the 80 µm pitch reported here is the densest that has been assessed with histology after a chronic implant. This relatively high shank density may have been a factor in the increased FBR that we observed, and warrants a histological sensitivity analysis of the effect that microwire or microwire-like shank density may have on the surrounding tissue. Furthermore, skull-fixing the electrodes induces an increased FBR compared to floating electrodes (Biran et al). Given that the neuron distortion observed here likely accompanied implant micromotion (Du et al), subcellular-scale electrode designs should adopt a floating architecture similar to that of the UEA, as flexible electrodes such as syringe-injectable (Hong et al, Schuhmann Jr.
et al, Yang et al) and NET (Luan et al) probes already have, even with the added benefits of their small size. At the same time, our close inspection of the immediate vicinity of tips with known positions may have uncovered a previously overlooked increase in the FBR that is more pronounced at the recording site tips. This is plausible, as a moderate increase in GFAP and IBA1 intensity was previously observed surrounding carbon fibers compared to unimplanted tissue (Kozai et al). Having visualized an approximately naturalistic neuron distribution around carbon fibers and recorded electrophysiology from them, we sought to characterize the relationship between the surrounding neuron population and the recorded spikes through modeling. We considered predicting individual neuron positions from electrophysiology, but tip localization would need to be more precise. Although our estimated localization error (9.6 µm, table, <1 soma) was lower than the approximate error of 3–4 somata reported in recent work by Marques-Smith et al, which attempted neuron localization using Neuropixels probes (Jun et al), our error was higher than the average difference in position between the first and second closest neurons (6.1 ± 5.1 µm) and between subsequent positions (figure S9). Additionally, our point source model was simplistic and would likely have to incorporate more biological phenomena to accurately predict the locations of surrounding neurons. Previous modeling and in vitro recordings suggest that dendritic morphology (Pettersen and Einevoll) and the axon initial segment (Bakkum et al), respectively, contribute greatly to the recorded extracellular potentials; therefore, immunostaining and 3D-segmenting these structures in addition to neuron somata (NeuN), and accounting for their contributions to recorded potentials, may increase the accuracy of our model. Previous work has shown that in vivo optical imaging, such as two-photon imaging, can be used to measure the morphological changes of nearby genetically labeled brain cells (Eles et al) and other elements of the FBR over the course of chronic implantations surrounding non-functional probes (Kozai et al, Wellman et al, Savya et al), and to visualize neuronal firing in situ (Lin and Schnitzer). However, our functional study required the installation of the whole recording headstage and connector, which would block a cranial window for optical imaging. Additionally, optical imaging is limited to shallow cortical layers (Siegle et al), which is not suitable for deeper brain regions, such as the depth to which we inserted the carbon fibers to record layer 5 motor neurons in the rat brain. In summary, despite these limitations, our method provides a viable solution for assessing the FBR at the experimental endpoint for similar recording devices, including the Utah array, and for modeling recorded electrophysiology at the level of the neural population. Regardless, modeling the ten closest neurons' spike amplitudes suggests that the fourth closest may be situated on the boundary at which clusters become indistinguishable, which may explain the low number of clusters (1–4) (Pedreira et al, Rey et al) attributed to individual neurons. That this single-unit boundary is 30.7 ± 4.6 µm away and dependent on neuron distribution disagrees with the notion that this boundary is 50 µm (Henze et al, Buzsáki).
That said, spikes from neurons up to 140 µm away have been reported as distinguishable from background noise in the hippocampus, thereby contributing to multi-units beyond the above-mentioned single-unit boundary (Henze et al, Buzsáki). However, this distance may also be brain-region dependent, because recording in regions with varied neuron densities or cell-type distributions (Collins et al, Herculano-Houzel et al) likely produces distinct background noise levels (Lempka et al) that heavily influence spike detectability. Therefore, the neuron density of target regions must be considered when interpreting intracortical electrophysiology and should influence electrode design parameters such as recording site pitch (Kleinfeld et al). Additionally, that our exemplary recording session yielded a median of two clusters suggests that other previously identified factors, such as silent neurons (Shoham et al, Pedreira et al) (although disputed by Marques-Smith et al), limitations in spike-sorting algorithms (Pedreira et al), and baseline noise (Du et al), may have contributed to the low cluster count. Furthermore, recent work suggests that favorable histological outcomes may not correlate with high recording yield (Michelson et al). This may be explained by neuronal hypoexcitability following electrode implantation (Eles et al) or an observed shift toward a higher proportion of activity from inhibitory neurons surrounding chronic implants (Salatino et al, Michelson et al). Therefore, our results contribute to an increasing list of factors that comprise the mismatch between cluster sortability and histology. Combined with recent work demonstrating that the downstream biological effects of electrode implantation are more complex than traditionally thought, our modeling and assessments of the FBR corroborate the need for further investigation into the interactions between electrodes and surrounding tissue, even for designs more biocompatible than traditional silicon electrodes (Salatino et al, Michelson et al, Thompson et al).

Conclusion

In this work, we demonstrated that the recording site tips of subcellular-scale carbon fiber electrodes can be localized with cellular-to-subcellular resolution after explanting the electrodes. This enabled measurement of the surrounding neurons in 3D, which indicated that their somata were stressed but still positioned in a nearly natural distribution. Modeling the electrophysiological signals that this geometric distribution of neurons might produce suggests that the low number of spike clusters typically identified in spike sorting may arise, at least partially, from neuron placement, and likely varies with neuron density. Overall, our work informs design considerations for carbon fiber electrodes and other intracortical electrodes with similar subcellular feature size.
Dentists' Self-evaluated Ability in Diagnosing and Updating About Pulpotomy

Dental caries in children is a public health issue because it affects thousands of children, especially in developing countries. Besides caries, dental trauma can also involve the pulp, either reversibly or irreversibly. Amongst the different vital pulp therapy techniques for primary teeth, pulpotomy is widely used and consists of removing the coronal pulp while maintaining the vital root pulp, thereby preserving pulp vitality until physiologic tooth resorption. Although pulpotomy of primary teeth has been studied for many years, it still causes many controversies and discussions, either due to the difficulty many dentists have in correctly diagnosing the pulp condition or due to doubts regarding the different materials used for capping, protecting, and repairing the pulp remnant. Both the diagnosis and the material directly affect the success of the technique. Given the continuous discussion on the subject and the different existing protocols for vital pulp therapy, especially pulpotomy, this study aimed to assess the self-evaluated knowledge of different dental professional profiles in Brazil regarding the diagnosis and indications for pulpotomy in primary teeth.
This study was approved by the Institutional Review Board (Protocol number CAAE 43,951,215.0.0000.5417). A 20-question self-administered electronic questionnaire was developed based on the input of selected researchers over discrete review rounds and was applied to determine each participant's profile and knowledge of vital pulp therapies in primary teeth. The questionnaire was hosted online using the Google Forms tool. Participants were divided into 3 groups: paediatric dentist professors (G1), nonprofessorial paediatric dentist specialists (G2), and other dentists not belonging to either of the previous groups (G3). The link containing the questionnaire was sent via email, social media, and communication applications to paediatric dentistry professors, all paediatric dentists duly registered with the Federal Council of Dentistry, and general dentists/specialists in other areas. The participants remained anonymous. Data were tabulated and analysed after assessing their normality and homogeneity. The groups were compared using Chi-square tests at a 5% level of significance.
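As an illustration of the group comparison, here is a minimal MATLAB sketch of a Chi-square test of independence between group membership and a yes/no questionnaire item; the group sizes match the study (91/109/216), but the response counts are hypothetical.

% Chi-square test of independence between group and a yes/no item.
group  = [repmat({'G1'},  91, 1); repmat({'G2'}, 109, 1); repmat({'G3'}, 216, 1)];
answer = [repmat({'yes'}, 80, 1); repmat({'no'},  11, 1); ...   % G1 (hypothetical)
          repmat({'yes'}, 90, 1); repmat({'no'},  19, 1); ...   % G2 (hypothetical)
          repmat({'yes'}, 95, 1); repmat({'no'}, 121, 1)];      % G3 (hypothetical)

[tbl, chi2, p] = crosstab(group, answer);      % Statistics Toolbox
fprintf('chi2 = %.2f, p = %.4g (significant at 5%%: %d)\n', chi2, p, p < 0.05);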
After removing possible duplicate questionnaires, a total of 416 questionnaires were evaluated: 91 from G1, 109 from G2, and 216 from G3. Responses to the question "Do you feel able (theoretically/technically) to indicate pulpotomy in primary teeth?" were associated with group (G1, G2, and G3) (figure A). G1 and G2 mostly reported feeling able to indicate the technique. Specialists in paediatric dentistry self-reported indicating pulpotomy more often than those with a specialisation in another area or without a specialisation (figure B). Across the three groups, those who sought updates on the topic were the ones who self-reported indicating pulpotomy (figure C). Associating the modes of seeking updates with the groups showed that all three groups used academic materials. All G1 participants used academic materials, whilst G2 participants had a high demand for updates through congresses, and G3 through social media (Google and Google Scholar). Associating the indication for pulpotomy with the groups showed that G1 and G2 mostly chose "accidental pulpal exposure", whilst G3 chose "teeth with extensive caries and pulp chamber involvement without periapical lesion, confirmed on radiographs". All groups indicated that they would perform the technique for "maintenance of the tooth in the arch in case of treatment success".
The literature reports the most varied aspects of vital pulp therapies in primary teeth; however, investigating how different profiles of dentists self-evaluate their tendency towards the diagnosis and indication of vital pulp therapies in primary teeth, especially pulpotomy, and how they keep updated on this topic, makes this study innovative. Amongst vital therapies for primary teeth, pulpotomy is the gold-standard procedure, but there is no consensus on how best to perform it, a fact noted in the responses to the question "What is the reason or diagnosis that leads to the indication of pulpotomy?", in which most of G1 and G2 answered "accidental pulp exposure", whilst G3 answered "teeth with extensive caries and involvement of the pulp chamber without periapical lesion, confirmed on radiography". This result demonstrates that those who work directly with paediatric dentistry tend to be more optimistic about the removal of decayed tissue, opting for pulpotomy after accidental exposure, whereas other dentists were more invasive. The importance of maintaining the primary tooth in position until its physiologic exfoliation is a consensus in the literature, which is in line with the response obtained when the groups were asked about the motivation for performing pulpotomy: all groups answered "tooth maintenance in the arch in case of successful treatment". Correct diagnosis is known to be the most important part of the dentist's role, especially in pulp therapies. Paediatric dentistry professors and specialists mostly self-assessed "yes" to the question "Do you feel able (theoretically/technically) to indicate pulpotomy in primary teeth?" Similarly, in India, Nayak et al reported that paediatric dentists and general practitioners act differently when asked about different procedures in primary teeth, with paediatric dentists showing more knowledge. Science has an essential role in dentistry, especially in times of evidence-based dentistry. However, the way in which the different dentists sought such updates differed significantly, as observed in the responses to the question "Select the different ways you use to be updated on vital pulp treatments." G1 mostly chose academic material, which was expected since this group is intimately involved in research on the subject. In G2, about 65% used social media and the Google search tool, which highlights the search for instructions in Portuguese produced by experts in the area. It is also worth noting that most of the articles available in the databases are behind paid access, a fact that directly influences the way this group keeps updated. In G3, about 18.5% of the participants did not seek updates on pulpotomy, probably because they were not performing this procedure in their daily routine. Amongst those who did keep updated on the subject, approximately 47% used social media and the Google search tool, probably for the same reason as in G2. These data are similar to the findings of Aldhilan and Al-Haj Ali in 2018, which demonstrated that Saudi Arabian paediatric dentists tend to have more knowledge of primary dentition treatments than general practitioners. Relating the search for updates to self-assessed ability, those who felt more able were also the ones who consumed the most content on the subject, which suggests that a lack of self-assessed ability is related to a lack of interest in keeping updated.
The professionals who work directly with paediatric dentistry (professors or specialists) felt more capable of diagnosing and treating cases requiring pulpotomy in primary teeth. Although most of the professionals interviewed in the three groups used scientifically based sources, paediatric dentistry specialists and dentists in general should be aware of the importance of evidence-based dentistry. The low clinical use of pulpotomy in G3 may be due to low interest in keeping updated on this topic.
None disclosed.
|
The role of ferroptosis in prostate cancer: a novel therapeutic strategy

Prostate cancer is the most common malignancy of the male urogenital system. It is characterized by abnormal cell division in the prostate, resulting in abnormal growth of the gland. Most men do not die from prostate cancer itself but are affected by slow-growing tumors; death from prostate cancer is mainly due to the spread of cancer cells to other parts of the body, such as bone, the pelvis, the lumbar vertebrae, the bladder, the rectum, and the brain. Currently, only three independent risk factors for prostate cancer are recognized: age (the risk of prostate cancer increases with age), race (African American men are at increased risk), and genetic factors (heredity). The incidence of prostate cancer is positively correlated with age; as life expectancy increases, so does the incidence of prostate cancer. Depending on the type of prostate cancer, there are four key treatment methods: radical prostatectomy, chemotherapy, androgen deprivation, and radiotherapy. These can improve the survival rate of patients with early prostate cancer. However, many patients are diagnosed at a late stage with metastasis, which worsens the prognosis. Androgen deprivation therapy (ADT) has long been the basic treatment for advanced and metastatic prostate cancer. Despite the high initial response rate, most advanced patients eventually develop progressive prostate cancer after ADT, known as castration-resistant prostate cancer (CRPC). At the same time, the drastic reduction of serum testosterone levels induced by ADT produces multiple side effects, such as bone fracture, cardiovascular disease, sexual dysfunction, and anemia, reducing patients' quality of life. It is therefore particularly important to develop new methods to treat prostate cancer. In 2012, Dixon et al. found that the antitumor drug erastin induces a unique iron-dependent, non-apoptotic cell death in Ras-mutant tumors. This cell death process cannot be inhibited by inhibitors specific to other forms of regulated cell death, but antioxidants and iron chelators can prevent and reverse it. They proposed the term 'ferroptosis' to describe a new mode of regulated death, distinct from apoptosis, autophagy, and necrosis, characterized by iron-dependent lipid peroxidation. The excessive accumulation of lipid reactive oxygen species (ROS) destroys the cell membrane, leading to cell death. In ferroptotic cells, mitochondria appear smaller than normal, with increased membrane density and reduced cristae. Ferroptosis is considered to be associated with a variety of human diseases, such as neurodegeneration, ischemia-reperfusion injury, and various cancers, including prostate cancer. Tumor cells are more dependent on iron than normal cells because of their high proliferation rate, a phenomenon called iron addiction. The discovery of ferroptosis has brought a new understanding of the occurrence and development of tumors. There is increasing evidence that ferroptosis leads to tumor growth inhibition, so inducing ferroptosis pharmacologically or by regulating ferroptosis-related genes may become an anticancer strategy. It is therefore of great significance to understand the mechanism of ferroptosis and the progress of research on it in prostate cancer.
Inhibiting the cysteine-glutamate transporter system Xc− can induce ferroptosis

System Xc− is a sodium-independent antiporter that exports intracellular glutamate and imports extracellular cystine across the membrane in a 1:1 ratio for intracellular glutathione synthesis . It consists of two subunits, SLC7A11 and SLC3A2. SLC7A11 is connected to SLC3A2 through a disulfide bond between the conserved residue Cys158 of SLC7A11 and Cys109 of SLC3A2 . A recent study showed that the CD44 variant subtype (CD44v) also interacts with and stabilizes SLC7A11 on the surface of cancer cells . SLC7A11 is a multichannel transmembrane protein that serves as the functional component of system Xc− . SLC3A2, a single-pass transmembrane protein, is the molecular chaperone that maintains SLC7A11 protein stability and proper membrane localization . Small molecules such as erastin and sorafenib have been identified as system Xc− inhibitors that promote ferroptosis . Cadmium (Cd) is a toxic metal and an environmental pollutant. Cd exposure occurs primarily through the intake of contaminated food and water, and to a large extent through inhalation and smoking. International cancer and other epidemiological research institutions suggest that Cd can lead to prostate cancer . Zhang et al. found that chronic cadmium exposure inhibited ferroptosis and promoted the proliferation of prostate cancer cells. RNA sequencing revealed that the lncRNA OIP5-AS1 was significantly up-regulated in Cd-induced prostate cancer proliferation; OIP5-AS1 inhibits ferroptosis through the miR-128-3p/SLC7A11 axis . As a tumor suppressor gene, p53 plays an important role in inhibiting tumor growth . Recent studies have found that p53 is involved in ferroptosis: it inhibits cystine uptake by suppressing the expression of SLC7A11, thereby sensitizing cells to ferroptosis . Flubendazole inhibits the proliferation of CRPC by inducing p53, which suppresses SLC7A11 expression, further downregulates glutathione peroxidase 4 (GPX4), and promotes ferroptosis. In addition, flubendazole displayed a synergistic effect with 5-fluorouracil (5-FU) in CRPC chemotherapy: the combination further decreases SLC7A11 expression, promoting ferroptosis and enhancing the drug effect .

Inhibiting the activity of GPX4 can induce ferroptosis

GPX4 is an antioxidant enzyme that uses glutathione as a cofactor to protect cells and membranes from lipid peroxidation. Glutathione can cycle between reduced (GSH) and oxidized (GSSG) states, enabling this metabolite to participate in redox biochemical reactions . ChaC glutathione-specific γ-glutamylcyclotransferase 1 (CHAC1) can decrease intracellular GSH content, prompting ferroptosis in prostate cancer cells and increasing their sensitivity to docetaxel . Inhibition of GPX4 leads to the accumulation of ROS, accompanied by lipid peroxidation, and eventually to ferroptosis . In addition, GPX4 reduces toxic lipid peroxides (e.g., R–OOH) to the corresponding lipid alcohols (e.g., R–OH). RSL3 can covalently inactivate GPX4 by binding to the selenocysteine in its active site . GPX4 is thus the core inhibitor of ferroptosis. Recent studies have reported that serum miRNAs are promising targets for cancer research and therapy. MiRNAs mainly induce the degradation of mRNAs or inhibit their translation by interacting with the 3′-UTR of target mRNAs, leading to changes in regulatory factors in cellular physiological processes.
Downregulation of miR-15a expression was observed in patients with prostate cancer. MiR-15a can interact with the 3′-untranslated region (UTR) of GPX4 mRNA to negatively regulate the expression of GPX4, and the use of a miR-15a mimic or siGPX4 can promote the death of prostate cancer cells . SLC7A11 and GPX4 are highly expressed in advanced prostate cancer cells. The ferroptosis activators erastin and RSL3 can promote cancer cell death by inducing ferroptosis, and combining second-generation anti-androgen drugs such as enzalutamide or abiraterone, standard treatments for advanced prostate cancer, with a ferroptosis activator can further inhibit tumor proliferation .

ROS production is essential for ferroptosis

ROS usually comprise superoxide, peroxides, and free radicals . As unstable molecules, they are produced in living cells as normal metabolites and play a significant role in signal transduction and the maintenance of tissue homeostasis . ROS are involved in various physiological and pathological processes, such as metabolism, inflammation, neurodegeneration, and carcinogenesis [ – ]. When cells respond to oxidative stress, large amounts of highly reactive and toxic ROS are produced, leading to adverse changes in cellular components such as protein and lipid damage and DNA damage . The cell membrane is particularly vulnerable to ROS damage because of its high content of polyunsaturated fatty acids (PUFA); this damage, called 'lipid peroxidation', is the most significant feature of ferroptosis. Compared with normal cells, cancer cells are more vulnerable to ferroptosis and ROS accumulation. Cisplatin is a widely used anticancer drug, and resistance to cisplatin is an important obstacle to chemotherapy in patients with prostate cancer. The ferroptosis activator RSL3 increases the sensitivity of prostate cancer cells to cisplatin by producing ROS, aggravating cisplatin-induced cell cycle arrest and apoptosis . Diallyl trisulfide (DAT) is one of the main decomposition products and active components of allicin. Studies have found that DAT has a variety of biological effects, such as anti-tumor activity, bacteriostasis, protection against oxidative stress, and participation in the regulation of the inflammatory response. In prostate cancer, it increases reactive oxygen species, accompanied by ferritin degradation, prompting ferroptosis and inhibiting the growth of cancer cells . Artemisinin, first extracted from Artemisia annua in 1971 by Tu Youyou , is a sesquiterpene lactone with antimalarial activity. In recent years, it has been found to have anticancer effects and to play an important role in inducing ferroptosis. Artemisinin can induce ferroptosis in the prostate cancer cell line DU145, but no similar effect was observed in PC3 and LNCaP cells . Dihydroartemisinin (DHA) is an active metabolite of artemisinin. Numerous studies have shown that DHA is cytotoxic to a variety of cancer cells, such as lung cancer and glioma . It can induce cancer cell ferroptosis and autophagy and inhibit the proliferation of cancer cells. At present, no study has demonstrated its role in prostate cancer ferroptosis, which could be a future research direction. Traditional Chinese medicine usually contains a variety of active components, which can produce additive or synergistic effects. Compared with targeted drugs, traditional Chinese medicine is multi-targeted and can modulate a variety of signaling pathways, regulating molecules such as ADAMTS18, ROS, Nrf2, and GPX4 and thereby regulating ferroptosis.
Using traditional Chinese medicine to induce ferroptosis in prostate cancer cells may therefore become a future research direction.

Iron-mediated oxidative damage in ferroptosis

Iron is an important cofactor for maintaining a range of biological processes . Iron overload can lead to fatal ROS production and lipid peroxidation . Transferrin receptor 1 (TFR1) is a transmembrane glycoprotein responsible for importing iron, which is stored and transported in the form of iron–protein complexes (mainly ferritin) . The ferrireductase STEAP3 (six-transmembrane epithelial antigen of prostate 3) reduces Fe3+ to Fe2+ . Finally, Fe2+ is released from the endosome into the labile iron pool in the cytoplasm via divalent metal transporter 1 (DMT1) . As an important driver of ROS formation through enzymatic and non-enzymatic reactions, iron plays an essential role in sensitizing cells to ferroptosis. Bordini et al. found that high doses of iron can inhibit the proliferation of prostate cancer cells through oxidative damage; in bicalutamide-resistant cells, iron showed a synergistic effect with bicalutamide . In recent years, many studies have explored the mechanism of ferroptosis induced by ferroptosis inducers. The possible signaling pathways and targets are shown in Table .
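To make the chemistry behind these checkpoints explicit, the standard reactions can be written out; these are textbook equations added here for illustration rather than results of the studies cited above. GPX4 reduces a lipid hydroperoxide at the expense of two molecules of glutathione, and ferrous iron generates the hydroxyl radical non-enzymatically via the Fenton reaction:

$$\mathrm{R\text{-}OOH} + 2\,\mathrm{GSH} \xrightarrow{\ \mathrm{GPX4}\ } \mathrm{R\text{-}OH} + \mathrm{GSSG} + \mathrm{H_2O}$$

$$\mathrm{Fe^{2+}} + \mathrm{H_2O_2} \longrightarrow \mathrm{Fe^{3+}} + {}^{\bullet}\mathrm{OH} + \mathrm{OH^{-}}$$

Together, these equations illustrate why GSH depletion (e.g., by erastin via system Xc− inhibition), GPX4 inactivation (e.g., by RSL3), and an enlarged labile iron pool all converge on the same outcome: unchecked lipid peroxidation.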
Phosphatase and tensin homolog deleted on chromosome 10 (PTEN) is a tumor suppressor gene discovered in recent years. Its product, the PTEN protein, has both lipid phosphatase and protein phosphatase activity. PTEN exerts its anti-tumor effect mainly through its lipid phosphatase activity on PIP3, the downstream target of PI3K, thereby blocking the PI3K/Akt signaling pathway . Sterol regulatory element-binding protein 1 (SREBP1) is a key transcription factor regulating lipid metabolism and drives the expression of multiple genes encoding key enzymes of the adipogenesis pathway (such as SCD, FASN, and ACLY). It was found that PI3K activation or PTEN deficiency promotes SREBP1/SCD-mediated lipogenesis through the PI3K/AKT/mTOR pathway, thereby suppressing ferroptosis; inhibiting mTOR may thus be a new approach to the treatment of prostate cancer . The ferroptosis-related genes AIFM2 and NFS1 were identified in a prostate cancer gene risk model, and in vivo and in vitro experiments showed that knockout of these two genes promotes ferroptosis . In addition, pannexin 2 (PANX2) is highly expressed in prostate cancer, and knocking out this gene promotes ferroptosis and inhibits prostate cancer cell growth . Intriguingly, database mining shows that ferroptosis-related genes are promising prognostic biomarkers and potential drug targets in prostate cancer patients. The growth of prostate cancer cells depends on the continuous activation of androgens and the androgen receptor (AR); AR and its splice variants remain the main drivers of CRPC progression. As a classical inducer of ferroptosis, erastin can inhibit the transcriptional activity of AR and its splice variants in vitro and in vivo. In addition, when erastin was combined with docetaxel in the treatment of CRPC, the growth-inhibitory effect of docetaxel was enhanced. In vivo, erastin further enhanced the antitumor effect of docetaxel without obvious damage to the organs of mice and with few toxic side effects, providing an experimental basis for clinical research . Li et al. also found that an anti-androgen combined with the ferroptosis activator RSL3 inhibited the growth of prostate cancer cells in mouse xenografts . Further clinical trials can be conducted in the future to prove the role of ferroptosis in the treatment of prostate cancer. Meanwhile, 2,4-dienoyl-CoA reductase (DECR1) was discovered in an analysis of genes associated with AR-inhibitor resistance. DECR1 is a target gene negatively regulated by AR, and its knockout promotes ferroptosis in CRPC cells . In the latest studies, isothiocyanate (ITC)-containing AR antagonists were synthesized, which downregulate AR and its splice variants; combination with the GSH inhibitor BSO promotes lipid peroxidation and ferroptosis in prostate cancer cells . Kumar et al. showed that supraphysiological testosterone can inhibit tumor proliferation by producing lipid peroxides; targeting the lipid metabolism of prostate cancer cells to inhibit their growth is therefore considered a possible therapeutic strategy . It is worth mentioning that recent studies have found that the endoplasmic reticulum stress response plays an important role in ferroptosis. On the one hand, activation of the endoplasmic reticulum stress pathway in cancer cells can inhibit ferroptosis and participate in the induction of drug resistance.
On the other hand, endoplasmic reticulum stress can promote ferroptosis and may be involved in the co-regulation of ferroptosis and apoptosis . Some studies have also shown that ferroptosis inducers can activate the endoplasmic reticulum stress-mediated PERK–eIF2α–ATF4–CHOP cascade without inducing apoptosis. LNCaP-AI cells have higher ATF6 expression than LNCaP-A cells; the highly expressed ATF6 mediates tolerance to ferroptosis through transcriptional activation of PLA2G4A, and inhibition of ATF6α signaling by Ceapin-A7 enhances the effect of enzalutamide on CRPC xenograft growth . Understanding the relationship between ferroptosis and endoplasmic reticulum stress, apoptosis, and autophagy is of great significance for overcoming the drug resistance of cancer cells. However, little research has been done in this area in prostate cancer, and whether such mutual regulation exists in prostate cancer remains to be explored.
In 2012, Dixon et al. proposed the term ferroptosis to describe an iron-dependent, regulated form of cell death caused by the accumulation of lipid reactive oxygen species . Here we have reviewed the important role of ferroptosis in prostate cancer. Multiple drugs combined with ferroptosis inducers show enhanced anticancer effects. For example, flubendazole combined with 5-FU can induce CRPC cell death by promoting ferroptosis . Second-generation antiandrogens combined with ferroptosis activators can further inhibit tumor proliferation by suppressing the expression of GPX4 . Ferroptosis activators can also make prostate cancer cells more sensitive to cisplatin (DDP) . In the latest study, isothiocyanate (ITC)-containing AR antagonists were synthesized that downregulate AR and its splice variants; combined with BSO, they promote ferroptosis in prostate cancer cells . Although these studies revealed important findings, only parts of the mechanism of ferroptosis in prostate cancer have been explored (Fig. ), and the field has not been studied extensively in this disease. Whether other important mechanisms of ferroptosis exist remains unclear. Database analysis suggests that many ferroptosis-related genes may be associated with prostate cancer, but the specific mechanisms by which these genes affect prostate cancer cells are unknown; targeting these genes to induce ferroptosis may be a new therapy for prostate cancer. Several in vivo experiments have shown that combinations including ferroptosis activators can inhibit the proliferation of prostate cancer cells . Further clinical trials can be carried out in the future to prove the role of ferroptosis in the treatment of prostate cancer. Different prostate cancer cell lines show different sensitivities to ferroptosis; this factor, and how to address cellular tolerance to ferroptosis, should therefore be considered before using ferroptosis-inducing agents to treat prostate cancer. Whether more FDA-approved drugs can induce ferroptosis in prostate cancer for clinical use, or whether other unapproved drugs can induce ferroptosis in the treatment of prostate cancer, will be the direction of future exploration.
Quality of life changes over time and predictors in a large head and neck patients' cohort: secondary analysis from an Italian multi-center longitudinal, prospective, observational study—a study of the Italian Association of Radiotherapy and Clinical Oncology (AIRO) head and neck working group

Head and neck carcinoma (HNC) is becoming common worldwide, and it is anticipated to rise by 30%, accounting for an estimated 1.08 million new cancer cases annually by 2030 . In particular, the increasing rates of human papilloma virus (HPV)-related tumors, which have a better prognosis than their HPV-negative counterparts, have contributed to the high prevalence of HNC, especially in the United States of America and Western Europe . Currently, regardless of HPV status, evidence-based treatments are multimodal and may produce several physical complications and psychological distress, which may persist beyond treatment . The main treatment-related side effects are oral mucositis, taste impairment, salivary gland dysfunction, xerostomia, incapacity to chew and swallow, bacterial and fungal infections, neuropathy, trismus, and skin changes and reactions in the treated area . All these complications impair patients' ability to perform daily activities , resulting in social withdrawal and mental and emotional distress, and impacting patients' health-related (HR) quality of life (QoL) domains as well as more general QoL domains . HRQoL may be described as a subjective and multi-dimensional concept related to one's perception of well-being and satisfaction with one's own health as well as daily life functioning , which encompasses physical, psychological, and social functioning and disease- and treatment-related symptoms and side effects . Thus, it may be considered a subset of the broader concept of QoL, defined as "an individual's perception of their position in life in the context of the culture and value systems in which they live and in relation to their goals, expectations, standards and concerns" . Accordingly, we have decided to focus on the more comprehensive term QoL. As mentioned above, HNC patients face unique physical, emotional, and psychological challenges and life disruptions in comparison to other cancer sites . Hence, understanding QoL changes and patients' needs during and after therapy is essential to manage the disease more effectively and to set up rehabilitative strategies for patients . Longitudinal studies reported that QoL usually decreases during radiation therapy (RT) and starts to improve 3–6 months after treatment, with a global amelioration one year after the end of RT but without a complete return to pre-treatment status, and with a pattern that varies depending on the dimension of QoL evaluated . In addition, information about clinical and treatment-related predictors of improvement and recovery in QoL is not yet comprehensive. A multi-center longitudinal, prospective, observational study of consecutive HNC patients, treated at seven Italian Oncology Radiotherapy Departments, was conducted on behalf of the Italian Association of Radiotherapy and Clinical Oncology (AIRO) Head and Neck Working Group. The first endpoint was the Italian-language psychometric validation of the M.D. Anderson Symptom Inventory Head and Neck (MDASI-HN) questionnaire .
Here, we present the results of the secondary endpoints: (i) to investigate QoL in patients with HNC using the MDASI-HN module to measure symptom burden during RT and in the follow-up period, namely 1, 3, 6, and 12 months after completion of RT, and (ii) to analyze whether QoL may be predicted by socio-demographic and clinical characteristics.
Procedure

This was a multi-center prospective longitudinal observational study of consecutive HNC patients treated with RT at seven Italian Oncology Radiotherapy Departments from 2016 to 2019. Eligibility criteria were: squamous cell carcinoma of the head and neck (including oral cavity, oropharynx, larynx, and hypopharynx); age ≥ 18 years; Eastern Cooperative Oncology Group (ECOG) performance status < 2; and good knowledge of the Italian language. Exclusion criteria included a history of cognitive or psychiatric disorders, synchronous tumors, or previous RT to the head and neck region. Treatment details were previously described . Briefly, all patients were treated with (chemo)radiotherapy ((C)RT) with definitive or adjuvant (postoperative) intent, based on primary site and disease stage. If needed, the type of surgical approach and the induction chemotherapy regimen were chosen by the respective professionals. The study was approved by the Ethical Committee of Fondazione IRCCS Istituto Nazionale dei Tumori in Milan (prot. INT 29/15). All patients signed study-specific informed consent and answered the questionnaire after the physician visit. The questionnaire and socio-demographic and clinical variables were collected at different time points: pre-treatment (before RT); weekly during RT (6–7 weeks); and in the follow-up period, specifically 1, 3, 6, and 12 months after RT.

Questionnaire and data collection

The MDASI-HN is a brief and reliable patient-reported outcome measure (PROM) questionnaire developed to investigate symptom severity, specifically general cancer-related symptoms (GC-RS), head and neck cancer-related symptoms (HNC-RS), and symptom interference with daily activities (SIDA) . It contains 13 items representing the most common symptoms among all cancer types (such as fatigue, lack of appetite, and vomiting) and 9 items specific to HNC (such as problems with tasting food, choking or coughing, and difficulty swallowing or chewing). These items assess the presence and severity of symptoms during the previous 24 h, rating them on an 11-point scale from "not present" (0) to "as bad as you can imagine" (10). The last 6 items concern how these symptoms interfere with daily activities, including work, walking, and relationships with others; they assess how general and specific cancer symptoms interfered with patients' activities during the past 24 h and are rated on a scale ranging from "do not interfere" (0) to "interfered completely" (10) . Clinical and socio-demographic characteristics, including age, sex, living situation, educational level, employment status, alcohol consumption and tobacco use, ECOG performance status, human papillomavirus (HPV) status, RT setting (adjuvant vs. definitive), and concomitant systemic therapy, were also collected.

Statistical analysis

Data were analyzed using IBM SPSS Statistics version 25 (IBM, Armonk, NY, USA). Multi-level mixed-effects linear regression estimated the association of QoL with time as well as with clinical and socio-demographic variables. We opted for such a hierarchical approach as it (a) permits modeling random effects (intercepts and slopes) of time and (b) permits treating variables as nested within other variables; in particular, for the present study, the various timepoints are nested within each participant. We also investigated the missing and response rates at each timepoint as percentages (e.g., number of participants who responded at week x/total number of participants*100).
The following variables were investigated: time (in weeks), age, sex, living situation, educational level, employment status, alcohol consumption and tobacco use, ECOG performance status, HPV status, RT setting, and concomitant systemic therapy. Lastly, we set alpha at p < 0.05.
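The published analyses were run in IBM SPSS; as a rough illustration of the model specification described above, the following Python/statsmodels sketch fits a mixed model with linear, quadratic, and cubic effects of time and random intercepts per patient. The file name, data frame, and column names (patient_id, week, qol) are hypothetical, and this is a minimal sketch rather than a reproduction of the original analysis.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per patient per timepoint.
df = pd.read_csv("mdasi_long.csv")  # columns: patient_id, week, qol

# Response rate per timepoint: responders at week x / total patients * 100.
n_total = df["patient_id"].nunique()
response_rate = (
    df.dropna(subset=["qol"])
      .groupby("week")["patient_id"]
      .nunique()
      .div(n_total)
      .mul(100)
)
print(response_rate)

# Polynomial time terms for the linear, quadratic, and cubic trends.
dat = df.dropna(subset=["qol"]).copy()
dat["week2"] = dat["week"] ** 2
dat["week3"] = dat["week"] ** 3

# Random intercepts per patient; a random slope of time could be added
# with re_formula="~week", mirroring the stepwise model building above.
model = smf.mixedlm("qol ~ week + week2 + week3",
                    data=dat, groups=dat["patient_id"])
result = model.fit()
print(result.summary())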
Participants

From January 2016 to December 2019, 166 HNC patients were enrolled and received (C)RT. The response rate at the beginning of the study was high for all three dimensions, ranging at time 1 from 95.78% (GC-RS) to 93.37% (SIDA); however, it slowly decreased from the last week of treatment onward. Indeed, the missing rate gradually increased in the follow-up period: at week 8, it was 31.93% for all three factors of the MDASI-HN, and it rose to 60.84% at week 52. Patient socio-demographic characteristics are shown in Table , while tumor and treatment characteristics are shown in Table . Most of the patients (79%) had locally advanced disease according to the TNM 7th edition.

Socio-demographic and clinical variables and changes of QoL over time

Considering the whole sample, a hierarchical linear model analysis was first conducted with the GC-RS factor as the dependent variable in a stepwise fashion; it indicated that the best model included the linear, quadratic, and cubic effects of time, with both the intercepts and the slope of time (linear) as random effects. Subsequently, the other variables were also entered in the analyses. After entering them, the random effect of the slope was no longer significant and was hence excluded. Table shows the results of this model. A second analysis was conducted with the HNC-RS factor as the dependent variable in the same stepwise fashion as for the first dimension. The analyses showed that the best-fitting model included the linear, quadratic, and cubic trends and the random effect of the intercepts (linear). Subsequently, the other variables were entered in the analyses. None of the variables considered reached significance except for time (Table ). A third analysis was conducted with SIDA as the dependent variable, again in a stepwise fashion. The analyses showed that the best-fitting model included the three effects of time (linear, quadratic, and cubic) and the random effects of the intercepts and the slope (linear). As for the first factor, once the other variables were entered in the analyses, the random effect of the slope was no longer significant and was hence excluded. HPV status and the linear, quadratic, and cubic effects of time were significant (Table ). As Fig. a shows, for all three MDASI factors there was a trend whereby the scores increased from week 1 to week 8 (with some fluctuation between weeks 4 and 8), followed by a decrease from week 8 to week 52. Considering that a higher score indicates lower QoL, the results indicated a worsening in the first eight weeks, followed by a slow return to better QoL.

Changes of QoL over time: the role of HPV

Since the number of patients diagnosed with oropharynx cancer outnumbered those with other tumor locations, the same analyses as above were conducted only for those cases where the tumor was located in the oropharynx, considering HPV-positive and HPV-negative patients separately. In relation to HPV-negative patients, as can be seen in Table , for the GC-RS factor the best-fitting model included the linear, quadratic, and cubic trends of time; all the other variables; and the random effect of the intercepts (linear). This model showed that the linear, quadratic, and cubic effects of time were all significant. For the HNC-RS factor, the best model included the fixed effects of the linear, quadratic, and cubic trends of time and of all the other variables, plus the intercepts of time (linear) as a random effect.
Again, the linear, quadratic, and cubic effects of time were all significant. The analysis conducted on the SIDA factor showed that the best model included the three effects of time (linear, quadratic, and cubic), all the other variables, and the random effect of the intercepts (linear). The model showed that the linear, quadratic, and cubic effects of time were all significant. In all three dimensions, none of the other variables considered reached significance. In relation to HPV-positive patients (Table ), for the first factor the best model included the fixed effects of the linear, quadratic, and cubic trends and of all the other variables, plus the intercepts of time (linear) as a random effect. The model showed that the linear, quadratic, and cubic effects of time were all significant. Further, the effects of gender, age at diagnosis, educational level, surgery, and alcohol use were also significant. The estimated marginal means indicated that male patients ( M = 2.16, SE = 0.42), those with a higher educational level ( M = 2.11, SE = 0.33), those who had surgery ( M = 2.15, SE = 0.53), and those who used alcohol ( M = 2.22, SE = 0.38) had lower scores than females ( M = 3.30, SE = 0.37), those with a low educational level ( M = 3.35, SE = 0.45), those who had not undergone surgery ( M = 3.31, SE = 0.32), and those who never drank alcohol ( M = 3.24, SE = 0.40). For the second factor, the best-fitting model included the linear, quadratic, and cubic trends of time; all the other variables; and the random effect of the intercepts (linear). The model showed that the linear, quadratic, and cubic effects of time were all significant. The effects of educational level and ECOG status were also significant. Patients with a lower educational level ( M = 5.38, SE = 0.47) and those fully active (ECOG 0) ( M = 4.93, SE = 0.41) showed higher scores than those with a higher educational level ( M = 3.56, SE = 0.35) and those restricted in physically strenuous activity (ECOG 1) ( M = 4.01, SE = 0.43). For the third factor, the best model included the fixed effects of the linear, quadratic, and cubic trends of time and of all the other variables, plus the intercepts of time (linear) as a random effect. Again, the linear, quadratic, and cubic effects of time were all significant. The effects of gender, age at diagnosis, employment status, and alcohol use were also significant. Patients who were female ( M = 3.70, SE = 0.62), employed ( M = 3.76, SE = 0.68), or never used alcohol ( M = 3.57, SE = 0.66) showed higher scores than males ( M = 2.08, SE = 0.70), the unemployed ( M = 2.02, SE = 0.63), and alcohol users ( M = 2.21, SE = 0.63). As Fig. b–d shows, HPV-positive patients showed higher scores, and thus worse QoL, during treatment, whereas HPV-negative patients had worse QoL in the follow-up period, specifically on the HN cancer-related symptoms and symptom interference with daily activities factors.
In this prospective longitudinal study, we used the PROM MDASI-HN to detect patients' symptom burden and to implement interventions and therapy adjustments specific to each patient. A three-factor solution, including GC-RS, HNC-RS, and SIDA, was considered, and a series of linear mixed model analyses was conducted. In both the GC-RS and HNC-RS domains, time was the only significant predictor of patients' QoL, whereas for SIDA, time and HPV status were significant, with HPV-positive patients reporting worse QoL than HPV-negative ones. It was evident that HNC patients' QoL declined during RT (Fig. a), especially with regard to symptoms specific to HNC, such as problems with mucus and difficulty swallowing, which proved particularly distressing; nonetheless, QoL slowly improved as soon as treatment ended, which is consistent with the pattern found in other studies . Indeed, it is plausible that symptom severity is worse during RT because of the presence of the tumor as well as short-term side effects of therapy, which consequently affect patients' lives, whereas after therapy completion there should be physical relief due to tumor size reduction and thus an improvement in patients' perception of their quality of life. However, it is also important to consider findings in which side effects and problems persisted up to the 1-year follow-up and even beyond . In these cases, the sequelae were related to specific HNC-related symptoms, such as dry mouth, sticky saliva, or sensory dysfunctions, showing that although general and global QoL recovered, the same did not happen for specific HNC symptoms. For instance, Oskam and colleagues found that the decrease in QoL related to HNC-specific symptoms persisted for 8 to 11 years post-diagnosis. A possible explanation is that these problems are long-term side effects of treatments, which appear only years after therapy, whereas other symptoms, such as nausea or pain, are caused by the presence of the tumor or by treatment administration . Among the studies found, only a few employed the 28-item M.D. Anderson Symptom Inventory Head and Neck module (MDASI-HN), which was used here to assess symptom severity during RT as well as in the follow-up period. Most previous research used QoL measures that were longer than the MDASI-HN, although measuring similar dimensions; thus, future research could use this questionnaire to address patients' QoL while avoiding extra burden on them. The same analyses described above were conducted among oropharynx cancer patients, distinguishing between HPV-positive and HPV-negative cases. For HPV-negative patients, only time predicted patients' QoL. Among HPV-positive patients, time was significant for all three factors. Regarding the GC-RS factor, females, patients who underwent surgery, those with a low educational level, and patients who had never drunk alcohol had worse QoL; moreover, older patients were likely to have decreased QoL. It seems understandable that patients who had surgery may be debilitated and thus have low QoL; similarly, patients with a low educational level may engage in unhealthy behaviors and have fewer resources to cope with their disease. In relation to the HNC-RS factor, patients restricted in physically strenuous activity (ECOG 1) or with a high educational level had better QoL than fully active patients (ECOG 0) or those with a lower educational level. As for ECOG, our results appear contradictory at first glance.
We need to underline that a good performance status is generally classified as ECOG 0 or 1, with the two states often treated as equivalent, and ECOG 0–1 is linked to better values on several QoL scales. A possible explanation of our finding is that, for patients with no functional impairment or premorbid lifestyle limitation (i.e., ECOG 0 before starting RT), any impact on QoL is perceived more strongly, since the difference from baseline conditions is greater than for patients with ECOG 1. For SIDA, it was found that older patients, females, employed patients, and those who never used alcohol showed worse QoL. Unexpectedly, subjects who never drank alcohol had worse QoL; this result needs to be explored further, considering that previous studies have focused on the prognostic role of alcohol use in developing HNC regardless of its specific role during cancer treatment. Comparing HPV-positive and HPV-negative patients' QoL trends over time (Fig. b–d), it is possible to notice that although HPV-positive patients had worse QoL during treatment and immediately after it, especially in relation to the GC-RS and HNC-RS factors, their QoL levels increased in the follow-up period; on the other hand, HPV-negative patients had worse QoL in the weeks after concluding treatment, i.e., in the follow-up period. Our results are in agreement with the literature. Indeed, the population of patients with HPV-related oropharyngeal cancer tends to be younger and healthier, with very good baseline QoL, compared with individuals with other, HPV-unrelated HNC. However, HPV-positive cancer patients are more likely to suffer a deterioration of their QoL during treatment. In a sub-study conducted within a prospective phase 3 randomized trial of concurrent standard radiation versus accelerated radiation plus cisplatin for locally advanced head and neck carcinoma (NRG Oncology RTOG 0129), p16-positive oropharyngeal cancer (OPC) patients had better QoL than p16-negative patients before treatment and at 1 year after treatment; however, QoL/PS decreased more significantly from pretreatment to the last 2 weeks of treatment in the p16-positive group than in the p16-negative group . Again, in a sub-analysis of the randomized trial Trans-Tasman Radiation Oncology Group (TROG) 02.02 (HeadSTART), HPV-positive patients showed a more dramatic QoL drop with concurrent chemoradiation compared to HPV-negative ones . The current study has some limitations that should be noted and that may influence the generalizability of the results. First, due to drop-out, the sample size of those who completed the questionnaire up to the last time point was smaller than at the beginning of the study. Second, our sample consisted mainly of male patients, with a prevalence of oropharynx tumors. Despite these limitations, the MDASI-HN is a valid and short PROM, and a timeline including both the treatment and the follow-up period proved fundamental for a deeper understanding of patients' QoL. Future research should give further attention to treatment sequelae specific to HNC, especially in the long term; extending the follow-up period would allow a better understanding of symptom trajectories and their interference with daily life, considering that HNC-specific symptoms may persist even years after treatment ends.
Furthermore, it seems important to consider other psycho-social variables (for instance, gender and financial toxicity ), which may have an impact on treatment outcomes as well as on patients' QoL, and to analyze their trajectories over time, making it possible to understand how these variables interact with patients' physical and psychological well-being. This would help to develop more specific treatments and interventions that respond to patients' needs.
Although QoL is an important indicator of healthcare system quality and is included in the assessment of treatment benefits , some of its aspects may often be underdiagnosed and thus undertreated by physicians . Moreover, clinical as well as socio-demographic variables may have an impact on patients' QoL. Hence, PROMs should be included as a standard procedure in the assessment of patients' condition, allowing deeper insight into their disease experience and avoiding misinterpretation of responses .
Demographic and clinical characteristics determining patient-centeredness in endometriosis care

Endometriosis is a chronic, inflammatory gynecological disease affecting approximately 10% of all women of reproductive age . In many cases, endometriosis has a negative effect on women's health-related quality of life (HRQoL) and is associated with lower emotional, physical, psychological, social, and sexual health . The most common symptoms are pain during menstruation and ovulation, pain during intercourse, urination, or defecation, low back pain, and chronic pelvic pain . The "gold standard" for diagnosing endometriosis is laparoscopy with histological confirmation of endometrial tissue . Typically, it takes many years to get diagnosed and to find proper treatment . On the road toward a diagnosis, women typically meet many different healthcare professionals and frequently describe encounters as problematic, including normalization and trivialization of symptoms . Given the challenges of endometriosis care, there is room for quality improvement . There is a growing body of knowledge on the benefits of quality improvement strategies for enhancing healthcare services for chronic diseases . Interest in improving the patient-centeredness of endometriosis care has increased over the years, and improvement work for patient-centeredness is today promoted at legislative and healthcare regulatory levels . For endometriosis, patient-centeredness is defined as a combination of understanding the burden of illness and treatment from patients' points of view while still relying on scientific knowledge . In quality improvement work, we need to identify which areas of endometriosis care are of importance to women and to identify patient-specific determinants associated with high patient-centeredness. This information can be used to raise healthcare professionals' awareness to promote and preserve patient-centeredness and to tailor care on an individual level. The primary aim of this study was to assess the patient-centeredness of endometriosis care in a national sample of Swedish women with endometriosis. The secondary aims were to assess the importance of different dimensions of endometriosis care and to analyze demographic and clinical determinants associated with the experience of patient-centeredness.

Design

This cross-sectional study was conducted in a national sample of Swedish women with endometriosis recruited from ten gynecology clinics: three university hospitals, five county hospitals, and two district hospitals.

Sampling and data collection

Inclusion criteria were women aged ≥ 18 years with any endometriosis diagnosis who had visited the clinic due to endometriosis-related problems at any time during the past five years. The 150 women who had most recently visited each clinic were selected, and out of this group, 100 were randomly selected. The 1000 women were invited by mail in September 2021. A reminder was sent to those who had not responded within three weeks. The invitation letter included a link to the website containing the survey.

The digital survey

In 2011, the ENDOCARE questionnaire (ECQ) was designed to measure the patient-centeredness of endometriosis care . The digital survey consisted of the ECQ, with the addition of three background questions: Do you have a responsible gynecologist caring for your endometriosis-related problems? Do you have a plan for treatment of endometriosis?
Are you currently receiving desired care for endometriosis? The ECQ consists of 38 statements answered on a four-point Likert scale on two dimensions: experience of the statement (disagree completely, disagree, agree, and agree completely) and personal importance of the statement (not important, fairly important, important, and of the utmost importance). The statements are clustered into ten dimensions of patient-centeredness of endometriosis care: Respect for patients’ values, preferences and needs; Coordination and integration of care; Information, communication and education; Physical comfort; Emotional support and alleviation of fear and anxiety; Involvement of significant others; Continuity and transition; Access to care; Technical skills; and Endometriosis clinic staff. At the end, the patient is asked to grade her overall endometriosis care on a scale from very bad (0) to excellent (10). Three outcome measures are generated from the instrument. First, the percentage of negative experiences (PNP) is calculated on a 0 to 100 scale, with higher scores indicating worse performance. Then, the importance score (MIS) is calculated on a scale from 0 to 10, with higher scores indicating greater importance. From the PNP and MIS scores, a patient-centeredness score (PCS) is calculated and presented on a scale from 0 to 10, with higher scores indicating higher patient-centeredness . The Swedish version of the ENDOCARE instrument has undergone psychometric validation and has been tested for reliability, with satisfactory results .
Statistical analysis
Variables on continuous scales are described as mean and standard deviation (SD) and nominal data as frequency and percentage. To enable comparison with earlier research, MIS and PCS values are also presented as median and 25th and 75th percentiles. Missing answers were omitted in the calculations by changing the denominator in the equations for PNP, MIS and PCS. No participants had > 25% missing answers. To analyze which patient-specific demographic and clinical determinants were associated with the experience of patient-centeredness, univariate and multiple regression analyses were used. Determinants with p < 0.2 in the univariate analysis were further analyzed in a multiple regression analysis using “enter” model building in order to detect and evaluate independent predictive factors for patient-centeredness . Determinants were analyzed in relation to the ten dimensions of patient-centeredness and to overall PCS. Nominal determinants with more than two categories were dichotomized. The degree of multicollinearity was tested for the determinants in each multiple model by examining the variance inflation factor (VIF). The VIFs for these determinants were < 5, which indicates that there was no considerable multicollinearity between the variables .
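For readers who wish to retrace the scoring and multicollinearity steps described above, a minimal sketch is given below (Python, using pandas and statsmodels). The exact PNP/MIS/PCS formulas follow the published ECQ methodology, which is not fully reproduced in this section; in particular, the expression PCS = MIS × (1 − PNP/100) is an assumption chosen only because it reproduces the stated 0-10 range, and all column layouts are hypothetical.

```python
# Illustrative sketch of the ECQ scoring and VIF check described above.
# PCS = MIS * (1 - PNP/100) is an ASSUMPTION, not the published formula.
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

# experience items: 1 = disagree completely ... 4 = agree completely (NaN = missing)
# importance items: 1 = not important ... 4 = of the utmost importance

def pnp(experience: pd.DataFrame) -> pd.Series:
    """Percentage of negative experiences (0-100) per respondent.

    Codes 1-2 ('disagree completely'/'disagree') count as negative; missing
    answers are omitted by shrinking the denominator, as in the paper.
    """
    negative = (experience <= 2).sum(axis=1)   # NaN comparisons are False
    answered = experience.notna().sum(axis=1)
    return 100 * negative / answered

def mis(importance: pd.DataFrame) -> pd.Series:
    """Mean importance score rescaled to 0-10 (higher = more important)."""
    return (importance.mean(axis=1) - 1) / 3 * 10   # 4-point scale -> 0-10

def pcs(experience: pd.DataFrame, importance: pd.DataFrame) -> pd.Series:
    """Patient-centeredness score on a 0-10 scale (assumed formula)."""
    return mis(importance) * (1 - pnp(experience) / 100)

def vif_table(determinants: pd.DataFrame) -> pd.Series:
    """VIF per (numeric) candidate determinant; VIF < 5 for every variable is
    read as 'no considerable multicollinearity', mirroring the paper."""
    X = determinants.assign(const=1.0)          # add intercept column
    cols = [c for c in X.columns if c != "const"]
    return pd.Series(
        {c: variance_inflation_factor(X.values, X.columns.get_loc(c)) for c in cols}
    )
```

Under these assumptions, the same univariate screen (retain determinants with p < 0.2) and “enter” multiple regression could then be run per dimension with any standard regression routine; the sketch above only covers the scoring and the collinearity check.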
The following determinants were analyzed: age, ever given birth (yes/no), higher education (university degree) (yes/no), currently in an intimate partner relationship (yes/no), age at first symptoms of endometriosis, patient delay (time from symptom onset to seeking care), doctor delay (time from first seeking care to diagnosis), diagnostic delay (time from symptom onset to diagnosis), number of consultations with general practitioners before referral to a gynecologist, moderate/severe self-reported stage of endometriosis (yes/no), having a responsible gynecologist to care for endometriosis (yes/no), having a plan for treatment of endometriosis (yes/no), ever tried to conceive > 12 months (yes/no) and overall grading of endometriosis care. The level of statistical significance was set at p < 0.05. Regression coefficients (β) represent the mean change in the outcome variable (PCS score) for every 1-unit change in the determinant , keeping all the other determinants constant. The explained variance of the multivariate models is presented as adjusted R² . Data were analyzed using IBM SPSS 28.0.
In total, 476 women answered the digital survey, resulting in a response rate of 47.6%. Background characteristics and possible determinants of patient-centeredness are presented in Table . Participants’ mean age was 36.5 years (range 18–60). A majority had a university degree and were working full-time. Most women were currently in an intimate relationship and around half of them had children. The time between symptom onset and diagnosis (i.e., diagnostic delay) was 9.3 years. Around two out of three had a responsible gynecologist to care for endometriosis, had a treatment plan and reported that they were currently receiving desired care. As shown in Table , the overall mean PCS score was 3.73, indicating low patient-centeredness.
The dimension with the highest PCS was “Endometriosis clinic staff” (mean 5.21), followed by “Respect for patients’ values, preferences and needs” (mean 5.09) and “Information, communication and education” (mean 4.81). The lowest PCS score was reported for the dimension “Emotional support and alleviation of fear and anxiety” (mean 0.85). The dimension “Respect for patients’ values, preferences and needs” had the highest MIS mean score (9.34), i.e., it was experienced as the most important dimension. It was followed by “Endometriosis clinic staff” (mean 9.05) and “Technical skills” (mean 9.02). “Physical comfort” was experienced as the least important dimension (mean 5.85). In the univariate regression analysis between each determinant, the PCS dimensions and overall PCS, several determinants were associated with PCS (Supplement 1). Table shows the results of the multiple regression analyses for each determinant having a significant and independent influence on PCS. Overall grading of endometriosis care was the determinant associated with most PCS dimensions. Having a responsible gynecologist to care for the patient was an independent determinant for the PCS dimensions “Coordination and integration of care,” “Information, communication and education,” “Emotional support and alleviation of fear and anxiety,” “Continuity and transition,” “Access to care” and for overall PCS (Table ). Overall PCS had the highest explained variance (adjusted R² = 0.64) and was associated with having a specific gynecologist to care for endometriosis (β = 0.61) and overall grading of endometriosis care (β = 0.56). Although the dimension “Endometriosis clinic staff” had only one significantly associated determinant, overall grading of endometriosis care (β = 0.95), it had a high explained variance (adjusted R² = 0.50). The dimension “Physical comfort” also had only one associated determinant, number of consultations with GPs before referral (β = −0.06), and a very low explained variance (adjusted R² = 0.05) (Table ). “Respect for patient’s values, preferences and expressed needs” had only one associated determinant, overall grading of endometriosis care (β = 0.80), but a relatively high explained variance (adjusted R² = 0.49). Both “Coordination and integration of care” and “Emotional support and alleviation of fear and anxiety” had four associated determinants. Three of the determinants were the same for both dimensions: age at first symptoms (β = 0.06 and β = −0.04, respectively), having a responsible gynecologist to care for endometriosis (β = 0.92 and β = 0.63, respectively) and overall grading of endometriosis care (β = 0.22 and β = 0.10, respectively). However, the explained variances were relatively low for both models (adjusted R² = 0.11 and 0.12, respectively) (Table ). Having a higher education was associated with lower scores on the dimension “Coordination and integration of care” (β = −0.81), as was having an intimate partner relationship with scores on “Involvement of significant others” (β = −0.80). This is the first study to measure patient-centeredness and associated determinants in a larger national sample including several clinics of varying sizes. On average, the women’s rating of overall PCS in this study was lower than what has been shown in previous comparable studies . An explanation could be that our data are based on a national sample including university hospitals, county hospitals and district hospitals, while earlier studies collected data from specialized endometriosis centers .
Our results showed that “Respect for patients’ values, preferences and needs” and “Endometriosis clinic staff” were the two most patient-centered dimensions of endometriosis care, while “Emotional support and alleviation of fear and anxiety” had the lowest score. This is similar to earlier studies . The items measuring “Respect for patients’ values, preferences and needs” and “Endometriosis clinic staff” mainly focus on healthcare professionals’ ability to meet their patients with respect, to invite them to participate in their own care and to be supportive and friendly. The items regarding “Emotional support and alleviation of fear and anxiety” are more focused on the psychological impact of endometriosis, the opportunity to consult a counsellor and whether patients are given information on a patients’ organization. This could indicate that healthcare professionals being respectful and friendly is not sufficient to alleviate fear and anxiety, and that more attention should be given to providing emotional support. The lack of sufficient emotional support has been highlighted before . The most important finding was the independent association between having a responsible gynecologist and several dimensions of PCS as well as overall PCS. The determinant of having a responsible gynecologist also had the highest β coefficients, meaning that it had more influence on PCS than the other determinants. Having a responsible gynecologist seems to increase the chances of experiencing patient-centeredness. In the literature, this has been described by the term “most responsible physician,” meaning a certain physician who has responsibility for the long- and short-term medical treatment of a patient, including follow-up and evaluation . According to Swedish law, clinics are obligated to provide a most responsible physician if it is necessary to satisfy a patient’s safety, continuity and coordination of care. Therefore, most patients with chronic diseases have a most responsible physician. It could be argued that, at the least, all women with complex endometriosis should have a responsible gynecologist. This is something that could be highlighted in national and international guidelines. The National Guidelines for Endometriosis Care in Sweden emphasize the importance of multi-professional teams working with the more complex cases, but offer limited guidance on the continuity of care. In the recently updated endometriosis guidelines from the European Society of Human Reproduction and Embryology, there is no guidance on the structure of care . In our sample, two thirds had a responsible gynecologist, indicating that most clinics have a routine regarding responsible gynecologists, but the issue warrants further investigation. Having a responsible gynecologist to care for endometriosis patients provides continuity in the contact with healthcare professionals. The importance of continuity has been noted in the endometriosis literature before , but to the best of our knowledge, this is the first study to show an association between continuity and patient-centeredness. Apers et al. showed that the ECQ dimension “Continuity and transition” was associated with overall HRQoL and the experience of emotional well-being and social support. Moreover, continuity has been identified as a specific target for improvement of patient-centeredness in endometriosis care .
However, physicians should bear in mind that continuity sometimes carries a risk of tunnel vision, which limits the holistic approach that is often necessary to give proper care to women with complex endometriosis. Ideally, the care could be monitored by the responsible gynecologist in close cooperation with multiprofessional teams. The importance of a well-functioning relationship with healthcare professionals is also reflected in the MIS scores, where “Respect for patients’ values, preferences and needs,” “Information, communication and education,” “Continuity and transition,” “Technical skills” and “Endometriosis clinic staff” were the most important dimensions. “Physical comfort” was the least important aspect, indicating that improvement work should focus on relational aspects rather than comfort in the waiting room. Overall grading of endometriosis care was a significant determinant for overall PCS and for nine out of the ten dimensions of care. This suggests that a basic 0–10 grading scale can be used by healthcare professionals as a tool to obtain an indication of the experience of patient-centeredness in endometriosis care at their clinic. However, the ECQ is preferred for a thorough assessment of patient-centeredness in endometriosis care . In 2020, Schreurs et al. performed a secondary analysis of patient-centeredness using two studies with data from four endometriosis care centers in Belgium and the Netherlands. Their multivariate analysis showed that overall grading of endometriosis care, a lower educational level, being a member of a patient organization and having seen other specialists for endometriosis complaints were independently associated with higher overall PCS . Some of their results are similar to ours, where overall grading of endometriosis care gave higher PCS scores, and higher education gave lower PCS scores for the dimension “Coordination and integration of care.” The studies are not fully comparable since the included determinants vary; for example, we added the background questions about having a responsible gynecologist and having a treatment plan. However, the results suggest that there might be universal factors contributing to the experience of patient-centeredness. It would be interesting to investigate further which determinants differ and which are shared between countries. One strength of this study is that the study participants constitute a random sample of women with confirmed endometriosis from ten clinics of varying sizes in different parts of Sweden, including two endometriosis specialist centers. All women had a confirmed endometriosis diagnosis, which is seldom the case in endometriosis research. Also, our population had a socioeconomic level similar to that of an age- and gender-matched population of Swedish women . One limitation is the risk of self-selection bias, i.e., that responding depends on having either very positive or very negative experiences of care. Furthermore, the ECQ can be criticized for risking a high recall bias, since women are obliged to answer with their lifetime endometriosis care history in mind. The clinical implication of the results is that women with endometriosis could benefit from having a responsible gynecologist, and that clinics should organize their work around the idea of gynecologists having a handful of endometriosis patients to especially care for. Furthermore, possible interventions and actions to emotionally support women and alleviate fear and anxiety need more attention.
Future studies could also focus on symptom severity and disease complexity in relation to patient-centeredness, as well as how to design team-based services together with women and healthcare professionals aiming to improve quality of care . In conclusion, our results show that Swedish women with endometriosis experience low patient-centeredness, reflecting an urgent need for improvement. More effort should be given to developing the relational aspects of care. Furthermore, women with endometriosis benefit from having a responsible gynecologist to care for treatment and follow-up. Given the random selection of participants from a national sample, the results should be generalizable to other countries with a similar organizational structure of healthcare.
Cancer burden in adolescents and young adults in Europe
In Europe, many estimates of the burden of cancer in adolescents and young adults (AYAs) are derived from single-centre or single-country data. Often, different definitions of AYAs are used , and therefore the estimates are unreliable for the design of specialised clinical services that can meet their specific needs. , , Data from EUROCARE have shown lower survival for AYAs than for children or adults for most cancers that affect these groups, and only modest survival improvements. In Europe, the European Society for Medical Oncology (ESMO) and the European Society for Paediatric Oncology (SIOP Europe) founded a Cancer in AYA Working Group to exchange knowledge and improve the care of AYA patients with cancer. International collaboration is particularly relevant as services vary, the incidence of AYA cancers is increasing, and the incidence is at least double in regions with a very high human development index (HDI) compared with regions with a low/medium HDI. , In this study, the ESMO/SIOPE AYA Working Group aims to describe the burden of cancer in AYAs in Europe in terms of incidence and mortality. These data will form a reference to guide health care organisations and collaborations at national and European levels for this underserved population.
We used the all-inclusive definition of the AYA age range, 15-39 years, that has been accepted in Europe ( https://www.siope.eu/encca/ ) and internationally. We used data available on the Global Cancer Observatory (GCO) ( https://gco.iarc.fr/help ). The GCO data were accessed from their interactive web-based platform, provided by the Cancer Surveillance branch of the International Agency for Research on Cancer. The GCO data are derived only from the many European population-based cancer registries (CRs). This provides less biased epidemiological estimates than institutional registers. We retrieved crude and age-standardised (World Standard Population) incidence and mortality rates. The results are provided for 2020. However, different methods were used to provide the incidence rates for 2020. A brief description follows (for further details refer to , available at https://doi.org/10.1016/j.esmoop.2022.100744 ).
Incidence rates
- The most recently observed incidence rates (national or local) were applied to the 2020 population (20 countries).
- Rates were estimated from national mortality data by modelling, using mortality-to-incidence ratios derived from CRs in that country (five countries).
- Rates were estimated from national mortality estimates by modelling, using mortality-to-incidence ratios derived from CRs in neighbouring countries (four countries).
- In Slovakia, 2001-2010 incidence rates from a single registry were applied to the 2020 population.
Regarding mortality, the most recently observed national mortality rates were applied to the 2020 population in all the countries included in this study (for further details please refer to , available at https://doi.org/10.1016/j.esmoop.2022.100744 ). We report incidence and mortality for all cancers except non-melanoma skin cancers. We also selected the cancers most common in the AYA population that are available on the GCO website: nasopharynx [International Classification of Diseases (ICD-10) code C11]; colon (C18); rectum (C19-20); liver and intrahepatic bile ducts (C22); melanoma of skin (C43); breast (C50); cervix uteri (C53); testis (C62); central nervous system (CNS) (C70-72); thyroid (C73); Hodgkin’s lymphoma (HL) (C81); non-Hodgkin’s lymphoma (NHL) (C82-86, C96); leukaemia (C91-95) . To report on the AYA burden in Europe and among the different European countries, we selected the 28 member states of the European Union (EU) in 2018 (Austria, Belgium, Bulgaria, Croatia, Republic of Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, the Netherlands, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Sweden and the UK) and Iceland and Norway within the European Economic Area with data available in the GCO. We included the UK, although it was then leaving the EU, due to the specific clinical services for AYAs in the UK, long-standing research on AYA outcomes and for comparability with previous studies. Age-standardised incidence and mortality rates for Europe were calculated as weighted averages, giving each country a weight equal to the contribution of its population to the total population. Finally, we retrieved incidence trends from 1998 to 2021 for countries with available data: Bulgaria, Croatia, Czech Republic, Denmark, Estonia, France, Germany, Iceland, Ireland, Italy, Latvia, Lithuania, Malta, the Netherlands, Norway, Poland, Slovakia, Slovenia, Spain, Sweden, Switzerland and the UK. Data for 2011-2012 did not include Slovakia and Spain.
Data for Austria, Belgium, Cyprus, Finland, Greece, Hungary, Luxembourg, Portugal and Romania were not available for the trend analysis.
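The two computational steps just described — direct age-standardisation to the World Standard Population and population-weighted pooling of country rates into a European rate — can be outlined in a short sketch (Python; a minimal illustration, not the GCO implementation). The Segi world-standard weights for the 15-39 age bands are as commonly tabulated but should be verified against the standard actually used by GCO; all input figures below are hypothetical.

```python
# Sketch of (1) direct age-standardisation restricted to the AYA age bands and
# (2) population-weighted pooling of country ASRs into a European ASR.
# Weights below are the commonly tabulated Segi world-standard values for
# ages 15-39 (per 100 000 of the full standard); verify against the GCO source.
# Note: normalising over the 15-39 bands gives a truncated ASR; published GCO
# ASRs may be standardised over the full age range.
WORLD_STD_15_39 = {"15-19": 9000, "20-24": 8000, "25-29": 8000,
                   "30-34": 6000, "35-39": 6000}

def asr(age_specific_rates: dict[str, float]) -> float:
    """Truncated age-standardised rate per 100 000 over the AYA bands."""
    total_weight = sum(WORLD_STD_15_39.values())
    return sum(age_specific_rates[band] * w
               for band, w in WORLD_STD_15_39.items()) / total_weight

def european_asr(country_asr: dict[str, float],
                 country_pop: dict[str, float]) -> float:
    """Weighted average of country ASRs; weights = share of total population."""
    total_pop = sum(country_pop.values())
    return sum(country_asr[c] * country_pop[c] / total_pop for c in country_asr)

# Hypothetical example: AYA incidence rates per 100 000 for one country,
# then pooling two countries of different population size.
rates_a = {"15-19": 25.0, "20-24": 40.0, "25-29": 60.0,
           "30-34": 90.0, "35-39": 130.0}
print(round(asr(rates_a), 1))
print(round(european_asr({"A": 64.0, "B": 50.0},
                         {"A": 10_000_000, "B": 30_000_000}), 1))
```

The pooling step makes explicit why large countries dominate the European averages reported in the Results: each country enters with a weight proportional to its population.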
In 2020, there were ∼112 000 new cancer cases and 12 700 cancer deaths in AYAs in Europe. Overall, AYA cancers represented 5% of the new cancers diagnosed in the European countries selected in 2020. describes age-standardised incidence and mortality rates (ASR) for all cancers except non-melanoma skin cancers in AYA. Incidence varied widely between countries. Italy was the country with the highest incidence (ASR 79 per 100 000) followed by France, Belgium, Denmark, Hungary, the Netherlands, Portugal and Norway (all with incidence ≥64 per 100 000). Malta was the country with the lowest incidence (ASR 36 per 100 000); other countries with low incidence were Iceland (ASR 42 per 100 000) and Estonia (ASR 46 per 100 000). In the remaining countries, incidence ranged from 50 to 64 per 100 000. Mortality also varied between countries with the highest mortality observed in Lithuania, Bulgaria and Romania (ASR ∼11 per 100 000) followed by Portugal, Poland and Hungary (ASR ∼9 per 100 000). Mortality was <5 in Estonia, Spain, Denmark, Iceland, Czech Republic and Slovenia and <2 in Malta and Luxembourg. Eastern European countries (e.g. Bulgaria, Romania, Poland) had low incidence rates but high mortality rates, while the other countries from south, centre and north of Europe (e.g. Italy, the Netherlands, Belgium, Norway) had high incidence rates with relatively low mortality rates. describes incidence rates specifically for AYA-relevant cancers in 2020, in both sexes, separately for males and females and compares countries. Cancers of the female breast, thyroid and testis were the most common cancers across countries followed by melanoma of skin and cancers of the cervix. CNS cancers and haematological malignancies were less common across all countries. Colorectal, nasopharyngeal and liver cancers were the rarest across countries. Differences between countries were observed for cancers of the female breast, thyroid, testis, cervix and skin melanoma. Female breast cancer’s ASR varied from 30 per 100 000 to 13 per 100 000 in France and Slovakia, respectively. The incidence of thyroid cancer was highest in Italy and Cyprus (ASR ∼20 per 100 000) and lowest in Estonia (ASR 2.5 per 100 000). The incidence of testicular cancer was highest in Nordic countries (ASR ∼20 per 100 000) and lowest in Lithuania and Latvia. The incidence of cervical cancer was ∼15-16 per 100 000 in the UK, Hungary, Norway, Latvia and Lithuania, and was lowest in Malta (ASR 2.1 per 100 000). The incidence of skin melanoma varied from 19 per 100 000 in Denmark to 1.3 per 100 000 in Cyprus. The incidence of thyroid cancers and skin melanoma was higher among females than males in all countries. reports incidence trends for all cancers excluding non-melanoma skin cancers in AYAs by sex. Cancer incidence in AYA is increasing in both sexes and slightly more in females than in males. reports age-adjusted mortality rates for all cancers excluding non-melanoma skin cancers and for AYA-relevant cancers by country. Mortality rates were low for most cancers with the exception of the cancers of the CNS and leukaemia confirming the good prognosis for most cancers of AYAs. Differences in mortality were observed between countries. Eastern European countries (e.g. Bulgaria, Latvia, Lithuania, Poland, Romania) had high mortality rates for many of the AYA-relevant cancers (notably cancers of the cervix, CNS, testis, HL, NHL, leukaemia). 
Relatively large differences in mortality between countries were observed for testicular and cervical cancers, skin melanoma, HL and NHL. Differences in mortality between countries were relatively small for leukaemia and CNS cancers.
Our cooperative ESMO/SIOPE AYA Working Group’s article describes, for the first time, the burden of cancers in AYAs in Europe. Our data confirm the rarity of tumours in this population and their increase in the current era and, most importantly, highlights differences in incidence and mortality within European countries, with Eastern European countries having higher mortality from many cancer types. Variations in cancer incidence rates across different populations may reflect different distribution of risk factors, variations in the implementation or uptake of screening as well as overdiagnosis. We observed that differences in incidence between countries were in five main cancer types, in particular thyroid, breast, melanoma, cervical and testicular cancers. Thyroid cancer is a common cancer in AYAs, especially in females as our results also confirm. Different use of diagnostic ultrasounds and fine-needle aspiration biopsies (leading to indolent cases) and different distribution of risk factors such as obesity may contribute to explain differences in thyroid cancer incidence between countries and sex. Breast cancer incidence rates vary widely around the world; however, most factors responsible for the observed differences (parity, obesity, use of hormone replacement therapy, mammogram screening) are relevant for postmenopausal women only. Risk factors for breast cancer before age 40 years include family history, age at menarche, age at first birth, breast-feeding habits, low body mass index (<20), use of an oral contraceptive, alcohol intake, etc. Most risk factors in women aged <40 years were similar to those described in breast cancer epidemiology at any age. Previous international comparisons showed that the pattern of these exposures does not closely follow the observed incidence patterns in GLOBOCAN. Among these factors, only the prevalence of underweight varies markedly between European countries. However, this variation in the proportion of underweight women across Europe seems to only loosely follow the incidence patterns we observe. Therefore, the European differences in breast cancer incidence in young women remain poorly explained. Incidence in cervical cancer is influenced by screening, behavioural risk factors and public health policy between populations. The main risk factor for cervical cancer is human papillomavirus (HPV), which is sexually transmitted and thus associated with sexual behaviour. Smoking, parity and hormonal contraceptive use are also associated with cervical cancer risk. HPV vaccine coverage is low in countries where we observe the highest incidence and screening performance is heterogeneous among European countries. , , In our data, most countries that completed the roll-out of the cervical cancer screening programme (e.g. the Netherlands, Sweden, Slovenia) had the lowest mortality rates, whereas countries with no cervical screening programme or that are rolling out the screening have mortality rates higher than the EU average. Other cervical cancer-associated risk behaviours differ between EU countries. , , , Preventive campaigns and vaccination policies should be encouraged for decreasing the impact of HPV-related cancers in Europe. Breast, cervical and thyroid cancers account for a substantial burden of cancer among AYAs and especially young women as our results confirm. A cancer diagnosis can be catastrophic with repercussions beyond an individual’s health, and indeed beyond the individual, especially if it occurs at a young age. 
Ensuring that young people make informed choices about their health and encouraging policy makers to tailor effective and cost-effective preventive measures would make a great impact on cancer risk and outcomes especially for women. Differences in melanoma incidence across countries may be due to the different exposure to ultraviolet radiation on sunbeds or natural sun. In addition, different skin pigmentation characteristics may contribute to susceptibility to melanoma. Melanoma is very common in the young adult population, more so in AYA women than in men as our results also confirm. This may be, in part, due to the greater use of risky behaviours among girls in seeking to suntan (using sunbeds or natural sun) which is socially determined. Testicular cancer is the most common cancer among young men aged 15-39 years. Our data confirmed that incidence tends to be greatest in Northern European countries, while the lowest rates occurred in Eastern European countries. This heterogeneity is not well explained, as there are no strong behavioural, public health or screening factors identified. Birth cohort effect, occupational, environmental and maternal exposure to exogenous toxins have been considered as possible risk factors. Further research using cases collected through national and regional population-based registers and case-control studies are needed together with greater consideration given the public health importance of testicular cancer among young men, and the need for high-quality cancer service delivery to maximise survival prospects and quality of life. Our study confirmed an increasing incidence of tumours in AYAs. Previous studies have reported that cancers with increasing incidence were those related to obesity (e.g. colorectal cancers, thyroid cancers); thyroid tumours attributed to diagnosis of small low-risk tumours at routine imaging, and cervical tumours attributed to changes in sexual behaviour, while an impact is not yet clearly visible from HPV vaccination. More work is needed to understand the growing incidence of testicular and breast cancers in AYAs, both in Europe and North America. Our analysis confirmed greater mortality for most AYA cancers, in Eastern Europe more than in Western Europe. This is not unexpected, as Eastern European countries also have lower cancer survival in children and adults. , Variations in mortality reflect, in part, variations in incidence, but also differences in early diagnosis and available treatment modalities among others. For many AYA cancers (e.g. testicular, breast, melanoma, HL and NHL), we observed variation in mortality, with higher mortality not always associated with higher incidence, supporting the importance of the health care organisation in providing earlier detection and the most effective treatments. These data concur with previous studies that attributed AYA cancer mortality disparities to variation in early-stage diagnoses, especially where young people are not included in cancer screening protocols, as well as to different public education and awareness of cancer symptoms, different degrees of access or availability of treatment. Many of these are underpinned by different expenditures on public health systems. , , , Different access or administration of available treatment may be particularly relevant for cancers that arise in young people because a cancer in a young person tends to have distinctive clinical features and delivering treatment can be more complex than a similar cancer in an adult. 
For example, breast cancer in young women is a biologically more aggressive cancer than in older women and is often diagnosed at later stages; young-onset skin melanoma has a distinct biology; and colorectal cancer in the young has a distinctive molecular profile and more commonly presents as symptomatic, later-stage, mucinous and poorly differentiated disease. For thyroid and cervical cancers, the variation in incidence outweighed the variation in mortality. For thyroid cancer this may be attributed to the rates of use of ultrasound scans, resulting in diagnosis of tiny incidental nodules which may never have caused life-threatening disease. This inflates incidence with tumours that have a limited impact on thyroid cancer mortality, which remains similar across countries. Such overdiagnosis will usually lead to treatment, lifelong medical care and adverse effects that can negatively affect the quality of life for a particularly long time in young patients. For cervical cancers, different availability of and access to screening or vaccination may explain some of the mortality differences between countries. In relatively poor-prognosis young-onset cancers included in this analysis, such as CNS cancers and leukaemia, there was little variation in mortality across Europe. Tumour biology may affect mortality together with stage at diagnosis or currently available treatments. Of note, for leukaemia, the mortality differences between countries were lower for patients aged 15-24 years compared to those aged 25-39 years (data available from the corresponding author). This is most likely due to the widespread use of homogeneous treatment protocols by paediatric cancer hospitals. Cancers are rare in people under 39 years of age, but unlike most rare cancers they can be effectively treated in the majority of cases. To ensure the best outcomes, young people who develop malignancies should be referred to specialist centres and treated in accordance with national or international protocols. Where such protocols are not available, they should be developed. Multi-institutional cooperation and European inter-group cooperative studies have an important role to play in developing treatment protocols, and also in organising and coordinating clinical research in these cancers. Our study has limitations. Firstly, we reported results for AYAs between the ages of 15 and 39 years. This definition does not allow comparison with previous data based on narrower age ranges for younger AYA patients (e.g. aged 15-24 or 15-29 years). However, our aim was to provide an overall European burden estimate based on the inclusive definition of AYAs. The European estimates were calculated only from countries with the indicator data available. For a few cancers and indicators (e.g. thyroid and nasopharyngeal cancers), the mortality ASR was based on a small number of countries and therefore the European ASR must be interpreted with caution. In addition, we only analysed tumours available in the GCO. The GCO classifies tumours by ICD, and therefore relevant AYA tumours, such as the heterogeneous group of sarcomas, could not be included in this study. Finally, variations in data quality and data comparability can vitiate comparison of cancer incidence and survival between populations. CRs included in this paper had good and comparable data quality indicators. The proportion of microscopically verified (MV) cases was >90% in most CRs considered; the few exceptions, with MV% between 80% and 90%, were Bulgaria, Croatia, Czech Republic and Italy.
The proportion of ‘death certificate only’ (DCO) cases was very low, with few exceptions (e.g. Austria, Bulgaria, Germany and Croatia), for which the percentage was nevertheless below 10%. The CRs of Belgium, France and the Netherlands do not register DCO cases. Our study has considerable strengths, such as population-based data, the breadth of coverage and the duration of data available. This is the first study providing a comprehensive overview of the burden of cancers in AYAs in different European countries. Our results highlight future health care needs and requirements for specialised treatment services, as well as the urgent need for preventive initiatives that can mitigate the increasing burden.
Off-label despite high-level evidence: a clinical practice review of commonly used off-patent cancer medicines
Cancer is among the leading causes of death and an ongoing challenge for health care systems worldwide. During the last decade, a plethora of new medicines have been approved by regulatory bodies for the treatment of neoplastic diseases, and in most countries medicines are reimbursed according to their labelled indication. , However, the off-label use of medicines is quite common, especially in oncology. According to the results of an American Society of Clinical Oncology (ASCO) survey, ∼50% of patients with cancer have received chemotherapy in an off-label indication during their disease course. , At the same time, the off-label use of antineoplastic high-cost medicines is increasingly implemented based on identified plausible molecular targets, despite support of clinical benefit by low-level evidence only, or none. , Interestingly, many ‘old’, off-patent and low-cost cancer medicines remain off-label for specific indications. Most of these medicines have new clinical applications based on large-scale phase III clinical trials, with sufficient scientific data to support their safety and effectiveness. However, as manufacturers of these medicines that are now out of regulatory and patent protection lack the financial incentive to proceed with a regulatory application for these new indications, these off-patent cancer therapeutics remain off-label. For example, oxaliplatin, formally approved for use in patients with colorectal cancer, is commonly administered in an off-label context to patients with localised and advanced gastric and pancreatic adenocarcinomas, based on survival benefits. This practice is supported by large, methodologically robust phase III clinical trials and is strongly endorsed by the European Society for Medical Oncology (ESMO) Clinical Practice Guidelines recommendations. , , , Nevertheless, oxaliplatin has not been formally approved for the latter indications, and its application in the FOLFIRINOX, FOLFOX, EOX, and FLOT combination regimens remains off-label. This anomaly is sometimes the cause of prescription and reimbursement obstacles across different health care systems, contributing to impaired access to cancer medicines, especially in health care settings that strictly limit medication coverage to licensed indications. This is a far cry from the aspiration that product labelling should include ‘all clinical indications for which adequate data are available to establish the product’s safety and effectiveness.’ Given the substantial lacunae in the licensed indications, professional organisations must fill the void in providing high-level evidence-based guidance. ESMO aimed to identify these ‘old’ and commonly used cancer medicines that remain off-label, despite strong evidence from robust, randomised phase III clinical trials. In this manuscript, we describe this common scenario as ‘off-label despite high-level evidence’ (OLDE) and note that this mainly applies to off-patent or off-commercial-protection medicines for which generic therapeutics are available in the market.
Our four primary objectives were (i) to record the extent of this paradox in oncology therapeutics; (ii) to obtain the consensus of expert reviewers on the clinical value of OLDE medicines and verify their respective ESMO-Magnitude of Clinical Benefit Scale (ESMO-MCBS) scores, where applicable; (iii) to crosscheck their inclusion in the ESMO Clinical Practice Guidelines and 21st World Health Organisation Model List of Essential Medicines (WHO EML); and finally (iv) to provide a prima facie regulatory assessment of the robustness of the main clinical evidence of efficacy supporting the extension of indications. The results aim to raise awareness of the issue and encourage marketing authorisation holders to seek clinical and regulatory advice from research groups, professional organisations and regulators in view of possible submission of evidence-based off-label applications to streamline indication extension, patient access, physician workflow, and sustainability of cancer care.
Three expert authors developed a list of commonly used off-label anticancer agents (GP, GZ, and YM). The product labelling was reviewed for each therapeutic applied in the neoadjuvant, adjuvant, or metastatic setting across different neoplastic diseases, using the following as reference documents: the summary of product characteristics on the electronic medicines compendium (EMC), providing up-to-date information about medicines licensed for use in the United Kingdom ; the European Medicines Agency (EMA) medicines database, providing information on all medicinal products that have been centrally authorised in Europe ; and several national formularies accessible online. References to randomised phase III trials supporting the off-label use of each medicine in a specific disease were retrieved from the ClinicalTrials.gov database, expert opinion, and published literature in peer-reviewed medical journals. The ESMO staff (NL and MG) cross-checked the medications for inclusion in the ESMO Clinical Practice Guideline recommendations and the 21st WHO EML, where available. , A library of 20 OLDE agents in eight disease groups containing all peer-reviewed publications was created, and the preliminary ESMO-MCBS version 1.1 scores were calculated (NL and MG) and sent to expert evaluators, including biostatisticians (NIC, UD, and PZ), for validation. Studies not achieving a statistically significant benefit in their endpoints, as well as studies that did not meet the ESMO-MCBS predefined criteria for evaluation, were designated as no evaluable benefit (NEB). ESMO-MCBS scores A and B in the adjuvant setting and scores 4 and 5 in the non-curative setting indicate medicines with substantial benefit. , Five cancer medicines (commonly used in a total of nine off-label clinical contexts as defined by tumour type and stage) were excluded because of a lack of supportive evidence of efficacy, and one because it was subsequently found not to be off-label. , , , , , , , The remaining 14 off-label cancer medicines across 6 disease groups (breast, gastrointestinal, genitourinary, gynaecological, head and neck, and thoracic cancers), for a total of 37 scenarios, were subjected to an expert peer review process for accountability for reasonableness.
Expert and regulatory peer review
The peer review was developed and conducted using the online tool Qualtrics. Each reviewer provided feedback to a series of questions regarding the off-label indication under study ( , available at https://doi.org/10.1016/j.esmoop.2022.100604 ). The peer review was conducted by specialists in solid tumours drawn from the ESMO-MCBS Working Group, ESMO Faculty, ESMO Guidelines Committee, ESMO Practicing Oncologists Working Group, and ESMO Young Oncologists Committee. A total of 76 reviewers from 23 countries were invited to participate ( , available at https://doi.org/10.1016/j.esmoop.2022.100604 ). During the first peer review, participants were invited to suggest additional OLDE scenarios for evaluation. The aforesaid process was repeated for these additional medicines. presents a summary of the final 17 off-patent cancer medicines reviewed across six disease groups, representing 42 scenarios. After the results of the two peer reviews, an in-depth survey aiming to understand the challenges faced by medical oncologists when prescribing off-label generic cancer medicines despite the presence of high-level evidence was developed, again using the online tool Qualtrics.
The survey, which involved the initial reviewers with the addition of the ESMO National Society Committee members from the 27 European Union countries, consisted of two parts ( , available at https://doi.org/10.1016/j.esmoop.2022.100604 ): (i) administrative and regulatory challenges and (ii) daily workflow challenges. Where discrepancies within the same country were identified, two authors (MG and NL) contacted the reviewers to understand the source of the disagreement and to support the robustness of the information provided. As a final step, two regulatory experts (CV and FP) reviewed a ‘plenary’ list of OLDE medications to assess, based on the library of peer-reviewed publications, the prima facie robustness of the efficacy evidence. The review did not assess whether the available efficacy evidence would be sufficient for a successful submission, but identified any visible uncertainties in the selected publications that would likely constitute blocking issues according to generally agreed regulatory standards and would likely require additional studies to be submitted. describes the development process used to obtain the final list of commonly used off-label anticancer agents.
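Before turning to the results, the ESMO-MCBS benefit thresholds applied throughout (grades A-B in the curative/adjuvant setting and 4-5 in the non-curative setting, with NEB for non-evaluable studies, as stated in the Methods) can be encoded in a few lines. The sketch below is purely illustrative and is not an official ESMO-MCBS implementation.

```python
# Tiny helper encoding the benefit thresholds stated in the Methods:
# grades A-B (curative setting) and 4-5 (non-curative setting) denote
# substantial benefit; studies that could not be evaluated are flagged NEB.
# Purely illustrative -- not an official ESMO-MCBS implementation.
from typing import Union

SUBSTANTIAL = {"curative": {"A", "B"}, "non-curative": {4, 5}}

def substantial_benefit(setting: str, grade: Union[str, int, None]) -> bool:
    """True if an ESMO-MCBS v1.1 grade meets the 'substantial benefit' bar."""
    if grade is None or grade == "NEB":   # no evaluable benefit
        return False
    return grade in SUBSTANTIAL[setting]

assert substantial_benefit("curative", "A")
assert substantial_benefit("non-curative", 4)
assert not substantial_benefit("non-curative", 3)
assert not substantial_benefit("curative", "NEB")
```

Under this rule, for example, the CBCSG 006 score of 2 reported below would not qualify as substantial benefit, whereas the FOLFIRINOX scores of A/4 and 5 would.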
In total, 47 of the 76 (62%) invited experts completed the first round of reviews (for a total of 62 reviews) and 29 of the 36 (81%) invited experts completed the second round for the review of the additional 5 medicines (for a total of 38 reviews). Please note that in both rounds, reviewers could select and review more than one disease group. Out of the total 42 scenarios, 38 (91%) had an ESMO Clinical Practice Guideline for the same off-label indication, while only 12 (29%) were included in the 21st WHO EML .
Overview of results by disease group
Gastrointestinal cancers
Gastrointestinal cancers constitute the most representative group regarding the OLDE scenario. Six off-label medicines (capecitabine, cisplatin, gemcitabine, irinotecan, mitomycin, and oxaliplatin) had sufficient evidence for efficacy and safety to justify peer review inclusion, and were reviewed by 16 experts. These agents were well supported by robust phase III trials, as confirmed by at least 79% and up to 100% of the experts depending on the scenario . In the perioperative as well as in the first-line treatment of patients with oesophagogastric cancer, combinations based on docetaxel, oxaliplatin, epirubicin, and capecitabine (EOX, FLOT) resulted in significant benefit in overall survival (OS), with ESMO-MCBS version 1.1 scores of 4 and A, respectively. , , Moreover, irinotecan has also been investigated in the context of a phase III trial as part of the FOLFIRI regimen in advanced gastric and gastroesophageal cancers and achieved similar outcomes to platinum-based therapy. Oxaliplatin and irinotecan, both components of the FOLFIRINOX regimen, have been granted ESMO-MCBS version 1.1 scores of A/4 and 5 in the adjuvant and advanced settings of pancreatic cancer, respectively. , In addition, capecitabine combined with gemcitabine improved OS over gemcitabine monotherapy when used as adjuvant therapy in pancreatic cancer according to the ESPAC-4 trial, with an ESMO-MCBS version 1.1 score of A/1. Two additional clinical off-label scenarios in the gastrointestinal oncologic landscape stand out. First, treatment with mitomycin plus 5-fluorouracil (5-FU) and radiotherapy was associated with low local failure and enhanced OS compared with radiotherapy alone in a randomised phase III trial enrolling patients with anal cancer (ESMO-MCBS version 1.1 score A), while mitomycin remains off-label for this indication. Second, the use of capecitabine after resection of biliary adenocarcinoma proved effective, with improved OS and disease-free survival (DFS) over observation in a prespecified sensitivity analysis of a randomised phase III study, whereas gemcitabine-based regimens provided survival gains in the first-line setting. Both medicines remain formally off-label for the treatment of biliary cancer even though included in current clinical practice guidelines [ESMO Clinical Practice Guideline and National Comprehensive Cancer Network (NCCN)]. , , An overview of the results of the off-label medicines in gastrointestinal cancers is reported in .
Genitourinary cancers
In genitourinary cancers, four off-label medicines (carboplatin, docetaxel, doxorubicin, and vinblastine) had sufficient evidence to justify peer review inclusion and were reviewed by seven experts . Based on a phase III trial that established non-inferiority of carboplatin to radiotherapy in stage I seminoma, 100% of the experts agreed that the medicine is off-label but supported by high-level evidence in this setting.
Furthermore, regarding the application of doxorubicin and vinblastine, components of the approved M-VAC regimen, for patients with urothelial cancer in both the advanced and neoadjuvant settings, high levels of agreement among experts were recorded on the administration of these agents in an off-label setting (86% for advanced and 100% for neoadjuvant disease). A large clinical benefit was described, with an ESMO-MCBS version 1.1 score of 4 in the advanced setting and A in the neoadjuvant setting of all four regimens. , Finally, 57% of surveyed experts apply docetaxel in cases of urothelial cancer, although it is off-label for this indication. An overview of the results of the off-label medicines in genitourinary cancers is reported in .

Thoracic cancers

In thoracic cancers, four off-label medicines (carboplatin, etoposide, pemetrexed, and vinorelbine) had sufficient evidence for efficacy and safety to justify peer review inclusion and were reviewed by 14 experts . For patients with resected non-small-cell lung cancer (NSCLC), the ESMO and NCCN guidelines suggest the use of adjuvant vinorelbine in combination with cisplatin based on the results of randomised phase III trials proving its effectiveness in improving DFS and OS over observation (ESMO-MCBS version 1.1 score A). , , , , Over 90% of the experts agreed that there is high-level scientific evidence for the adjuvant administration of this combination in patients with resected stage II and IIIA NSCLC . Etoposide provided a significant OS advantage over observation when combined with cisplatin in the adjuvant setting of resected stage I, II, and III NSCLC (ESMO-MCBS version 1.1 score B). Carboplatin combined with paclitaxel or vinorelbine as adjuvant treatment of completely resected stage III NSCLC proved effective, with DFS and OS benefit over observation (ESMO-MCBS version 1.1 score A). Carboplatin in combination with docetaxel proved superior to cisplatin in combination with vinorelbine (ESMO-MCBS version 1.1 score 4). Finally, pemetrexed in combination with cisplatin has been tested in a superiority phase III trial in the setting of completely resected stage II-IIIA nonsquamous NSCLC. Although superiority over cisplatin in combination with vinorelbine was not proven, in view of the similar clinical efficacy, the combination is recommended by current clinical practice guidelines for neoadjuvant and adjuvant therapy of nonsquamous NSCLC. , Again, a formal indication for pemetrexed in this setting is still lacking. An overview of the results of the off-label medicines in thoracic cancers is reported in .

Breast cancer

In breast cancer, three off-label medicines (bisphosphonates, carboplatin, and cisplatin) had sufficient evidence for efficacy and safety to justify peer review inclusion and were reviewed by 12 experts. The most representative scenario was carboplatin, with 100% of experts confirming the presence of solid data for its efficacy in the indication. The phase III BCIRG 006 adjuvant trial of carboplatin plus docetaxel and trastuzumab, compared with standard anthracycline plus cyclophosphamide followed by docetaxel, for patients with localised HER2-positive breast cancer showed a 5-year OS benefit of 4% and a 5-year DFS benefit of 6% with fewer acute toxicity effects (ESMO-MCBS version 1.1 score B; ).
Cisplatin in combination with gemcitabine as first-line therapy for patients with triple-negative metastatic breast cancer yielded a small but significant gain in progression-free survival (PFS) and a better-tolerated side-effect profile compared with paclitaxel plus gemcitabine, according to the results of the phase III CBCSG 006 clinical trial (ESMO-MCBS version 1.1 score 2). Bisphosphonates were also reviewed. According to results from well-powered meta-analyses of adjuvant trials, a clinically meaningful and statistically significant improvement in DFS and a reduction in bone recurrence were established in postmenopausal patients; 75% of the experts agreed that the medication is off-label and commonly used, and 100% of them confirmed that high-level evidence in the literature supports the application of bisphosphonates. An overview of the results of the off-label medicines in breast cancer is reported in .

Gynaecological cancers

In gynaecological malignancies, four off-label medicines (carboplatin, docetaxel, paclitaxel, and pegylated liposomal doxorubicin) had sufficient supportive evidence for efficacy and safety to justify peer review inclusion. The experts' agreement on their OLDE designation ranged from 80% to 100% (eight experts; ). The combination of carboplatin plus paclitaxel added to radiotherapy in the adjuvant treatment of patients with endometrial carcinoma provided a 5-year OS benefit of 5.3% compared with radiation alone in PORTEC-3, a randomised phase III study (ESMO-MCBS version 2.0 score B). This trial was not scorable using ESMO-MCBS version 1.1; thus, ESMO-MCBS version 2 (although not yet published) was used to provide the score. There are also adequate supportive data for the use of this combination as first-line treatment of advanced or recurrent endometrial cancer (GOG0209 study, ESMO-MCBS version 1.1 score 4). All the experts agreed that carboplatin use for adjuvant or advanced endometrial cancer is supported by robust clinical trial data, and >80% of them use it in their daily practice. Paclitaxel, when added to a doxorubicin plus cisplatin regimen, has shown robust efficacy in stage III or stage IV endometrial cancer in a phase III trial, with an ESMO-MCBS version 1.1 score of 3. Over 80% of experts confirmed the existence of high-level evidence for taxanes in various settings in the therapeutic armamentarium of common gynaecological cancers. Pegylated liposomal doxorubicin in combination with bevacizumab provided an OS gain of 4 months over the combination of carboplatin plus gemcitabine and bevacizumab in relapsed ovarian or peritoneal cancer but remains off-label. An overview of the results of the off-label medicines in gynaecological malignancies is reported in .

Head and neck cancers

In head and neck cancers, five experts reviewed two off-label medicines (carboplatin and paclitaxel) with sufficient supportive evidence to justify peer review inclusion . Carboplatin is occasionally used for patients ineligible for cisplatin therapy and, although evaluated in the context of a phase III trial establishing the superiority of carboplatin plus 5-FU over methotrexate monotherapy in patients with advanced disease, it is not approved for use in this indication. According to the experts' responses, 60% agreed that carboplatin qualifies as OLDE for the relevant indication. Furthermore, paclitaxel was also confirmed as an off-label medicine with high-level evidence for efficacy, with 60% agreement among the experts.
It has been tested in combination with cisplatin and achieved efficacy similar to cisplatin plus 5-FU in previously untreated extensive locoregional or metastatic disease. Of note, the phase III trial did not incorporate a non-inferiority design; rather, it was a superiority study in which paclitaxel plus cisplatin, as the investigational therapy, failed to show superiority over the control arm. Consequently, no ESMO-MCBS score can be derived. An overview of the results of the off-label medicines in head and neck cancers is reported in . In , available at https://doi.org/10.1016/j.esmoop.2022.100604 , we report the most illustrative examples of reviewed OLDE medicines for which high-level phase III trial evidence and high ESMO-MCBS scores have been identified and confirmed by the experts.

Survey on administrative, regulatory, and daily workflow challenges

In >60% of surveyed countries, the off-label use of off-patent cancer medicines was regulated/reimbursed by National Medicine Agencies, often in conjunction with other regulatory bodies, the hospital, or the patient's insurance . According to 45% of respondents, the main prerequisite for requesting off-label use of cancer medicines was optimised patient access to effective treatments that fulfil unmet medical needs. Other reasons included high-level clinical evidence for efficacy and safety (42%) and potential economic advantage (13%). Approximately 51% of responders had to follow a distinct administrative process to use cancer medicines in a clinical indication that remains off-label despite supporting high-level clinical evidence. The majority (74%) of responders were willing to apply this process, whereas the rest were reluctant because of its time-consuming nature, the need for supporting documentation, an often low rate of approval, and fear of litigation. In addition, 59% of physicians were responsible for implementing the logistical tasks related to this process without administrative support, despite time constraints and a heavy clinical workload. The average time to obtain a response to an application was 1-2 weeks. Substantial heterogeneity of processes within countries was observed due to (i) the context of practice (e.g. private versus public hospital), (ii) national and regional differences in processes and regulations, and (iii) the frequency of use and cost of the medicine in the off-label indication. More than 74% of respondents affirmed that they needed patient consent, and >66% had to acknowledge and assume legal responsibility for potential patient harm when prescribing an off-label cancer medicine. Patient perceptions of the application process are depicted in , together with an overview of the survey results.

Regulatory assessment of plenary OLDE scenarios

A prima facie regulatory review of the most illustrative examples of OLDE (9 medicines in 5 disease settings, for a total of 18 scenarios, selected on the basis of common use and ESMO-MCBS scores) identified two studies with critical uncertainties that would require additional data for an eventual extension of indication: a phase III study evaluating adjuvant capecitabine in resected biliary adenocarcinoma and a phase III trial assessing the combination of etoposide with cisplatin in unresectable stage III NSCLC, both not scorable with ESMO-MCBS (see Supplementary Annex IV, available at https://doi.org/10.1016/j.esmoop.2022.100604 ).
, The uncertainties were mainly related to the fact that the respective trials did not meet their primary endpoint in the intention-to-treat population. Uncertainties were identified in seven other scenarios (39%), including statistical limitations, failure to prove non-inferiority, heterogeneous study populations, or immature study data. , , , , , , However, the latter were considered likely to be resolved through further scrutiny of the data from the existing studies, additional analyses, or appropriate labelling changes. For example, limited evidence for a treatment effect in a specific subpopulation could potentially be addressed by restricting the finally approved indication. Notably, nine scenarios in which well-known authorised anticancer agents were studied in an off-label indication did not flag any obvious critical issues from a preliminary regulatory perspective in the current review. , , , , , , ,
Discussion

In this study, we identified a number of 'old' cancer medicines that remain off-label for use in specific settings despite rigorous scientific evidence based on generally agreed scientific standards. For most of the medicines questioned, the reviewers affirmed that, although off-label, they are commonly used in their country because of the high-level evidence for the respective off-label indications. This was further supported by the high ESMO-MCBS scores observed in those clinical scenarios, representing substantial clinical benefit.

Our study highlights the administrative and/or liability burdens associated with the prescription of these medicines in many of the health care systems surveyed. When prescribing an off-label medicine, the treating physician is often burdened with an increased bureaucratic and operational workload and a legal liability, potentially discouraging prescription of the medicine. If approval for the use of the medicine is required by regulatory health care bodies or health insurance companies on a per-patient basis, the process often affects workflows, sometimes affects reimbursement policies and, if negative, deprives the patient of a safe and effective therapy.

When results from large randomised phase III clinical trials indicate that an authorised medicinal product is safe and effective in a new therapeutic indication, a regulatory application for extension of indication should follow. , , , Although such applications would trigger a comprehensive assessment of all the available evidence and ensure adequate labelling and conditions of use, the lack of financial and market incentives demotivates manufacturers of these now-generic medicines from investing in such a pathway. While not aiming to replace or pre-empt a formal regulatory assessment, our prima facie review of the main publications found that most of the studies published for this selected group of well-known, authorised anticancer products used off-label did not appear to raise any major issues from a clinical or regulatory perspective. Accordingly, applicant companies are encouraged to seek early regulatory advice for applications in which the main evidence of efficacy is based on robust, randomised academic trials, to ensure that strengths, gaps, and remedial steps are identified in a timely manner.

In Europe, several initiatives have been established to support patient access to already authorised medicinal products that are out of basic patent and regulatory protection and for which relevant data exist and/or may need to be further generated to support a new indication outside their authorisation, where research has shown value to the patient. For example, the European Commission Expert Group on Safe and Timely Access to Medicines for Patients (STAMP) created a framework proposal to support not-for-profit and academic stakeholders who have evidence and a scientific rationale for a new therapeutic indication in bringing this new indication 'on-label', in collaboration with a commercial entity applying for marketing authorisation. A pilot is currently ongoing, with more information available on the EMA web page. Furthermore, the EMA is committed to supporting the development and implementation of a repurposing framework, as expressed in the agency's 'Regulatory science to 2025' strategy.

In the ever-changing landscape of contemporary oncology therapeutics, there are common off-label medicine uses with sufficient scientific evidence to justify regulatory submission.
As EMA applications for extension of indication must be submitted by the pharmaceutical companies that hold the marketing authorisation, the results of our study emphasise the need to streamline the legal/regulatory framework. This would facilitate updating the indications of 'old', off-patent medicines based on results from academic or independent clinical trials and would empower clinicians to fulfil their mission of making all valid treatment options optimally available to patients.
Funding: None declared.
Disclosure

TA has received personal fees and travel grants from Bristol-Myers Squibb (BMS); personal fees, grants, and travel grants from Novartis; personal fees from Pierre Fabre; grants from NeraCare, Sanofi, and SkylineDx; and personal fees from CeCaVa outside the submitted work. UD declares institutional financial support from ESMO for biostatistical contribution; reports being a member of the Tumour Agnostic Evidence Generation Working Group, Roche. RG reports being a core member of the Cancer Drug Development Forum (CDDF) and the European Medicines Agency (EMA) Scientific Advisory Group Oncology, a member of the EMA Healthcare Professional Working Party (HCPWP) and the EMA Cancer Medicines Forum, an expert evaluator for the EU Commission in 2020 on the topic 'Global Alliance for Chronic Diseases (GACD) 2 - Prevention and/or early diagnosis of cancer', an evaluator of proposals submitted to Horizon Europe Health Cluster-2022 [The Health and Digital Executive Agency (HaDEA)], a member of the Steering Committee of the WHO-DECIDE Health Decision Hub, a member of the EUnetHTA Stakeholder group, and having a consultative role in the AIFA (Italian Agency for Drugs) Working Group on hemato-oncology drugs; provides consultation/lectures (no remuneration) for Novartis, Mylan, Roche, Lilly, Apogen, and Pfizer; and declares institutional financial support (clinical trials, Italy) from MSD and Novartis. KJ is on the advisory board and/or received honoraria for presentations for MSD, Amgen, Hexal, Riemser, Helsinn, Volontis, G1, Art-Tempi, Onkowissen, Roche, AstraZeneca, Takeda, Mundipharma, med update GmbH, Vifor, and Karyopharma; and receives royalties from Kluwer and Elsevier. FL is on the advisory board for Amgen, Astellas, Bayer, Beigene, BMS, Daiichi-Sankyo, Eli Lilly, MSD, Novartis, and Roche; is an invited speaker for AstraZeneca, BMS, Eli Lilly, Imedex, Incyte, Medscape, MedUpdate, Merck Serono, MSD, Roche, Servier, and StreamedUp!; reports expert testimony for Biontech and Elsevier; writing engagements for Iomedico, Springer-Nature, and Deutscher Ärzteverlag; and a research grant from BMS. MS is on the advisory board for Janssen, Merck, and Roche; is an invited speaker for Janssen and Ipsen; has received a travel grant from Ipsen; is a member of ASCO, BSMO, and EORTC; and is a principal investigator for Janssen. GP received institutional financial support for advisory board/consultancy from Roche, Amgen, Merck, MSD, and BMS; and institutional support for clinical trials or contracted research from Amgen, Roche, AstraZeneca, Pfizer, Merck, BMS, MSD, Novartis, and Lilly.
EGEdV declares institutional financial support for advisory board/consultancy from Sanofi, Daiichi Sankyo, NSABP, Pfizer, and Merck; and institutional support for clinical trials or contracted research from Amgen, Crescendo Biologics, Genentech, Roche, AstraZeneca, Synthon, Nordic Nanovector, G1 Therapeutics, Bayer, Chugai Pharma, CytomX Therapeutics, Servier, and Radius Health. GZ received speaker's honoraria from Amgen, Ipsen, Merck, and Leo Pharma. PZ declares institutional financial support from ESMO for biostatistical contribution. CV and FP: The views presented here are those of the authors and are not to be understood or quoted as those of the European Medicines Agency or its scientific committees. All other authors have declared no conflicts of interest.
Design, power, and alpha levels in randomized phase II oncology trials | d7d60544-1001-4e5e-bf4f-af4770276a80 | 10024120 | Internal Medicine[mh] | Phase II trials are an important step in drug development because a successful trial can lead to further testing in phase III trials and/or drug approval, while an unsuccessful phase II trial may lead to a discontinuation in testing. Ideally, there would be a good balance between minimizing the premature termination of trials for potentially beneficial therapies (i.e. false negatives) and the further, costly testing of ineffective drugs (i.e. false positives). Randomization at the phase II level has been encouraged. This trial design allows for greater assurance that the tested therapy is ‘promising’ or effective. However, even when trials are randomized, factors related to a study’s design and interpretation, besides a drug’s efficacy, influence whether a trial is viewed as successful. A prior evaluation of reporting in phase II oncology trials found that reporting is poor in many trials, which may lead to a biased interpretation. To examine the methodology and reporting of phase II oncology trials in recent years and bias in the interpretation of outcomes, we systematically reviewed current published literature.
Search strategy

We sought to systematically assemble a list of phase II randomized oncology trials by searching PubMed with the search terms 'oncology drug' OR [('oncology'/exp OR oncology) AND ('drug'/exp OR drug)]. We limited our search to randomized phase II clinical trials in the English language. We included all full-length articles published from 1 January 2021 to 20 June 2022 (our search date). We excluded articles that were long-term, pooled, or secondary analyses; did not include an intervention; did not test an antitumor intervention; were not randomized; were protocols only; did not include patients with cancer; were trials of a phase other than phase II; were reports on quality of life; were noninferiority studies; were retracted or inaccessible; were cost-effectiveness studies; or were biomarker/pharmacokinetic studies.

Data abstraction and variable coding

For each of the included studies, we abstracted the journal; tumor type; intervention; patients allocated to the control and intervention arms; the randomization ratio; whether there was a sample size calculation; the estimated number of participants needed for the control and intervention arms based on the sample size calculation; the α value and power for the sample size calculation and whether the calculation assumed a 1- or 2-sided α level; the value(s) used to determine the sample size (e.g. assumed effect size); the outcome used to determine the effect size; the primary and secondary outcomes; the results of the primary and secondary outcomes; and the authors' conclusion of the study.

We then recoded journals as either a top journal or other, based on Google Scholar's h5-index (≥100 versus <100) for the Oncology, Hematology, and Health and Medical Science categories ( https://scholar.google.com/citations?view_op=top_venues&hl=en&vq=med_oncology ). We recoded all α levels to 1-sided values for comparability. Based on the effect size value(s) used to determine the sample size, we coded a variable indicating a relative difference (e.g. hazard ratio), a percentage improvement based on absolute differences from prior studies, an absolute difference based on prior studies, or a predefined threshold value (e.g. a desirable threshold for continuing testing of the drug). We also compared the estimated number of participants from the sample size calculation with the number of participants allocated. If the number of participants allocated was more than 10% lower than the estimated sample size (i.e. <90% of the estimate), we classified the study as underpowered; otherwise, power was considered adequate.

We classified the study conclusion as positive, negative, or neutral, based on two blinded reviewers' assessments. We coded each study as having met or not met its primary endpoint, also based on two blinded reviewers' assessments (AH and TO). For studies that had positive conclusions but did not meet the primary endpoint, or in which the endpoint result was equivocal, we coded the study as having spin. As a sensitivity analysis, we also coded spin with an expanded definition, as others have done, which additionally considers spin when a study reported a negative overall survival secondary endpoint but the authors' conclusion was positive.
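To make the coding rules above concrete, here is a minimal R sketch of how the recoding might be implemented. The data frame and its column names (reported_alpha, sides, n_estimated, n_allocated, met_endpoint, conclusion) are hypothetical illustrations, not the study's actual dataset.

```r
# Hypothetical trial-level data frame: one row per included study
trials <- data.frame(
  reported_alpha = c(0.05, 0.05, 0.10),   # alpha as reported in the paper
  sides          = c(2, 1, 1),            # 1- or 2-sided design
  n_estimated    = c(120, 100, 90),       # sample size from the calculation
  n_allocated    = c(118, 82, 95),        # participants actually allocated
  met_endpoint   = c(TRUE, FALSE, FALSE), # primary endpoint met?
  conclusion     = c("positive", "positive", "negative")
)

# Recode all alpha levels to 1-sided values for comparability
trials$alpha_1sided <- with(trials, ifelse(sides == 2, reported_alpha / 2, reported_alpha))

# Underpowered: allocated sample <90% of the estimated sample size
trials$underpowered <- with(trials, n_allocated < 0.9 * n_estimated)

# Spin: positive conclusion despite an unmet (or equivocal) primary endpoint
trials$spin <- with(trials, conclusion == "positive" & !met_endpoint)
```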
Determination of further testing

To differentiate phase II trials that were or were not conducted as a step toward phase III testing, we used several methods: (i) we looked for discussion in the text of further/future testing in larger phase II or phase III trials, as determined by two reviewers (AH and TO); (ii) we searched PubMed to see whether there were phase III trials, or phase II trials with a bigger sample size, conducted at a later date than the initial phase II trial, using the drug name and tumor type in the search; and (iii) we searched ClinicalTrials.gov for registered studies of the same drug and indication, using the drug name and tumor type in the search. If there was evidence of future testing of a drug in a given indication, we considered the phase II trial to be a basis for future/phase III testing. We were looking for a priori intent of future testing, so for negative trials, this meant that the authors concluded that there was no need for further testing. Indicating future testing based on the results of post hoc or subgroup analyses was not counted as having intent. Because not all studies clearly stated whether there was intent, we categorized intent as clear, vague, or none.

Statistical analysis

To determine agreement between the reviewers' assessments, we calculated Cohen's κ coefficients for both the study meeting its primary endpoint and the authors' conclusion (R Statistical Software, package 'irr'; R Foundation). Descriptive characteristics were calculated for the total sample and stratified by the presence of spin. We used Fisher's exact test to determine the association between a study meeting its primary endpoint and the tone of the authors' conclusion. We examined whether intent to test in a phase III trial modified the association between meeting the study endpoint and the tone of the authors' conclusion by using the Cochran–Mantel–Haenszel test. We also carried out a logistic regression analysis to identify variables associated with questionable statistical practices, defined as a dichotomous composite variable comprising the presence of spin, being underpowered (<90% of the estimated sample size), and/or a high α level or low statistical power. We initially included the journal impact factor, funding (industry, nonindustry, none, not indicated), year of publication, blinding status, and intent for future larger phase II or phase III studies. We removed variables if their removal resulted in a lower Akaike information criterion value. We used Microsoft Excel and R Statistical Software (version 4.2.1; R Foundation) for all analyses. In accordance with 45 CFR §46.102(f), this study was not submitted for institutional review board approval because it used publicly available data and did not involve individual patient data.
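The analyses named above map onto standard R calls. The following sketch shows one plausible implementation, continuing with the hypothetical trials data frame from the previous snippet; the rater columns (met_r1, met_r2) and covariates (impact_factor, funding, year, blinded, intent, issues) are likewise assumed for illustration.

```r
library(irr)  # provides kappa2() for Cohen's kappa

# Inter-rater agreement on whether the primary endpoint was met
kappa2(data.frame(rater1 = trials$met_r1, rater2 = trials$met_r2))

# Association between meeting the endpoint and conclusion tone
fisher.test(table(trials$met_endpoint, trials$conclusion))

# Effect modification by intent, via the Cochran-Mantel-Haenszel test
# (takes a 3-way table: endpoint x conclusion x intent stratum)
mantelhaen.test(table(trials$met_endpoint, trials$conclusion, trials$intent))

# Logistic regression for questionable statistical practices, with
# backward elimination guided by AIC (step() drops terms that lower AIC)
fit <- glm(issues ~ impact_factor + funding + year + blinded + intent,
           family = binomial, data = trials)
step(fit, direction = "backward")
```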
We sought to systematically assemble a list of phase II randomized oncology trials by searching PubMed with the search terms ‘oncology drug’ OR [(‘oncology’/exp OR oncology) AND (‘drug’/exp OR drug)]. We limited our search to randomized phase II clinical trials in the English language. We included all full-length articles published from 1 January 2021 to 20 June 2022 (our search date). We excluded articles that were long-term, pooled, or secondary analysis; did not include an intervention; did not test an antitumor intervention; were not randomized; were protocols only; did not include patients with cancer; were phase trials other than phase II; were reports on quality of life; were noninferiority studies; were retracted or inaccessible; were cost-effectiveness studies; or were biomarker/pharmacokinetic studies. Data abstraction and variable coding For each of the included studies, we abstracted the journal, tumor type, intervention, patients allocated to the control and intervention arm, the randomization ratio, if there was a sample size calculation, the estimated number of participants needed for the control and intervention arms based on sample size calculation, the α value and power for the sample size calculation and whether the calculation assumed a 1- or 2-sided α level, the value(s) used to determine the sample size (e.g. assumed effect size), the outcome used to determine the effect size, the primary and secondary outcomes, the results of the primary and secondary outcomes, and the authors’ conclusion of the study. We then recoded journals to either a top journal or other, based on Google Scholar’s h5-index (≥100 versus <100) for Oncology, Hematology, and Health and Medical Science categories. ( https://scholar.google.com/citations?view_op=top_venues&hl=en&vq=med_oncology ) We recoded all α levels to a 1-sided value for comparability. Based on the effect size value(s) used to determine the sample size, we coded a variable to indicate a relative difference (e.g. hazard ratio), a percentage improvement based on absolute differences from prior studies, an absolute difference based off prior studies, or a predefined threshold value (e.g. a desirable threshold for continuing testing of the drug). We also compared the estimated number of participants from the sample size calculation with the number of participants allocated. If the number of participants was <10% lower in the allocated group than the estimated group, we classified the study as being underpowered; otherwise it was considered as being adequate. We classified the study conclusion as being positive, negative, or neutral, based on two blinded reviewer’s assessments. We coded each study as having met or not met the primary study endpoints, also based on two blinded reviewer’s assessments (AH and TO). For studies that had positive conclusions but did not meet the primary study point or the endpoint result was equivocal, we coded this study as having spin. As a sensitivity analysis, we also coded spin with an expanded definition as others have done, which also considers spin when studies reported a negative overall survival secondary endpoint, but the authors’ conclusion was positive.
For each of the included studies, we abstracted the journal, tumor type, intervention, patients allocated to the control and intervention arm, the randomization ratio, if there was a sample size calculation, the estimated number of participants needed for the control and intervention arms based on sample size calculation, the α value and power for the sample size calculation and whether the calculation assumed a 1- or 2-sided α level, the value(s) used to determine the sample size (e.g. assumed effect size), the outcome used to determine the effect size, the primary and secondary outcomes, the results of the primary and secondary outcomes, and the authors’ conclusion of the study. We then recoded journals to either a top journal or other, based on Google Scholar’s h5-index (≥100 versus <100) for Oncology, Hematology, and Health and Medical Science categories. ( https://scholar.google.com/citations?view_op=top_venues&hl=en&vq=med_oncology ) We recoded all α levels to a 1-sided value for comparability. Based on the effect size value(s) used to determine the sample size, we coded a variable to indicate a relative difference (e.g. hazard ratio), a percentage improvement based on absolute differences from prior studies, an absolute difference based off prior studies, or a predefined threshold value (e.g. a desirable threshold for continuing testing of the drug). We also compared the estimated number of participants from the sample size calculation with the number of participants allocated. If the number of participants was <10% lower in the allocated group than the estimated group, we classified the study as being underpowered; otherwise it was considered as being adequate. We classified the study conclusion as being positive, negative, or neutral, based on two blinded reviewer’s assessments. We coded each study as having met or not met the primary study endpoints, also based on two blinded reviewer’s assessments (AH and TO). For studies that had positive conclusions but did not meet the primary study point or the endpoint result was equivocal, we coded this study as having spin. As a sensitivity analysis, we also coded spin with an expanded definition as others have done, which also considers spin when studies reported a negative overall survival secondary endpoint, but the authors’ conclusion was positive.
To differentiate phase II trials that were conducted as a step toward phase III testing or not, we used several methods: (i) we looked for discussion in the text on further/future testing in larger phase II or phase III trials, as determined by two reviewers (AH and TO); (ii) we searched PubMed to see if there were phase III trials or phase II trials with a bigger sample size carried out at a later date than the initial phase II trial, using the drug name and tumor type in the search; and (iii) we searched ClinicalTrials.gov for registered studies for the same drug and indication, using the drug name and tumor type in the search. If there was evidence of future testing for a drug in a given indication, we considered the phase II trial as being a basis for future/phase III testing. We were looking for a priori intent of future testing, so for negative trials, this meant that the authors concluded that there was no need for further testing. Indicating future testing based on the results of post hoc or subgroup analysis was not counted as having intent. Because not all studies clearly stated whether there was intent, we categorized intent as clear, vague, or none.
To determine agreement between the reviewers’ assessments, we calculated Cohen’s κ coefficients both for the study meeting its primary endpoint and for the authors’ conclusion (R Statistical Software, package ‘irr’; R Foundation). Descriptive characteristics were calculated for the total sample and stratified by the presence of spin. We used Fisher’s exact test to determine an association between a study meeting its primary endpoint and the tone of the authors’ conclusion. We assessed whether intent to test in a phase III trial modified the association between meeting the study endpoint and the tone of the authors’ conclusion, using the Cochran–Mantel–Haenszel test. We also carried out a logistic regression analysis to see which variables were associated with questionable statistical issues, defined as a dichotomous composite variable of the presence of spin, being underpowered (<90% of the estimated sample size), and/or high α or low power limits. We initially included the journal impact factor, funding (industry, nonindustry, none, not indicated), year of publication, blinding status, and intent for future larger phase II or phase III studies. We removed variables if their removal resulted in a lower Akaike information criterion (AIC) value. We used Microsoft Excel and R Statistical Software (version 4.2.1; R Foundation) for all analyses. In accordance with 45 CFR §46.102(f), this study was not submitted for institutional review board approval because it involved publicly available data and did not involve individual patient data.
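The analyses described above map onto standard base-R and irr functions. The sketch below is illustrative only — it assumes a hypothetical data frame `trials` with one row per study, and all column names are invented for the example:

```r
library(irr)  # provides kappa2() for Cohen's kappa

# Inter-reviewer agreement on the authors' conclusion
kappa2(trials[, c("conclusion_reviewer1", "conclusion_reviewer2")])

# Association between meeting the primary endpoint and conclusion tone
fisher.test(table(trials$endpoint_met, trials$conclusion))

# Effect modification by intent for future testing (Cochran-Mantel-Haenszel)
mantelhaen.test(table(trials$endpoint_met, trials$conclusion, trials$intent))

# Logistic regression for the composite outcome (spin, underpowered,
# and/or lenient alpha/power limits), with backward selection guided by AIC
fit <- glm(questionable ~ impact_factor + funding + year + blinding + intent,
           data = trials, family = binomial)
step(fit, direction = "backward")  # drops terms whose removal lowers the AIC
```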
Our search resulted in 520 articles, of which 186 met our inclusion criteria. The flow diagram for our search strategy is in . Of the 186 studies, the median allocated sample size was 100 (interquartile range 70-140). Most studies were open label (n = 153, 82.3%), and 36% (n = 67) were published in a top journal. Common tumor types on which studies reported were lung (n = 32, 17.2%), breast (n = 26, 14%), and gastrointestinal (n = 24, 12.9%). Study characteristics, by meeting the study endpoint, are presented in , and study characteristics, by intent for future testing, are in , available at https://doi.org/10.1016/j.esmoop.2022.100779 . The statistical power in most studies was between 80% and 89% (n = 113, 60.8%); 5.4% (n = 10) of studies used a statistical power below 80%; and 18.3% (n = 34) did not indicate the level of power for the sample size calculation. 16.7% (n = 31) of studies used a one-sided α level of ≤0.025, 29.0% (n = 54) used an α of 0.05, 25.3% (n = 47) had a one-sided α level of 0.1, and 18.3% (n = 34) did not indicate the α level. Most studies (n = 141, 75.8%) used a 1 : 1 randomization ratio; 29% (n = 54) of studies did not report at least one of the elements of the sample size calculation. The median estimated sample size was 105 (interquartile range 80-142). The most common primary outcome was progression-free survival (n = 74, 39.8%), followed by response (n = 51, 27.4%) and overall survival (n = 21, 11.3%). Most studies (n = 126, 67.7%) recruited a study population of ≥90% of the estimated sample size, but 19.9% (n = 37) recruited a study population of <90% of the estimated sample size. It was indeterminate in the remaining studies. Nearly 38% (n = 71) of studies used a hazard ratio from previous studies as an effect size for determining the sample size; 15.6% (n = 29) of studies used a predefined difference between groups, based on previous effect sizes; 19.9% (n = 37) of studies used prior estimates for two groups with no predefined difference between groups; and 17.7% (n = 33) of studies used a predefined threshold (no comparator effect size or difference between groups) to determine the sample size. In 8.6% (n = 16) of studies, it was unclear. The primary endpoint was met in 33.3% (n = 62) of studies and was not met in 64.0% (n = 119). The authors’ conclusion was positive in 59.7% (n = 111) of studies, negative in 32.3% (n = 60), and equivocal in 8.1% (n = 15). The κ coefficient for the authors’ conclusion was 0.84, and the κ coefficient for a study meeting its primary endpoint was 0.93. The percentage of studies with spin (i.e. a positive conclusion by the authors although the study did not meet the primary endpoint, or the endpoint was equivocal) was 27.4% (n = 51). If using the expanded definition of spin, 37.1% (n = 69) of studies had spin. There was a strong association between a study meeting its primary endpoint and the tone of the authors’ conclusion (P < 0.001; ). There was significant interaction in this association by intent to test in further, phase III trials (χ² = 58.90; d.f. = 4; P < 0.001). In studies with clear intent for further testing, 29% of studies that did not meet the study endpoint had positive authors’ conclusions; among studies with no intent, 41% of those not meeting the study endpoint had positive authors’ conclusions; and among studies with vague intent for further testing, 73% of those not meeting the study endpoint had positive authors’ conclusions.
We also noted a higher percentage of spin among studies where intent to further test was vague (50.0%), as compared with studies with clear (24.1%) or no intent (20.5%; P < 0.001; , available at https://doi.org/10.1016/j.esmoop.2022.100779 ). We found that there were statistical or reporting issues (high α, low power, unreported α or power, being underpowered, or spin) in 74.2% (n = 138) of studies. There were 8.6% (n = 16) of studies with potential bias in all three areas; 21.0% (n = 39) that had high α or low power and were underpowered; 12.4% (n = 23) that were underpowered and had spin; and 18.3% (n = 34) that had high α or low power and spin. After adjusting for journal impact factor, funding, and intent for future testing, being published in a high-impact journal (odds ratio 0.42, 95% confidence interval 0.20-0.88), compared with a non-high-impact journal, and having no funding (odds ratio 0.10, 95% confidence interval 0.01-0.66), compared with having industry funding, were both associated with lower odds of having high α or low power limits, being underpowered, and/or having spin. We found no other variables to be associated with these statistical issues.
We found that among phase II randomized oncology trials, about one-third (29%) failed to adequately report data on sample size calculations, and about one-fifth were underpowered. In addition, more than one-quarter of studies (27%) had spin, meaning they presented positive conclusions even though the results were negative or equivocal. The presence of spin was especially notable in studies where intent for further testing was vague. These findings are noteworthy given that phase III studies are often conducted on the basis of phase II trial results. If the decision to pursue drug development from phase II to phase III rests on unreliable positive findings — because of either spin in the reporting of results or spurious findings arising from statistical considerations — more phase III trials may be conducted unnecessarily. When the sample size and power calculation are based on the primary endpoint, the sample size may be inadequate for nonprimary endpoints, and spurious results, including positive ones, are more likely to occur. First, we detected spin in 27% of studies using the conservative definition, and in 37% using the more liberal definition that others have used. This was most often because of a focus on secondary outcomes or because the determination of success was unclear (e.g. no testing between groups) and results could be interpreted subjectively. Spin in scientific publications can lead to misinformed clinical practice guidelines or health policies, or to the implementation of health practices that are later found to be ineffective, especially because the perception of benefit among many practitioners and oncologists is influenced by the authors’ conclusion in the abstract alone. Second, our findings raise concerns about the risk of spurious findings in phase II trials due to statistical considerations. When designing a trial, a set of statistical values is prespecified, for instance, α and power (1 − β). These prespecified values, in addition to other known or previously reported values, allow researchers to calculate the sample size. All these values are key to interpreting trial results. The α value, or the significance level, is the probability of a type I error, or false positive — the risk one accepts of wrongly rejecting the null hypothesis when it is true. P values and α levels are related: P values are interpreted against prespecified α levels. The P value in a trial is the probability of obtaining the observed data, or more extreme data, if the null hypothesis were true. In other words, if a P value is 0.01, there is a 1% chance of observing the given result, or a more extreme one, if the null hypothesis were true. α levels are set arbitrarily, most commonly at 5% (i.e. 0.05) in biomedicine. In this hypothetical scenario, because α is set at 0.05 and the P value of 0.01 is lower than this threshold, one can reject the null hypothesis and conclude a statistically significant result. Therefore, using less stringent (higher) α levels leads to a higher probability of concluding significant results from the same dataset. Researchers might justify a higher α level during phase II testing because the studies may be viewed as exploratory, but this practice may also result in spurious findings, which may allow potentially ineffective or harmful drugs to undergo further testing, with more patients being exposed to the drug.
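To make the arithmetic concrete — a schematic textbook illustration, not a calculation drawn from any of the reviewed trials — the normal-approximation sample size per arm for detecting a difference Δ between two means with common standard deviation σ, at one-sided level α and power 1 − β, is

$$ n \approx \frac{2\,(z_{1-\alpha} + z_{1-\beta})^{2}\,\sigma^{2}}{\Delta^{2}} $$

With a one-sided α = 0.025 and 90% power, z_{1−α} + z_{1−β} = 1.96 + 1.28 = 3.24; relaxing to a one-sided α = 0.10 and 80% power gives 1.28 + 0.84 = 2.12, shrinking the required sample size by a factor of (2.12/3.24)² ≈ 0.43 — fewer than half the patients — while the accepted false-positive risk quadruples.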
This has prompted debate about using, conversely, even lower α levels in clinical trials at large. In our work, we showed that the most commonly accepted α level (of 0.025) was used in <17% of phase II trials. We found that many studies (35%) assumed a one-sided α level of >0.05, and only four studies provided justification for using an α level higher than a traditional two-sided 0.05 (all used a one-sided α level of ≥0.1). This is concerning given the high number of oncology drugs being approved on the basis of phase II trial data. Power (1 − β) is the complement of β, the risk of false-negative results (type II error) one is willing to accept when running a trial. We found that ∼5% of trials were powered at <80%. Using low power levels, by definition, reduces the probability of detecting a true effect, and may also lead to an exaggerated estimate of effect when the effect is significant. Another underappreciated phenomenon induced by low power levels is the increased risk that statistically significant results do not reflect a true effect but are instead spurious findings. Further, we found a notable number of studies lacking the data readers need to assess the quality of sample size calculations or whether the study endpoint was met, with about one-third (34%) of studies lacking at least one basic element of the sample size calculation (α, 1 − β, one- or two-sided α, or expected outcomes for the control/intervention groups). A missing α level was the most common reason for lacking sample size estimation data. Others have found even higher rates (72.1%) of data omission in phase III oncology trials when using an expanded list of criteria. We also found that a notable percentage (18%) of studies used a predefined threshold for determining effect size in the sample size calculations. This means that about one-fifth of studies did not actually use the comparator group to determine efficacy and could have been conducted without a randomized study design. Overall, our review of recent phase II trials found statistical concerns that may lead to spurious findings or spin in the results of 74% of them. The risk of such findings in phase II trials is the resulting justification to pursue drug development in phase III trials based on less stringent standards. An example of this is the testing of olaratumab in soft tissue sarcoma, which found a statistically significant overall survival result (a secondary endpoint) in a phase II trial with a positive primary endpoint (thus based on an α level of >0.05). The trial results led to olaratumab’s Food and Drug Administration approval, but the drug was later withdrawn after negative phase III results were released.

Strengths and limitations

A strength of this study is that it is a contemporary analysis of a comprehensive list of variables related to the reporting of clinical trials. We were also able to evaluate the level of bias in the reporting of conclusions. This study did have several limitations. The categorization of spin could be somewhat subjective, but we used predefined criteria, and two independent reviewers coded this variable. As some journals impose word limits, some authors may have omitted information from the manuscript, which meant that these studies were coded as having poor reporting even though the methods could have been adequate. We also only included articles from 2021 and 2022, so our results apply only to those years. Changing methodological and reporting practices over time may mean that studies from other years could have different reporting quality.
Finally, our categorization of intent for future studies may not always have been correct, because authors did not always report whether a future phase III trial was planned, and language describing future studies could be subjective. To mitigate potential bias, we used two reviewers, and if there was doubt, we gave the study the benefit of the doubt.

Conclusion

We found that many randomized phase II studies in oncology failed to report essential data for determining sample size calculations, many did not actually use a comparator to determine efficacy even though the studies were randomized, and many had positive conclusions even though the results were indeterminate or the primary endpoint was not met. Phase II trials are not usually confirmatory, and may therefore be considered somewhat exploratory, but they should still adhere to the same reporting standards and be interpreted in the context of their primary endpoint and of endpoints important for the patient.
Pan-Asian adapted ESMO Clinical Practice Guidelines for the diagnosis, treatment and follow-up of patients with endometrial cancer

Cancer of the corpus uteri (endometrial cancer) is the most common gynaecological malignancy in high- and intermediate-income countries. In 2020, endometrial cancer was the sixth most commonly diagnosed cancer in women, with 417 367 new cases recorded, accounting for 2.2% of the new cancers diagnosed worldwide. Approximately 40% of these new cases occurred in Asia, with China, where endometrial cancer is the third most common female malignancy, accounting for nearly half (81 964) of the cases. Endometrial cancer was in turn responsible for 97 370 cancer deaths, representing 1% of all cancer deaths worldwide. Although endometrial cancer has a higher incidence in Western countries than in Asia, the incidence is increasing worldwide. Risk factors associated with sporadic endometrial cancer include obesity (high body mass index), diabetes, polycystic ovary syndrome, early age at menarche, late menopause, infertility, menopausal estrogen therapy and the use of tamoxifen, whilst inherited endometrial cancer is linked to Lynch and Cowden syndromes. A rising trend in endometrial cancer is being observed in several Asian countries. The number of new cases of endometrial cancer in 2020 was 16 413 in India, 4524 in Thailand, 4374 in the Philippines, 3425 in South Korea, 1401 in Malaysia and 775 in Singapore. The increasing incidence is attributed to evolving lifestyle, younger age at menarche, late age at menopause and fewer children, especially in women living in urban areas. Although endometrial cancer occurs most frequently in postmenopausal women, there is a higher proportion of younger women being diagnosed with endometrial cancer in China, with ∼40% of patients diagnosed before their menopause compared with <25% of Western women. In Hong Kong, 65% of 1165 new cases of endometrial cancer diagnosed in 2018 occurred in women aged between 45 and 64 years ( www3.ha.org.hk/cancereg ). The majority of endometrial cancers are diagnosed at an early stage, and the 5-year overall survival rate for patients with localised disease is high (95%). However, endometrial cancers with high-risk factors such as high-grade serous pathology and TP53 mutation have a tendency to recur. Patients with recurrent endometrial cancer have a poor prognosis, with a 5-year overall survival of <20%, particularly in patients with metastatic disease. Guidelines and recommendations for the treatment and management of patients with endometrial cancer in Asia have been published for the Asia-Pacific region, India [National Cancer Grid (NCG) guidelines for endometrial cancer (tmc.gov.in)], Japan, Korea, Singapore, Taiwan, China, Thailand, the Philippines and Indonesia, and are important for the standardisation of diagnostic and treatment approaches. These guidelines aim to optimise clinical outcomes for what is a growing health care problem in each Asian country. The European Society for Medical Oncology (ESMO) guidelines for the diagnosis, treatment and follow-up of patients with endometrial cancer were published in 2022, and a decision was taken by ESMO and the Indian Society of Medical and Paediatric Oncology (ISMPO) that these guidelines should be adapted for the management and treatment of patients in Asian countries.
Consequently, representatives of ISMPO, ESMO, the Chinese Society of Clinical Oncology (CSCO), the Indonesian Society of Hematology and Medical Oncology (ISHMO), the Japanese Society of Medical Oncology (JSMO), the Korean Society of Medical Oncology (KSMO), the Malaysian Oncological Society (MOS), the Philippine Society of Medical Oncology (PSMO), the Singapore Society of Oncology (SSO), the Taiwan Oncology Society (TOS) and the Thai Society of Clinical Oncology (TSCO) convened for a virtual, ‘face-to-face’ working meeting on 9 July 2022, hosted by ISMPO, to adapt the recent ESMO Clinical Practice Guidelines for use in the clinical management of Asian patients with endometrial cancer. This manuscript summarises the Pan-Asian adapted guidelines developed at the meeting accompanied by the level of evidence (LoE), grade of recommendation (GoR) and percentage consensus reached for each recommendation.
This Pan-Asian adaptation of the current ESMO Clinical Practice Guidelines was prepared in accordance with the principles of ESMO standard operating procedures ( http://www.esmo.org/Guidelines/ESMO-Guidelines-Methodology ) and was an ISMPO–ESMO initiative endorsed by CSCO, ISHMO, JSMO, KSMO, MOS, PSMO, SSO, TOS and TSCO. An international panel of experts was selected from the ISMPO ( n = 6), the ESMO ( n = 6) and two experts representing each of the oncological societies of China (CSCO), Indonesia (ISHMO), Japan (JSMO), Korea (KSMO), Malaysia (MOS), the Philippines (PSMO), Singapore (SSO), Taiwan (TOS) and Thailand (TSCO). One expert from Thailand (ST) was a member of the Thai Gynecologic Cancer Society endorsed by TSCO. Only two of the six expert members from the ISMPO (SG and KGB) were allowed to vote on the recommendations together with the experts from each of the nine other Asian oncology societies ( n = 20). Among the six experts from ISMPO, three were medical oncologists, one was a gynaecological oncologist, one a radiation oncologist and one a pathologist. The majority of experts from the other Asian societies were medical oncologists or gynaecological oncologists. None of the additional ISMPO members present and none of the ESMO experts were allowed to vote; they were present only in an advisory role. A modified Delphi process was used to review, accept or adapt each of the individual recommendations in the latest ESMO Clinical Practice Guidelines. The 20 voting Asian experts were asked to vote YES or NO (one vote per society) on the ‘acceptability’ (agreement with the scientific content of the recommendation) and ‘applicability’ (availability, reimbursement and practical challenges) of each of the ESMO recommendations in a pre-meeting survey (see Methodology in , available at https://doi.org/10.1016/j.esmoop.2022.100744 ). For recommendations where a consensus was not reached, the Asian experts were invited to modify the wording of the recommendation(s) at the virtual ‘face-to-face’ meeting, using further rounds of voting if necessary, in order to determine the definitive acceptance or rejection of an adapted recommendation and to discuss the applicability challenges. The ‘Infectious Diseases Society of America-United States Public Health Service Grading System’ ( , available at https://doi.org/10.1016/j.esmoop.2022.100744 ) was used to define the LoE and strength (grade) of each recommendation. Any modifications to the initial recommendations were highlighted in bold text in a summary table of the final Asian recommendations and in the main text, if applicable. A consensus was considered to have been achieved when ≥80% of experts voted that a recommendation was acceptable.
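As a purely illustrative restatement of the ≥80% consensus rule — with hypothetical votes, not data from the actual survey — the tally for a single recommendation could be computed as:

```r
# One YES/NO vote per society on a given recommendation (invented values)
votes <- c(ISMPO = "YES", CSCO = "YES", ISHMO = "YES", JSMO = "NO",
           KSMO = "YES", MOS = "YES", PSMO = "YES", SSO = "YES",
           TOS = "YES", TSCO = "YES")
mean(votes == "YES") >= 0.80  # TRUE: 9/10 societies accept -> consensus
```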
In the initial pre-meeting survey, the 20 voting Asian experts reported on the ‘acceptability’ and ‘applicability’ of the 51 recommendations for the diagnosis, treatment and follow-up of patients with endometrial cancer from the 2022 ESMO Clinical Practice Guidelines. These recommendations were made in the five categories outlined in the text below and in . During the pre-meeting survey there were 32 voting discrepancies in relation to scientific ‘acceptability’ ( , available at https://doi.org/10.1016/j.esmoop.2022.100744 ; ‘recommendations 3a, 3e, 3f, 3j, 3k, 3l, 3m, 3n, 3o, 3p, 3q2, 3q3, 3q4, 3r1, 3r2, 3r3, 3s, 3t, 3u, 4a, 4b, 4c, 4e, 4f, 4g, 4h, 4i, 4j, 4k, 5a, 5b and 5c’), and 37 voting discrepancies in relation to ‘applicability’ ( , available at https://doi.org/10.1016/j.esmoop.2022.100744 ) across the 10 different Asian societies.

1 Diagnosis, pathology and molecular biology—recommendations 1a-b

Endometrial cancer is clinically a very heterogeneous malignancy for which the assignment of histological subtype, grade, disease extension and lymphovascular space invasion (LVSI) has been highly subjective, impacting on the accurate assessment of an individual patient’s risk of recurrence and metastasis, and therefore on management. Furthermore, it has reduced the ability to accurately compare different clinical studies in terms of outcome, owing to uncertainty over the classification of patient risk. The traditional histopathological classification of Bokhman identified two types of endometrial cancer: type I [endometrioid, grade 1-2 (G1-2), with a favourable prognosis], ∼70% of cases, and type II (G3 endometrioid and non-endometrioid histologies, with a poor prognosis), ∼30% of cases. There is general agreement, however, that endometrioid tumours should now be classified according to the International Federation of Gynecology and Obstetrics (FIGO) defined criteria, providing a two-tier grading system with G1 and G2 endometrioid tumours grouped together as low grade, and G3 tumours classified as high grade. Factors traditionally associated with a high risk of recurrent disease include histologic subtype, FIGO G3 histology, myometrial invasion ≥50%, LVSI, L1 cell adhesion molecule expression, lymph node metastases and tumour diameter >2 cm. However, the heterogeneity of endometrial cancer is due to an array of underlying molecular alterations. The results of The Cancer Genome Atlas (TCGA) analysis showed that the molecular diversity of endometrial cancer could be stratified into four distinct molecular subgroups ( , available at https://doi.org/10.1016/j.esmoop.2022.100744 ). The four molecular subgroups are: (i) copy number-stable, ultra-mutated endometrial cancers characterised by pathogenic variants in the exonuclease domain of DNA polymerase-epsilon ( POLE ); (ii) hyper-mutated endometrial cancers characterised by microsatellite instability (MSI) due to dysfunctional/deficient mismatch repair genes (dMMR); (iii) an MMR-proficient, low somatic copy number aberration (SCNA) subgroup with a low mutational burden; and (iv) a high-SCNA subgroup with frequent TP53 mutations. Therefore, well-established immunohistochemical (IHC) staining techniques for the detection of p53 and MMR proteins (MLH1, PMS2, MSH2, MSH6) are now recommended as standard practice for all endometrial cancer pathology specimens, regardless of histological type, together with sequencing of the exonuclease domain of POLE if available.
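The commonly used ordering of these three tests can be restated as a simple decision rule — the minimal sketch below uses a hypothetical function name and assumes the widely adopted precedence in which POLE status is assigned first, then MMR, then p53, so that multiple-classifier tumours fall into the first matching class:

```r
# Illustrative molecular classifier: POLE takes precedence, then MMR,
# then p53; tumours with none of these abnormalities are NSMP.
classify_tumour <- function(pole_mutated, mmr_deficient, p53_abnormal) {
  if (isTRUE(pole_mutated))       "POLEmut"
  else if (isTRUE(mmr_deficient)) "dMMR/MSI"
  else if (isTRUE(p53_abnormal))  "p53abn"
  else                            "NSMP"
}

classify_tumour(FALSE, TRUE, TRUE)  # dMMR takes precedence over p53abn here
```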
Patients presenting with either newly diagnosed or recurrent/metastatic endometrial cancer should have a biopsy to confirm histology and assess tumour molecular biology. These molecular classes are identified across all of the histological subtypes and correlate with endometrial cancer prognosis. Thus, molecular classification could facilitate more accurate comparison of clinical outcomes between different groups of patients. Furthermore, it could impact treatment considerations. Firstly, testing for MMR/MSI status serves not only as a screening test for Lynch syndrome, but also identifies patients with metastatic disease who could benefit from immune checkpoint blockade agents. Secondly, the benefit of adjuvant chemotherapy is observed in patients with p53mut endometrial cancer, whilst the de-escalation of therapy in patients with POLE -mutated (POLEmut) endometrial cancer, which has a favourable outcome, is being investigated. Thirdly, the overexpression/gene amplification of human epidermal growth factor receptor 2 (HER2), which has been demonstrated in 20%-40% of type II non-endometrioid endometrial cancers, supports the use of HER2-targeted therapy in combination with chemotherapy. This combined treatment has also recently been shown to be an effective treatment approach for patients with advanced and recurrent serous endometrial cancer. As a consequence, HER2 testing is now being proposed to guide the management of these patients. Endometrial cancers that have not been completely molecularly classified should be designated as endometrial cancers not-otherwise-specified and use the histology-based classification system. With improved tumour characterisation facilitated by more sophisticated diagnostic testing and molecular profiling, the diagnosis and management of patients with endometrial cancer are evolving towards a more objective, reproducible, personalised medicine approach. The algorithm for the diagnostic work-up of endometrial cancer proposed by ESMO and adapted from Vermij et al. 2020 is presented in . The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the ESMO recommendations on diagnosis, pathology and molecular biology ‘recommendations 1a-b’ below and in . However, they mentioned that POLE hotspot mutation analysis was not available as part of the standard molecular evaluation in many centres in Asia. 1a. Histological type, FIGO grade, myometrial invasion and LVSI (focal/substantial) should be described for all endometrial cancer pathology specimens [V, A]. 1b. Molecular classification through well-established IHC staining for p53 and MMR proteins (MLH1, PMS2, MSH2, MSH6), in combination with targeted tumour sequencing ( POLE hotspot analysis), should be carried out for all endometrial cancer pathology specimens regardless of histological type [IV, A]. See , available at https://doi.org/10.1016/j.esmoop.2022.100744 , for hereditary endometrial cancer testing and surveillance.

2 Staging and risk assessment—recommendations 2a-c

The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the ESMO recommendations on staging and risk assessment ‘recommendations 2a-c’ below and in . 2a. Endometrial sampling by biopsy or dilation and curettage (D & C) are acceptable initial approaches to the histological diagnosis of endometrial cancer [IV, A]. 2b.
The preoperative work-up should include clinical and gynaecological examination, transvaginal ultrasound, pelvic magnetic resonance imaging (MRI), a full blood count and liver and renal function profiles [IV, B]. 2c. Additional imaging tests [e.g. abdominal and thoracic computed tomography (CT) scan and/or [18F]2-fluoro-2-deoxy-D-glucose–positron emission tomography (18FDG–PET)–CT] may be considered in those patients at high risk of extra-pelvic disease [IV, C].

3 Management of local and locoregional disease—recommendations 3a-u

Surgery

Early endometrial cancer is typically treated with surgery to remove the macroscopic disease and stage the tumour for planning with regard to adjuvant therapy. Traditionally, surgery for endometrial cancer was carried out via laparotomy until the results of two large, randomised trials showed minimally invasive laparoscopic techniques to have no negative impact on either staging or clinical outcomes. An algorithm for the surgical treatment and management of patients with stage I endometrial cancer is presented in . Preservation of fertility in younger patients with endometrial carcinoma should be considered when appropriate ( , available at https://doi.org/10.1016/j.esmoop.2022.100744 ). The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the ESMO recommendations 3a-d below and in , without change. 3a. Hysterectomy with bilateral salpingo-oophorectomy is the standard surgical procedure in early-stage endometrial cancer [I, A]. 3b. Minimally invasive surgery is the recommended approach in stage I (G1-G2) endometrial cancer [I, A]. 3c. Minimally invasive surgery may also be the preferred surgical approach in stage I G3 [II, A]. 3d. Ovarian preservation can be considered in premenopausal women with stage IA, G1 endometrioid-type endometrial cancer [IV, A]. The comment of the Taiwanese experts with respect to the inclusion of sentinel lymph node sampling as part of the surgical procedure (recommendation 3a) is covered in recommendation 3e. However, some Asian experts did not accept ESMO ‘recommendations 3e and 3f’ because they did not reflect real-life clinical practice in their countries with respect to sentinel lymph node excision (SLNE), which is not available in many centres in Asia. Therefore, the original ‘recommendations 3e and 3f’ were modified, as per the bold text below and in . However, the consensus was that SLNE should be encouraged wherever possible, based on the evidence available from two studies, including in patients with deeply invasive endometrioid endometrial cancer, but not in patients with the more aggressive type II histology (see ‘recommendation 3g’ below). SLNE can be used for staging in patients with low- or intermediate-risk endometrial cancer and may represent an alternative to systematic lymphadenectomy (LNE) in high-intermediate- or high-risk stage I-II disease. The randomised Endometrial Cancer Lymphadenectomy Trial (ECLAT) is ongoing in patients with FIGO stage I and II disease with a high risk of recurrence, and should provide more evidence. 3e. SLNE can be considered as a strategy for nodal assessment in cases of low-risk/intermediate-risk endometrial cancer (e.g. stage IA, G1-G3 and stage IB, G1-G2) in experienced centres [II, A]. It can be omitted in cases without myometrial invasion. When SLNE is not available, LNE can be carried out in patients with stage IA G3 and stage IB disease [II, B; consensus = 100%]. 3f.
Surgical lymph node staging should be carried out in patients with high-intermediate-risk/high-risk disease. Sentinel lymph node biopsy is an acceptable alternative to systematic LNE for lymph node staging in patients with high-intermediate/high-risk stage I-II endometrial cancer, when available and in centres with experience [III, B; consensus = 100%]. The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the ESMO ‘recommendations 3g and 3h’ below. 3g. Full surgical staging, including omentectomy, peritoneal biopsies and lymph node staging, should be considered in serous endometrial cancers and carcinosarcomas [IV, B]. 3h. When feasible, and with acceptable morbidity, cytoreductive surgery to the maximal surgical extent should be considered in patients with stage III and IV disease [IV, B]. The risk groups for endometrial cancer are summarised in , available at https://doi.org/10.1016/j.esmoop.2022.100744 .

Low-risk endometrial cancer

There is no indication for the use of adjuvant therapy for the treatment of patients with low-risk endometrial cancer, due to a low risk of recurrence. Also, in the few patients in whom local recurrence does occur, it can be treated effectively with radiotherapy (RT). Combined analysis of cohorts from the PORTEC-1 and PORTEC-2 studies and other studies has shown the presence of a POLE mutation (POLEmut) to be a favourable indicator of prognosis, independently of other clinicopathological characteristics. As a consequence, patients with stage I-II endometrial cancer with POLEmut tumours are now classified as low risk and unlikely to benefit from adjuvant therapy. Omitting adjuvant therapy in patients with G3 POLEmut endometrial cancer may also be an option, although currently there are no robust data available. Higher-level evidence from a prospective registry study is likely to be available shortly, together with data from a cohort of the RAINBO trial (NCT05255653). The planned cohorts for the TransPORTEC RAINBO programme of clinical trials aim to refine the adjuvant treatment of patients with endometrial cancer based on molecular profile, including POLEmut status, dMMR, no specific molecular profile (NSMP) and abnormal p53 (p53abn). The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the ESMO ‘recommendation 3i’ below. 3i. For patients with stage IA (G1 and G2) endometrioid (dMMR and NSMP) type endometrial cancer with no or focal LVSI, adjuvant treatment is not recommended [I, E]. However, some of the Asian experts did not accept the ESMO ‘recommendations 3j, 3k and 3l’, which suggest the omission of adjuvant treatment, because there are limited supporting data on the safety of omitting therapy. However, in relation to ‘recommendation 3k’ for patients with stage I-II POLEmut disease, there is encouraging, although limited, evidence regarding the omission of adjuvant therapy. When the POLEmut status of a tumour is unavailable, patients should be treated on the basis of the other available risk information. The current focus is on de-escalation of therapy in these patients, whenever possible. Thus, the wording of the original ‘recommendations 3j, 3k and 3l’ ( , available at https://doi.org/10.1016/j.esmoop.2022.100744 ) was revised, as per the bold text below and in , to reflect the concerns of the Asian experts, with 100% consensus.
For patients with stage IA non-endometrioid-type endometrial cancer (and/or p53abn), without myometrial invasion and no or focal LVSI, there are not enough data to make a definitive recommendation regarding adjuvant treatment. Adjuvant therapy (chemotherapy and/or brachytherapy) or no adjuvant treatment may be discussed on a case-by-case basis in a multidisciplinary team environment [IV, C; consensus = 100%]. 3k. For patients with stage I-II POLE mut cancers, omission of adjuvant treatment should be considered [III, D; consensus = 100%]. 3l. For patients with stage III POLE mut cancers, there is insufficient evidence on need for adjuvant treatment. Enrolment in clinical trials , adjuvant therapy or no adjuvant therapy are reasonable options [III, C; consensus = 100%]. The adjuvant therapy options for low-risk disease are outlined in . Intermediate-risk endometrial cancer The PORTEC-1 and Gynaecology Oncology Group (GOG)-99 trials demonstrated the benefit of pelvic external beam RT (EBRT) after surgery in reducing locoregional recurrence in patients with intermediate-risk endometrial cancer. However, a Norwegian trial and an ASTEC study group trial showed that EBRT and vaginal brachytherapy (VBT) achieve similar results. The long-term results of the PORTEC-2 study showed VBT to result in excellent vaginal control in women with high-intermediate-risk endometrial cancer, with 10-year vaginal control above 96% in both arms. Although the risk of pelvic recurrence was significantly higher in the VBT group (6% versus 1%), no differences were found in 10-year rates for distant metastasis and overall survival. There were lower toxicity rates and better health-related quality of life among women who received VBT compared with EBRT. The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the ESMO ‘recommendations 3m, 3n and 3o’ below without change, after much discussion over the use of adjuvant RT. Adjuvant RT is not commonly used in Japan ( , available at https://doi.org/10.1016/j.esmoop.2022.100744 ), with chemotherapy being used as an alternative based on a study by the Japanese Gynecologic Oncology Group. The experts from China and Taiwan favoured EBRT ± VBT or EBRT alone, respectively, over VBT for stage II G1 endometrial cancer ‘recommendation 3o’. 3m. For patients with stage IA G3 endometrioid (dMMR or NSMP)-type endometrial cancer and no or focal LVSI, adjuvant VBT is recommended to decrease vaginal recurrence [1, A; consensus = 100%]. 3n. For patients with stage IB G1-G2 endometrioid (dMMR or NSMP)-type endometrial cancer and no or focal LVSI, adjuvant VBT is recommended to decrease vaginal recurrence [I, A; consensus = 100%]. 3o. For patients with stage II G1 endometrioid (dMMR or NSMP)-type endometrial cancer and no or focal LVSI adjuvant VBT is recommended to decrease vaginal recurrence [II, B; consensus = 100%]. It was mentioned by the experts that molecular profiling was not available in certain regions of Asia. In such situations, patients should be treated according to their assessed risk of recurrence. The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) ‘recommendation 3p’ below without any change. 3p. Omission of adjuvant VBT can be considered (especially for patients aged <60 years) for all above stages, after patient counselling and with appropriate follow-up [III, C]. 
High-intermediate-risk endometrial cancer with lymph node staging (pN0)

There was much discussion over the adjuvant treatment of this group of patients, which includes those with stage IA and IB disease with substantial LVSI, stage IB G3 and stage II G1 disease with substantial LVSI, and stage II G2-G3 (dMMR or NSMP) disease. The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the ESMO ‘recommendation 3q.1’ below, with the proposal from Taiwan that chemotherapy might be considered as an alternative. 3q.1. Adjuvant EBRT is recommended [I, A]. However, some of the Asian experts did not accept the ESMO ‘recommendations 3q.2, 3q.3 and 3q.4’ regarding adjuvant treatment. With regard to ‘recommendation 3q.2’, some of the experts considered that stronger evidence was needed for the benefit of the addition of chemotherapy, but accepted the recommendation without change based on the data from the PORTEC-3 trial. However, it was felt that the high incidence of short- and long-term side-effects associated with the addition of chemotherapy to EBRT, whilst conferring minimal benefit, needed to be discussed with these patients. 3q.2. Adding (concomitant and/or sequential) chemotherapy to EBRT could be considered, especially for G3 and/or substantial LVSI [II, C; consensus = 100%]. With regard to ‘recommendation 3q.3’, some of the experts considered that there was insufficient evidence to use the presence or absence of LVSI to decide the type of RT (VBT versus EBRT). In Korea, EBRT is used for G3 disease, except in those without LVSI. ‘Recommendation 3q.3’ was accepted completely by replacing ‘could be recommended’ with ‘could be considered’, as per the bold text below. 3q.3. Adjuvant VBT (instead of EBRT) could be considered to decrease vaginal recurrence, especially for those without substantial LVSI [II, B; consensus = 100%]. With regard to ‘recommendation 3q.4’, experts from 6 of the 10 Asian countries considered that adjuvant treatment should be recommended. Thus, the consensus was that the standard treatment for most patients should include adjuvant treatment. However, in highly selected patients (stage IA G1-G2), when close follow-up (every 3 months) is possible, adjuvant treatment may be withheld in consultation with the patient. Thus, the original ‘recommendation 3q.4’ was revised from: 3q.4. With close follow-up, omission of any adjuvant treatment is an option following shared decision making with the patient [IV, C], to read as the ‘recommendation 3q.4’ below, with the new text highlighted in bold. 3q.4. Despite evidence of a benefit from adjuvant treatment, its omission is an option, when close follow-up can be ensured, following shared decision making with the patient [IV, C]. An algorithm for the treatment of these patients is presented in .

High-intermediate-risk endometrial cancer without lymph node staging

Again, there was much discussion over the adjuvant treatment of this group of patients, which includes those with stage IA and IB disease with substantial LVSI, stage IB G3 and stage II G1 disease with substantial LVSI, and stage II G2-G3 (dMMR or NSMP) disease. The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the ESMO ‘recommendation 3r.1’ below without change. 3r.1. Adjuvant EBRT is recommended [I, A].
With regard to ‘recommendation 3r.2’, experts from some Asian countries, despite the evidence from the PORTEC-1 trial in patients who had undergone primary surgery (without node dissection) and the PORTEC-3 trial, were of the opinion that concomitant treatment should be reserved for medically fit patients, but that it was the preferred option for patients with substantial LVSI. For patients with no initial lymph node dissection, carrying out a lymph node dissection is also an option, followed by tailored adjuvant treatment. ‘Recommendation 3r.2’ below was accepted without change, with consideration to be given to the observations cited above. 3r.2. Adding (concomitant and/or sequential) chemotherapy to EBRT could be considered, especially for patients with substantial LVSI and G3 disease [II, C; consensus = 100%]. With regard to ‘recommendation 3r.3’, five of the Asian countries did not agree with the original recommendation, and it was generally accepted that, in the absence of lymph node staging, EBRT should be considered. Thus the original ‘recommendation 3r.3’ was revised from: 3r.3. Adjuvant VBT could be considered for IB G3 disease without substantial LVSI to decrease vaginal recurrence [II, B], to read as the ‘recommendation 3r.3’ below, with the new text highlighted in bold and the LoE and GoR changed from II, B to III, C. 3r.3. Adjuvant VBT followed by chemotherapy could be considered for patients with IB G3 disease without substantial LVSI, if EBRT is not feasible [III, C; consensus = 100%]. This recommendation is based on evidence from a subgroup analysis of the phase III GOG-249 trial of adjuvant pelvic RT versus VBT plus paclitaxel/carboplatin in high-intermediate- and high-risk early-stage endometrial cancer. Radiological evaluation, if not already carried out, should be done before using this option. An algorithm for the treatment of these patients is presented in .

High-risk endometrial cancer

There were differences amongst the Asian experts in terms of ‘acceptability’ with regard to ‘recommendations 3s, 3t and 3u’ (see , available at https://doi.org/10.1016/j.esmoop.2022.100744 ). There was much discussion over the adjuvant treatment of this group of patients, with some of the experts considering the therapy proposed in ‘recommendation 3s’ below too toxic for patients with endometrial cancer due to their age and comorbidities, although there are supporting data from the PORTEC-3 trial and a GOG trial for the benefits of combining chemotherapy with RT in this patient group. High-risk endometrial cancer patients include those with stage III-IVA cancers without residual disease, regardless of histology and molecular subtype; those with stage I-IVA p53abn disease with myometrial invasion; and those with non-endometrioid cancers without residual disease with myometrial invasion (see , available at https://doi.org/10.1016/j.esmoop.2022.100744 ). Carcinosarcomas (metaplastic dedifferentiated endometrial cancers) are also regarded as high risk and are commonly classified as p53abn. However, the Asian experts decided to accept completely the original ESMO ‘recommendation 3s’ below, without change, provided that patients are properly evaluated for this treatment on the basis of individual factors. For patients with major comorbidities or for whom there is an unambiguous contraindication to chemotherapy, RT alone can be considered. 3s. Adjuvant EBRT with concurrent and adjuvant chemotherapy is recommended [I, A; consensus = 100%].
After discussion, the Asian experts also accepted ‘recommendations 3t and 3u’ without change. Extended-field RT can be considered along with EBRT and chemotherapy for patients with para-aortic node disease. 3t. Sequential chemotherapy and RT can be used [I, B; consensus = 100%]. 3u. Chemotherapy alone is an alternative option [I, B; consensus = 100%]. However, concern was expressed over the use of chemotherapy alone (‘recommendation 3u’), because the data regarding comparable efficacy were inconsistent. Certainly, data from the PORTEC-3 trial showed the treatment effect to differ between the molecular subgroups. Poor-prognosis patients with p53abn endometrial cancer benefitted significantly from chemoradiotherapy (CRT) regardless of stage and histological subtype, whilst patients with POLEmut cancers achieved an excellent outcome with either RT or CRT. No benefit was observed for CRT over RT for patients with dMMR endometrial cancer, whilst a trend towards benefit was observed in the NSMP subgroup. An algorithm for the treatment of these patients is presented in . For any patients with endometrial cancer who are medically unfit for surgery, by virtue of severe comorbidities, definitive RT is an option (see , available at https://doi.org/10.1016/j.esmoop.2022.100744 ).

4 Recurrent/metastatic disease—recommendations 4a-m

As stated previously, the outcomes in patients with recurrent and/or metastatic endometrial cancer are poor. The management of these patients should, wherever possible, involve a multidisciplinary team approach, treatment in specialised centres and the development of individualised treatment plans. Algorithms for the treatment of recurrent locoregional and metastatic disease are presented in and , respectively. Several factors influence the outcomes (local control and survival) in patients with recurrent and/or metastatic disease, including its site and extent (isolated vaginal or peritoneal involvement), size (<2 cm or ≥2 cm), histology and relapse-free survival (RFS). Isolated vaginal recurrence, lower grade, endometrioid histology and longer RFS are associated with a better prognosis. Additionally, prior treatment (surgery and/or RT) and the patient’s general condition also influence outcome. The Asian experts expressed concern over the omission of surgery from the ESMO ‘recommendation 4a’, and over the recommendation of only VBT, which should be considered if there is isolated vaginal recurrence. Thus, ‘recommendation 4a’ was revised by inclusion of the text in bold below. 4a. For patients with locoregional recurrence following primary surgery alone, the preferred primary therapy should be EBRT with or without VBT, depending on the site of recurrence [IV, A; consensus = 100%]. It was discussed that surgery could be considered in selected patients in whom it is possible to achieve complete surgical resection in the absence of excessive morbidity, and that the use of VBT alone can be considered in the subgroup of patients with a small vaginal recurrence. ‘Recommendations 4b-e’ were accepted without change, with the caveat that they may not be applicable in all cases, depending on the extent of disease. 4b. Adding systemic therapy to salvage RT could be considered [IV, C; consensus = 100%]. 4c. For patients with recurrent disease following RT, surgery should be considered only if a complete debulking with acceptable morbidity is anticipated [IV, C; consensus = 100%]. 4d.
Complementary systemic therapy after surgery could be considered [IV, C; consensus = 100%] (see ). 4e. The standard first-line chemotherapy treatment is carboplatin AUC 5-6 plus paclitaxel 175 mg/m² every 21 days for six cycles [I, A; consensus = 100%]. In relation to ‘recommendation 4e’, there is no evidence of an increased benefit for >6 cycles of chemotherapy, but it was agreed that this could be considered on an individual basis. Some Asian experts did not agree with the original ‘recommendation 4f’ because hormone therapy is rarely offered as first-line systemic therapy in these patients. The experts agreed that chemotherapy is the first choice of treatment. Hormone therapy can be considered for patients with low-grade, low-volume disease who are not suitable for chemotherapy, dependent on knowledge of the hormone receptor status [estrogen receptor (ER) and progesterone receptor (PgR)] of the tumour at the time of treatment. However, the predictive value of hormone receptor expression in endometrial cancer is not as strong as it is for patients with breast cancer, owing to the limitations associated with a lack of standardisation of tissue processing and factors such as the absence of a well-defined cut-off for receptor levels. Furthermore, responses to hormone therapy have been reported in ER-/PgR-negative disease. Thus, due to these concerns, the text of the original ‘recommendation 4f’ below was modified by the inclusion of the bold text. 4f. Hormone therapy could be considered as an option for front-line systemic therapy in patients with low-grade carcinomas of endometrioid histology with low-volume disease [III, A; consensus = 100%]. The Asian experts accepted without change ‘recommendations 4g, 4h and 4i’ below, despite some discussion and the removal of the dosing details for medroxyprogesterone acetate and megestrol acetate ( , available at https://doi.org/10.1016/j.esmoop.2022.100744 ) in ‘recommendation 4g’. Aromatase inhibitors and fulvestrant are alternative options with limited benefits. A phase II study of anastrozole in recurrent ER-/PgR-positive endometrial cancer (the PARAGON trial) showed a low objective response rate but a meaningful clinical benefit in 44% of patients. 4g. Progestins are the recommended agents [II, A; consensus = 100%]. 4h. Other options for hormonal therapy include aromatase inhibitors (AIs), tamoxifen and fulvestrant [III, C; consensus = 100%]. 4i. There is no standard of care for second-line chemotherapy. Doxorubicin and weekly paclitaxel are considered the most active therapies [IV, C; consensus = 100%]. The Asian experts queried ‘recommendation 4j’, but eventually accepted it without change, with the proviso that, for patients with a long disease-free interval after prior chemotherapy, retreatment with further platinum-based treatment can also be considered, based on a retrospective analysis, when immune checkpoint inhibitor therapy is not available. After discussion, the GoR of ‘recommendation 4j’ was revised from B to A [ESCAT IA, ESMO-Magnitude of Clinical Benefit Scale (ESMO-MCBS) 3], as per the bold text below. 4j. Immune checkpoint blockade monotherapy should be considered after platinum-based therapy failure in patients with MSI-H/dMMR disease [III, A; consensus = 100%].
Immune checkpoint blockade alone or in combination with targeted therapies has emerged as a promising intervention in patients with recurrent endometrial cancer, in view of the high mutational burden (dMMR/POLEmut subtypes), tumour-infiltrating lymphocytes and programmed cell death protein 1 (PD-1)/programmed death-ligand 1 (PD-L1) expression. Pembrolizumab, which targets PD-1, has been investigated in the endometrial cohorts of the KEYNOTE-158 trial in patients pre-treated with chemotherapy and with a short progression-free survival (PFS), and showed PD-1 blockade to be highly effective. Data from the GARNET trial with the anti-PD-1 monoclonal antibody dostarlimab, which blocks interaction with the programmed death ligands PD-L1 and PD-L2, have led to the approval of dostarlimab monotherapy by the Food and Drug Administration (FDA) in the United States to treat dMMR recurrent or advanced endometrial cancer that has progressed on platinum-containing regimens. Agents that target PD-L1, such as avelumab and durvalumab, have also shown promising activity in patients with dMMR endometrial cancer, as have atezolizumab and nivolumab (anti-PD-1). The phase Ib/II KEYNOTE-146 trial showed encouraging response, PFS and overall survival rates with the combination of pembrolizumab and the multi-kinase inhibitor lenvatinib, and the phase III KEYNOTE-775 trial demonstrated statistically significant PFS ( P < 0.0001) and overall survival ( P < 0.0001) benefits for this combination compared with standard chemotherapy. As a consequence, pembrolizumab in combination with lenvatinib has been approved by the FDA for patients with advanced endometrial cancer that is not MSI-high (MSI-H) or dMMR, who have disease progression following prior systemic therapy in any setting and are not candidates for curative surgery or RT. The European Medicines Agency (EMA) approved pembrolizumab in combination with lenvatinib for the treatment of advanced or recurrent endometrial cancer in patients who have disease progression on or following prior treatment with a platinum-containing regimen in any setting, regardless of MMR status, and who are not candidates for curative surgery or RT. However, due to the lack of availability of dostarlimab in 6 of the 10 Asian countries, the original ‘recommendation 4k’ was reworded from the ESMO recommendation below: 4k. Dostarlimab has recently been approved by both the EMA and the FDA for this indication [III, B; ESMO-Magnitude of Clinical Benefit Scale (ESMO-MCBS) v1.1 score: 3], to read as follows: 4k. Dostarlimab can be considered in patients with dMMR or MSI-H recurrent or advanced endometrial cancer after failure of prior platinum-based chemotherapy and has recently been approved by both the EMA and the FDA for this indication [III, B; consensus = 100%; ESMO-Magnitude of Clinical Benefit Scale (ESMO-MCBS) v1.1 score: 3]. The Asian experts accepted completely without change (100% consensus) the original ESMO ‘recommendations 4l and 4m’ below and in . 4l. Pembrolizumab is FDA approved for the treatment of TMB-H solid tumours (as determined by the FoundationOne CDx assay) that have progressed following prior therapy for endometrial cancer [III, B; ESMO-MCBS v1.1 score: 3; not EMA approved]. 4m. Pembrolizumab with lenvatinib is approved by the EMA for endometrial cancer patients who have failed a previous platinum-based therapy, and who are not candidates for curative surgery or RT.
FDA approval is for endometrial cancer patients whose tumours are not dMMR/MSI-H [I, A; ESMO-MCBS v1.1 score: 4]. Targeted therapy approaches are also being investigated in patients with endometrial cancer. Uterine serous carcinoma (USC) is an aggressive endometrial cancer subtype associated with a poor outcome. One-third of USCs overexpress HER2/Neu, a target for trastuzumab in breast cancer. A small randomised phase II trial for the addition of trastuzumab to paclitaxel/carboplatin compared with paclitaxel/carboplatin alone in stage III-IV or recurrent USC demonstrated a meaningful benefit for PFS [hazard ratio (HR) 0.46, P = 0.005] and overall survival (HR 0.58).The benefit for stage III-IV was greater than in recurrent disease. The cyclin-dependent kinase inhibitor palbociclib has shown superiority in combination with letrozole in previously treated patients with ER-positive disease in the phase II ENGOT EN3 PALEO trial, and the WEE1 inhibitor adavosertib has been investigated in heavily pre-treated patients with serous tumours. Future directions include immune checkpoint blockade strategies in combination with other targeted therapies, immunotherapeutic agents, chemotherapy and RT. 5 Follow-up, long-term implications and survivorship—recommendations 5a-e There is no evidence from randomised studies to support intensive, clinician-led, hospital-based, follow-up evaluations for patients with endometrial cancer and no consensus on what surveillance tests should be carried out. , Thus, clinical monitoring can be adjusted according to the risk factors of the patient. There was considerable discussion amongst the Asian experts about the frequency of follow-up appointments with no evidence of a survival benefit from intensive versus minimalist follow-up, even in high-risk patients, as demonstrated by the results of the European multicentre phase III TOTEM trial. Furthermore, the evidence showed that there was no need to add routine vaginal cytology, laboratory investigations or imaging to the minimalist follow-up strategies. Thus, ‘recommendation 5a’ was modified very slightly as per the bold text below. 5a. For low-risk endometrial cancer, the proposed surveillance is at least every 6 months for the first 2 years and then yearly until 5 years. A physical and gynaecological examination should be performed at each follow-up [V, C; consensus = 100%]. With regard to ‘recommendation 5b’ the experts were concerned that access to phone follow-up would be difficult in certain regions. Therefore, ‘recommendation 5b’ ( , available at https://doi.org/10.1016/j.esmoop.2022.100744 ) was reworded to: 5b. In the low-risk group, remote follow-up can be integrated in to hospital-based follow-up [II, B; consensus = 100%]. The Asian experts accepted ‘recommendations 5c, d and e’ below without change despite concern over the frequency/timing of follow-up in ‘recommendation 5c’. 5c. For the high-risk groups, physical and gynaecological examinations are recommended every 3 months for the first 3 years, and then every 6 months until 5 years [V, C]. 5d. A CT scan or PET–CT could be considered in the high-risk group, particularly if node extension was present [V, D]. 5e. Regular exercise, healthy diet and weight management should be promoted with all endometrial cancer survivors [II, B]. 
Availability of diagnostic tests, drugs and equipment Following the virtual face-to-face meeting hosted by ISMPO, the Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the adapted ESMO guidelines listed in . The drug and treatment availability for each of the 10 Asian countries is summarised in , available at https://doi.org/10.1016/j.esmoop.2022.100744 , and the ESMO-MCBSs for the different systemic therapy options and new therapy combinations for the treatment of endometrial cancer are presented in , available at https://doi.org/10.1016/j.esmoop.2022.100744 , and %%=%%=+%%=+ https://www.esmo.org/guidelines/esmo-mcbs/esmo-mcbs-scorecards?mcbs_score_cards_form5BsearchText5D%mcbs_score_cards_form%5Btumour-type%5D=Gynaecological+Malignancies&mcbs_score_cards_form%5Btumour-sub-type%5D=Endometrial+Cancer . There was only one area of discrepancy in terms of diagnostic tests, drugs and equipment. This was POLE hotspot mutation analysis and the lack of/limited availability of such analysis in five of the Asian countries represented at the meeting. Conclusions The results of voting by the Asian experts before ( , available at https://doi.org/10.1016/j.esmoop.2022.100744 ) and after the virtual/face-to-face working meeting showed >80% concordance with the ESMO recommendations for the treatment of patients with endometrial cancer. Following the virtual ‘face-to-face’ discussions, revisions were made to the wording of ‘recommendations 3e, 3f, 3j, 3l, 3q.4, 3r.3, 4a, 4k and 5b’ , and resulted in the achievement of 100% consensus for all the recommendations listed in . Thus, the recommendations detailed in can be considered the consensus clinical practice guidelines for the treatment of patients with endometrial cancer in Asia. As mentioned previously, the acceptance of each recommendation by each of the Asian experts was based on the available scientific evidence and was independent of the approval and reimbursement status of certain procedures and drugs in the individual Asian countries. A summary of the availability of the recommended treatment modalities and recommended drugs, as of July 2022, is presented for each participating Asian country in , available at https://doi.org/10.1016/j.esmoop.2022.100744 , and will impact on some management strategies that can be adopted by certain Asian countries.
Diagnosis, pathology and molecular biology—recommendations 1a-b Endometrial cancer is clinically a very heterogeneous malignancy for which the assignment of histological subtype, grade, disease extension and lymphovascular space invasion (LVSI) has been highly subjective, impacting on the accurate assessment of an individual patient’s risk of recurrence and metastasis, and therefore management. Furthermore, it has reduced the ability to accurately compare different clinical studies in terms of outcome due to uncertainty over the classification of patient risk. The traditional histopathological classification of Bokhman identified two types of endometrial cancer, type I [endometrioid, grade 1-2 (G1-2) with a favourable prognosis], ∼70% of cases, and type II (G3 endometrioid and non-endometrioid histologies with a poor prognosis), ∼30% of cases. There is general agreement, however, that endometrioid tumours should now be classified according to the International Federation of Gynecology and Obstetrics (FIGO) defined criteria, providing a two-tier grading system with G1 and G2 endometrioid tumours grouped together as low grade, and G3 tumours classified as high grade. Factors traditionally associated with a high risk of recurrent disease include histologic subtype, FIGO G3 histology, myometrial invasion ≥50%, LVSI, L1 cell adhesion molecule expression, lymph node metastases and tumour diameter >2 cm. However, the heterogeneity of endometrial cancer is due to an array of underlying molecular alterations. The results of The Cancer Genome Atlas (TCGA) analysis showed that the molecular diversity of endometrial cancer could be stratified into four distinct molecular subgroups ( , available at https://doi.org/10.1016/j.esmoop.2022.100744 ). The four molecular subgroups are: (i) patients with copy number stable, ultra-mutated endometrial cancers characterised by pathogenic variants in the exonuclease domain of DNA polymerase-epsilon (POLE), (ii) patients with hyper-mutated endometrial cancer characterised by microsatellite instability (MSI) due to dysfunctional/deficient mismatch repair genes (dMMR), (iii) an MMR-proficient, low somatic copy number aberration (SCNA) subgroup with a low mutational burden and (iv) a high SCNA subgroup with frequent TP53 mutations. Therefore, well-established immunohistochemical (IHC) staining techniques for the detection of p53 and MMR proteins (MLH1, PMS2, MSH2, MSH6) are now recommended as standard practice for all endometrial cancer pathology specimens, regardless of histological type, together with sequencing of the exonuclease domain of POLE if available. Patients presenting with either newly diagnosed or recurrent/metastatic endometrial cancer should have a biopsy to confirm histology and assess tumour molecular biology. These molecular classes are identified across all of the histological subtypes and correlate with endometrial cancer prognosis. Thus, molecular classification could facilitate more accurate comparison of clinical outcomes between different groups of patients. Furthermore, it could impact treatment considerations. Firstly, testing for MMR/MSI status serves not only as a screening test for Lynch syndrome, but also identifies patients with metastatic disease who could benefit from immune checkpoint blockade agents.
Secondly, the benefit of adjuvant chemotherapy is observed in patients with p53mut endometrial cancer, whilst the de-escalation of therapy in patients with POLE-mutated (POLEmut) endometrial cancer, which has a favourable outcome, is being investigated. Thirdly, the overexpression/gene amplification of human epidermal growth factor receptor 2 (HER2), which has been demonstrated in 20%-40% of type II non-endometrioid endometrial cancers, supports the use of HER2-targeted therapy in combination with chemotherapy. This combined treatment has also recently been shown to be an effective approach for patients with advanced and recurrent serous endometrial cancer. As a consequence, HER2 testing is now being proposed to guide the management of these patients. Endometrial cancers that have not been completely molecularly classified should be designated as endometrial cancers not-otherwise-specified and classified using the histology-based classification system. With improved tumour characterisation facilitated by more sophisticated diagnostic testing and molecular profiling, the diagnosis and management of patients with endometrial cancer is evolving towards a more objective, reproducible, personalised medicine approach. The algorithm for the diagnostic work-up of endometrial cancer proposed by ESMO and adapted from Vermij et al. 2020 is presented in . The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the ESMO recommendations on diagnosis, pathology and molecular biology ‘recommendations 1a-b’ below and in . However, they mentioned that POLE hotspot mutation analysis was not available as part of the standard molecular evaluation in many centres in Asia. 1a. Histological type, FIGO grade, myometrial invasion and LVSI (focal/substantial) should be described for all endometrial cancer pathology specimens [V, A]. 1b. Molecular classification through well-established IHC staining for p53 and MMR proteins (MLH1, PMS2, MSH2, MSH6) in combination with targeted tumour sequencing (POLE hotspot analysis) should be carried out for all endometrial cancer pathology specimens regardless of histological type [IV, A]. See , available at https://doi.org/10.1016/j.esmoop.2022.100744 , for hereditary endometrial cancer testing and surveillance.
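For readers implementing the above classification in reporting or decision-support software, the following minimal Python sketch illustrates ‘recommendation 1b’ as decision logic. It assumes the published order of precedence for class assignment (POLEmut, then dMMR, then p53abn, then NSMP, as in Vermij et al. 2020); all field and function names are hypothetical, and the sketch is illustrative rather than a validated clinical tool.

```python
# Illustrative sketch only -- not a clinical tool. Field names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TumourProfile:
    pole_hotspot_pathogenic: Optional[bool]  # None if POLE sequencing is unavailable
    mmr_deficient: bool                      # loss of MLH1/PMS2/MSH2/MSH6 on IHC
    p53_abnormal: bool                       # mutant-pattern p53 staining on IHC

def molecular_class(t: TumourProfile) -> str:
    """Assign a TCGA-style molecular class in the published order of
    precedence: POLEmut > dMMR > p53abn > NSMP (Vermij et al. 2020)."""
    if t.pole_hotspot_pathogenic is None:
        # POLE analysis unavailable (as noted above for many Asian centres):
        # fall back on the remaining markers and flag the result as incomplete.
        base = "dMMR" if t.mmr_deficient else ("p53abn" if t.p53_abnormal else "NSMP")
        return base + " (POLE status unknown; classification incomplete)"
    if t.pole_hotspot_pathogenic:
        return "POLEmut"
    if t.mmr_deficient:
        return "dMMR"
    if t.p53_abnormal:
        return "p53abn"
    return "NSMP"

print(molecular_class(TumourProfile(False, True, False)))  # -> dMMR
```

The unknown-POLE branch mirrors the situation described above for centres without access to POLE hotspot analysis, where classification remains incomplete and patients are managed on the other available risk information.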
Staging and risk assessment—recommendations 2a-c The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the ESMO recommendations on staging and risk assessment ‘recommendations 2a-c’ below and in . 2a. Obtaining an endometrial sample by biopsy or dilation and curettage (D & C) is an acceptable initial approach to the histological diagnosis of endometrial cancer [IV, A]. 2b. The preoperative work-up should include clinical and gynaecological examination, transvaginal ultrasound, pelvic magnetic resonance imaging (MRI), a full blood count and liver and renal function profiles [IV, B]. 2c. Additional imaging tests [e.g. abdominal and thoracic computed tomography (CT) scan and/or [18F]2-fluoro-2-deoxy-D-glucose–positron emission tomography (18FDG–PET)–CT] may be considered in those patients at high risk of extra-pelvic disease [IV, C].
Management of local and locoregional disease—recommendations 3a-u Surgery Early endometrial cancer is typically treated with surgery to remove the macroscopic disease and stage the tumour for planning with regard to adjuvant therapy. Traditionally, surgery for endometrial cancer was carried out via laparotomy until the results of two large, randomised trials showed minimally invasive laparoscopic techniques to have no negative impact on either staging or clinical outcomes. An algorithm for the surgical treatment and management of patients with stage I endometrial cancer is presented in . Preservation of fertility in younger patients with endometrial carcinoma should be considered when appropriate ( , available at https://doi.org/10.1016/j.esmoop.2022.100744 ). The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the ESMO recommendations 3a-d below and in , without change. 3a. Hysterectomy with bilateral salpingo-oophorectomy is the standard surgical procedure in early-stage endometrial cancer [I, A]. 3b. Minimally invasive surgery is the recommended approach in stage I (G1-G2) endometrial cancer [I, A]. 3c. Minimally invasive surgery may also be the preferred surgical approach in stage I G3 [II, A]. 3d. Ovarian preservation can be considered in premenopausal women with stage IA, G1 endometrioid-type endometrial cancer [IV, A]. The comment of the Taiwanese experts with respect to inclusion of sentinel lymph node sampling as part of the surgical procedure (recommendation 3a) is covered in recommendation 3e. However, some Asian experts did not accept ESMO ‘recommendations 3e and 3f’ because they did not reflect real-life clinical practice in their countries with respect to sentinel lymph node excision (SLNE), which is not available in many centres in Asia. Therefore, the original ‘recommendations 3e and 3f’ were modified, as per the bold text below and in . However, the consensus was that SLNE should be encouraged wherever possible, based on the evidence available from two studies, including in patients with deeply invasive endometrioid endometrial cancer, but not in patients with the more aggressive type II histology (see ‘recommendation 3g’ below). SLNE can be used for staging in patients with low- or intermediate-risk endometrial cancer and may represent an alternative to systematic lymphadenectomy (LNE) in high-intermediate- or high-risk stage I-II disease. The randomised Endometrial Cancer Lymphadenectomy Trial (ECLAT) is ongoing in patients with FIGO stage I and II disease with a high risk of recurrence, and should provide more evidence. 3e. SLNE can be considered as a strategy for nodal assessment in cases of low-risk/intermediate-risk endometrial cancer (e.g. stage IA, G1-G3 and stage IB, G1-G2) in experienced centres [II, A]. It can be omitted in cases without myometrial invasion. When SLNE is not available, lymphadenectomy (LNE) can be carried out in patients with stage IA G3 and stage IB disease [II, B; consensus = 100%]. 3f. Surgical lymph node staging should be carried out in patients with high-intermediate-risk/high-risk disease. Sentinel lymph node biopsy is an acceptable alternative to systematic LNE for lymph node staging in patients with high-intermediate/high-risk stage I-II endometrial cancer, when available and in centres with experience [III, B; consensus = 100%]. The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the ESMO ‘recommendations 3g and 3h’ below. 3g.
Full surgical staging including omentectomy, peritoneal biopsies and lymph node staging should be considered in serous endometrial cancers and carcinosarcomas [IV, B]. 3h. When feasible, and with acceptable morbidity, cytoreductive surgery to the maximal surgical extent should be considered in patients with stage III and IV disease [IV, B]. The risk groups for endometrial cancer are summarised in , available at https://doi.org/10.1016/j.esmoop.2022.100744 . Low-risk endometrial cancer There is no indication for the use of adjuvant therapy for the treatment of patients with low-risk endometrial cancer, due to a low risk of recurrence. Also, in the few patients in whom local recurrence does occur, it can be treated effectively with radiotherapy (RT). Combined analysis of cohorts from the PORTEC-1 and PORTEC-2 studies and other studies has shown the presence of a POLE mutation (POLEmut) to be a favourable indicator of prognosis, independently of other clinicopathological characteristics. As a consequence, patients with stage I-II endometrial cancer with POLEmut tumours are now classified as low risk and unlikely to benefit from adjuvant therapy. Omitting adjuvant therapy in patients with G3 POLEmut endometrial cancer may also be an option, although currently there are no robust data available. Higher-level evidence from a prospective registry study is likely to be available shortly together with data from a cohort of the RAINBO trial (NCT05255653). The planned cohorts for the TransPORTEC RAINBO programme of clinical trials aim to refine the adjuvant treatment of patients with endometrial cancer based on molecular profile including POLEmut status, dMMR, no specific molecular profile (NSMP) and abnormal p53 (p53abn). The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the ESMO ‘recommendation 3i’ below. 3i. For patients with stage IA (G1 and G2) endometrioid (dMMR and NSMP) type endometrial cancer with no or focal LVSI, adjuvant treatment is not recommended [I, E]. However, some of the Asian experts did not accept the ESMO ‘recommendations 3j, 3k and 3l’, which suggest the omission of adjuvant treatment, because there are few supporting data on the safety of omitting therapy. However, in relation to ‘recommendation 3k’ for patients with stage I-II POLEmut disease, there is encouraging, although limited, evidence regarding the omission of adjuvant therapy. When the POLEmut status of a tumour is unavailable, patients should be treated on the basis of the other available risk information. The current focus is on de-escalation of therapy in these patients, whenever possible. Thus, the wording of the original ‘recommendations 3j, 3k and 3l’ ( , available at https://doi.org/10.1016/j.esmoop.2022.100744 ) was revised, as per the bold text below and in , to reflect the concerns of the Asian experts, with 100% consensus. 3j. For patients with stage IA non-endometrioid-type endometrial cancer (and/or p53abn), without myometrial invasion and no or focal LVSI, there are not enough data to make a definitive recommendation regarding adjuvant treatment. Adjuvant therapy (chemotherapy and/or brachytherapy) or no adjuvant treatment may be discussed on a case-by-case basis in a multidisciplinary team environment [IV, C; consensus = 100%]. 3k. For patients with stage I-II POLEmut cancers, omission of adjuvant treatment should be considered [III, D; consensus = 100%]. 3l.
For patients with stage III POLEmut cancers, there is insufficient evidence on the need for adjuvant treatment. Enrolment in clinical trials, adjuvant therapy or no adjuvant therapy are reasonable options [III, C; consensus = 100%]. The adjuvant therapy options for low-risk disease are outlined in . Intermediate-risk endometrial cancer The PORTEC-1 and Gynaecology Oncology Group (GOG)-99 trials demonstrated the benefit of pelvic external beam RT (EBRT) after surgery in reducing locoregional recurrence in patients with intermediate-risk endometrial cancer. However, a Norwegian trial and an ASTEC study group trial showed that EBRT and vaginal brachytherapy (VBT) achieve similar results. The long-term results of the PORTEC-2 study showed VBT to result in excellent vaginal control in women with high-intermediate-risk endometrial cancer, with 10-year vaginal control above 96% in both arms. Although the risk of pelvic recurrence was significantly higher in the VBT group (6% versus 1%), no differences were found in 10-year rates for distant metastasis and overall survival. There were lower toxicity rates and better health-related quality of life among women who received VBT compared with EBRT. The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the ESMO ‘recommendations 3m, 3n and 3o’ below without change, after much discussion over the use of adjuvant RT. Adjuvant RT is not commonly used in Japan ( , available at https://doi.org/10.1016/j.esmoop.2022.100744 ), with chemotherapy being used as an alternative based on a study by the Japanese Gynecologic Oncology Group. The experts from China and Taiwan favoured EBRT ± VBT or EBRT alone, respectively, over VBT for stage II G1 endometrial cancer (‘recommendation 3o’). 3m. For patients with stage IA G3 endometrioid (dMMR or NSMP)-type endometrial cancer and no or focal LVSI, adjuvant VBT is recommended to decrease vaginal recurrence [I, A; consensus = 100%]. 3n. For patients with stage IB G1-G2 endometrioid (dMMR or NSMP)-type endometrial cancer and no or focal LVSI, adjuvant VBT is recommended to decrease vaginal recurrence [I, A; consensus = 100%]. 3o. For patients with stage II G1 endometrioid (dMMR or NSMP)-type endometrial cancer and no or focal LVSI, adjuvant VBT is recommended to decrease vaginal recurrence [II, B; consensus = 100%]. It was mentioned by the experts that molecular profiling was not available in certain regions of Asia. In such situations, patients should be treated according to their assessed risk of recurrence. The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) ‘recommendation 3p’ below without any change. 3p. Omission of adjuvant VBT can be considered (especially for patients aged <60 years) for all above stages, after patient counselling and with appropriate follow-up [III, C]. High-intermediate-risk endometrial cancer with lymph node staging (pN0) There was much discussion over the adjuvant treatment of this group of patients which includes those with stage IA and IB disease with substantial LVSI, stage IB G3 and stage II G1 disease with substantial LVSI and stage II G2-G3 (dMMR or NSMP) disease. The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the ESMO ‘recommendation 3q.1’ below, with the proposal from Taiwan that chemotherapy might be considered as an alternative. 3q.1. Adjuvant EBRT is recommended [I, A].
However, some of the Asian experts did not accept the ESMO ‘recommendations 3q.2, 3q.3 and 3q.4’ regarding adjuvant treatment. With regard to ‘recommendation 3q.2’, some of the experts considered that stronger evidence was needed for the benefit of the addition of chemotherapy, but accepted the recommendation without change based on the data from the PORTEC-3 trial. However, it was felt that the high incidence of short- and long-term side-effects associated with the addition of chemotherapy to EBRT, whilst conferring minimal benefit, needed to be discussed with these patients. 3q.2. Adding (concomitant and/or sequential) chemotherapy to EBRT could be considered, especially for G3 and/or substantial LVSI [II, C; consensus = 100%]. With regard to ‘recommendation 3q.3’, some of the experts considered that there was insufficient evidence to use the presence or absence of LVSI to decide the type of RT (VBT versus EBRT). In Korea, EBRT is used for G3 disease, except in those without LVSI. ‘Recommendation 3q.3’ was accepted completely by replacing ‘could be recommended’ with ‘could be considered’ as per the bold text below. 3q.3. Adjuvant VBT (instead of EBRT) could be considered to decrease vaginal recurrence, especially for those without substantial LVSI [II, B; consensus = 100%]. With regard to ‘recommendation 3q.4’, experts from 6 of the 10 Asian countries considered that adjuvant treatment should be recommended. Thus, the consensus was that the standard treatment for most patients should include adjuvant treatment. However, in highly selected patients (stage IA G1-G2), when close follow-up (every 3 months) is possible, adjuvant treatment may be withheld in consultation with the patient. Thus, the original ‘recommendation 3q.4’ was revised from: 3q.4. With close follow-up, omission of any adjuvant treatment is an option following shared decision making with the patient [IV, C], to read as the ‘recommendation 3q.4’ below with the new text highlighted in bold. 3q.4. Despite evidence of a benefit from adjuvant treatment, its omission is an option, when close follow-up can be ensured, following shared decision making with the patient [IV, C]. An algorithm for the treatment of these patients is presented in . High-intermediate-risk endometrial cancer without lymph node staging Again, there was much discussion over the adjuvant treatment of this group of patients which includes those with stage IA and IB disease with substantial LVSI, stage IB G3 and stage II G1 disease with substantial LVSI and stage II G2-G3 (dMMR or NSMP) disease. The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the ESMO ‘recommendation 3r.1’ below without change. 3r.1. Adjuvant EBRT is recommended [I, A]. With regard to ‘recommendation 3r.2’, experts from some Asian countries, despite the evidence from the PORTEC-1 trial in patients who had undergone primary surgery (without node dissection) and the PORTEC-3 trial, were of the opinion that concomitant treatment should be reserved for medically fit patients, but was the preferred option for patients with substantial LVSI. For patients with no initial lymph node dissection, carrying out a lymph node dissection is also an option, followed by tailored adjuvant treatment. ‘Recommendation 3r.2’ below was accepted without change with consideration to be given to the observations cited above. 3r.2.
Adding (concomitant and/or sequential) chemotherapy to EBRT could be considered especially for patients with substantial LVSI and G3 disease [II, C; consensus = 100%]. With regard to ‘recommendation 3r.3’, five of the Asian countries did not agree with the original recommendation, and it was generally accepted that in the absence of lymph node staging, EBRT should be considered. Thus, the original ‘recommendation 3r.3’ was revised from: 3r.3. Adjuvant VBT could be considered for IB G3 disease without substantial LVSI to decrease vaginal recurrence [II, B], to read as the ‘recommendation 3r.3’ below with the new text highlighted in bold text and the LoE and GoR changed from II, B to III, C. 3r.3. Adjuvant VBT followed by chemotherapy could be considered for patients with IB G3 disease without substantial LVSI, if EBRT is not feasible [III, C; consensus = 100%]. This recommendation is based on evidence from a subgroup analysis of the phase III GOG-249 trial of adjuvant pelvic RT versus VBT plus paclitaxel/carboplatin in high-intermediate- and high-risk early-stage endometrial cancer. Radiological evaluation, if not already carried out, should be done before using this option. An algorithm for the treatment of these patients is presented in . High-risk endometrial cancer There were differences amongst the Asian experts in terms of ‘acceptability’ with regard to ‘recommendations 3s, 3t and 3u’ (see , available at https://doi.org/10.1016/j.esmoop.2022.100744 ). There was much discussion over the adjuvant treatment of this group of patients, with some of the experts considering the therapy proposed in ‘recommendation 3s’ below to be too toxic for patients with endometrial cancer due to their age and comorbidities, although there are supporting data from the PORTEC-3 trial and the GOG trial for the benefits of combining chemotherapy with RT in this patient group. High-risk endometrial cancer patients include those with stage III-IVA cancers without residual disease regardless of histology and regardless of molecular subtype, or stage I-IVA p53abn with myometrial invasion, or non-endometrioid cancers without residual disease with myometrial invasion (see , available at https://doi.org/10.1016/j.esmoop.2022.100744 ). Carcinosarcomas (metaplastic dedifferentiated endometrial cancers) are also regarded as high risk and are commonly classified as p53abn. However, the Asian experts decided to accept completely the original ESMO ‘recommendation 3s’ below, without change, provided that patients are properly evaluated based on individual factors for this treatment. For patients with major comorbidities or for whom there is an unambiguous contraindication for chemotherapy, RT alone can be considered. 3s. Adjuvant EBRT with concurrent and adjuvant chemotherapy is recommended [I, A; consensus = 100%]. After discussion, the Asian experts also accepted ‘recommendations 3t and 3u’ without change. Extended field RT can be considered along with EBRT and chemotherapy for patients with para-aortic node disease. 3t. Sequential chemotherapy and RT can be used [I, B; consensus = 100%]. 3u. Chemotherapy alone is an alternative option [I, B; consensus = 100%]. However, concern was expressed over the use of chemotherapy alone (‘recommendation 3u’) because the data regarding comparable efficacy were inconsistent. Certainly, data from the PORTEC-3 trial showed the treatment effect to differ between the different molecular subgroups.
Poor-prognosis patients with p53abn endometrial cancer benefitted significantly from chemoradiotherapy (CRT) regardless of stage and histological subtype, whilst patients with POLEmut cancers achieved an excellent benefit with either RT or CRT. No benefit was observed for CRT over RT for patients with dMMR endometrial cancer, whilst a trend for benefit was observed in the NSMP subgroup. An algorithm for the treatment of these patients is presented in . For any patients with endometrial cancer who are medically unfit for surgery, by virtue of severe comorbidities, definitive RT is an option (see , available at https://doi.org/10.1016/j.esmoop.2022.100744 ).
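For readers encoding section 3 in audit or decision-support tooling, the following minimal Python sketch condenses the adjuvant-therapy recommendations above into a simple lookup keyed on the assessed risk group. The one-line summaries are deliberate simplifications of the recommendation text (the authoritative risk-group definitions are in the supplementary tables), and all names in the sketch are hypothetical.

```python
# Simplified, illustrative mapping of the adjuvant-therapy recommendations in
# section 3 -- a sketch, not a substitute for the full ESMO risk-group tables.
ADJUVANT_OPTIONS = {
    "low":               "No adjuvant treatment recommended [3i]; omission should "
                         "be considered for stage I-II POLEmut disease [3k].",
    "intermediate":      "Adjuvant VBT to decrease vaginal recurrence [3m-3o]; "
                         "omission possible after counselling, especially age <60 [3p].",
    "high-intermediate": "Adjuvant EBRT [3q.1/3r.1]; adding chemotherapy to EBRT "
                         "may be considered for G3 and/or substantial LVSI [3q.2/3r.2].",
    "high":              "EBRT with concurrent and adjuvant chemotherapy [3s]; "
                         "sequential chemotherapy and RT [3t] or chemotherapy "
                         "alone [3u] are options.",
}

def adjuvant_summary(risk_group: str) -> str:
    """Return the recommendation summary for an assessed risk group."""
    try:
        return ADJUVANT_OPTIONS[risk_group]
    except KeyError:
        raise ValueError(f"unknown risk group: {risk_group!r}")

print(adjuvant_summary("high-intermediate"))
```

As the PORTEC-3 subgroup data above make clear, any such lookup would in practice also need the molecular class (for example, p53abn favouring CRT and POLEmut favouring de-escalation), which is why the recommendations keep molecular classification upstream of adjuvant decisions.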
Recurrent/metastatic disease—recommendations 4a-m As stated previously, the outcomes in patients with recurrent and/or metastatic endometrial cancer are poor. The management of these patients should, wherever possible, involve a multidisciplinary team approach, treatment in specialised centres and the development of individualised treatment plans. Algorithms for the treatment of recurrent locoregional and metastatic disease are presented in and , respectively. Several factors influence the outcomes (local control and survival) in patients with recurrent and/or metastatic disease, including its site and extent (isolated vaginal or peritoneal involvement), size (<2 cm or ≥2 cm), histology and relapse-free survival (RFS). Isolated vaginal recurrence, lower grade, endometrioid histology and longer RFS are associated with a better prognosis. Additionally, prior treatment (surgery and/or RT) and the patient’s general condition also influence outcome. The Asian experts expressed concern over the omission of surgery from the ESMO ‘recommendation 4a’, and the recommendation of only VBT, which should be considered if there is isolated vaginal recurrence. Thus, ‘recommendation 4a’ was revised by inclusion of the text in bold below. 4a. For patients with locoregional recurrence following primary surgery alone, the preferred primary therapy should be EBRT with or without VBT, depending on the site of recurrence [IV, A; consensus = 100%]. It was discussed that surgery could be considered in selected patients in whom it is possible to achieve complete surgical resection in the absence of excessive morbidity, and that the use of VBT alone can be considered in the subgroup of patients with a small vaginal recurrence. ‘Recommendations 4b-e’ were accepted without change with the caveat that they may not be applicable in all cases, depending on extent of disease. 4b. Adding systemic therapy to salvage RT could be considered [IV, C; consensus = 100%]. 4c. For patients with recurrent disease following RT, surgery should be considered only if a complete debulking with acceptable morbidity is anticipated [IV, C; consensus = 100%]. 4d. Complementary systemic therapy after surgery could be considered [IV, C; consensus = 100%] (see ). 4e. The standard first-line chemotherapy treatment is carboplatin AUC 5-6 plus paclitaxel 175 mg/m2 every 21 days for six cycles [I, A; consensus = 100%]. In relation to ‘recommendation 4e’, there is no evidence of an increased benefit for >6 cycles of chemotherapy, but it was agreed that this could be considered on an individual basis. Some Asian experts did not agree with the original ‘recommendation 4f’ because hormone therapy is rarely offered as first-line systemic therapy in these patients. The experts agreed that chemotherapy is the first choice of treatment. Hormone therapy can be considered for patients with low-grade, low-volume disease who are not suitable for chemotherapy, dependent on knowledge of the hormone receptor status [estrogen receptor (ER) and progesterone receptor (PgR)] of the tumour at the time of treatment. However, the predictive value of hormone receptor expression in endometrial cancer is not as strong as it is for patients with breast cancer due to the limitations associated with a lack of standardisation of tissue processing and the absence of a well-defined cut-off for receptor levels. Furthermore, responses to hormone therapy have been reported in ER-/PgR-negative disease.
Thus, due to these concerns, the text of the original recommendation ‘recommendation 4f’ below was modified by the inclusion of the bold text. 4f. Hormone therapy could be considered as an option for front-line systemic therapy in patients with low-grade carcinomas of endometrioid histology with low-volume disease [III, A; consensus = 100%]. The Asian experts accepted without change ‘recommendations 4g, 4h and 4i’ below, despite some discussion and the removal of the dosing details for medroxyprogesterone acetate and megestrol acetate ( , available at https://doi.org/10.1016/j.esmoop.2022.100744 ) in ‘recommendation 4g’. Aromatase inhibitors and fulvestrant are alternative options with limited benefits. A phase II study of anastrozole in recurrent ER-/PgR-positive endometrial cancer (the PARAGON trial) showed a low objective response but a meaningful clinical benefit in 44% of patients. 4g. Progestins are the recommended agents [II, A; consensus = 100%]. 4h. Other options for hormonal therapies include aromatase inhibitors (AIs), tamoxifen and fulvestrant [III, C; consensus = 100%]. 4i. There is no standard of care for second-line chemotherapy. Doxorubicin and weekly paclitaxel are considered the most active therapies [IV, C; consensus = 100%]. The Asian experts queried ‘recommendation 4j’, but eventually accepted it without change with the provision that for patients with a long disease-free interval after prior chemotherapy, retreatment with further platinum-based treatment can also be considered, based on a retrospective analysis, when immune checkpoint inhibitor therapy is not available. After discussion, the GoR of ‘recommendation 4j’ was revised from B to A [ESCAT IA, ESMO-Magnitude of Clinical Benefit Scale (ESMO-MCBS) 3], as per the bold text below. 4j. Immune checkpoint blockade monotherapy should be considered after platinum-based therapy failure in patients with MSI-H/dMMR [III, A; consensus = 100%]. Immune checkpoint blockade alone or in combination with targeted therapies has emerged as a promising intervention in patients with recurrent endometrial cancer in view of a high mutational burden (dMMR/POLEmut subtypes), tumour-infiltrating lymphocytes and programmed cell death protein 1 (PD-1)/programmed death-ligand 1 (PD-L1) expression. Pembrolizumab, which targets PD-1, has been investigated in the endometrial cohorts of the KEYNOTE-158 trial in patients pre-treated with chemotherapy and with a short progression-free survival (PFS), and showed PD-1 blockade to be highly effective. Data from the GARNET trial with the anti-PD-1 monoclonal antibody dostarlimab, which blocks interaction with the programmed death ligands PD-L1 and -L2, have led to the approval of dostarlimab monotherapy by the Food and Drug Administration (FDA) in the United States to treat dMMR recurrent or advanced endometrial cancer that has progressed on platinum-containing regimens. Agents that target PD-L1 such as avelumab and durvalumab have also shown promising activity in patients with dMMR endometrial cancer, as well as atezolizumab and nivolumab (anti-PD-1). The phase Ib/II KEYNOTE-146 trial showed encouraging response, PFS and overall survival rates with the combination of pembrolizumab and the multi-kinase inhibitor lenvatinib, and the phase III KEYNOTE-775 trial demonstrated the statistically significant PFS (P < 0.0001) and overall survival (P < 0.0001) benefits of this combination compared with standard chemotherapy.
As a consequence, pembrolizumab in combination with lenvatinib has been approved by the FDA for patients with advanced endometrial cancer that is not MSI-high (MSI-H) or dMMR, who have disease progression following prior systemic therapy in any setting and are not candidates for curative surgery or RT. The European Medicines Agency (EMA) approved pembrolizumab in combination with lenvatinib for the treatment of advanced or recurrent endometrial cancer in patients who have disease progression on or following prior treatment with a platinum-containing regimen in any setting regardless of MMR status and who are not candidates for curative surgery or RT. However, due to the lack of availability of dostarlimab in 6 of the 10 Asian countries, the original ‘recommendation 4k’ was reworded from the original ESMO recommendation below, 4k. Dostarlimab has recently been approved by both the EMA and the FDA for this indication [III, B; ESMO-Magnitude of Clinical Benefit Scale (ESMO-MCBS) v1.1 score: 3], to read as follows: 4k. Dostarlimab can be considered in patients with dMMR or MSI-H recurrent or advanced endometrial cancer after failure of prior platinum-based chemotherapy and has recently been approved by both the EMA and the FDA for this indication [III, B; consensus = 100%; ESMO-Magnitude of Clinical Benefit Scale (ESMO-MCBS) v1.1 score: 3]. The Asian experts accepted completely without change (100% consensus) the original ESMO recommendations ‘recommendations 4l and 4m’ below and in . 4l. Pembrolizumab is FDA approved for the treatment of TMB-H solid tumours (as determined by the FoundationOne CDx assay) that have progressed following prior therapy for endometrial cancer [III, B; ESMO-MCBS v1.1 score: 3; not EMA approved]. 4m. Pembrolizumab with lenvatinib is approved by the EMA for endometrial cancer patients who have failed a previous platinum-based therapy, and who are not candidates for curative surgery or RT. FDA approval is for endometrial cancer patients whose tumours are not dMMR/MSI-H [I, A; ESMO-MCBS v1.1 score: 4]. Targeted therapy approaches are also being investigated in patients with endometrial cancer. Uterine serous carcinoma (USC) is an aggressive endometrial cancer subtype associated with a poor outcome. One-third of USCs overexpress HER2/Neu, a target for trastuzumab in breast cancer. A small randomised phase II trial of the addition of trastuzumab to paclitaxel/carboplatin compared with paclitaxel/carboplatin alone in stage III-IV or recurrent USC demonstrated a meaningful benefit for PFS [hazard ratio (HR) 0.46, P = 0.005] and overall survival (HR 0.58). The benefit for stage III-IV was greater than in recurrent disease. The cyclin-dependent kinase inhibitor palbociclib has shown superiority in combination with letrozole in previously treated patients with ER-positive disease in the phase II ENGOT EN3 PALEO trial, and the WEE1 inhibitor adavosertib has been investigated in heavily pre-treated patients with serous tumours. Future directions include immune checkpoint blockade strategies in combination with other targeted therapies, immunotherapeutic agents, chemotherapy and RT.
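To make the arithmetic behind ‘recommendation 4e’ explicit, the sketch below computes example doses using the Calvert formula for carboplatin (total dose in mg = target AUC × [GFR + 25]) and a Du Bois body-surface-area estimate for paclitaxel. The GFR cap of 125 ml/min reflects common dosing practice rather than anything stated in the recommendation, and the sketch is illustrative arithmetic only, not a prescribing tool.

```python
# Illustrative arithmetic for 'recommendation 4e' dosing -- a sketch, not a
# prescribing tool. Assumes the Calvert formula, a Du Bois BSA estimate and
# the common practice of capping GFR at 125 ml/min for AUC-based dosing.

def carboplatin_dose_mg(target_auc: float, gfr_ml_min: float) -> float:
    """Calvert formula: total dose (mg) = AUC x (GFR + 25)."""
    gfr = min(gfr_ml_min, 125.0)  # cap per common dosing practice (assumption)
    return target_auc * (gfr + 25.0)

def paclitaxel_dose_mg(height_cm: float, weight_kg: float,
                       mg_per_m2: float = 175.0) -> float:
    """Dose = 175 mg/m2 x body surface area (Du Bois formula)."""
    bsa_m2 = 0.007184 * (height_cm ** 0.725) * (weight_kg ** 0.425)
    return mg_per_m2 * bsa_m2

# Example: target AUC 5 with GFR 90 ml/min; a 160 cm / 65 kg patient,
# every 21 days for six cycles as per 'recommendation 4e'.
print(f"carboplatin ~{carboplatin_dose_mg(5, 90):.0f} mg per cycle")   # ~575 mg
print(f"paclitaxel  ~{paclitaxel_dose_mg(160, 65):.0f} mg per cycle")  # ~294 mg
```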
Follow-up, long-term implications and survivorship—recommendations 5a-e There is no evidence from randomised studies to support intensive, clinician-led, hospital-based follow-up evaluations for patients with endometrial cancer and no consensus on what surveillance tests should be carried out. Thus, clinical monitoring can be adjusted according to the risk factors of the patient. There was considerable discussion amongst the Asian experts about the frequency of follow-up appointments, with no evidence of a survival benefit from intensive versus minimalist follow-up, even in high-risk patients, as demonstrated by the results of the European multicentre phase III TOTEM trial. Furthermore, the evidence showed that there was no need to add routine vaginal cytology, laboratory investigations or imaging to the minimalist follow-up strategies. Thus, ‘recommendation 5a’ was modified very slightly as per the bold text below. 5a. For low-risk endometrial cancer, the proposed surveillance is at least every 6 months for the first 2 years and then yearly until 5 years. A physical and gynaecological examination should be performed at each follow-up [V, C; consensus = 100%]. With regard to ‘recommendation 5b’, the experts were concerned that access to phone follow-up would be difficult in certain regions. Therefore, ‘recommendation 5b’ (available at https://doi.org/10.1016/j.esmoop.2022.100744 ) was reworded to: 5b. In the low-risk group, remote follow-up can be integrated into hospital-based follow-up [II, B; consensus = 100%]. The Asian experts accepted ‘recommendations 5c, d and e’ below without change despite concern over the frequency/timing of follow-up in ‘recommendation 5c’. 5c. For the high-risk groups, physical and gynaecological examinations are recommended every 3 months for the first 3 years, and then every 6 months until 5 years [V, C]. 5d. A CT scan or PET–CT could be considered in the high-risk group, particularly if node extension was present [V, D]. 5e. Regular exercise, healthy diet and weight management should be promoted with all endometrial cancer survivors [II, B].
Following the virtual face-to-face meeting hosted by ISMPO, the Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the adapted ESMO guidelines listed in . The drug and treatment availability for each of the 10 Asian countries is summarised in , available at https://doi.org/10.1016/j.esmoop.2022.100744, and the ESMO-MCBS scores for the different systemic therapy options and new therapy combinations for the treatment of endometrial cancer are presented in , available at https://doi.org/10.1016/j.esmoop.2022.100744, and at https://www.esmo.org/guidelines/esmo-mcbs/esmo-mcbs-scorecards?mcbs_score_cards_form%5BsearchText%5D=&mcbs_score_cards_form%5Btumour-type%5D=Gynaecological+Malignancies&mcbs_score_cards_form%5Btumour-sub-type%5D=Endometrial+Cancer. There was only one area of discrepancy in terms of diagnostic tests, drugs and equipment. This was POLE hotspot mutation analysis and the lack of/limited availability of such analysis in five of the Asian countries represented at the meeting.
The results of voting by the Asian experts before (available at https://doi.org/10.1016/j.esmoop.2022.100744 ) and after the virtual/face-to-face working meeting showed >80% concordance with the ESMO recommendations for the treatment of patients with endometrial cancer. Following the virtual ‘face-to-face’ discussions, revisions were made to the wording of ‘recommendations 3e, 3f, 3j, 3l, 3q.4, 3r.3, 4a, 4k and 5b’, resulting in the achievement of 100% consensus for all the recommendations listed in . Thus, the recommendations detailed in can be considered the consensus clinical practice guidelines for the treatment of patients with endometrial cancer in Asia. As mentioned previously, the acceptance of each recommendation by each of the Asian experts was based on the available scientific evidence and was independent of the approval and reimbursement status of certain procedures and drugs in the individual Asian countries. A summary of the availability of the recommended treatment modalities and recommended drugs, as of July 2022, is presented for each participating Asian country in , available at https://doi.org/10.1016/j.esmoop.2022.100744, and will impact on some management strategies that can be adopted by certain Asian countries.
Geriatric assessment and the variance of treatment recommendations in geriatric patients with gastrointestinal cancer—a study in AIO oncologists | 548d58a9-8d38-4179-b31e-886254601835 | 10024156 | Internal Medicine[mh] | The frequency of malignant tumors increases with age. The median age of onset was ∼70 years in 2018 in Germany. Similarly, it has been projected that by 2030 70% of all cancers will be diagnosed in older adults in the United States. While the burden of malignant diseases in the group of elderly persons is significantly higher than in their younger counterparts, evidence-based data on the optimal treatment regimen are scarce. One of the main reasons for this lack of hard data is that most clinical trials are conducted in a population that is younger than that encountered in the real world. This phenomenon has been known for more than two decades. , , However, there are attempts and appeals from oncological societies such as the American Society of Clinical Oncology (ASCO), European Society of Medical Oncology (ESMO), European Organisation for Research and Treatment of Cancer (EORTC), International Society of Geriatric Oncology (SIOG) as well as state actors such as the National Institutes of Health (NIH) to promote the inclusion of older patients in larger trials or designing trials especially for this patient group. , , , , Aging is a very heterogeneous process and may lead to inter-individual differences and impairments , which may or may not be clinically apparent. While the association between the age by itself and an unfavorable outcome is unclear, the link between a decreased functional status and a poorer outcome has been well documented. , , , A strict cut-off from which age a patient is to be considered ‘old’ is not known. Generally, it is assumed that from the age of 65 years, patients have a higher risk of serious deficiencies, and therefore, a geriatric assessment (GA) is recommended for those patients. , , A GA consists of various validated tools that aid to identify areas that are typically impaired in older patients. , , While in geriatric departments a GA is carried out regularly, it is rarely used in elderly oncological patients. , Limitations in supporting staff, in training or knowledge about GA or lack of time are regarded as the most common barriers. To address the issues of a lack of time and support staff, one possible strategy is to first carry out a screening which may be followed up by a full GA if potential problems are detected. If carried out correctly, a GA may influence the choice or intensity of therapy , , , , , as well as predict toxicity, completion of therapy and mortality. , , , , , , However, there are also contradictory data from studies where primary endpoints such as prediction of toxicity, hospitalization, quality of life, completion of therapy, progression-free and overall survival were not significantly improved by a GA. , , It is a problem in clinical practice that the information generated by a GA is not always used or acted upon. , This may also be due to a lack of training or research results. More data on the clinical ramification of GA are needed. 
To investigate treatment recommendations in elderly patients and the contribution of a GA, we asked medical oncologists to participate in a study on treatment recommendations for gastrointestinal (GI) cancer patients. Participants gave their recommendations in three steps: according to tumor findings alone (the ‘tumor board’ view of an otherwise healthy, younger person), after having seen a video sequence of the patient (simulating a clinical consultation) and after the GA results had been disclosed. Information on the demographics of participating oncologists and practice characteristics was collected.
Survey development and deployment Patients with GI tumors were recruited for this study at University Hospital Carl Gustav Carus, Dresden, Germany, between September 2018 and August 2019. After written informed consent was obtained, a GA with commonly used screening instruments [Barthel Index (BI), Cumulative Illness Rating Scale (CIRS), Geriatric 8 (G8), Geriatric Depression Scale (GDS), Mini Mental Status Examination (MMSE), Mini-Nutritional Assessment (MNA), Timed Get Up and Go (TGUG), EORTC Quality of Life Questionnaire-C30 (QLQ-C30), reviewed by SIOG] was carried out, as well as a stair climb test (SCT). Results of interventions based on this geriatric screening were not part of the study. A video of the patient walking into the consultation room, including a short dialog to sum up their medical history, was recorded, and the video sequences were shortened to ∼2 min. The 10 most complex patient cases were selected to be presented in a survey to German-speaking oncologists at an oncological convention in Dresden/Radebeul, at the annual AIO meeting in Berlin and in a web-based survey among AIO members between September 2019 and March 2020. Each patient was presented to the oncologists in three steps. The time to complete the survey showing two different patients was ∼15-20 min. In the first step, a tumor board situation was simulated that should result in a guideline-based treatment recommendation for younger patients without comorbidities. For that purpose, the participants were asked to imagine a 50-year-old patient for whom the cancer stage, histology, grading, immunohistochemistry, molecular biology, examinations (e.g. endoscopy) and relevant imaging were presented. Participants were asked to recommend or advise against several options of therapeutic regimens using a slider on a range of 0-100, similar to a visual analog scale ( A). In the second step, a clinical consultation of an elderly patient with the same tumor was simulated. In addition to the information already known from step 1, the actual patient age, comorbidities, medication and laboratory values were shown together with the video of the patient simulating a consultation situation. Participants were asked to make a treatment recommendation on the same scales as in step 1 ( B). In the third step, the results of the GA were disclosed. The normal values of each instrument and a short interpretation aid were provided to the participants for reference. This step was used to simulate a setting of optimized care for elderly patients including the GA. Finally, participants were asked to make a treatment recommendation again ( C). Before viewing the case stories, participants were asked their opinion regarding the geriatric tools used in the survey, their use in clinical practice and whether the instruments were regarded as meaningful in general ( D). After completing two case vignettes, participants were asked to rate the same tools with regard to their usefulness in the choice of therapy for the two patients. An option to rate two additional patients was given. Furthermore, the specialization of the participants, years of experience and place of work (hospital, outpatient unit, private oncologists) were part of the questionnaire. Data analysis A descriptive statistical analysis was conducted for the responses to survey questions such as the participants’ background, rating of geriatric tools and recommendation of therapeutic regimens. Data visualization was carried out via column charts and radar plots.
In the radar plot, the mean and the standard deviation for each individual recommendation (i.e. no chemotherapy, regimen 1, regimen 2, radio-chemotherapy) are plotted on the different axes per patient. To analyze the agreement between the treatment recommendations, we calculated the variance (the squared standard deviation of each recommendation). Differences were considered significant if P < 0.05 without correction for multiple testing. Ethics All patients had given informed consent to use their data and videos in the survey. The responsible ethics committee at the Technical University Dresden approved the study before initiation.
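To make the agreement analysis described above concrete, the following is a minimal sketch of how the per-option variances and their means per step could be computed. The tabular layout, column names and toy values are hypothetical illustrations, not the authors' actual code or data.

```python
import pandas as pd

# Hypothetical layout: one row per participant x patient x step x therapeutic
# option, holding the 0-100 slider value given by that participant.
df = pd.DataFrame({
    "participant":    [1, 2, 3, 1, 2, 3],
    "patient":        [1, 1, 1, 1, 1, 1],
    "step":           [1, 1, 1, 2, 2, 2],
    "option":         ["regimen 1"] * 6,
    "recommendation": [80, 85, 78, 90, 40, 65],
})

# Variance across participants for each single therapeutic option
# (ddof=1 gives the squared sample standard deviation, as in the Methods).
per_option_var = (
    df.groupby(["step", "patient", "option"])["recommendation"]
      .var(ddof=1)
)

# Mean of the per-option variances per step: the agreement measure reported
# in the Results (e.g. 602 in step 1 versus 944 in step 2); higher values
# indicate lower agreement between the participating oncologists.
print(per_option_var.groupby("step").mean())
```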
Patients The median age of the 10 patients featured in the vignettes was 77.5 years (range 73-84 years). Eight patients were male and two were female. Patients’ malignancies included adenocarcinoma of the gastroesophageal junction (4), pancreatic (3), colorectal (2) and gastric cancers (1). The treatment situation was neoadjuvant (5), adjuvant (4) and palliative (3). Oncologists Of the 76 participants, 6 were excluded due to an ineligible specialization (surgery, gynecology, neuro-oncology, pneumology and others). The majority were board-certified specialists (91%) with working experience as specialists of >6 years (80%). Most participants were hemato-/oncologists. Regarding the place of work, 33% of participants worked in a hospital ward, 33% in hospital-based outpatient units and 29% in a private medical office. Among the hospital-based participants, most worked in larger hospitals (>800 beds, 81%) and were employed as senior physicians (60%). The participants rated the geriatric scores on a visual analog scale. The mean rating for using the geriatric scores in clinical practice was 19.4 (±28.4). The mean for regarding the scores as generally meaningful was 52.5 (±32.2), and the mean for the question of whether the GA was helpful in deciding a therapeutic regimen for the vignettes was 48.7 (±30.5). The differences were significant (P < 0.0001). The same trend was also observed for the results of the individual geriatric tools. ‘Use in clinical practice’ was always rated significantly lower than the ratings as ‘meaningful’ or ‘helpful in the presented case’. Most of the geriatric tools were regarded as being slightly less useful in the case vignettes than they were regarded to be meaningful in general. These differences were only significant for MMSE, MNA and SCT. The TGUG and EORTC QLQ-C30 symptom scale were rated as slightly more useful than meaningful, although these differences were not significant. As described in the Materials and methods section, the patient vignettes were presented in three steps. Data from 70 participants who had given a total of 164 recommendations were analyzed. The figure shows a combined radar plot of the recommended therapeutic regimens. In the first graph, step 1 (50 years old, no comorbidities, cross-sectional imaging, stage of disease) and step 2 (actual age, video, comorbidities, medication, laboratory results) are shown. In the second graph, steps 2 and 3 (elderly patients without or with results of GA) are shown. The individual data for all patient cases are provided in the Supplementary Material, available at https://doi.org/10.1016/j.esmoop.2022.100761. As visualized in the overview graph, large differences can be observed between step 1 and step 2 for the individual therapeutic options of a given patient. These differences were significant in 24 out of 48 therapeutic options. Between steps 2 and 3, the differences were less pronounced and significant in only 4 out of 48 therapeutic options. As a parameter of the agreement between the recommendations of therapeutic regimens, we analyzed the variance of the treatment recommendations for each option. This variance was significantly smaller in step 1 than in step 2, expressing a higher agreement in the standard situation than in elderly patients [mean of variances 602 (step 1) versus 944 (step 2), P < 0.0001]. The variance of step 3 (940) was only slightly lower than in step 2 (P = 0.92), indicating that the agreement between the recommendations was not higher with known results of GA.
To investigate whether GA results had a higher impact in decision making for more frail patients, we additionally divided the cases according to the result of the GA into two groups (upper and lower half of results). This stratified analysis showed no consistent trends in the change of variance (data not shown). Furthermore, a subgroup analysis according to the demographics of participants was carried out. In step 1, participants working as private oncologists had a higher variance than participants based in a hospital. In step 2 and step 3, participants working in larger hospitals (>800 beds) and specialists with >6 years of experience showed a slightly lower variance than participants in smaller hospitals or specialists with <6 years of experience. However, these differences were not statistically significant.
In this study of geriatric patients with GI tumors and comorbidities, we found, not surprisingly, different treatment recommendations for elderly versus younger patients. However, the variance in the treatment recommendations for elderly patients was substantial and did not differ significantly after the disclosure of GA results. By providing all physicians with the same information on a patient, we could rule out the wider range of health status in elderly patients as the explanation for the higher variance in treatment recommendations. Because not all treatment options can be considered equally ‘right’, we hypothesize that it is substantially less likely for elderly patients to be treated with the optimal regimen. We had expected that the additional information of a GA, by providing more standardized information, would lead to more homogeneous treatment recommendations and to reduced variability, but this was clearly not observed. Interestingly, the variance in treatment recommendations was not markedly decreased even among more experienced oncologists, which one might have expected. Other subgroup analyses by demographics and workplace showed small differences in the variances between participant groups. However, these trends were neither significant nor consistent. Therefore, we hypothesize that a patient is not (much) more likely to receive the ‘right’ treatment recommendation from a more experienced physician or in a particular setting. This is of concern and underlines that a more structured way of approaching elderly patients is urgently warranted, although it contrasts with the fact that the GA results ultimately had very limited influence on the simulated treatment recommendations. The variance in the recommendation of treatment for a younger, 50-year-old patient without comorbidities (step 1) was not negligible, probably due to some controversies regarding the optimal treatment even for non-elderly patients, e.g. regarding the best neoadjuvant therapy for carcinoma of the gastroesophageal junction. However, the variance was significantly higher for older patients with chronic illnesses (944 versus 602, step 2 versus step 1). This is in line with the growing body of data showing that the variability in treatment recommendations is greater in geriatric patients than in younger individuals, and it expresses the uncertainty of the oncologists regarding the optimal treatment. While for younger patients without comorbidities a multitude of high-quality studies and guidelines are available, leading to a stronger consensus on the choice of therapy, the increased variance for elderly patients with comorbidities might be driven by the lack of evidence-based data on the best treatment. The underrepresentation of elderly patients in clinical trials contributes to this problem. The highly selected elderly patients actually enrolled in standard trials limit the generalizability of the results of subgroup analyses, and trials focusing on elderly patients are rare. As a result, the theoretical knowledge on the optimal treatment decision is limited for the elderly population. The geriatric screening tools used in our survey were developed to identify vulnerabilities that might not be captured in the routine assessment. Two recently published randomized trials have demonstrated that a comprehensive intervention following GA decreased toxicity without influencing survival. In addition, there are instruments to estimate chemotherapy risks, such as CARG or CRASH.
Although these tools allow a better prediction of risks and have been proven to increase the tolerability of treatment, disease-specific randomized trials on the best treatment option based on a standardized assessment are rare or lacking for most situations. This leaves the physician alone with the therapeutic dilemma of weighing the risks and benefits of adhering more closely to the general treatment guideline, to avoid undertreatment, against the need to avoid unnecessary toxicity. Some strategies include dose reduction, for which there is evidence, e.g. in the palliative treatment of gastric and colon cancer, and adjusting the therapy based on individual tolerability. While this strategy might be appropriate in palliative therapy, there are clear limitations in perioperative or adjuvant settings. The perception that the GA has limited answers to this question might be one explanation for the fact that geriatric instruments were not used regularly. Even the most popular tools, MNA, BI and SCT (the last of which is not part of the SIOG-reviewed tests), were used by only 30%, 28% and 27% of participants, respectively, and because of social desirability bias, these proportions might even be overestimated. This is in line with a recent survey among ASCO members reporting the use of formally validated tools by 29% of respondents. Despite the strong recommendation by medical associations to carry out a GA for potentially vulnerable elderly patients, the implementation into clinical practice is still lacking. To overcome a potential lack of experience with the results of the GA, an interpretation aid was given in our survey. It is promising, however, that even though they were not used regularly, the geriatric tools were regarded as meaningful in general and as helpful in the vignettes by roughly half of the participants. To lower the threshold for using a GA, visual aids like those used in our study may be beneficial. There are also limitations to this study. The sample size of 70 participants might be regarded as relatively small but is related to the relatively long survey. With German as the language of communication, the survey was restricted to German speakers, potentially limiting the generalizability of the results. With a clear focus on geriatric patients, it is possible that participation was biased toward physicians with a special interest in geriatric oncology. There are also limitations in the study design. To avoid additional complexity in the treatment recommendation, only different protocols were provided, with no additional option for dose reduction, which may be applied in clinical practice for older or frail patients and has been successfully validated for some malignancies and stages. Furthermore, we did not provide the results of the CARG or CRASH tools that can be used to estimate chemotherapy risks. While the format of vignettes has been validated for surveying physicians and has already been successfully deployed in the geriatric oncology setting, vignettes remain a simulation. Hence, the results should be extrapolated carefully to clinical practice. The clear advantage of the survey situation is the standardization of the questions and the exclusion of patient factors as reasons for the variability. In conclusion, the variance of recommended therapeutic regimens was significantly higher for elderly patients with comorbidities than for younger patients without comorbidities, indicating a lower likelihood of receiving the optimal treatment.
The additional information of a GA did not influence the variance significantly. While a GA is recommended for elderly patients and can identify potential problems, it was not used routinely by most oncologists. Further efforts to promote GA and its implementation into clinical practice should include recommendations for daily clinical practice and, in clinical trials, subgroup analyses based on a standardized evaluation with GA tools.
Association of Primary Care Physicians’ Individual- and Community-Level Characteristics With Contraceptive Service Provision to Medicaid Beneficiaries | eab3d5da-c144-475a-b341-6a43b0d421a3 | 10024198 | Gynaecology[mh] | Contraceptive care is a critical component of comprehensive health care that helps individuals achieve their preferred family size and birth spacing and is associated with improved health outcomes. However, this care is not uniformly accessible across the US. Disparities exist for vulnerable populations due to barriers such as lack of insurance coverage and scarcity of practitioners offering contraception care. , Medicaid, which insures more than 87 million individuals, mostly populations with disabilities and low income, is one of the largest payers of contraception care. While it is known that the income, insurance coverage, and socioeconomic status of an individual are associated with contraceptive use, little is known about the factors associated with physicians choosing to provide contraception care, especially to Medicaid beneficiaries. Medicaid beneficiaries often struggle to find physicians willing to see them. Nationally, approximately one-third of primary care physicians do not accept Medicaid. Medicaid beneficiaries’ wait times for appointments are longer than those of private insurance beneficiaries. , These workforce barriers further interact with other impediments in contraception access. First, there is little consistency across states in terms of which type of contraceptive care is covered under Medicaid. While family planning services are a mandatory benefit through Medicaid, there are few specifications on what services should be covered under them. Second, contraception access differs due to the variation in Medicaid eligibility requirements. In Medicaid expansion states, a larger group of beneficiaries has coverage than in nonexpansion states. Some states additionally expand eligibility for family planning services through Section 1115 Medicaid waivers or state plan amendments. These family planning expansions increase the beneficiary pool as well as types of contraceptive services available to beneficiaries. Thus, workforce, structural, and policy factors may either hinder or facilitate Medicaid beneficiaries’ access to contraception care. As a previous study by some of us found, clinicians from a variety of specialties and professions provide multiple types of contraception care in the US, and the rates of Medicaid acceptance differ both by clinician specialty and by state. However, other studies , on the contraception workforce relied on surveys or other self-reported data, which could potentially introduce reporting errors. Additionally, past studies have focused on a limited sample or subset of clinicians (eg, those practicing in a single state or from a single specialty). As a result, to date, how primary care physicians provide contraceptive care to Medicaid populations is not yet known. Furthermore, it is also not clear whether and how individual-level physician characteristics and community-level factors are associated with Medicaid contraceptive care participation. The objective of this study was to describe the primary care physician workforce that provides contraceptive services to Medicaid beneficiaries. Using multiple data sources, we aimed to comprehensively analyze this workforce and explore physician- and community-level factors associated with Medicaid contraceptive service participation. 
This cross-sectional study was conducted from August 1 to October 10, 2022. We followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline. This study was approved by The George Washington University institutional review board. Because this is a study of secondary data, it did not need informed consent. We did not collect data from participants but used previously collected data from medical claims and other sources. Data The main data sources for this study were the 2016 Transformed Medicaid Statistical Information System (T-MSIS) Other Services, Pharmacy, and Annual Provider files (release 2). We identified contraceptive service provision among physicians using a modified set of Current Procedural Terminology codes and National Drug Codes released by the Office of Population Affairs. We used National Provider Identifiers in T-MSIS to merge T-MSIS claims data with the National Plan and Provider Enumeration System data set (January 2017). This enabled us to identify the provider type (an individual or an organization) of all active physicians. We obtained information on physicians’ sex, age, specialty, and type of medical degree they received from the American Medical Association Masterfile data set. We used data on physicians’ county-level socioeconomic factors from the Area Health Resources Files and the US Census Bureau. Data from the Kaiser Family Foundation were used to determine the Medicaid expansion status of individual states in 2016. Sample We limited the sample to Medicaid-participating physicians practicing in primary care specialties, including family medicine, general internal medicine, general pediatrics, and obstetrics and gynecology (OBGYN). We restricted the analysis to physicians from states that did not have any data quality issues in the 2016 T-MSIS. We excluded data from 6 states (Arkansas, Florida, Maine, Minnesota, Pennsylvania, and Rhode Island) and Washington, DC, and included data from the remaining 44 states and Puerto Rico (eTable 6). We excluded physicians for whom data on their demographic characteristics were missing. To account for the differences in patient panels of physicians practicing in different specialties, we excluded physicians who did not see any reproductive-age (15-44 years) female Medicaid beneficiaries in 2016. The analytic sample included 251 107 physicians who saw at least 1 Medicaid beneficiary in 2016 and did not have data missing on any study measure (eFigure 1). Measures We included outcome measures for 2 sets of contraceptive services: (1) intrauterine devices (IUDs) or implants and (2) hormonal birth control methods, including a pill, patch, or ring (hereafter referred to as hormonal contraception). Overall, we used 4 physician-level outcome measures: (1) whether a physician provided IUDs or implants to at least 1 Medicaid beneficiary in 2016, (2) whether a physician prescribed hormonal contraception to at least 1 Medicaid beneficiary, (3) the total number of beneficiaries provided IUDs or implants, and (4) the total number of beneficiaries prescribed hormonal contraception. eTable 1 gives a list of codes used to identify contraceptive services. In multivariate analysis, we controlled for several physician-level characteristics, including their age, sex, and type of medical degree received and whether they were an international medical graduate (IMG). We assigned a state to each physician in the sample based on the number of Medicaid claims submitted.
For example, if a physician was located in state A according to the National Plan and Provider Enumeration System but submitted most of their claims to the Medicaid program of state B, for the purpose of this analysis, we assigned state B as that physician’s state. A total of 18 978 physicians in the sample were reassigned states based on the number of Medicaid claims submitted (eFigures 4 and 5). Similarly, we assigned a county to each physician based on the highest number of claims from a county. Next, we used the socioeconomic characteristics of the physician’s county, including the percentage of the population with low income, and whether the county was rural (based on the Rural-Urban Continuum Codes classification scheme). As an indirect measure of the demand for contraceptive services, we controlled for the population of women between ages 15 and 44 years in a physician’s county. We also used county-level measures for the populations that belonged to certain racial and ethnic groups since these can potentially influence contraceptive choice and provision. Race and ethnicity categories included American Indian or Alaska Native, Asian, Black, Hispanic and White; details were previously reported. Finally, we controlled for the Medicaid expansion status of the physician’s state and whether the state had a Medicaid family planning waiver. Statistical Analysis We began with descriptive analysis of the sample. Next, we used multivariate logistic regression models to evaluate the first and second outcomes (providing each type of contraception method to at least 1 Medicaid beneficiary) and multivariate negative binomial regression models to assess the third and fourth outcomes (the total number of beneficiaries provided each type of contraceptive method), controlling for physician- and county-level characteristics. Since it is known from past analyses that contraceptive provision patterns differ substantially among specialties, we analyzed physicians from each specialty separately. Adjusted odds ratios (ORs) for logistic regressions and average marginal effects (AMEs) for negative binomial regressions, along with 95% CIs for coefficients from both models, were calculated. Standard errors were clustered at the state level to account for heteroskedasticity. A 2-sided significance threshold of P = .025 was used. Stata MP, version 17 (StataCorp LLC) was used to conduct all analyses.
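As an illustration of how the physician-level measures could be derived from claims, the following is a minimal sketch under simplifying assumptions: the column names are hypothetical, and the contraceptive service type is assumed to have already been mapped from the CPT and NDC codes; the actual T-MSIS processing is considerably more involved. The sketch flags whether a physician provided each service to at least 1 beneficiary, counts distinct beneficiaries per service, and assigns each physician the state accounting for most of their claims.

```python
import pandas as pd

# Hypothetical pre-processed claims: one row per claim, with the service
# type already mapped from CPT/NDC codes (per eTable 1).
claims = pd.DataFrame({
    "npi":         ["A", "A", "A", "B", "B"],
    "beneficiary": ["p1", "p2", "p2", "p3", "p4"],
    "service":     ["iud_implant", "hormonal", "hormonal",
                    "hormonal", "hormonal"],
    "state":       ["VA", "VA", "MD", "MD", "MD"],
})

# Outcomes 3 and 4: distinct beneficiaries per physician and service.
n_benes = (
    claims.groupby(["npi", "service"])["beneficiary"]
          .nunique()
          .unstack(fill_value=0)
)

# Outcomes 1 and 2: provided the service to at least 1 beneficiary.
any_provision = (n_benes > 0).astype(int)

# Assign each physician the state with the largest number of claims.
assigned_state = (
    claims.groupby(["npi", "state"]).size()
          .groupby("npi").idxmax()   # (npi, state) pair with the max count
          .str[1]                    # keep only the state component
)
print(n_benes, any_provision, assigned_state, sep="\n\n")
```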
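The modeling step described above can be sketched with Python's statsmodels as a stand-in for the Stata analysis the authors report; the variable names and the simulated data frame below are purely illustrative assumptions. The sketch fits a logistic model for any provision and a negative binomial model for the beneficiary count, clusters standard errors by state, and reports odds ratios and average marginal effects.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "any_iud": rng.integers(0, 2, n),   # outcome 1 (binary)
    "n_benes": rng.poisson(2, n),       # outcome 3 (count)
    "female":  rng.integers(0, 2, n),
    "img":     rng.integers(0, 2, n),
    "rural":   rng.integers(0, 2, n),
    "state":   rng.integers(0, 44, n),  # cluster variable
})

# Logistic regression with state-clustered standard errors -> adjusted ORs.
logit = smf.logit("any_iud ~ female + img + rural", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["state"]}, disp=False)
print(np.exp(logit.params))             # exponentiated coefficients = ORs

# Negative binomial regression for the count outcome -> AMEs.
nb = smf.negativebinomial("n_benes ~ female + img + rural", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["state"]}, disp=False)
print(nb.get_margeff(at="overall").summary())  # average marginal effects
```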
Descriptive Results The table describes demographic and other characteristics of the study sample. Among the sample of 251 017 physicians, the mean (SD) age was 49.17 (12.58) years; 46% were female, and 54% were male. A total of 9% graduated from schools of osteopathic medicine, 28% were IMGs, 6% practiced in rural areas, and 70% practiced in a state that had expanded Medicaid in 2016. About one-third (34%) of the sample were family medicine physicians, 34% were internal medicine physicians, 14% were OBGYN physicians, and 17% were pediatricians. The number of physicians prescribing hormonal contraceptives (121 167 [48%]) was approximately 5 times the number of physicians providing IUDs or implants (25 115 [10%]) (eTables 2 and 3). Obstetrics and gynecology physicians accounted for nearly two-thirds (16 481 of 25 115 [66%]) of the overall IUD- or implant-providing physicians, and family medicine physicians were the next-largest group of physicians (7262 [29%]) in this category (eTable 2). For each type of contraception service, family medicine and OBGYN physicians were the 2 main specialties with substantially higher mean numbers of beneficiaries treated. Results From Multivariate Regression Analysis Adjusted ORs from multivariate logistic regressions showed that while female family medicine physicians (OR, 1.95; 95% CI, 1.68-2.26) had higher odds of providing IUDs or implants to at least 1 Medicaid beneficiary, the same was not true for other specialties. Family medicine graduates from osteopathic medical schools (OR, 0.53; 95% CI, 0.45-0.62) had lower odds of providing IUDs or implants compared with allopathy school graduates. While family medicine IMGs had lower odds of providing IUDs or implants (OR, 0.44; 95% CI, 0.36-0.55), this pattern was not observed for other specialties. Compared with physicians younger than 35 years, family physicians in most other age groups had lower odds of providing IUDs or implants (45-54 years: OR, 0.66 [95% CI, 0.55-0.80]; 55-64 years: OR, 0.51 [95% CI, 0.39-0.65]; and 65 years or older: OR, 0.29 [95% CI, 0.19-0.44]). However, the same age groups from all other specialties had higher odds of prescribing IUDs or implants. For OBGYN physicians, compared with being younger than 35 years, being aged 35 to 44 years (OR, 3.51; 95% CI, 2.93-4.21), 45 to 54 years (OR, 3.01; 95% CI, 2.43-3.72), or 55 to 64 years (OR, 2.27; 95% CI, 1.82-2.83) was associated with higher odds of providing IUDs and implants.
Practicing in a rural area was significantly negatively associated with providing IUDs or implants for both OBGYN physicians (OR, 0.49; 95% CI, 0.38-0.62) and pediatricians (OR, 0.56; 95% CI, 0.36-0.88). In terms of providing IUDs or implants, the proportion of a physician’s county population that was Black was associated with lower odds for family medicine physicians (OR, 0.98; 95% CI, 0.96-1.00), and the proportion of a physician’s county that was Asian was associated with lower odds for OBGYN physicians (OR, 0.96; 95% CI, 0.94-0.99). Conversely, the proportion of the county population that had income below the poverty line was associated with higher odds of internal medicine physicians (OR, 1.08; 95% CI, 1.02-1.14) and pediatricians (OR, 1.08; 95% CI, 1.02-1.13) providing IUDs or implants. Medicaid expansion and family planning waiver status of a physician’s state were associated with lower odds of IUD or implant provision for internal medicine physicians (Medicaid expansion: OR, 0.19 [95% CI, 0.05-0.63]; waiver status: OR, 0.12 [95% CI, 0.02-0.80]), while Medicaid expansion was associated with lower odds for pediatricians (OR, 0.28; 95% CI, 0.08-0.95). Similar trends were observed for prescribing hormonal contraception to at least 1 Medicaid beneficiary. For physicians from all specialties, being female was generally associated with higher odds of prescribing hormonal contraception to Medicaid beneficiaries. For all specialties except OBGYN, graduating from osteopathic schools and being an IMG were associated with lower odds of prescribing hormonal contraception to at least 1 Medicaid beneficiary. Except for those specializing in OBGYN, being an IMG was associated with lower odds of providing hormonal contraception (family medicine IMGs: OR, 0.80 [95% CI, 0.73-0.88]; internal medicine IMGs: OR, 0.85 [95% CI, 0.77-0.93]; and pediatric IMGs: OR, 0.85 [95% CI, 0.78-0.93]). Family physicians in all age groups older than 35 years had lower odds of prescribing hormonal contraception, but the same age groups among pediatricians and internal medicine physicians had higher odds of prescribing hormonal contraception. Rural OBGYN physicians (OR, 0.60; 95% CI, 0.48-0.76) had lower odds of prescribing hormonal contraception to Medicaid beneficiaries, but the reverse was true of internal medicine physicians (OR, 1.54; 95% CI, 1.27-1.88). The percentage of the population that was below the poverty line was associated with somewhat higher odds of physicians from all specialties prescribing hormonal contraception. The Medicaid expansion status of a state was not associated with this outcome for OBGYN physicians and pediatricians but was significantly associated for family medicine (OR, 1.50; 95% CI, 1.06-2.12) and internal medicine (OR, 1.71; 95% CI, 1.18-2.48) physicians. Average marginal effects from regression models for outcomes associated with the total number of beneficiaries who were provided IUDs or implants and prescribed hormonal contraception showed that among OBGYN physicians, being a female was associated with having approximately 2 fewer beneficiaries (−2.10 beneficiaries; 95% CI, −3.28 to −0.91 beneficiaries) provided IUDs or implants and approximately 5 fewer beneficiaries (−5.32 beneficiaries; 95% CI, −7.48 to −3.15 beneficiaries) prescribed hormonal contraception.
In contrast, being a female physician had differing directions of associations with family medicine physicians providing IUDs or implants (AME, 0.66 beneficiaries; 95% CI, 0.42-0.91 beneficiaries) and prescribing hormonal contraception (AME, 2.90 beneficiaries; 95% CI, 2.26-3.55 beneficiaries). For OBGYN physicians, practicing in a rural county was associated with having approximately 7 fewer beneficiaries (−7.27 beneficiaries; 95% CI, −10.15 to −4.38 beneficiaries) prescribed hormonal contraception and 4 fewer beneficiaries (−3.91 beneficiaries; 95% CI, −5.35 to −2.48 beneficiaries) provided with IUDs or implants. However, family medicine physicians in rural areas prescribed hormonal contraception to 1.44 additional beneficiaries (95% CI, 0.36-2.51 beneficiaries). State Medicaid expansion by 2016 or prior was significantly positively associated with having 12.17 additional beneficiaries (95% CI, 6.95-17.38 beneficiaries) provided hormonal contraception by OBGYN physicians and 1.92 additional beneficiaries (95% CI, 0.69-3.16 beneficiaries) provided hormonal contraception by family medicine physicians.
In contrast, being a female physician had differing directions of associations with family medicine physicians providing IUDs or implants (AME, 0.66 beneficiaries; 95% CI, 0.42-0.91 beneficiaries) and prescribing hormonal contraception (AME, 2.90 beneficiaries; 95% CI, 2.26-3.55 beneficiaries). For OBGYN physicians, practicing in a rural county was associated with having approximately 7 fewer beneficiaries (−7.27 beneficiaries; 95% CI, −10.15 to −4.38 beneficiaries) prescribed hormonal contraception and 4 fewer beneficiaries (−3.91 beneficiaries; 95% CI, −5.35 to −2.48 beneficiaries) provided with IUDs or implants. However, family medicine physicians in rural areas prescribed hormonal contraception to 1.44 additional beneficiaries (95% CI, 0.36-2.51 beneficiaries). State Medicaid expansion by 2016 or prior was significantly positively associated with having 12.17 additional beneficiaries (95% CI, 6.95-17.38 beneficiaries) provided hormonal contraception by OBGYN physicians and 1.92 additional beneficiaries (95% CI, 0.69-3.16 beneficiaries) provided hormonal contraception by family medicine physicians.

In this cross-sectional study of the Medicaid contraceptive care workforce in 2016, we found that physician characteristics, including age, sex, specialty, medical training, and rural location, as well as the socioeconomic conditions of a physician's county, were associated with both providing any contraceptive care and the total number of beneficiaries provided contraceptive care. Results from the descriptive analysis provide first glimpses, to our knowledge, of how physicians engage in reproductive health-related services in state Medicaid programs. Of the physicians in the sample, 48% prescribed hormonal birth control methods, while 10% provided IUDs or implants. A previous report showed that about 5% of women aged 21 to 44 years who were covered by Medicaid and at risk of pregnancy received IUDs or implants, and about 25% of this group received hormonal contraception. While patient preferences may have influenced some of these differences, there was a large proportion of Medicaid-participating primary care clinicians who provided hormonal contraception but not IUDs or implants. For Medicaid beneficiaries seeking these services, a primary care clinician who needs to refer them to someone else can be an additional barrier to access. We found wide variation in contraception provision by individual specialties. These differences reflect the variations in specialties' target populations and their revenue reliance on Medicaid. However, these findings also suggest that there is opportunity for increased engagement in contraceptive services by certain specialties. It is important to note that the demand for OBGYN physicians is projected to outpace supply as early as 2031. Thus, increasing the provision of contraceptive services across all primary care specialties will be important to meet the demand for these services in the future. Results from regression analyses showed that female physicians from most specialties had higher odds of providing contraceptive services and provided these services to a higher number of Medicaid beneficiaries. This is consistent with prior literature on general Medicaid participation of primary care physicians. It may also reflect patient preference for female physicians, owing to the perception that female physicians have personal knowledge of contraception and to patients' higher comfort levels discussing contraception with female physicians.
International medical graduates from nearly all specialties except OBGYN had lower odds of providing contraceptive services, but the proportion of IMGs among OBGYN physicians is a fraction of that in other specialties. In addition, compared with their colleagues trained in US medical schools, IMGs are more likely to practice in areas that have physician shortages, and they generally see a higher proportion of Medicaid beneficiaries. Policy makers should take note of these findings since the growth in the number of IMGs has outpaced that of US medical graduates in recent years. Younger physicians had higher odds of providing both hormonal contraception and IUDs or implants except in the case of family medicine physicians. It is possible that younger physicians have more knowledge and training and are less likely to hold negative beliefs about modern contraceptive methods such as IUDs and implants. Studies have shown that younger physicians are more likely to initiate conversations about contraceptive care with their patients and are more likely to offer IUDs and implants. Finally, as younger physicians are more likely to see Medicaid patients, our results may reflect the additive impact of these issues. Several community-level factors were associated with contraceptive service provision. The percentage of the population that was below the poverty line was associated with somewhat higher odds of physicians from all specialties prescribing hormonal contraception. Family medicine and OBGYN physicians in counties with a higher percentage of Black individuals had lower odds of prescribing hormonal contraception. These findings could be related to patient preferences. However, given the increasing evidence of clinician biases in health care, these findings need further research to understand the underlying mechanisms. Finally, we found that state policy characteristics were associated with contraceptive service provision. Belonging to a state that expanded Medicaid before 2016 was associated with significantly higher odds of prescribing hormonal contraception for physicians from certain specialties. Having a Medicaid family planning waiver in a state was generally not associated with physicians' contraceptive service provision. A previous study showed that nearly one-third of women in newly Medicaid-eligible populations received contraceptive services. Since Medicaid expansion was not associated with an increase in the odds of physicians' providing IUDs or implants, this may suggest an additional barrier related to these contraception services. Understanding contraception provision by physicians has become more important in the context of recent developments in state policies on access to abortion. Over the next few years, medical students' training on this subject is expected to be constrained by state laws on abortions. In the future, physicians may have to optimize how they prescribe contraception (by prescribing it for longer durations), assess the possibility of contraception failure among its users, and recommend the use of tools such as app-based reminders or alarms. Additionally, since a relatively small proportion of physicians provide contraceptives, more physicians and medical students may have to undergo training in effective, long-acting, reversible contraception methods such as IUDs. Many states' recent expansion of Medicaid coverage for up to 1 year post partum will make contraception more accessible for a large group of beneficiaries.
Early evidence showed that such coverage expansion in Texas was associated with substantially higher utilization of contraceptive services. This may also positively impact physicians' contraception provision in such states.

Limitations

There are several limitations to our analysis. First, there are limitations of the data used. Substantial variation existed in the quality of T-MSIS data submitted by states. We used data from 44 states and Puerto Rico (eTables 4 and 5 and eFigures 2 and 3), for which programs covered approximately 86% of all Medicaid beneficiaries in 2016. We used data from 2016, which are somewhat dated. Due to data quality issues, we did not use information about physicians' patient panels. We did not analyze any claims that appeared exclusively in the T-MSIS inpatient file since such claims do not identify the individual practitioner who provided the service. We may have therefore missed the provision of some contraceptive services. We did not control for the percentage of a county's population that belonged to the "other" race category. This category may have included those who self-reported as belonging to more than 1 race (≥2). Second, our analysis did not include any information about the practices in which physicians operated. Several practice-level factors, such as ownership structure, size, and location, may impact physicians' willingness to provide contraceptive care to Medicaid beneficiaries. Third, our method of assigning states to physicians could have influenced the findings on state policy characteristics. However, we reassigned states of only 8% (18 978 of 251 017) of physicians in the sample (eFigures 4 and 5). Finally, we did not include nonphysicians (nurse practitioners and physician assistants) in our analysis since we did not have complete information about their individual-level characteristics.
This cross-sectional study offers, to our knowledge, the first national-level assessment of how individual physician- and community-level characteristics are associated with contraceptive service provision to Medicaid beneficiaries. We found that physician characteristics, including age, sex, specialty, medical training, and rural location, and the socioeconomic conditions of a physician's county were associated with both providing any contraceptive care and the total number of beneficiaries provided contraceptive care. These findings varied across clinical specialties; thus, policies tailored for different physician types are essential to ensure that Medicaid beneficiaries have access to contraception.
High Satisfaction with Patient-Centered Telemedicine for Hepatitis C Virus Delivered to Substance Users: A Mixed-Methods Study

Access to satisfactory health care can be challenging, particularly for vulnerable populations. Telemedicine, two-way interaction between a patient and a provider separated geographically, may circumvent these obstacles. Recent evidence-based systematic reviews have reported that telemedicine-based clinical outcomes are at least equivalent to or better than in-person care. Patient satisfaction with telemedicine, especially when targeted to vulnerable populations, including people with opioid use disorder (PWOUD), remains largely undefined. According to the Institute of Medicine, high-quality care is safe, efficient, timely, patient centered, and equitable. For telemedicine to achieve this designation, especially when targeted to PWOUD, patient satisfaction, patient centeredness, and equitability must be prioritized. Specifically, how does substituting in-person interactions with telemedicine affect empathy conveyed during health care encounters? What attributes among the PWOUD population might improve satisfaction with telemedicine? PWOUD have the highest hepatitis C virus (HCV) incidence and prevalence. Referral to a liver specialist has been the conventional HCV management strategy. Due to stigma and other competing priorities, however, many HCV-infected PWOUD elect not to pursue HCV treatment when referred. Consequently, PWOUD access to curative HCV therapy remains limited. Telemedicine integrated into the nonstigmatizing environment within opioid treatment programs (OTPs) has been shown to be a promising HCV treatment delivery modality. Furthermore, PWOUD appear to prefer the convenience and accessibility of telemedicine encounters compared with offsite referral. As PWOUD typically consider OTPs comfortable and familiar environments with reduced stigma compared with conventional health care delivery sites, these sentiments may translate into high satisfaction with telemedicine. We conducted a mixed-methods study to assess PWOUD satisfaction with health care delivery among individuals who had successfully completed HCV treatment, either through offsite referral to an HCV provider or through telemedicine encounters situated onsite in the OTP. We initially administered the Patient Satisfaction Questionnaire (PSQ) at two time points, and we subsequently conducted interviews to explore participants' experiences of facilitated telemedicine. The insights learned through this investigation may have broad applicability to achieving high satisfaction with telemedicine encounters targeted to vulnerable populations.
STUDY DESCRIPTION

All study participants included in this analysis are part of an ongoing stepped wedge cluster randomized controlled trial that is comparing the HCV cure rates among PWOUD treated through telemedicine conducted onsite in OTPs with offsite referral. The study was approved by the University at Buffalo Institutional Review Board (IRB) and the IRB at each study site. The analysis we conducted is "as treated," meaning that all 344 participants provided PSQ scores at both time points with no missing values. PWOUD who obtained treatment for HCV infection either onsite in one of 12 participating OTPs in New York State or through offsite referral completed the PSQ at the initial and last provider encounters. All study participants had to be actively enrolled in one of the OTPs for at least 6 months before assessment of study eligibility and had to be HCV antibody and HCV RNA positive. Potential study participants were referred by OTP staff to study-supported case managers (CMs), who then conducted all screening activities. All telemedicine encounters were facilitated by CMs and occurred entirely within the OTP. The CM situated the participant in the area designated for telemedicine and addressed all telemedicine-associated technical issues. We also sought to maximize telemedicine encounter quality by using sponsor financial support to provide uniform wide-screen computers with high-quality cameras, microphones, and speakers, which were distributed to all sites. As part of the study eligibility determination process, all potential participants underwent serological testing for HIV and hepatitis B virus (HBV). Any HIV- or HBV-infected participants were treated for HCV according to HCV treatment guidelines. Participants were treated with direct-acting antivirals for 2–3 months, followed by 3 months to assess for viral elimination. The telemedicine providers, who were gastroenterologists/hepatologists or advanced practice providers working under the direction of the hepatologist, directed the care of cirrhotic patients, with local referrals for radiologic or endoscopic procedures as appropriate. For a complete description of the trial, please see Talal et al. At the initial study visit, participants provided information about demographics, living arrangements, comorbid conditions, and socioeconomic status. They also completed the Drug Abuse Screening Test (DAST-10) and the National Institute on Drug Abuse Quick Screen to provide information on substance use history. The DAST was also administered at the last time point. We utilized a mixed-methods approach guided by the theory of pragmatism, which combines quantitative and qualitative approaches to analyze data. We used an Explanatory Sequential mixed-methods design, initially assessing participants' satisfaction with health care delivery by questionnaire and subsequently interviewing PWOUD for enhanced understanding and context. Pragmatism, as an underlying theory for mixed-methods research, supports pluralism in research methodology.

PSQ ADMINISTRATION AND SCORING

We utilized the short-form PSQ (Modified PSQ-18), which is composed of 18 questions distributed into seven subscales (Supplementary Table S1). We modified the PSQ-18 for HCV care and subsequently piloted it with a population diverse in race/ethnicity and literacy level to ensure comprehension.
The outcome corresponds to the score for each participant per time point and is calculated as the average of all 18 questions answered, subsequently rounded to the nearest integer (see Section 1 in Supplementary Data). Overall and subscale PSQ outcome results are presented in Supplementary Table S2.

MODELING

The patient satisfaction response scores, originally recorded on a 5-point scale, are modeled using a partial proportional odds model (see Section 2 in Supplementary Data). We fit the cumulative model for ordinal data, using each participant's average score per time point, as illustrated in Supplementary Data. Model covariates are presented in Supplementary Table S3. We included demographic covariates, such as race, ethnicity, gender, and age, that have been shown to be important determinants of telemedicine encounter completion and satisfaction. We also assessed socioeconomic and health-related covariates we previously identified as promoting satisfaction with telemedicine among PWOUD. Due to limited data on telemedicine satisfaction in PWOUD, we assume the effects of the covariates time, arm, age, gender, highest level of education, combined monthly income, residence type, and comorbid conditions on PSQ scores to be the same across scoring categories. In contrast, the covariates race and ethnicity are not assumed to have the same effect across scoring categories. Generalized Linear Mixed Models permit ordinal outcomes that are not normally distributed and account for repeated measurements.

PARTICIPANT INTERVIEWS AND QUALITATIVE ANALYSIS

We used purposive sampling to obtain a representative sample of interviewees from the 238 telemedicine participants, consistent with the study design for hermeneutic studies. As our goal was to understand the experiences and to explicate common meanings of PWOUD undergoing HCV care through telemedicine integrated into an OTP, we interviewed participants who were referred by CMs, OTP staff, or members of the sites' patient advisory committees. After obtaining informed consent to conduct the interviews, we explored participants' experiences with HCV treatment through telemedicine using open-ended questions to maximize participant elaboration. We used the hermeneutic phenomenological research approach to understand patients' common meanings of HCV treatment integrated in an OTP (see Supplementary Figure S1 and Section 3 in Supplementary Data).

CONSTRUCTION OF WEIGHTS

We utilized NVivo (QSR International, Burlington, MA) to determine the frequency with which specific codes or words were mentioned by participants and calculated the term frequency (tf) as well as the inverse document frequency (idf). The idf is the natural logarithm of the number of documents (i.e., N = 25) divided by the number of documents containing the code/word. We then calculated a normalized weight factor (WF, tf-idf), which indicates the code/word's importance on a (0, 1) scale within the interviews. In the case that there is more than one subtheme, the average WF is computed (see Section 3 in Supplementary Data).
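As a rough illustration of the weighting just described, the Python sketch below computes tf-idf weight factors for interview codes. The exact term-frequency definition (total mentions across transcripts) and the normalization onto a (0, 1) scale (division by the maximum) are assumptions on our part; the study's precise procedure is given in its supplementary data.

```python
import math

def tf_idf_weight_factors(code_counts_per_doc):
    """code_counts_per_doc: one dict per interview transcript, mapping a
    code/word to its mention count in that transcript. Returns normalized
    weight factors (WF) on a (0, 1) scale."""
    n_docs = len(code_counts_per_doc)          # N = 25 interviews in this study
    codes = {c for doc in code_counts_per_doc for c in doc}
    raw = {}
    for code in codes:
        tf = sum(doc.get(code, 0) for doc in code_counts_per_doc)        # total mentions (assumed)
        doc_freq = sum(1 for doc in code_counts_per_doc if code in doc)  # documents containing the code
        idf = math.log(n_docs / doc_freq)                                # natural logarithm, as described
        raw[code] = tf * idf
    top = max(raw.values()) if raw else 1.0
    return {code: (w / top if top else 0.0) for code, w in raw.items()}  # max-normalization (assumed)
```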
POPULATION DESCRIPTION

We analyzed study participant responses to the PSQ at both time points (344 in total, 106 in the referral and 238 in the telemedicine arms). The mean age was 48 ± 13 years; most participants were male (63.95%), Non-Hispanic (71.80%), and White (52.03%). Most participants used illicit drugs (61%) and resided in a private residence (85.76%). Approximately one-third (38.08%) did not have, or were unsure whether they had, a comorbid condition. Most participants (40.70%) had attended high school or obtained an equivalency degree, and one-third (35.47%) were in the highest category of monthly income.

HIGH SATISFACTION WITH TELEMEDICINE ENCOUNTERS, EQUIVALENT TO IN-PERSON ENCOUNTERS

Overall health care satisfaction was rated high (i.e., 96.2% [scores ≥4 at timepoint 1] and 96.5% [scores ≥4 at timepoint 2]) among all study participants (Supplementary Table S2). At the second timepoint, an ∼10% shift in scores occurred, an increase by one point (i.e., 4–5) in comparison with the initial timepoint. Less than 2% of patients were dissatisfied or highly dissatisfied (i.e., scored values 1 or 2) overall or on any of the subscales per timepoint.

ATTRIBUTES OF TELEMEDICINE SATISFACTION FROM PARTICIPANT INTERVIEWS

We interviewed 25 telemedicine study participants to understand the factors (communication about the study, trust, and patient-centered care) that led to high satisfaction with telemedicine. Through Communicating information promoting study enrollment and retention (Theme 1), participants discussed the importance of communication and transparency with OTP and study staff. "Every time I asked a question, they answered." Participants' desire for HCV education and support enabled them to overcome skepticism and to accept HCV treatment and follow-up through telemedicine. "I know for myself that in the black community, we're skeptical about a lot of things medical, very skeptical." Communication promotes Gaining trust in the OTP (Theme 2). Participants described meanings such as the trust that emanates from the venue and the providers. Trust was able to mitigate anxiety toward telemedicine encounters and alleviate privacy, confidentiality, and security concerns. As one participant indicated, "The atmosphere in the clinic, they're very confidential." Over time, participants became more comfortable with telemedicine and described Realizing advantages of patient-centered HCV care (Theme 3). Participants recognized the tangible advantages of ready access to HCV providers. They also recognized the convenience of collocated HCV and opioid use disorder (OUD) treatment. Individuals with questionable adherence due to active addiction especially valued integrated HCV and OUD care. "I would absolutely recommend it, especially if … a lot of addicts can be like me, where they don't want to go to hospitals, they don't want to sit in doctors' offices." Participants also appreciated how an HCV cure is integral to substance use recovery.

ATTRIBUTES OF HEALTH CARE DELIVERY SATISFACTION AT THE ENCOUNTER LEVEL

Participant interviews provided insight into attributes that increased telemedicine satisfaction over the course of the entire study. We next sought to investigate the specific attributes associated with health care delivery satisfaction at the encounter level.
The evaluation of the overall PSQ and subscale scores revealed that the three codes with the highest mention frequencies and weight factors were "Time Spent with Doctor," "General Satisfaction," and "Interpersonal Manner." Less frequently mentioned were "Technical Quality" and "Accessibility and Convenience." These results suggest that study participants valued trust and empathy over technical aspects or accessibility and convenience.

CHANGES IN ATTRIBUTES OF SATISFACTION WITH HEALTH CARE DELIVERY OVER TIME

When evaluating the changes in individual PSQ subscales between timepoints 1 and 2, we noted substantial improvements at timepoint 2 in "General Satisfaction," "Time Spent with Doctor," and "Accessibility and Convenience" (see Supplementary Figure S2 and Section 4 in Supplementary Data). When adjusting for covariates, we observed that overall satisfaction improved significantly (p = 0.0015, 95% confidence interval [CI]: −5.2618 to −1.2488) comparing the last and the initial timepoints. The time coefficient was −0.7155, indicating that participants at the second timepoint had a higher probability of assigning scores in the higher patient satisfaction categories in comparison with the first timepoint. Significant differences between males and females were observed (p = 0.0368, 95% CI: −1.4299 to −0.0454). The coefficient of gender was −0.7376, indicating that female participants had a higher probability of assigning scores in the higher patient satisfaction categories in comparison with male participants. There were also two intercept terms corresponding to the two cumulative logits defined on the score categories (scores 1–3 vs. scores 4 or 5, and scores 1–3 or 4 vs. score 5, respectively), with respective p-values 0.0015 and 0.0303 and respective estimates −3.2553 and 2.0105. These results indicate that the participants were more likely to assign higher scores (4 or 5) than lower scores (1–3) and less likely to assign the highest score (5) compared with the other scores (1–3 or 4). The nested patient-level random effect was significant (p < 0.05), and the intraclass correlation coefficient was 0.5858. Thus, individual satisfaction scores varied across study sites, indicating site-to-site differences.

PARTICIPANT SUGGESTIONS REGARDING IMPROVEMENTS IN TELEMEDICINE DELIVERY

Some participants recommended publicity to promote participation in telemedicine. "I think it needs advertisement to let people know." Additional recommendations included providing provider contact information and education specifically targeted to individuals skeptical about medical technology (Theme 1). Furthermore, OTP and study staff played critical roles in initial engagement and retention in HCV treatment. "I was getting it [HCV treatment] here [OTP clinic]; it helped me to finish it" (Theme 2).
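To make the cumulative logit modeling reported above concrete, here is a minimal, hypothetical Python sketch with invented data. Note the hedge: statsmodels' OrderedModel fits a plain proportional-odds model, whereas the study fitted a partial proportional odds model with a nested patient-level random effect, which would require dedicated mixed-model software (for example, R's ordinal::clmm or SAS PROC GLIMMIX).

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical long-format data: one row per participant per time point.
rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "psq_score": rng.integers(3, 6, n),   # rounded PSQ score on the 1-5 scale
    "time": rng.integers(0, 2, n),        # 0 = first encounter, 1 = last
    "female": rng.integers(0, 2, n),
    "age": rng.normal(48, 13, n),
})

# Plain cumulative (proportional-odds) logit: two threshold (intercept)
# terms separate the three observed score categories, mirroring the two
# cumulative logits described in the results.
res = OrderedModel(df["psq_score"], df[["time", "female", "age"]],
                   distr="logit").fit(method="bfgs", disp=False)
print(res.summary())
```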
Participants in our investigation were equally satisfied with the facilitated telemedicine model and referral for HCV management. Based upon PSQ scores and participant interviews, we observed that satisfaction with health care delivery increased over time among telemedicine and referral participants. Specific attributes that improved PWOUD satisfaction with telemedicine were communication and education about the study, HCV, and telemedicine, which promoted study participation and retention. Study-supported CMs were essential in addressing participants' competing priorities, assuaging their concerns, and answering their questions, all of which promoted satisfaction with telemedicine. Participants indicated that communication promotes trust in the OTP and, by extension, in the telemedicine encounters and providers. They further explained that trust in telemedicine as a health care delivery modality potentiates the provision of patient-centered HCV care. Substitution of in-person encounters with telemedicine had minimal effect on empathy. This observation is based upon scores on the two relevant PSQ subscales, time spent with the doctor and interpersonal manner. Females were significantly more satisfied with health care delivery than males. We understood that situating telemedicine encounters in the OTP promotes participant confidence in the security and confidentiality of the health care delivery modality. The combination of a trusting environment and an empathetic provider appears to promote telemedicine acceptance by PWOUD. In our study, health care delivery through telemedicine adds value without compromising quality, as others have recently recommended. OTP clinical staff were available to review the patient's history, perform physical examinations, and answer questions. These actions reinforced connectivity with the telemedicine provider. Onsite phlebotomy facilitated data acquisition, a necessity since PWOUD rarely adhere to offsite laboratory referral. HCV treatment through telemedicine also increased visit adherence compared with usual care, consistent with a recent study that reported 50% fewer "no shows" for telemedicine patients compared with in-person evaluations. The facilitated telemedicine model also increases value by treating OUD and HCV simultaneously, dispensing HCV medications with methadone. Contemporaneous HCV and OUD treatment has recently been shown to increase medication adherence, retention in care, and treatment effectiveness. The cumulative effect of these interventions is to increase satisfaction with telemedicine. Telemedicine satisfaction and accessibility require entry points that are safe, equitable, and patient centered. Our facilitated telemedicine model appears to decrease health care disparities, as others have suggested and consistent with data from a recent study among persons experiencing homelessness. Another recent study illustrated high telehealth satisfaction among rural residents along the U.S.-Mexican border, where telehealth enabled substance users to receive primary care during the COVID-19 pandemic. Colocating all telemedicine encounters in OTPs ensured adequate broadband strength and leveraged their familiar and destigmatizing environments. Frequent in-person attendance requirements for methadone treatment offer communication opportunities and promote encounter and medication adherence.
We learned that explanation of study procedures and of security and confidentiality safeguards promoted and reinforced comfort with digital technology, consistent with the American College of Physicians' guidelines. Furthermore, participants indicated that education delivered by CMs in a culturally and literacy-appropriate manner can mitigate skepticism toward HCV and telemedicine, as recommended by others. Our results are also consistent with a recent study showing that telehealth familiarity can increase telemedicine completion rates. As the deployment and appropriate operation of telemedicine equipment for all individuals may be infeasible, additional research is needed to evaluate methods of utilizing telemedicine creatively to decrease health care disparities. A facilitated telemedicine approach may be helpful in certain situations. The use of mixed-methods methodology is a study strength. Patient interviews provided contextual understanding of the facilitated telemedicine experience, in which participants felt comfortable in the familiar OTP setting. Participant response weighting strengthened and quantified the importance of identified themes and codes. Participants noted that they felt connected to the telemedicine provider, and they valued behaviors designed to express empathy, as recommended by others. In terms of limitations, we only measured patient satisfaction at two time points, we interviewed only telemedicine participants, and we had unequal numbers of telemedicine and usual care participants. Furthermore, additional research should assess the generalizability of the facilitated telemedicine model to other venues, particularly those outside of New York State. For example, Medicare now reimburses providers for telemedicine examinations conducted in people's homes, as has been suggested for OUD treatment. We also noted site-to-site differences in participant satisfaction through modeling, and ongoing work is investigating the reasons for these site-specific differences.
PWOUD satisfaction with telemedicine is equivalent to in-person care when delivered from destigmatized, familiar sites by empathetic providers. Our facilitated telemedicine model using familiar staff as facilitators augments quality, adds value, and achieves high patient satisfaction. Participants experienced provider empathy virtually and developed trust over time. We also leveraged the accessibility and convenience of the familiar and comfortable OTP environment. Our findings of high satisfaction with telemedicine health care delivery are consistent with others who report high satisfaction with video visits across a variety of gastroenterology conditions not necessarily targeted to vulnerable populations. Future work should investigate whether the model is generalizable to other venues and situations where telemedicine can deliver highly satisfactory health care that simultaneously augments quality and adds value.
Laboratory evaluation of the miniature direct-on-blood PCR nucleic acid lateral flow immunoassay (mini-dbPCR-NALFIA), a simplified molecular diagnostic test for Plasmodium

Correct and timely diagnosis of malaria is key in the management and control of this disease. Traditionally, microscopy of Giemsa-stained thick and thin blood films has been the standard diagnostic technique applied in endemic settings. Although it is able to differentiate the causative Plasmodium species, its sensitivity for low parasite densities is limited and adequate slide reading requires extensive training and experience. The development of rapid diagnostic tests (RDTs) has brought a fast and easy-to-use alternative for malaria diagnosis. Since their introduction, RDTs have proven to be an essential tool for malaria control in remote endemic regions. However, they usually do not detect < 100 parasites per microliter of blood, which makes them of limited use in near-elimination areas where such low parasite counts are often prevalent. False-negative RDT results can also arise for P. falciparum strains with a genetic deletion for the antigen targeted by RDTs, histidine-rich protein 2 (HRP2). Over the past decade, this genotype has become widespread in South America, and increasing prevalence has now been reported for African and Asian countries as well. Conversely, residual parasite antigen in the blood after treatment and complete parasite clearance is frequently observed and may result in false-positive RDT diagnosis. The limitations of microscopy and RDTs can be overcome by the use of nucleic acid amplification techniques (NAATs). Examples are endpoint polymerase chain reaction (PCR) and real-time quantitative PCR (qPCR), techniques that are commonly applied for malaria diagnosis and research in high-resource settings. However, the requirement of well-trained laboratory personnel, as well as expensive PCR machines that rely on a stable power source, restricts the use of NAATs in malaria-endemic countries. An alternative to PCR is loop-mediated isothermal amplification (LAMP), a simplified molecular assay with an easy readout that makes use of isothermal DNA amplification. Nevertheless, current LAMP formats are generally unsuited for multiplex amplification, hampering Plasmodium species differentiation. Consequently, there is still a need for a highly sensitive, user-friendly and field-deployable diagnostic test for malaria that can discriminate Plasmodium species. An innovative assay has recently been developed to meet these requirements: the miniature direct-on-blood PCR nucleic acid lateral flow immunoassay (mini-dbPCR-NALFIA). This platform combines three techniques to overcome the issues encountered when attempting to implement traditional PCR methods in limited-resource settings. First of all, the direct-on-blood PCR (dbPCR) uses a specialized reagent mix that eliminates the need for DNA extraction prior to amplification. Instead, the PCR can be performed directly on a template of EDTA-anticoagulated whole blood. The dbPCR also has a duplex format which can detect all (pan) Plasmodium species infecting humans and differentiate P. falciparum infections. The second innovative element is the use of a miniature thermal cycler to run the dbPCR, called miniPCR (miniPCR bio, Massachusetts, USA). It is a hand-held, portable device that can be programmed with a smartphone or laptop application, either through USB cable or Bluetooth connection.
The latest model, mini16, has an affordable price of approximately 800 USD (compared to 3000–5000 USD for a conventional PCR thermal cycler) and can process 16 samples per run. The mini16 can run on mains power, but also on a portable and solar-chargeable power pack, making the system completely autonomous and suitable for rural or emergency settings with unstable or no electricity supply. Finally, the result of the dbPCR is easily and rapidly read out with NALFIA, an immunochromatographic flow strip that can detect labelled PCR amplicons. A NALFIA strip is placed in a mixture of dbPCR product and running buffer, after which the dbPCR amplicons will flow over the strip. Neutravidin-labelled carbon particles on the NALFIA strip will bind to the labelled dbPCR amplicons, and this complex is visualized within 10 min when it is captured by the two amplicon-specific antibody lines on the NALFIA strip. Earlier prototypes of the dbPCR-NALFIA assay have shown promising results in field evaluations, with sensitivity and specificity results up to 97.2% and 95.5%, respectively, using light microscopy as reference standard, and a detection limit for P. falciparum infections of 1 parasite per microlitre (p/μL) of blood. In these studies, the dbPCR was still run on a conventional thermal cycler. By optimizing the dbPCR protocol, the mini-dbPCR-NALFIA can now be run on a miniPCR device, making the method better adapted to field settings with limited resources. This article describes the laboratory evaluation of the optimized mini-dbPCR-NALFIA as a multiplex assay for the detection of pan-Plasmodium and P. falciparum infections in blood.

Direct-on-blood PCR reagent mix

The dbPCR is a duplex reaction targeting two regions in the Plasmodium 18S rRNA gene: one that is highly conserved in the genus Plasmodium (the pan-Plasmodium target), and a second that is specific for P. falciparum. By using 5'-labelled primer pairs (Eurogentec, Liège, Belgium) previously described in the literature, both target amplicons will carry a biotin label and a target-specific label. The dbPCR reagent mix consists of 10 μL of 2× Phusion Blood PCR buffer (Thermo Fisher Scientific, Waltham, MA, USA), 0.1 μL of Phire Hot Start II DNA polymerase (Thermo Fisher Scientific), labelled primers and sterile water to make a total volume of 22.5 μL per sample.

Direct-on-blood PCR on miniature thermal cycler

The template format for the dbPCR is 2.5 μL of EDTA-anticoagulated blood. Every mini-dbPCR-NALFIA run includes controls, which are a P. falciparum-infected EDTA blood sample and a Plasmodium-negative EDTA blood sample. As a first step, the samples were lysed at 98 °C for 10 min on the mini16 thermal cycler (miniPCR bio, Massachusetts, USA), a miniature endpoint PCR device (dimensions: 5 × 13 × 10 cm, weight: 0.5 kg) which can also be used for heat block protocols. The miniPCR smartphone application was used to programme the lysis protocol on the mini16 device through Bluetooth connection. After the lysis of the EDTA blood templates, 22.5 μL of the dbPCR reagent mix was added to each (total reaction volume 25 μL). The dbPCR was also run on the mini16 thermal cycler. Its protocol consisted of an initial activation step of 1 min at 98 °C, followed by 10 cycles of 5 s at 98 °C, 15 s at 61 °C and 30 s at 72 °C; next, 28 cycles of 5 s at 98 °C, 15 s at 58 °C and 30 s at 72 °C; and a final extension step of 72 °C for 2 min.

Read-out with NALFIA

Read-out of the results was done with NALFIA (Abingdon Health, York, UK).
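As a practical aside, scaling this recipe for a full run is a simple calculation, sketched below in Python. The per-reaction primer volume and the 10% pipetting overage are placeholders of ours, since the text specifies only that labelled primers and sterile water top the mix up to 22.5 μL per sample.

```python
def dbpcr_master_mix(n_samples: int, primer_ul: float = 1.0, overage: float = 0.10) -> dict:
    """Scale the per-reaction dbPCR mix (22.5 uL added to 2.5 uL blood template).

    primer_ul is a placeholder: the per-primer volumes are not given in the text.
    """
    per_rxn = {
        "2x Phusion Blood PCR buffer (uL)": 10.0,
        "Phire Hot Start II polymerase (uL)": 0.1,
        "labelled primer mix (uL)": primer_ul,
    }
    per_rxn["sterile water (uL)"] = 22.5 - sum(per_rxn.values())  # top up to 22.5 uL
    factor = n_samples * (1 + overage)  # extra volume to cover pipetting losses (assumed 10%)
    return {reagent: round(vol * factor, 2) for reagent, vol in per_rxn.items()}

# Example: a full mini16 run of 16 samples (14 specimens plus the positive
# and negative controls included in every mini-dbPCR-NALFIA run).
print(dbpcr_master_mix(16))
```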
The test strip consists of a sample absorption pad, a conjugate pad with neutravidin-labelled carbon binding to the amplicons' biotin label, and a nitrocellulose membrane coated with anti-digoxigenin (Dig) and anti-fluorescein isothiocyanate (FITC) antibody lines detecting and visualizing the amplicon-carbon complex. A third line on the membrane functions as a flow control. After completion of the dbPCR run on the mini16, a NALFIA strip was placed in a tube with 10 μL of dbPCR product and 140 μL running buffer. After a 10 min incubation, the NALFIA results were read out. When the first line directed against the Dig-labelled pan-Plasmodium amplicon was positive, it indicated the presence of Plasmodium infection. If the second anti-FITC test line for the fluorescein amidite (FAM)-labelled P. falciparum amplicon was also positive, the sample was infected specifically with P. falciparum (or a mixed infection including P. falciparum). A sample with a positive pan-Plasmodium line and an absent P. falciparum line was classified positive for a non-falciparum malaria species, i.e. Plasmodium vivax, Plasmodium malariae, Plasmodium ovale or Plasmodium knowlesi. When only the P. falciparum line was visible, this result was interpreted to be positive for this species. A NALFIA test was considered invalid when the flow control line was absent.

Laboratory evaluation

Limit of detection

The limit of detection (LoD) for the pan-Plasmodium and P. falciparum targets was determined by testing 23 aliquots of a tenfold dilution series of a FCR3 ring-stage P. falciparum culture. The parasite density of the culture was determined by light microscopy. Dilutions were made in Plasmodium-negative EDTA blood from the Dutch blood bank. Tested parasite densities ranged from 1000 to 0.1 p/μL. The LoD was defined as the lowest parasite density that was detected with 90% confidence (≥ 21 of 23 runs).

Sensitivity and specificity

To determine the laboratory sensitivity and specificity of the mini-dbPCR-NALFIA, a set of 87 blood specimens was tested, including samples from returned Dutch travellers with suspected malaria infection, Dutch blood donors, and intensive care unit patients from the Academic Medical Centre (Amsterdam, the Netherlands). All samples were derived from a pre-established Biobank at the Laboratory for Experimental Parasitology at the Academic Medical Centre. Both blood donors and intensive care unit patients had not travelled to malaria-endemic areas in the 6 months before blood collection. The malaria status of all samples had been determined previously using the Alethia Malaria assay (Meridian Bioscience, Cincinnati, USA), a highly sensitive LAMP-based method for diagnosing malaria in non-endemic settings with a detection limit of 2 p/µL for P. falciparum and 0.1 p/µL for P. vivax. For samples with a positive Alethia result (n = 29, all returned travellers), the infecting Plasmodium species had been determined with expert microscopy. This set included 23 P. falciparum, 3 P. vivax, 2 P. ovale and 1 P. malariae infections. The P. falciparum samples had been quantified microscopically and ranged from 10⁶ to 10² p/μL; the parasite counts of the non-falciparum malaria samples had not been determined at the time of microscopic examination. The 58 Plasmodium-negative samples comprised 19 samples from Dutch blood donors, 16 samples from intensive care unit patients and 23 samples from malaria-suspected returned travellers with a negative Alethia diagnosis.
The operator that tested all samples with mini-dbPCR-NALFIA was blinded to the reference test outcomes.

Accordance and concordance

Accordance and concordance are measures to express, respectively, the repeatability (intra-operator variability) and reproducibility (inter-operator variability) of qualitative tests. To evaluate the accordance and concordance of the mini-dbPCR-NALFIA, a single individual prepared 8 aliquots of a dilution series of FCR3 ring-stage P. falciparum culture and five Plasmodium-negative blood samples. For the accordance assessment, one operator tested three sets of aliquots with mini-dbPCR-NALFIA on three consecutive days, using the same equipment and dbPCR reagent batch numbers. To determine the concordance of the mini-dbPCR-NALFIA, five different operators from the same laboratory each tested a set of sample aliquots once. All five operators were blinded to the nature of the samples and used the same equipment and dbPCR reagent batch numbers.

Statistical analysis

Sensitivity and specificity were calculated for the pan-Plasmodium target, the P. falciparum target and the overall assay. The Clopper-Pearson Exact method was used to calculate the 95% confidence interval (CI) of the sensitivity and specificity. Accordance and concordance were calculated in a random framework, using the formulae proposed by Van der Voet and Van Raamsdonk (2004):

$$ACC_{random} = \frac{1}{L}\sum_{i}\left(p_{0,i}^{2} + p_{1,i}^{2} + p_{2,i}^{2} + p_{3,i}^{2}\right),$$

where L represents the number of tested samples, p_{0,i} the proportion of negative results, p_{1,i} the proportion of pan-Plasmodium single positive results (i.e. only the pan line), p_{2,i} the proportion of P. falciparum single positive results (i.e. only the P. falciparum line) and p_{3,i} the proportion of double positive results (i.e. both pan and P. falciparum lines), for a particular sample i. For the random concordance, the following formula was used:

$$CON_{random} = P_{0}^{2} + P_{1}^{2} + P_{2}^{2} + P_{3}^{2},$$

where

$$P_{k} = \frac{1}{L}\sum_{i=1}^{L} p_{k,i}, \quad k = 0, 1, 2, 3.$$

Here, L represents the number of different operators, and p_{0,i}, p_{1,i}, p_{2,i} and p_{3,i} represent the proportions of negative, pan single positive, P. falciparum single positive and double positive results for a particular operator i. The 95% CI of the accordance and concordance estimates was calculated by means of bootstrapping.
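Transcribed directly into Python, these definitions might look as follows. The percentile bootstrap scheme (resampling rows, i.e. samples or operators) and the example sensitivity count are our assumptions for illustration; the Clopper-Pearson interval uses statsmodels' "beta" method, which implements the exact method named above.

```python
import numpy as np
from statsmodels.stats.proportion import proportion_confint

def accordance_random(props):
    """props: (L, 4) array; row i holds the proportions of negative,
    pan-only, P. falciparum-only and double-positive results for sample i."""
    p = np.asarray(props, dtype=float)
    return float(np.mean(np.sum(p ** 2, axis=1)))

def concordance_random(props):
    """props: (L, 4) array; row i holds the result proportions for operator i.
    Averages across operators first, then sums the squared proportions."""
    P = np.asarray(props, dtype=float).mean(axis=0)
    return float(np.sum(P ** 2))

def bootstrap_ci(props, stat, n_boot=10_000, seed=0):
    """Percentile bootstrap 95% CI over rows (samples or operators) - assumed scheme."""
    rng = np.random.default_rng(seed)
    p = np.asarray(props, dtype=float)
    reps = [stat(p[rng.integers(0, len(p), len(p))]) for _ in range(n_boot)]
    return tuple(np.percentile(reps, [2.5, 97.5]))

# Clopper-Pearson "exact" 95% CI, e.g. for a hypothetical 28/29 sensitivity:
print(proportion_confint(28, 29, alpha=0.05, method="beta"))
```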
Limit of detection
The results of the P. falciparum culture dilution series testing are displayed in Table . At a confidence level of 90%, the LoD was determined to be 100 p/µL for the pan-Plasmodium test line and 10 p/µL for the P. falciparum line.
Sensitivity and specificity
Of the 29 Plasmodium-positive samples, 28 tested positive for the pan-Plasmodium line in the mini-dbPCR-NALFIA, while 1 P. vivax sample was false-negative for this line. All 23 P. falciparum samples also showed the P. falciparum test line. Fifty-seven Plasmodium-negative blood samples were negative for both test lines with mini-dbPCR-NALFIA; 1 sample from a Dutch blood donor was false-positive for the P. falciparum line. None of the test samples had an invalid NALFIA result. This resulted in a sensitivity of 96.6% (95% CI, 82.2%–99.9%) and a specificity of 100% (95% CI, 93.8%–100%) for the pan-Plasmodium line. The sensitivity of the P. falciparum test line was calculated to be 100% (95% CI, 85.2%–100%), and its specificity 98.4% (95% CI, 91.6%–100%). When the results of the two NALFIA test lines were combined, there were three possible outcomes: a non-falciparum infection, a P. falciparum infection and Plasmodium-negative. This approach resulted in an overall sensitivity of 96.6% (95% CI, 82.2%–99.9%) and specificity of 98.3% (95% CI, 90.8%–100%) for the mini-dbPCR-NALFIA.
Accordance and concordance
An overview of the accordance test results for the mini-dbPCR-NALFIA is shown in Table . The overall accordance of all tested samples in a random framework was 93.7% (95% CI, 89.5%–97.8%). Table summarizes the test results for the five different operators of the mini-dbPCR-NALFIA. Based on these data, the random concordance was calculated to be 84.6% (95% CI, 79.5%–89.6%).
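Two of the quantities just reported can be reproduced mechanically. A minimal R sketch follows (the replicate counts are illustrative, not the study data; only the 28-of-29 pan-line result is taken from the text):

# LoD rule: lowest density detected in >= 21 of 23 replicates (>= 90% confidence)
lod_90 <- function(density_p_ul, hits, min_hits = 21) {
  ok <- hits >= min_hits
  if (!any(ok)) return(NA_real_)
  min(density_p_ul[ok])
}
dens <- c(1000, 100, 10, 1, 0.1)   # tested densities in p/uL
hits <- c(23, 23, 22, 15, 3)       # hypothetical positive replicates out of 23
lod_90(dens, hits)                 # -> 10

# Clopper-Pearson exact 95% CI, e.g. for the pan-line sensitivity (28/29):
binom.test(x = 28, n = 29)$conf.int   # ~0.822 to 0.999, matching the reported CI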
This study demonstrates that the mini-dbPCR-NALFIA is a robust, highly sensitive and specific tool for the molecular diagnosis of malaria. It has a simpler workflow than traditional NAATs and requires far fewer resources. By incorporating the mini16 as a portable, battery-powered thermal cycler, the mini-dbPCR-NALFIA can be used even in remote healthcare settings without an extensive laboratory infrastructure or a stable power supply. With an excellent overall sensitivity of 96.6% and specificity of 98.3%, the diagnostic accuracy of the mini-dbPCR-NALFIA is similar to that of traditional molecular techniques for malaria diagnosis, such as conventional PCR, qPCR and nested PCR. One P. vivax sample gave a false-negative result. This may have been due to a low parasite density, which is common in P. vivax infections. Unfortunately, whether this was indeed the case for this sample is unknown, as its parasitaemia had not been determined with microscopy at the time of diagnosis. Moreover, this particular sample had been stored at −20 °C for 2 years, which may have affected the DNA integrity. The occasional false-positive result in one Plasmodium-negative sample could have been the result of carry-over contamination from a Plasmodium-positive sample during the preparation of the dbPCR or the NALFIA. The LoDs of 100 p/μL for the pan-Plasmodium line and 10 p/μL for the P. falciparum line demonstrate the high sensitivity of the mini-dbPCR-NALFIA for low falciparum parasite densities. Although the LoD of extremely sensitive nested and qPCR techniques can go as low as 0.1 p/μL, the mini-dbPCR-NALFIA is, most importantly, still significantly more sensitive for low falciparum parasitaemias than light microscopy and RDTs, which generally fail to detect infections below 50 to 200 p/µL. As such, the assay will be able to diagnose the majority of symptomatic malaria patients in an endemic setting, who often present with a parasitaemia above 1000 p/μL. On top of that, the mini-dbPCR-NALFIA could potentially be used for screening and detection of asymptomatic falciparum cases with sub-microscopic infections. As no quantified non-falciparum samples were available for this study, additional evaluation of the LoD of the mini-dbPCR-NALFIA for the other Plasmodium species is warranted. When analysing a P. falciparum blood dilution series and five malaria-negative blood samples, the mini-dbPCR-NALFIA showed a high accordance of 93.7%, demonstrating the robustness of the method. Discordant results were mainly observed for parasite densities < 10 p/μL, which are close to the LoD of the test. At such low Plasmodium DNA concentrations, stochastic variation tends to have a more prominent influence on the assay's outcome. This phenomenon was also believed to be the main reason why the concordance was 84.6%. The laboratory experience of the different operators in the concordance evaluation ranged from basic to proficient. They were given only written and oral instructions, which was sufficient for them to perform the mini-dbPCR-NALFIA correctly. This observation underlines the assay's simplicity and user-friendliness. Compared to other molecular methods for malaria diagnosis, the mini-dbPCR-NALFIA shares some characteristics with LAMP, which also has a simplified protocol with an easy read-out and high accuracy for diagnosing malaria, including low-density falciparum infections. However, LAMP currently has no multiplex capability and, therefore, cannot differentiate Plasmodium species in one reaction. This issue is not encountered with the mini-dbPCR-NALFIA, a duplex assay that can distinguish falciparum malaria from infections with other Plasmodium species.
To further evaluate the performance of the mini-dbPCR-NALFIA for the diagnosis of (submicroscopic) infections with P. vivax, P. malariae and P. ovale, additional research is required, since this study tested only a limited number of non-falciparum malaria blood samples. The adaptation of the assay described by Roth et al. to operate on a portable, battery-powered mini16 thermal cycler has made it possible to run the dbPCR in the harsh, resource-limited conditions of sub-Saharan Africa. Implementation in such settings is also supported by the stability of the dbPCR reagents, which showed no loss of performance after storage at 4 °C for 9 months. Another strength of the mini-dbPCR-NALFIA is its affordability: the testing costs per sample are economical (0.30 USD for the dbPCR reagents and 2.80 USD per NALFIA test), and the introduction of the mini16 greatly reduces the cost of the required equipment (800 USD per device). A planned economic evaluation will assess the cost-effectiveness of the mini-dbPCR-NALFIA in different endemic areas, compared with currently implemented malaria point-of-care diagnostics. A limitation of the current mini-dbPCR-NALFIA is its inability to differentiate between the non-falciparum malaria species and to identify mixed infections. Although the vast majority of malaria cases in Africa are caused by P. falciparum, the relative contribution of P. vivax, P. malariae and P. ovale infections in this region appears to be increasing. Fortunately, the mini-dbPCR-NALFIA has a flexible design: an alternative format is currently under development, which will have a P. falciparum and a P. vivax test line. In the same way, the mini-dbPCR-NALFIA also has the potential to be modified to detect other blood-borne pathogens. In areas with high malaria transmission, the mini-dbPCR-NALFIA could be a valuable alternative to RDTs, which are likely to suffer from false-positive results due to P. falciparum HRP2 antigen persistence in the blood after clearance of the parasites. Nevertheless, a similar issue may arise for molecular diagnostic techniques: a number of studies have shown that PCR-based detection of Plasmodium DNA in blood can remain positive for up to seven weeks after curative malaria treatment. This could be caused either by residual circulating DNA fragments or by a small subset of parasites with extended survival. Although this phenomenon could have implications for the specificity of the mini-dbPCR-NALFIA, its relevance for the application of the assay as a field diagnostic remains a subject of further study. The mini-dbPCR-NALFIA is an easy-to-use method for the sensitive and specific diagnosis of malaria. Compared with other simplified molecular diagnostics, it has the advantages that no prior sample processing is needed and that differentiation of P. falciparum and non-falciparum infections is possible thanks to its duplex format. A handheld miniature thermal cycler makes the assay well adapted to resource-poor conditions in malaria-endemic regions. The high diagnostic accuracy and low LoD of the mini-dbPCR-NALFIA could make it a valuable tool in many malaria control programmes, especially for the detection of asymptomatic and low-density cases in near-elimination areas. A phase-3 field trial is currently being conducted to evaluate the potential of the mini-dbPCR-NALFIA in different epidemiological settings.
Exploring disease perception in Behçet’s syndrome: combining a quantitative and a qualitative study based on a narrative medicine approach
The main objectives of the study were: (i) to evaluate disease perception in a large community of adult BS patients; (ii) to identify possible clusters of BS patients with different perceptions of the disease; (iii) to explore areas affecting disease perception that are not captured with conventional assessment, through patients’ stories collected using the NM approach.
Study design and population
A cross-sectional study was conducted to investigate disease perception among adult Italian BS patients. In detail, two different approaches were used. In a large community of Italian BS patients, disease perception was assessed by means of an ad-hoc questionnaire developed in co-design with patients and caregivers, clinicians and other experts; the general aim was to investigate the dimensions of quality of life and disease perception in BS. A smaller group of BS patients provided insights into disease perception using the NM approach in a separate form. Participation in the questionnaire was voluntary and anonymous, and respondents were asked for their consent to the analysis of their answers for research purposes (a specific approval was requested in the introductory text of the survey). For anonymous surveys, only notification of the Ethical Committee of the University of Pisa is needed, and the Committee deemed formal IRB approval unnecessary.
Measures
An ad-hoc questionnaire was co-designed in Italian by clinicians expert in the management of BS, health economists, and patients’ representatives and caregivers, in collaboration with the Italian Association for Behçet Disease (SIMBA OdV). The questionnaire was implemented online using EUSurvey and promoted among Italian BS patients through different dissemination channels (i.e., website, social media, etc.) with the support of SIMBA OdV. Participation in the questionnaire was voluntary, and data were collected from July 2019 to October 2019. Disease perception and quality of life were explored through Likert-scale questions asking patients about the impact of the disease on different aspects of their life (i.e., work, family, social relations, etc.). In order to further explore the real-life perspectives of BS patients, the NM approach was adopted, and the illness stories of BS patients were collected anonymously online from September 2019 to December 2019. In detail, a semi-structured questionnaire was developed to capture the demographic profile of the respondents (Table ), while a wider section was dedicated to guiding patients in telling their stories (e.g. “How did you feel when you were diagnosed with BS?”, “Did you experience issues in informing your employer about your disease?”, “Do you feel at the centre of your care? Which are your needs and expectations for the future?”), for which 3600 characters were available.
Statistical analysis
Data collected with the survey were first analysed using standard descriptive statistics, with mean and standard deviation to describe quantitative variables and frequencies for categorical variables. A cluster analysis was performed to identify possible subgroups of the overall study population based on the variables related to disease perception collected through the survey, using an approach specifically suited to mixed continuous and categorical data. In particular, the method adopted was the partitive k-medoid method, which iteratively groups the most similar units. Given the nature of the variables, the method was applied to the dissimilarity matrix computed using Gower's distance. The optimal number of clusters was determined on the basis of the Silhouette index.
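A minimal R sketch of this clustering recipe, using the cluster package (df stands for a hypothetical data frame of mixed numeric and factor survey variables; the candidate range for the number of clusters is our assumption):

library(cluster)

d <- daisy(df, metric = "gower")                 # dissimilarities for mixed data
# Average silhouette width for candidate numbers of clusters
avg_sil <- sapply(2:6, function(k) pam(d, k, diss = TRUE)$silinfo$avg.width)
k_best <- (2:6)[which.max(avg_sil)]              # k maximising the Silhouette index
fit <- pam(d, k_best, diss = TRUE)               # partitive k-medoid clustering
table(fit$clustering)                            # cluster sizes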
A descriptive analysis of the variables used in the cluster analysis and of the main socio-demographic characteristics of the patients in each cluster was performed, in order to explore differences between clusters beyond the variables contributing to cluster identification; the Fisher exact test and the Chi-square test were used to assess differences among clusters. Before analysing the patients’ stories with dedicated software, pre-processing and cleaning of the texts (i.e., removing punctuation, converting all text to lowercase, removing unnecessary terms such as articles) were performed, as well as tokenization of the words by breaking the texts into discrete words. In order to explore the main words used and the concepts expressed in the stories, a word frequency analysis was also completed, and the results were presented as a word cloud image. In addition, a sentiment analysis was performed using the get_nrc_sentiment function implemented in the syuzhet R package; the emotions expressed within the collected stories were identified and scored according to Saif Mohammad’s National Research Council (NRC) Emotion Lexicon. Basically, the NRC lexicon associates the retrieved text words with eight emotions: anger, fear, anticipation, trust, surprise, sadness, joy, and disgust. The total score for each detected emotion was reported. All analyses were performed using R version 3.6.2, and a p value < 0.05 was considered statistically significant. Considering the narrative nature of the stories, a further in-depth qualitative analysis was performed by experts in narrative medicine, with the aim of identifying the emergent topics (both needs and experiences) and exploring the most personal and specific characteristics related to living with BS.
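A minimal R sketch of this text pipeline (stories stands for a hypothetical character vector holding the collected texts; the tokeniser and the word-length filter are simplifications of the pre-processing described above):

library(syuzhet)

# Crude tokenisation: lowercase, split on non-letters, drop short function words
tokens <- tolower(unlist(strsplit(stories, "[^[:alpha:]]+")))
tokens <- tokens[nchar(tokens) > 3]
head(sort(table(tokens), decreasing = TRUE))     # top words feeding the word cloud

# NRC scoring: eight emotion columns plus positive/negative, one row per story
nrc <- get_nrc_sentiment(stories)
colSums(nrc[, c("anger", "anticipation", "disgust", "fear",
                "joy", "sadness", "surprise", "trust")])   # total emotion scores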
Analysis of data from the survey
A total of 207 patients participated in the survey, and the main characteristics of the participants are detailed in Table . Patients answering the survey were mainly female (67.15%), and the majority of them were aged between 31 and 50 years (66.18%). About 63% of patients were employed, and about 66% also declared that they had needed to change their working life because of BS. With respect to the disease, time since diagnosis was highly variable, while almost all patients had experienced their first symptoms before 40 years of age. Table details the results of the questions specifically related to disease perception and QoL. Globally, the answers to questions related to disease perception and QoL showed some degree of variability, while it emerged that most patients reported concerns with respect to their health status and the impact of BS on their life. In detail, 76% (n = 158) of patients declared that they felt guilty towards people close to them because of their health condition “Sometimes” to “Always”, and, with similar frequency, 81% (n = 167) of patients experienced apprehension, concern or fear for their health. The fact that BS can be very unpredictable was perceived almost unanimously among the study population, and 90% (n = 187) felt the unpredictability of BS “Sometimes” to “Always”; moreover, 49% (n = 102) felt they could not do anything to improve their symptoms. BS was perceived to substantially affect how patients perceive themselves (n = 174, 84%) and to have changed them (n = 163, 79%); the disease was also reported to have an impact on the life of the patients, determining moderate to significant economic consequences in about 73% (n = 151) of responders and affecting their families (n = 166, 80%). The results of the cluster analysis, performed to identify groups of patients reporting diverse feelings with respect to disease perception, revealed the presence of three different groups with different attitudes towards disease perception, also characterized by some heterogeneity with respect to socio-demographic characteristics. Details of the three groups identified are reported in Table , and a graphical representation of the clusters on a bi-dimensional plane is reported as Additional file : Fig. S1. Cluster 1 grouped mainly young (≤ 40 years) women, 80% of whom had their first symptoms before 31 years; about 50% had a degree or a higher level of education; the majority were convinced that therapy is able to control the disease; a variable percentage felt guilty towards people close to them because of their health condition; the majority rarely or never felt lonely because of the rarity of their disease; about 40% had a caregiver; the majority perceived the unpredictability of their disease often/always; more than 50% never felt ashamed of their illness; about 60% felt that their illness had an impact on their family; and about 80% knew other people suffering from BS and were in contact with the association (or used its services). Cluster 2 comprised mainly men and women older than 40 years; more than 80% had their first symptoms between 11 and 50 years; more than 50% had been diagnosed in the last 5 years; and the majority were neutral or not really convinced that therapy is able to control the disease. More than 60% of them felt concern or fear about their health “sometimes” or “often”; a minority had a caregiver; more than 50% never felt that they were able to do something to improve their symptoms.
In addition, more than 50% of them never felt ashamed of their illness; the majority thought their illness would get worse over time; more than 50% had family members frequently worried about their health; and about 60% did not know other people suffering from BS. Patients grouped in Cluster 3 were aged mainly between 21 and 50 years and were mainly women; more than 80% had their first symptoms before 31 years and reported their QoL as bad or fair; 50% were neutral with respect to the assumption that therapy is able to control the disease; more than 50% often or always felt guilty towards people close to them because of their health condition, experienced concern or fear for their health, felt lonely because of their rare disease and experienced economic consequences because of the disease. More than 50% of them had a caregiver, while the majority often/always perceived the unpredictability of their disease and had difficulty living with their illness; more than 50% never felt able to do something to improve their symptoms; the majority perceived that the illness affected the way others see them; and more than 50% felt ashamed of their illness at least sometimes. Moreover, only about 40% of them were able to talk openly about their disease, and the majority thought their illness would get worse over time. The large majority had not, or not completely, accepted the fact that they have BS and declared that the illness affected their perception of themselves; about 80% often or always felt worried about their health and thought they got sick more easily than others. In addition, almost all thought their illness frequently had many effects on their lives; more than 50% had family members frequently worried about their health; about 80% felt their illness had an impact on their family; less than 80% knew other people suffering from BS, and about 70% were in contact with the association (or used its services).
Analysis of BS patients’ stories
A total of 43 stories were collected from patients, and their demographic data are summarised in Table . The most frequent words expressed in the stories, after removing articles, conjunctions and punctuation, are represented in a word cloud (Fig. ). The most frequent words used in the stories were years (75 occurrences), disease (73) and Behçet (38), while a series of other words such as diagnosis, symptoms and ulcers recurred with similar frequency (30, 29 and 29 occurrences, respectively). In addition, the words problems, life and work also emerged as frequent (27, 26 and 26 occurrences, respectively). The sentiment analysis showed fear and anger as the most prevalent emotions expressed in the stories, probably in reference to the long and difficult journey lived by BS patients before getting the diagnosis, as well as to concerns about the different symptoms experienced. However, a sense of trust also emerged, possibly linked to the hope of having more expert centres for BS and of a future cure for BS available to all patients (Fig. ). The stories provided a very deep and emotional glance into the journey of BS patients. As a matter of fact, the qualitative approach used to analyse the texts allowed the identification of feelings and perceptions related to three main phases: the pre-diagnosis phase, the time of diagnosis and the period after the diagnosis. “I didn’t understand, I didn’t know”.
In the pre-diagnosis phase, patients expressed frustration and concern over the many examinations, consultations and hospital visits they had to go through before receiving the diagnosis. In addition, patients experienced a deep feeling of mortification when addressed as “hypochondriac” or “depressed”, and felt neither understood nor listened to by the healthcare professionals treating them. The stories tell, in fact, of the many years (and much money) spent by patients travelling across the country to different hospitals in search of the right clinician who could formulate a specific diagnosis. “When I had my diagnosis, I felt reassured, because I knew who I was fighting”. When receiving the diagnosis of BS, patients describe mixed feelings that range from confusion and rage to relief and satisfaction. Having a precise diagnosis was perceived as the beginning of a new journey, something unknown due to the uncertainty, complexity and rarity of BS and, at the same time, as something that brought “certainty” into the life of BS patients; “knowing who to fight” was perceived as a liberation from the dark feelings experienced during the pre-diagnosis phase. “After all, life can be beautiful also with BS”. After the diagnosis, patients clearly expressed that, in many cases, the new journey brought them into a new dimension, in which personal life, relationships and working life had to realign to the new scenario. Being a BS patient brought a new awareness of themselves and their priorities, pushing them to reconsider what really matters in life and giving them a new, deep sensitivity towards the outside world. In terms of personal relationships, patients report that informing their friends, families and other people close to them caused opposite reactions: on one side, people who either denied the disease or “disappeared”; on the other, people who understood their feelings, provided help and continuous support, and “slowed down” when patients weren’t able to face life at the same speed. Globally, an important enabler was the role played by the BS patients’ association, which was perceived as a source of information and help and as a “safe place” in which patients could share their emotions and feelings and, most of all, “didn’t feel alone anymore”.
The present study offers an overview of disease perception among adult BS patients, combining two different approaches. On one side, a first assessment was performed by means of a co-designed survey aimed at exploring both disease perception and quality of life among BS patients; on the other, the NM approach was adopted to allow patients to express their feelings about the disease freely, thus also disclosing aspects potentially not covered by the survey. Results from the survey revealed that, despite some degree of variability among the study population, patients generally reported concerns with respect to the impact of BS on their lives and families, also in view of the unpredictable nature of the disease. BS was also perceived to significantly affect patients’ perception of themselves and of the world around them, especially in terms of working life and personal relationships. The cluster analysis performed in our study allowed the identification of three groups of subjects who perceive the disease differently. The three groups were characterized by diverse feelings about disease perception and by different socio-demographic profiles. The first group of BS patients was convinced that their treatment could control the disease and was in contact with other people affected by BS. The second group was not really convinced that the therapy was able to control BS, and about two thirds of them did not know anyone else affected by BS. The third group had not accepted the disease, even though they were in contact with other BS patients. Therefore, we can assume that knowing other BS patients and being in contact with a patients’ organisation can help. However, accepting the disease or not has a strong impact not only on daily life, but also on how patients perceive themselves and on their hopes for the future. The NM approach adopted in this study allowed a further exploration of the individual perceptions and needs of BS patients. Despite telling their individual stories, patients often addressed common issues, such as the long and complex journey faced from disease onset until the BS diagnosis is formulated, which was strongly connected to the concept of time and perceived as an exhausting period of their lives. Data from the literature describe how diagnostic delay in BS is a well-known issue, and this aligns with the fact that many stories described in great detail the different milestones of the period lived before the diagnosis, including specifics about the hospitals visited and the clinicians consulted. The strong focus on emotions and feelings made it possible to enter the complexity of living with BS. The combination of very different emotions perceived at the time of diagnosis highlights how important it is to ensure an early diagnosis for BS patients and to provide an appropriate flow of information on the disease when communicating the diagnosis, also taking into account the important role played by patients’ organizations. Although the findings are not directly comparable (due to the different methodological approaches adopted), the results of our study are partially in line with previous studies, also from different countries, on the impact that BS has on the lives of patients.
To our knowledge, few studies have tried to gain insight into patients’ perceptions using the NM approach and a structured qualitative analysis; some recent experiences have emerged for diseases other than BS, but none combined a quantitative and a qualitative approach to explore disease perception among BS patients in depth. Some limitations of our study need to be acknowledged. First, the self-selected nature of the participants means that selection bias cannot be excluded, thus limiting the generalizability of the results; second, the way answers were collected does not allow the questionnaires to be linked with the stories, which also prevents knowing whether any patients participated in both evaluations.
To our knowledge, this is the first study on BS that addressed disease perception with a combined approach involving questionnaires co-designed with patients and narrative medicine, which makes it possible to take into account the perspectives and the experiences of BS patients. Listening to the voice of patients is very important, and several methodological approaches can be adopted to do so; in fact, the main novelty of our study lies in the combination of different approaches, such as narrative medicine, supporting the view that the usual evidence-based medicine techniques can be integrated with different methodologies in order to improve understanding of the patient’s perspective. As a matter of fact, this combined approach can provide invaluable information not only for the BS community, but also for real-life clinical practice, since a better understanding of how BS patients perceive the disease, also in terms of disease activity, and of the impact of BS on their lives can support the usual approaches to the disease and improve the management of BS patients.
Additional file 1. Graphical representation of subjects on a bidimensional plane according to cluster membership