Covid-19: The challenges facing endocrinology

The Covid-19 pandemic has hit the planet like a tidal wave, imperilling the lives of thousands and threatening health systems with collapse. In this issue of the Annals of Endocrinology, Alexandre et al. remind us that the gateway for Sars-CoV-2 entry is angiotensin-converting enzyme 2 (ACE2), a physiological regulator of the renin-angiotensin system (RAS). Angiotensin II stimulates the secretion of aldosterone via the AT1 receptor of the adrenal gland (zona glomerulosa) and has intrinsic vasoconstrictive, pro-fibrotic and pro-inflammatory activity. ACE2 converts angiotensin II [1–8] to angiotensin [1–7], which has properties opposite to those of angiotensin II, and is therefore a negative regulator of the RAS. The legitimate question raised by the authors is whether the prescription of angiotensin-converting enzyme inhibitors (ACEi) and angiotensin type 1 receptor blockers (ARAII), both very widely used in hypertension treatment, may increase the risk of developing severe acute respiratory syndrome in Covid-19-infected patients. The authors explain convincingly why, on the basis of the available evidence, scientific societies do not recommend discontinuing ACEi or ARAII hypertension treatment in Covid-19-positive patients. Therapeutic prospects are discussed, and the first trials using the soluble form of ACE2 as a virus trap are under way.

Covid-19 challenges the endocrinologist in many ways. Three in particular are worth raising here:

• The relative protection of children, presumed to be healthy carriers, and the lower incidence of death in women (one-third of deaths) suggest that the hormonal environment and genetic factors influence survival in Covid-19. We know, for example, that the TLR7 gene on the X chromosome encodes a receptor that influences the antiviral response. Paradoxically, although the stronger immune response in women contributes to their greater susceptibility to autoimmune disease, this same trait becomes an advantage against viral infection. In men, who account for more than two-thirds of deaths from Covid-19, risk is higher after age 50 and higher still after 70. But age is not the only factor; a context of metabolic syndrome with overweight, diabetes and hypertension appears strongly associated with this risk. It is important to remember that the decline in testosterone secretion in men can be explained by four factors: age, obesity, associated comorbidities and smoking. Curiously, the ACE2 protein is expressed in many tissues, including the testis. These avenues could be explored to better identify the influence of sex hormones on the capacity to resist viral disease.

• A cytokine storm is reported during the respiratory distress phase in 20% of Covid-19-positive patients, with multi-organ failure and hypotension refractory to standard treatment. How does adrenocortical function react in this critical situation? Is there an analogy with what has been well described in septic shock? We already know that corticosteroids are not useful in treating the pulmonary lesions associated with severe respiratory distress, and are indeed deleterious in delaying elimination of the virus. Is there a form of resistance to glucocorticoids?

• The Covid-19 epidemic has served as an opportunity to adopt teleconsultation in response to the urgent need for continuity of care.
This is bound to prompt reflection on a lasting shift to telemedicine in the management of chronic diseases, which could disrupt our practice and teaching and indeed the entire economics of healthcare. In this issue of the Annals, a current update is presented on the relationship of diabetes, and also obesity, to the risk of contracting Covid-19 or developing a severe form. This didactic article by Laura Orioli et al., documented by a literature in full effervescence, will be very useful to the informed reader and the general practitioner, enabling them to follow future recommendations for treating diabetic Covid-19-positive patients. Endocrinologists and diabetologists, like many other specialists, must prepare for this imminent transition. The reader will also find answers to the questions that ageing, now the focus of the whole medical community's attention, poses for the management of thyroid disease. This consensus statement, coordinated by Philippe Caron, is remarkable, practical and exhaustive, and is grounded in the extensive clinical experience of its authors.
The authors declare that they have no competing interest.
Optimising paediatric afferent component early warning systems: a hermeneutic systematic literature review and model development

Failure to recognise and act on signs of clinical deterioration in the hospitalised child is an acknowledged safety concern. Track and trigger tools (TTTs) are a common response to this problem. A TTT consists of sequential recording and monitoring of physiological, clinical and observational data. When a certain score or trigger is reached, a clinical action should occur, including but not limited to altered frequency of observation, senior review, or more appropriate treatment or management. Tools may be paper based or electronic, and monitoring can be automated or undertaken manually by staff. Despite the growing use of TTTs, there is limited evidence of their effectiveness as a single intervention in reducing mortality or arrest rates in hospitalised children. Results from the largest international cluster randomised controlled trial of a TTT (the Bedside Paediatric Early Warning System (BedsidePEWS)) did not support TTT use to reduce mortality, and highlighted the multifactorial mechanisms involved in detecting and initiating action in response to deterioration. These findings lend further weight to a developing consensus about the need to look beyond TTTs to the impact of wider system factors on detecting and responding to deterioration in the inpatient paediatric population. This paper reports on a theoretically informed systematic hermeneutic literature review to identify the core components and mechanisms of action of successful afferent component early warning systems (EWS) in paediatric hospitals. It is one of three linked reviews undertaken as part of a wider UK study commissioned to develop and evaluate an evidence-based paediatric warning system, and addressed the following question: what sociomaterial and contextual factors are associated with successful or unsuccessful paediatric early warning systems (with or without TTTs)?
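To make the track and trigger mechanism described above concrete, here is a minimal illustrative sketch in Python (the document itself contains no code, so the language is our choice). Every band, threshold and escalation action below is invented for illustration; none corresponds to BedsidePEWS or any other validated tool, which would use age-specific reference ranges and locally ratified response protocols.

```python
# Illustrative track and trigger tool (TTT) sketch.
# All bands, thresholds and actions are hypothetical, NOT a validated PEWS.

def band_score(value, bands):
    """Return the score of the first band whose (low, high) range contains value."""
    for low, high, score in bands:
        if low <= value <= high:
            return score
    return 3  # outside every defined band: most abnormal score

# Hypothetical scoring bands per vital sign: (low, high, score).
HEART_RATE_BANDS = [(90, 130, 0), (70, 89, 1), (131, 150, 1), (50, 69, 2), (151, 170, 2)]
RESP_RATE_BANDS = [(20, 35, 0), (15, 19, 1), (36, 45, 1), (10, 14, 2), (46, 60, 2)]
SPO2_BANDS = [(95, 100, 0), (92, 94, 1), (88, 91, 2)]

def pews_score(heart_rate, resp_rate, spo2, staff_concern=False):
    """Aggregate one set of observations into a summative trigger score."""
    score = (band_score(heart_rate, HEART_RATE_BANDS)
             + band_score(resp_rate, RESP_RATE_BANDS)
             + band_score(spo2, SPO2_BANDS))
    if staff_concern:  # the 'staff concern' criterion recommended in the literature
        score += 2
    return score

def escalation_action(score):
    """Map the score to a hypothetical graded clinical response."""
    if score >= 6:
        return "activate emergency response team"
    if score >= 4:
        return "urgent senior review; increase observation frequency"
    if score >= 2:
        return "inform nurse in charge; repeat observations within 1 hour"
    return "continue routine monitoring"

if __name__ == "__main__":
    s = pews_score(heart_rate=160, resp_rate=48, spo2=90, staff_concern=True)
    print(s, "->", escalation_action(s))  # 8 -> activate emergency response team
```

The point of the sketch is the mechanism the review discusses: discrete observations are converted into a single summative score that prompts a predefined action, taking knowledge to the bedside regardless of the experience of the person recording the observations.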
Design

We performed a hermeneutic systematic review of the relevant literature. A hermeneutic systematic review is an iterative process, integrating analysis and interpretation of evidence with literature searching, and is designed to develop a better understanding of the field. The popularity of the method is growing in health services research, where it has value in generating insights from heterogeneous literatures that cannot be synthesised through standard review methodology and would otherwise produce inconclusive findings. The purpose of the review was not exhaustive aggregation of evidence, but to develop an understanding of the social, material and contextual factors associated with successful or unsuccessful paediatric early warning systems (PEWS).

Theoretical framework

Data extraction and interpretation were informed by translational mobilisation theory (TMT) and normalisation process theory (NPT). TMT is a practice theory which explains how goal-oriented collaborative activity is mobilised in unpredictable environments and how the relevant mechanisms of action are conditioned by the local context. It is well suited to understanding EWS, which require the organisation of action in evolving conditions, in a variety of clinical environments, with different teams, skill mixes, resources, structures and technologies. NPT shares the same domain assumptions as TMT and is concerned with 'how and why things become, or do not become, routine and normal components of everyday work', directing attention to the preconditions necessary for successful implementation of interventions. The theoretical framework informed our data extraction strategy, our interpretation of the evidence and the development of a propositional model of an optimal paediatric early warning system.

Box 1: Mechanisms of translational mobilisation and their application to rescue trajectories

Object formation: how people draw on the interpretative resources available to them within a strategic action field to create the objects of their practice. Enrolment into an escalation trajectory requires multiple examples of object formation, beginning with the construction of an individual as at risk of deterioration and the instigation of a regime of vital signs monitoring, through recognition that the patient's physiological status is a cause for concern, to the identification of the patient as requiring a specific intervention. How this is achieved is highly dependent on the features of the local strategic action field.

Translation: the processes that enable practice objects to be shared and different understandings accommodated. It points to the actions necessary for a patient who is an object of concern for nursing staff to be translated into a clinical priority for the doctor and, if necessary, into the focus of intervention by the emergency response team.

Articulation: the secondary work processes that align the actions, knowledge and resources necessary for the mobilisation of projects of collective action; it is the work that makes the work, work. Responding to deterioration is time critical, and articulation work is necessary to ensure the availability of resources and materials to support clinical management. This is not a mundane observation; catastrophic failures in patient safety are often attributed to the lack of functioning equipment, and the absence of monitoring equipment has been identified as a factor undermining the implementation of early warning track and trigger tools.
Attending to articulation in rescue trajectories also underlines the temporal ordering of action and the mechanisms required to achieve this, directing improvement efforts towards the organisation's escalation policy, for example.

Reflexive monitoring: the processes through which people collectively or individually appraise and review activity. In a distributed field of action, reflexive monitoring is the means through which members accomplish situational awareness of an overall project. The importance of situation awareness in rescue trajectories is well recognised, but achieving it is challenging. Reflexive monitoring is conditioned by the wider institutional context, which will include a multiplicity of informal and formal mechanisms designed for this purpose: nursing and medical handovers, the ward round, safety briefings. The form, frequency and effectiveness of these processes in supporting the detection of, and action on, deterioration would need to be taken into account in any improvement initiative.

Sensemaking: the processes through which agents create order in conditions of complexity. It draws attention to how the material and discursive processes by which members organise their work, account for their actions and construct the objects of their practice also give meaning and substance to the institutional components of strategic action fields that shape activity and condition future activity.

Focus of the review

The literature in this field identifies four integrated components which work together to provide a safety system for at-risk patients: (1) the afferent component, which detects deterioration and triggers timely and appropriate action; (2) the efferent component, which consists of the people and resources providing a response; (3) a process improvement component, which includes system auditing and monitoring; and (4) an administrative component, focusing on the organisational leadership and education required to implement and sustain the system. Our focus was limited to the afferent component of the system.

Stages of the review

Stage 1: scoping the literature

Literature was identified through a recent scoping review, team members' knowledge of the field, hand searches and snowball sampling techniques. The purpose was to (1) inform our review question and eligibility criteria and (2) identify emerging themes and issues. While we drew on several reviews of the literature, we always consulted original papers. Data were extracted using data extraction template 1 and analysed to produce a provisional conceptual model of the core components of paediatric early warning systems. Additional themes of relevance were identified: family involvement, situational awareness (SA), structured handover, observations and monitoring, and the impact of electronic systems and new technologies. Supplementary data are available online (10.1136/bmjopen-2018-028796.supp1).

Stage 2: searching for the evidence

We undertook systematic searches of the paediatric and adult EWS literature (the goals and mechanisms of collective action in detection and rescue trajectories are the same). For the adult literature we used the same search strategies but added a qualitative filter to limit the scope to studies most likely to yield the level of sociomaterial and contextual detail of value to the review. Literature informing additional areas of interest was located through a combination of systematic and hand searches.
Systematic searches (searches 2 and 3) were undertaken in areas where we anticipated locating evidence of the effectiveness of specific interventions to strengthen EWS. Theory-driven searches reflected the conceptual requirements of the model development.

Systematic searches

A systematic search was initially conducted across a range of databases from 1995 to September 2016 to identify relevant studies in the PEWS literature. This search was updated to cover literature from September 2016 to May 2018. An additional three systematic searches were conducted from 1995 to September 2016 to identify supplementary papers to aid in developing understanding of the PEWS literature: (1) adult EWS; (2) interventions to improve SA; and (3) structured communication tools for handover and handoff. Detailed information on the search methodology can be found in the online supplementary material. Grey literature was excluded in order to keep the review manageable.

Theory-driven searches

Additional theory-driven searches were conducted in the following areas: (1) family involvement; (2) observations and monitoring; and (3) the impact of electronic systems. These were a combination of exploratory, computerised, snowball and hand searches. As the analysis progressed, we continued to review new literature on EWS as it was published.

Screening

After removing duplicates, 5284 references were identified for screening. A modified Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram is provided. Papers were screened by title to assess eligibility and then by full text to assess relevance for data extraction. The PEWS and adult EWS searches were screened by two researchers; searches 2 and 3 were screened by the lead reviewer.

Stage 3: data extraction and appraisal

Data extraction template 2 was applied to all papers included in the review. As is typical of reviews of this kind, evidential fragments and partial lines of inquiry formed the unit of analysis rather than whole papers. These fragments were quality assessed according to the contribution they made to the developing analysis rather than by assessing the paper as a whole through formal appraisal tools. Data extraction and quality appraisal were undertaken concurrently and double checked by a second reviewer.

Stage 4: developing a propositional model

A propositional model was developed specifying the core ingredients of a paediatric early warning system. It comprises logical inferences derived from the theoretical framework and evidence synthesis, informed by clinical experts on the team. Iterations of the model were developed in collaboration with clinical colleagues. A series of face-to-face meetings were conducted to review structure, wording and applicability to clinical practice.

Patient and public involvement

This review was conducted as part of a larger mixed methods study (ISRCTN 94228292), which used a formal, facilitated parental advisory group. The group comprised parents of children who had experienced an unexpected adverse event in a paediatric unit, and provided input which helped shape the broader research questions and the wider contextual factors to consider, specifically within the family involvement element of the system. The results of the review will be disseminated to parents through this group.
Included studies

Eighty-two papers were included in the review. Forty-six focused on TTT implementation and use in paediatric and adult contexts (24 from the paediatric search and 22 from the adult-focused search); the remaining 36 papers contributed supplementary data on factors related to the wider warning system. See for a detailed breakdown of this process. No studies were located that adopted a whole systems approach to detecting and responding to deterioration.

Analysis

In TMT the primary unit of analysis is the 'project', which defines the social and material actors (people, materials, technologies) and the relationships involved in achieving a particular goal. The goals of the afferent paediatric warning system are: first, that the child is identified as at risk and a vital signs monitoring regime instigated; second, that evidence of deterioration is identified through monitoring and categorised as such; and third, that timely and appropriate action is initiated in response to deterioration. Our analysis of the literature suggests that three subsystems within the afferent component of EWS support these processes: the detection of signs of deterioration; the planning needed to ensure teams are ready to act when deterioration is detected; and the initiation of timely action. While we have focused on the afferent component, it is important to remember that all elements of the overall safety system (efferent component, process improvement and administrative arm) need to work in concert to maintain an optimal paediatric early warning system. In the next section, we report on the literature in relation to each subsystem.

Detection

The goal of the detection subsystem is to recognise early signs of deterioration so that the child becomes the focus of further clinical attention. This requires, first, that the child is identified as at risk and a vital signs monitoring regime instigated and, second, that the child is identified as showing signs of deterioration. Despite widespread use, the evidence on TTT effectiveness in predicting adverse outcomes in hospitalised children is weak. Many TTTs have only been validated retrospectively, and positive predictive values were generally low. Studies reporting significant decreases in cardiac arrest calls or mortality had methodological concerns. The literature does suggest that TTTs have value in supporting process mechanisms in the detection subsystem. Vital signs monitoring is undertaken on all hospital inpatients and, like other high-volume routine activity, is often delegated to junior staff who may not have sufficient skills to interpret results. TTTs have value in mitigating these risks: by specifying physiological thresholds that indicate deterioration, they take knowledge to the bedside and act as prompts to action, which can lead to a more systematic and frequent approach to monitoring and improved detection of deterioration. TTTs' effectiveness in fulfilling these functions depends on certain preconditions. The review highlighted that TTT use was affected by the availability of appropriate and functioning equipment, the adequacy of staffing, night-time pressures and the availability of an appropriately skilled workforce. On this latter point, while several papers report on education packages to improve the detection of deterioration, the evidence is not robust enough to recommend specific programmes. There were also times when nursing staff prioritised patients' sleep over waking them to take vital signs.
TTTs are also used differently depending on the experience of the user. For junior staff, they provide a methodology and structure for monitoring clinical instability and identifying deterioration, whereas more experienced staff reportedly use TTTs as confirmatory technologies. The importance of professional intuition in detecting deterioration is extensively reported across the literature, and several authors recommend the inclusion of 'staff concern' in tool criteria. This is important; TTTs may be of less value in patients with chronic conditions because of altered normal physiology or where subtle changes are difficult to detect. It is also the case that TTTs are implemented in contexts governed by competing organisational logics which impact on their value and use. For example, Mohammed Iddrisu et al show that TTTs have limited value immediately after surgery because acceptable vital sign parameters are different in the immediate postoperative period.

There is growing interest in the literature in strategies that facilitate patient and relative involvement in the early detection of deterioration. Healthcare professionals depend on families to explain their child's normal physiological baseline and to identify subtle changes in their child's condition, but this information is not always systematically obtained. Some authors propose family involvement in interdisciplinary rounds, but this requires parents to have detailed information about the signs and symptoms they should be attending to, and as yet there is little evidence on effective strategies for involving them in the detection of deterioration.

While much of the literature reports on intermittent manual vital signs monitoring and paper-based recording systems, across the developed world there is growing use of electronic technologies, which have important implications for the wider detection subsystem. We considered a number of evaluations of new technologies which indicated that electronic vital signs recording is associated with a number of positive outcomes, particularly timeliness and accuracy, when compared with paper-based systems. Such systems can provide prompts or alerts for monitoring, which facilitates better recognition of deterioration and is associated with a reduction in mortality. These studies tend to evaluate new technologies in isolation, however, and do not engage with the literature highlighting alarm fatigue, which is known to erode effectiveness over time, or concerns about overburdening staff with alerts. Moreover, the successful implementation of new technologies is conditioned by the local context. For instance, where manual input into an electronic device is required, access to computers is an essential precondition. When computers were not available, staff 'batched' the collection of vital signs before data entry, thereby delaying the timely detection of deterioration. In another study, where the electronic system was found to be cumbersome and separated the collection and entry of data from the review of vital signs, verbal reports were favoured to ensure timely communication of information. See for a summary of the evidence reported.
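The batching and alert-fatigue problems described above are, at bottom, scheduling and rate-limiting problems. The sketch below is a hypothetical illustration, not any system evaluated in the reviewed studies: it shows one way an electronic system might derive the next observation time from a TTT score and suppress repeat alerts. The intervals, score thresholds and cooldown window are all invented.

```python
# Hypothetical electronic prompt scheduler: maps a TTT score to the next
# observation time and rate-limits alerts. All values are illustrative.
from datetime import datetime, timedelta

# Hypothetical observation intervals by minimum score.
INTERVALS = {0: timedelta(hours=4), 2: timedelta(hours=2), 4: timedelta(hours=1)}

def next_observation_due(score, recorded_at):
    """Return when the next vital signs set is due for this score."""
    for threshold in sorted(INTERVALS, reverse=True):
        if score >= threshold:
            return recorded_at + INTERVALS[threshold]
    return recorded_at + INTERVALS[0]

class AlertGate:
    """Suppress repeat alerts for the same patient within a cooldown window,
    one simple (hypothetical) mitigation of alert burden."""
    def __init__(self, cooldown=timedelta(minutes=30)):
        self.cooldown = cooldown
        self._last = {}  # patient_id -> time of last alert

    def should_alert(self, patient_id, score, now):
        if score < 4:  # only high scores generate an interruptive alert
            return False
        last = self._last.get(patient_id)
        if last is not None and now - last < self.cooldown:
            return False  # recently alerted: suppress
        self._last[patient_id] = now
        return True

if __name__ == "__main__":
    gate = AlertGate()
    now = datetime(2020, 1, 1, 12, 0)
    print(next_observation_due(5, now))                                # 13:00, hourly obs
    print(gate.should_alert("bed-4", 5, now))                          # True
    print(gate.should_alert("bed-4", 5, now + timedelta(minutes=10)))  # False, suppressed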
Planning

Detecting and responding to deterioration involves the coordination of action in conditions of uncertainty and competing priorities. The goal of the planning subsystem is to ensure the clinical team is ready to act when evidence of deterioration emerges, reflected in the growing interest in the literature in structures to facilitate team SA, group decisions and planning. TTTs have been found to support SA. Their use enabled clinicians to have a 'bird's-eye' view over all admitted patients on a ward, as well as encouraging staff to consider projected acuity levels of the ward. A number of studies also report on 'huddles' in facilitating SA. A huddle is a multidisciplinary event scheduled at predetermined times where members discuss specific risk factors around deterioration and develop mitigation plans. One study combined the introduction of huddles with a 'watchstander', a role fulfilled by a charge nurse or senior resident whose primary function is to know the patients at high risk of deterioration. These initiatives were associated with a near 50% reduction in transfers from acute to intensive care determined to be unrecognised situation awareness events. A further strategy, identified by Goldenhar et al, is the use of the 'watcher' category to designate a patient as at risk where staff have a 'gut feeling' that deterioration is likely. A recent study used the 'watcher' category to create a bundle of expectations to standardise communication and contingency planning: once a patient was labelled a watcher, a series of five specific tasks, such as documenting physician awareness of watcher status and notification of the family of the change in the patient's status, had to be completed within 2 hours.

Handovers are integral to clinical communication and contribute to SA. The extensive literature on handover indicates that information sharing can be of variable quality, and there is growing evidence that structured approaches improve this. Ranging from a checklist system to a cognitive aid developed through consensus, most of the published interventions are variations of the Situation-Background-Assessment-Recommendation (SBAR) tool. While effective handover depends on communicative forms that extend beyond the information transfer that is typically the focus of structured handover tools, in the context of EWS a lack of standardisation allows greater margin for individualistic practices, difficulties accessing complementary knowledge and problems establishing shared understandings.

There is also a literature on the use of common information spaces, such as whiteboards, in facilitating SA in the healthcare team. These should be in a visible location and colour coded to correspond with the TTT score, where relevant. Electronic systems automate this information and allow it to be reviewed remotely. However, they disconnect vital signs data from the patient, and hence from other indicators of clinical status, and access to data is contingent on the availability of computers. The literature indicates that SA can be facilitated in different ways in different contexts, and it is the relationship between system elements that is important. In their study of SA in delivery suites, Mackintosh et al discuss the three main supports for SA (whiteboard, handover and coordinator role) and illustrate how these interacted in organisations with strong SA compared with those with reduced levels. Crucially, this 'interplay' between the different activities was highly context dependent; 'the same supports used differently generate different outcomes' (p 52). See for a summary of the planning evidence.
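Because SBAR recurs in both the planning and action literature, a minimal sketch of how a structured SBAR message might be represented is given below. The record type and the example field contents are our invention, intended only to show how the four-part structure constrains what a handover or escalation call must contain; no study in the review describes such an implementation.

```python
# Minimal sketch of a structured SBAR message (illustrative only).
from dataclasses import dataclass

@dataclass
class SBARMessage:
    situation: str       # who the patient is and the immediate problem
    background: str      # relevant history and physiological baseline
    assessment: str      # current findings, e.g. the TTT score and trend
    recommendation: str  # the specific action being requested

    def render(self):
        """Render the four fields in a fixed order, so every call sounds the same."""
        return (f"SITUATION: {self.situation}\n"
                f"BACKGROUND: {self.background}\n"
                f"ASSESSMENT: {self.assessment}\n"
                f"RECOMMENDATION: {self.recommendation}")

# Invented example call:
msg = SBARMessage(
    situation="Bed 4, 3-year-old, increasing respiratory distress",
    background="Admitted yesterday with bronchiolitis; baseline SpO2 96%",
    assessment="TTT score risen from 2 to 6 over the last hour",
    recommendation="Request senior review within 15 minutes",
)
print(msg.render())
```

The design point is that the structure, not the software, does the work: by forcing an explicit recommendation, the format counters the reluctance to ask plainly for help that the escalation literature describes.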
Action

The goal of the action subsystem is to initiate appropriate action in response to evidence of deterioration. The literature suggests that mobilising action across professional boundaries and hierarchies is challenging, with differences in language between doctors and nurses and power dynamics among the contributory factors. TTTs are in part a response to the challenges of communication in mobilising action in response to deterioration. By transforming a series of discrete observations into a summative indicator of deterioration, such as a score or a trigger, TTTs 'translate' and package the patient's status into a form that can be readily communicated, enabling individual-level clinical data to be synthesised, made sense of and shared. One study, however, found that TTTs were regarded as a nursing tool and were therefore not valued by clinicians; consequently, nurses encountered difficulties in summoning a response.

Several studies also report on the use of SBAR in this context. Like TTTs, SBAR translates information into a form that provides structure, consistency and predictability when presenting patient information. SBAR has been shown to help establish common language and expectations, minimising differences in training, experience and hierarchy and facilitating nurse-clinician communication. While several papers advocate combining SBAR with TTTs, none specifically evaluated SBAR use. Mackintosh et al highlight audit data suggesting resistance to SBAR, with others cautioning that overextending SBAR use carries the risk of SBAR fatigue and attenuation of its effects.

Structured communication tools like TTTs and SBAR do not solve all the challenges of acting on evidence of deterioration. Barriers to action were widely reported in the literature even where these tools were in place. These include a general disinclination to seek help, concerns about appearing inadequate in front of colleagues and failure of staff to invest in the escalation or calling criteria. A number of papers also reported negative attitudes to rapid response team (RRT) or medical emergency team (MET) use in the efferent component of safety systems. METs and RRTs operate outside the immediate medical team and create different issues in paediatric warning systems than when the escalation response is managed by the treating team. These include a reluctance to activate because of the perceived busyness of paediatric intensive care unit or medical staff, because previous expectations about an appropriate response were not met, or because of a sense that the situation was under control (particularly when the physiological instability is in the area of expertise of the treating team). No literature reported on successful interventions to facilitate RRT use, but several papers propose strategies to support escalation where no designated response team is in place in the efferent component. These include informal peer support, where inexperienced staff team up with more experienced staff; clear structures to support action; and a supportive culture that does not penalise individual decision-making, including a 'no false alarms' policy so that staff are not deterred from escalating care. Senior leadership is consistently identified as important; lack of support from superiors meant that staff were less likely to escalate and more likely to adhere to hierarchies within the current system.
There is some evidence to suggest that any escalation policy should be linked to an administrative arm that reinforces the system, measures outcomes and works to keep the system effective. There is a small literature on family involvement in the action subsystem. Several studies report on Condition-Help, a programme developed in the USA to support families in directly activating an RRT if they have concerns about their child's condition. Families are also increasingly recognised as playing a key role in the activation of RRTs in Australia. Research has evaluated the appropriateness of calls made by patients or relatives, but has not considered why calls were not made. Involving family members in escalation demands vigilance, requiring them to take a proactive and interactive role with staff, potentially with some degree of confrontation, particularly if challenging the appropriateness of decisions taken. Families need both cognitive and emotional resources to raise concerns that involve negotiating hierarchies and boundaries. The literature points to a degree of professional resistance to family involvement in activation, with reports of physician concern that their role would be undermined, that resources would be stretched by an increase in calls, and that attention might be diverted from those in need, although these fears are not supported by the evidence. See for a summary of the evidence relating to the action component of the model.

Synthesis and model development

The literature in this field is heterogeneous and stronger on the sociomaterial barriers to successful afferent component paediatric early warning systems than on solutions. While a number of different single interventions have been proposed and some have been evaluated, there is limited evidence to recommend their use beyond the specific clinical contexts described in the papers. This reflects the weight and quality of the evidence, the extent to which paediatric systems are conditioned by the local clinical context, and the need to attend to the relationships between system components and interventions, which work in concert rather than in isolation. There is also a growing realisation in the quality improvement field that an intervention that has been successful in one context does not necessarily produce the same results elsewhere, which cautions against a 'one size fits all' approach. While it is not possible to make empirical recommendations for practice, the hermeneutic review methodology enabled the generation of theoretical inferences about the core components of an optimal paediatric early warning system. These model components are logical inferences derived from an overall synthesis of the evidence, informed by our theoretical framework and clinical expertise. They are presented as a propositional model conceptualised as three subsystems: detection, planning and action (see ).
Eighty-two papers were included in the review. Forty-six papers focused on TTT implementation and use in paediatric and adult contexts (24 from the paediatric search and the remaining 22 from the adult-focused search); the remaining 36 papers contributed supplementary data on factors related to the wider warning system. See for a detailed breakdown of this process. No studies were located that adopted a whole systems approach to detecting and responding to deterioration.
In TMT the primary unit of analysis is the ‘project’, which defines the social and material actors (people, materials, technologies) and their relationships involved in achieving a particular goal. The goals of the afferent paediatric warning system are: first, that the child is identified as at risk and a vital signs monitoring regime instigated; second, that evidence of deterioration is identified through monitoring and categorised as such; and third, that timely and appropriate action is initiated in response to deterioration. Our analysis of the literature suggests that three subsystems within the afferent component of EWS support these processes: the detection of signs deterioration; the planning needed to ensure teams are ready to act when deterioration is detected; and the initiation of timely action . While we have focused on the afferent component, it is important to remember that all elements of the overall safety system (efferent component, process improvement and administrative arm) need to be working in concert in order to maintain an optimal paediatric early warning system. In the next section, we report on the literature in relation to each subsystem.
The goal of the detection subsystem is to recognise early signs of deterioration, so the child becomes the focus of further clinical attention. This requires, first, that the child is identified as at risk and a vital signs monitoring regime instigated and, second, that the child is identified as showing signs of deterioration. Despite widespread use, the evidence on TTT effectiveness in predicting adverse outcomes in hospitalised children is weak. Many TTTs have only been validated retrospectively and postpredictive values were generally low. Studies reporting significant decreases in cardiac arrest calls or mortality had methodological concerns. The literature does suggest that TTTs have value in supporting process mechanisms in the detection subsystem. Vital signs monitoring is undertaken on all hospital inpatients and, like other high-volume routine activity, is often delegated to junior staff who may not have sufficient skills to interpret results. TTTs have value in mitigating these risks: by specifying physiological thresholds that indicate deterioration they take knowledge to the bedside and act as prompts to action which can lead to a more systematic and frequent approach to monitoring and improved detection of deterioration. TTT’s effectiveness in fulfilling these functions depends on certain preconditions. The review highlighted that TTT use was impacted by the availability of appropriate and functioning equipment, (in)adequate staffing and night-time pressures and an appropriately skilled workforce. On this latter point, while several papers report on education packages to improve the detection of deterioration, the evidence is not robust enough to recommend specific programmes. There were also times whereby nursing staff prioritised sleep over waking a patient to take vital signs. TTTs are also used differently depending on the experience of the user. For juniors, they provide a methodology and structure for monitoring clinical instability and identifying deterioration, whereas more experienced staff reportedly use TTTs as confirmatory technologies. The importance of professional intuition in detecting deterioration is extensively reported across the literature and several authors recommend the inclusion of ‘staff concern’ in tool criteria. This is important; TTTs may be of less value in patients with chronic conditions because of altered normal physiology or where subtle changes are difficult to detect. It is also the case that TTTs are implemented in contexts governed by competing organisational logics which impact on their value and use. For example, Mohammed Iddrisu et al show TTTs have limited value immediately after surgery because acceptable vital sign parameters are different in the immediate postoperative period. There is growing interest in the literature in strategies that facilitate patient and relative involvement in the early detection of deterioration. Healthcare professionals depend on families to explain their child’s normal physiological baseline and identify subtle changes in their child’s condition but this information is not always systematically obtained. Some authors propose family involvement in interdisciplinary rounds (This is an editorial paper), but this requires parents to have detailed information about the signs and symptoms they should be attending to and as yet there is little evidence on effective strategies for how they might be involved in the detection of deterioration. 
While much of the literature reports on intermittent manual vital signs monitoring and paper-based recording systems, across the developed world there is a growing use of electronic technologies, which have important implications for the wider detection subsystem. We considered several evaluations of new technologies, which indicated that electronic vital signs recording is associated with a number of positive outcomes, particularly timeliness and accuracy, when compared with paper-based systems. They can provide prompts or alerts for monitoring, which facilitates better recognition of deterioration and is associated with a reduction in mortality. These studies tend to evaluate new technologies in isolation, however, and do not engage with the literature highlighting alarm fatigue, which is known to erode effectiveness over time, or concerns about overburdening staff with alerts. Moreover, the successful implementation of new technologies is conditioned by the local context. For instance, where manual input into an electronic device is required, access to computers is an essential precondition. When computers were not available, staff ‘batched’ the collection of vital signs before data entry, thereby delaying the timely detection of deterioration. In another study where the electronic system was found to be cumbersome and separated the collection and entry of data from the review of vital signs, verbal reports were favoured to ensure timely communication of information. See for a summary of the evidence reported.
Detecting and responding to deterioration involves the coordination of action in conditions of uncertainty and competing priorities. The goal of the ‘Planning’ subsystem is to ensure the clinical team are ready to act in the event of evidence of deterioration; this is reflected in the growing interest in the literature on structures to facilitate team situation awareness (SA), group decisions and planning. TTTs have been found to support SA. Their use enabled clinicians to have a ‘bird’s-eye’ view over all admitted patients on a ward as well as encouraging staff to consider projected acuity levels of the ward. A number of studies also report on ‘huddles’ in facilitating SA. A huddle is a multidisciplinary event scheduled at predetermined times where members discuss specific risk factors around deterioration and develop mitigation plans. One study combined the introduction of huddles with a ‘watchstander’, a role fulfilled by a charge nurse or senior resident, whose primary function is to know patients at high risk for deterioration. These initiatives were associated with a near 50% reduction in transfers from acute to intensive care that were determined to be unrecognised situation awareness events. A further strategy identified by Goldenhar et al describes the use of the ‘watcher’ category to designate a patient as at risk where staff have a ‘gut feeling’ deterioration is likely. A recent study used the category of ‘watcher’ to create a bundle of expectations to standardise communication and contingency planning. Once a patient was labelled ‘a watcher’, a series of five specific tasks, such as documentation of physician awareness of watcher status and that the family had been notified of the change in the patient’s status, needed to be completed within 2 hours. Handovers are integral to clinical communication and contribute to SA. The extensive literature on handover indicates that information sharing can be of variable quality and there is growing evidence that structured approaches improve this. Ranging from a checklist system to a cognitive aid developed through consensus, most of the published interventions are variations of the Situation-Background-Assessment-Recommendation (SBAR) tool. While effective handover depends on communicative forms that extend beyond the information transfer that is typically the focus of structured handover tools, in the context of EWS a lack of standardisation allows greater margin for individualistic practices, difficulties accessing complementary knowledge and difficulties establishing shared understandings. There is also a literature on the use of common information spaces—such as whiteboards—in facilitating SA in the healthcare team. These should be in a visible location and colour coded to correspond with the TTT score, where relevant. Electronic systems automate this information and allow it to be reviewed remotely. However, they disconnect vital signs data from the patient, and hence from other indicators of clinical status, and access to data is contingent upon the availability of computers. The literature indicates that SA can be facilitated in different ways in different contexts and it is the relationship between system elements that is important. In their study on SA in delivery suites, Mackintosh et al discuss the three main supports for SA—whiteboard, handover and coordinator role—and illustrate how these interacted in organisations with strong SA compared with those with reduced levels.
Crucially, this ‘interplay’ between the different activities was highly context dependent; ‘the same supports used differently generate different outcomes’ (p 52). See for a summary of the planning evidence.
The goal of the ‘Action’ subsystem is to initiate appropriate action in response to evidence of deterioration. The literature suggests that mobilising action across professional boundaries and hierarchies is challenging, with differences in language between doctors and nurses, and power dynamics, as contributory factors. TTTs are in part a response to the challenges of communication in mobilising action in response to deterioration. By transforming a series of discrete observations into a summative indicator of deterioration—such as a score or a trigger—TTTs ‘translate’ and package the patient’s status into a form that can be readily communicated, enabling individual-level clinical data to be synthesised, made sense of and shared. One study, however, found that TTTs were regarded as a nursing tool and were therefore not valued by clinicians; consequently, nurses encountered difficulties in summoning a response. Several studies also report on the use of SBAR in this context. Like TTTs, SBAR translates information into a form that provides structure, consistency and predictability when presenting patient information. SBAR has been shown to help establish common language and expectations, minimising differences in training, experience and hierarchy and facilitating nurse–clinician communication. While several papers advocate combining SBAR with TTTs, none specifically evaluated SBAR use. Mackintosh et al highlight that audit data suggest resistance to SBAR, with others cautioning that overextending SBAR use carries the risk of SBAR fatigue and attenuation of its effects. Structured communication tools like TTTs and SBAR do not solve all the challenges of acting in response to evidence of deterioration. Barriers to action were widely reported in the literature where these tools were in place. These include: a general disinclination to seek help; concerns about appearing inadequate in front of colleagues; and failure of staff to invest in the escalation or calling criteria. A number of papers also reported negative attitudes to rapid response team (RRT) or medical emergency team (MET) use in the efferent component of safety systems. METs and RRTs operate outside the immediate medical team and create different issues in paediatric warning systems than when the escalation response is managed by the treating team. These include a reluctance to activate because of the perceived busyness of paediatric intensive care unit or medical staff, because previous expectations about an appropriate response were not met, or a sense that the situation was under control (particularly when the physiological instability is in the area of expertise of the treating team). No literature reported on successful interventions to facilitate RRT use, but several papers propose strategies to support escalation where there was no designated response team in place in the efferent component. These include informal peer support, where inexperienced staff team up with more experienced staff; clear structures to support action; and a supportive culture that does not penalise individual decision-making, including the use of a ‘no false alarms’ policy so staff are not deterred from escalating care. Senior leadership is consistently identified as important; lack of support from superiors meant that staff were less likely to escalate and more likely to adhere to hierarchies within the current system.
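To illustrate how a structured tool such as SBAR packages clinical information into a predictable form, the sketch below composes an SBAR-style escalation message from structured fields. All field contents and wording are hypothetical, not a validated template.

    # Illustrative SBAR escalation message composed from structured fields;
    # the fields and wording are hypothetical, not a validated template.
    sbar_message <- function(situation, background, assessment, recommendation) {
      paste0("S: ", situation, "\n",
             "B: ", background, "\n",
             "A: ", assessment, "\n",
             "R: ", recommendation, "\n")
    }

    cat(sbar_message(
      situation      = "Bed 4, 6-year-old, TTT score 5 (rising from 2)",
      background     = "Day 2 post-appendicectomy; no known comorbidities",
      assessment     = "Tachycardic and tachypnoeic; possible early sepsis",
      recommendation = "Request registrar review within 15 minutes"
    ))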
There is some evidence to suggest that any escalation policy should be linked to an administrative arm that reinforces the system, measures outcomes and works to ensure its effectiveness. There is a small literature on family involvement in the Action subsystem. Several studies report on Condition-Help, a programme developed in the USA to support families to directly activate an RRT if they have concerns about their child’s condition. Families are also becoming increasingly recognised as playing a key role in the activation of RRTs in Australia. Research has evaluated the appropriateness of calls that were made by patients or relatives but has not considered why calls were not made. Involving family members in escalation demands vigilance, requiring them to take a proactive and interactive role with staff, potentially involving some degree of confrontation, particularly if challenging the appropriateness of decisions taken. Families need both cognitive and emotional resources to raise concerns that involve negotiating hierarchies and boundaries. The literature points to a degree of professional resistance to family involvement in activation, with reports of physician concern that their role would be undermined, that resources would be stretched by an increase in calls and that it might divert attention away from those in need, although these fears are not supported by the evidence. See for a summary of the evidence relating to the action component of the model.
The literature in this field is heterogeneous and stronger on the sociomaterial barriers to successful afferent component paediatric early warning systems than it is on solutions. While a number of different single interventions have been proposed and some have been evaluated, there is limited evidence to recommend their use beyond the specific clinical contexts described in the papers. This reflects the weight and quality of the evidence, the extent to which paediatric systems are conditioned by the local clinical context, and the need to attend to the relationships between system components and to interventions that work in concert, not in isolation. There is also a growing realisation in the quality improvement field that an intervention that has been successful in one context does not necessarily produce the same results elsewhere, which cautions against a ‘one size fits all’ approach. While it is not possible to make empirical recommendations for practice, a hermeneutic review methodology enabled the generation of theoretical inferences about the core components of an optimal paediatric early warning system. These model components are logical inferences derived from an overall synthesis of the evidence, informed by our theoretical framework and clinical expertise. These are presented as a propositional model conceptualised as three subsystems: detection, planning and action (see ).
This paper reports on one of three linked reviews undertaken as part of a wider UK study commissioned to develop and evaluate an evidence-based national paediatric early warning system. Drawing on TMT and NPT, we have synthesised and analysed the findings from the review to develop a propositional model to specify the core components of optimal afferent component paediatric early warning systems. While there is a growing consensus on the need to think beyond TTTs to consider the whole system, no frameworks exist to support such an approach. Clinical teams wishing to improve rescue trajectories should take a whole systems perspective focused on the constellation of factors necessary to support detection, planning and action and consider how these relationships can be managed in their local setting. TTTs have value in paediatric early warning systems but they are not the sole solution and depend on certain preconditions for their use. An emerging literature highlights the importance of planning and indicates that combinations of interventions may facilitate situation awareness. Professional judgement is also important in detecting and acting on deterioration and the evidence points to the importance of a wider organisational culture that is supportive of this. Innovative approaches, sensitive to the cognitive and emotional resources required, are needed to support family involvement in all aspects of paediatric early warning systems. System effectiveness requires attention to the sociomaterial relationships in the local context, senior support and leadership and continuous monitoring and evaluation. New technologies, such as moving from paper-based to electronic TTTs, have important implications for all three subsystems and critical consideration should be given to their wider impacts and the preconditions for their integration into practice. Limitations of the review The literature in this field is heterogeneous and better at identifying system weaknesses than at identifying effective improvement interventions. It was only by deploying social theories and a hermeneutic review methodology that it proved possible to develop a propositional model of the core components of an afferent component paediatric early warning system. This model is derived from logical inferences drawing on the overall evidence synthesis, social theories and clinical expertise, rather than strong empirical evidence of single intervention effectiveness. Consequently, there is a growing consensus on the need to take a whole systems approach to improve the detection and response to deterioration in the inpatient paediatric population.
Failure to recognise and act on signs of deterioration is an acknowledged safety concern and TTTs are a common response to this problem. There is, however, a growing recognition of the importance of wider system factors on the effectiveness of responses to deterioration. We have reviewed a wide literature and analysed this using social theories to develop a propositional model of an optimal afferent component paediatric early warning system that can be used as a framework for paediatric units to evaluate their current practices and identify areas for improvement. TTT use should be driven by the extent to which teams judge that such tools will help improve the effectiveness of their system as a whole.
|
Identification of plasma protein biomarkers for endometriosis and the development of statistical models for disease diagnosis | 72b4f771-a93f-457c-9af0-365b7d9992e8 | 11788222 | Biochemistry[mh] | Endometriosis is a chronic and progressive inflammatory disease characterized by the presence of estrogen-dependent endometrial-like tissue (or lesions) outside the uterus. Its symptoms include persistent pelvic pain and infertility. Endometriosis occurs in ∼11% of women and girls of reproductive age and has been observed in 35% of women using assisted reproductive technology procedures . Disease severity does not always correlate with symptom severity, leading to diagnostic challenges and limited treatment options. In addition to the acute and chronic symptoms associated with the condition, endometriosis has been linked to long-term negative health consequences, including a higher risk of cardiovascular disease, ovarian cancer, and autoimmune diseases . Based on the location and depth of lesions, the main types of endometriosis are superficial (peritoneal or other sites), deep endometriosis (DE), and ovarian (endometrioma). Disease stage is most commonly classified based on the revised American Society for Reproductive Medicine (rASRM) system, which considers the location, extent, and depth of lesions, as well as adhesions, all visualized at surgery . Despite its recognition for over a century, the exact cause of endometriosis remains elusive, resulting in delays in diagnosis and treatment. With the time for patient diagnosis averaging 7 years from symptom onset, there are negative impacts on physical, mental, and social well-being . Endometriosis also imposes a substantial economic burden due to productivity losses and healthcare costs . Diagnosis involves medical history, physical examination, imaging, laparoscopy, and histopathology. The dependability of current diagnostic tools varies, owing to factors such as the location and severity of lesions, as well as the experience of the healthcare provider . The gold standard for diagnosing endometriosis is via laparoscopy, but the procedure is invasive and costly, and carries risks including adverse events such as nerve damage, damage to pelvic organs or major blood vessels, and formation of post-surgical adhesions . Imaging techniques such as transvaginal ultrasound and magnetic resonance imaging can identify ovarian endometriosis and DE, nonetheless, the non-invasive identification of endometriosis, particularly in superficial cases, continues to pose a challenge . Non-invasive diagnostic biomarkers would significantly improve early detection and management of endometriosis. Several potential blood biomarkers have been proposed, however, studies to date have been limited by cohort size or lacked validation studies . This study aimed to identify and validate plasma protein biomarkers specific to endometriosis using a proteomics-based approach, which involved discovery, analytical, and clinical validation phases. The study hypothesized that people with endometriosis would have significantly different plasma concentrations of select proteins compared to the general population, or to those with similar pelvic symptoms but no endometriosis, and that such plasma biomarkers could be used for early diagnostic screening. 
Ethical approval Recruitment and collection protocols were approved by the appropriate ethical review boards (Bellberry Human Research Ethics Committee (ref: 2016-05-383); Royal Women’s Hospital Human Research Ethics Committee (Project No. 10-43, No. 11-24, and No. 16-43)), and all participants provided informed written consent. Clinical and reference samples Discovery phase The discovery phase included samples from 22 endometriosis cases, 15 symptomatic controls, and 19 general population controls obtained from the Wesley Medical Research Institute Biobank (Brisbane, Australia). Samples were pooled in each of the clinical groups and differentially expressed proteins were compared between groups. All endometriosis and symptomatic control samples had their status confirmed by laparoscopy. Analytical validation phase Pooled reference plasma (from three donors, obtained from the Australian Red Cross Lifeblood) was used to design targeted assays to measure each biomarker peptide identified from the discovery phase and test the robustness of that measurement, providing analytical validation of the reproducibility of the biomarkers identified. Clinical validation phase To clinically validate the candidate protein biomarkers identified in the discovery phase, each protein was measured in individual clinical samples from a separate cohort. Samples comprised those of endometriosis cases (n = 464 diagnosed via laparoscopy and confirmed with histopathology) and symptomatic controls without endometriosis (n = 132, confirmed with laparoscopy), obtained from the Royal Women’s Hospital (RWH) (Melbourne, Australia). In addition, general population control samples (n = 153) were obtained from healthy volunteers in the Perth metropolitan area. All RWH participants attended the Endometriosis and Pelvic Pain Clinic with pelvic, menstrual, and/or intercourse pain and underwent laparoscopy for treatment of endometriosis or suspected endometriosis with histopathology to confirm the presence or absence of endometriosis. Endometriosis severity was classified by the rASRM score, including stage I/minimal (1–5), stage II/mild (6–15), stage III/moderate (16–40), and stage IV/severe (>40). Exclusion criteria included menopause, positive pregnancy test or unknown pregnancy status, and malignancy. Comprehensive demographic and clinical information, including age, BMI, age at menarche, gravidity, live births, problems conceiving, type of pelvic pain (menstrual/pelvic/intercourse), menstrual cycle length, smoking status, exogenous hormone medication use, family history of endometriosis, and ethnicity, was available. All general population controls answered a comprehensive survey to exclude possible symptomatic endometriosis and other gynecological pathologies. Sample collection In all cases, whole blood was collected in EDTA-treated vacutainers (Becton Dickinson, USA) and plasma was prepared by centrifugation at 1500 g for 10 min at 4°C. Plasma samples were stored at −80°C until biomarker analysis. For endometriosis cases and symptomatic controls (Wesley and RWH participants), plasma was collected on the day of admission for surgery. The median time for plasma processing was within one day of collection for all cohorts, with samples stored at 4°C between collection and centrifugation. The Wesley samples were collected between 2010 and 2017, the RWH samples between 2012 and 2022, and the general population controls between 2021 and 2022.
Participant characteristics The clinical and demographic characteristics of participants in the discovery and clinical validation cohort are presented in . Associations between clinical variables and outcome (endometriosis vs symptomatic controls or general population) were tested using the chi-square test of independence for categorical clinical variables and the Point-biserial correlation test for continuous clinical variables. Proteomics workflow Discovery phase This study analyzed plasma protein biomarkers using a proteomics workflow as previously described. In brief, quantitative biomarker discovery (iTRAQ labeling) was performed in quadruplicate experiments on pooled samples across the three groups: endometriosis cases, symptomatic controls, and general population. Each experiment involved immunodepletion of the pooled plasma sample to remove the 14 most abundant proteins. The immunodepleted fraction was then diafiltrated before reduction, alkylation, and enzymatic digestion with trypsin. The resulting peptide solutions were labeled with iTRAQ reagents (Sciex, USA) before mixing 1:1:1 for the three groups of pooled plasma. Desalted samples were then fractionated on a high-pH HPLC system with the resulting 12 fractions injected onto an LCMS system with analysis by a QE-HF Orbitrap (Thermo Fisher Scientific, USA) mass spectrometer. Proteins observed to be differentially expressed (proteins required to have a P-value ≤0.05 with a relative ratio change of >10%) between endometriosis and symptomatic or general population groups were designated as candidate biomarkers if significant across the experiments. To this list, 12 putative biomarkers previously reported in the literature as having an association with endometriosis were added (see and ). Analytical validation phase For analytical validation, targeted mass spectrometry assays using multiple reaction monitoring (MRM) were defined for each candidate protein biomarker as described in . Each assay measured changes in relative peptide abundances of individual plasma samples against an 18O-labeled reference plasma to calculate peak area ratios for each of the candidate biomarkers. These ratios were normalized to the median value for each peptide. In brief, the analytical targeted assay was designed utilizing the following method. Each plasma sample was immunodepleted (removal of top 14 abundant proteins) before diafiltration, reduction, alkylation, and digestion of the plasma proteins. The reverse phase desalted sample was then injected along with a fixed amount of the internal standard 18O-labeled reference plasma digest onto a microflow (5 µl/min) HPLC system and analyzed on a Sciex 6500 Triple Quad mass spectrometer (Sciex, USA). Assays were assessed for robustness with analytical validation considered successful if the MRM signal for each peptide was individually verified to be unique and where the signal to noise (S/N) was >3. Clinical validation phase In clinical validation, a new cohort comprising individual samples (n = 464 endometriosis cases, n = 132 symptomatic controls, and n = 153 general population controls) was measured using the analytically validated targeted MRM mass spectrometry assay. Samples were randomized across plates before analysis to minimize batch effects and ensure consistency.
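As an illustration of the quantification step described above, the sketch below computes peak-area ratios of each sample peptide against the fixed 18O-labeled reference and normalizes each peptide to its median ratio across samples. The input data frame and column names are assumptions for illustration only.

    # Hypothetical input: one row per (sample, peptide) with integrated peak
    # areas for the unlabeled analyte and the fixed 18O-labeled reference.
    mrm <- data.frame(
      sample     = rep(c("S1", "S2", "S3"), each = 2),
      peptide    = rep(c("PEP_A", "PEP_B"), times = 3),
      area_light = c(1.2e6, 8.0e5, 9.5e5, 7.1e5, 1.6e6, 6.4e5),
      area_heavy = rep(1.0e6, 6)
    )

    # Peak-area ratio of the sample analyte to the 18O reference
    mrm$ratio <- mrm$area_light / mrm$area_heavy

    # Normalize each peptide to its median ratio across samples, then apply
    # the natural log transformation used before statistical analysis
    peptide_median <- ave(mrm$ratio, mrm$peptide, FUN = median)
    mrm$norm_ratio <- mrm$ratio / peptide_median
    mrm$log_ratio  <- log(mrm$norm_ratio)

Median normalization of this kind removes peptide-specific scale differences between runs so that samples can be compared on a common footing.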
Analysis of the mass spectrometry data was carried out in Skyline software (University of Washington, USA) with both unlabeled and 18O-labeled peptide peaks integrated, with peak areas exported to enable calculation of the relative peak ratios. Statistical and data analyses The peptide data presented reflect the relative concentration of a protein biomarker between samples. To maximize the likelihood of identifying biomarkers for the disease, changes in protein concentration were initially assessed at the extremes of the disease spectrum, for example, symptomatic controls versus severe endometriosis or general population controls versus endometriosis. To improve the normality of the data, a natural logarithmic transformation was applied to all measurements. Candidate biomarkers were confirmed in bivariate analysis by two-way comparisons of medians using a Mann–Whitney U-test. To evaluate the diagnostic relationship between clinical characteristics, biomarker concentration, and clinical groups, elastic-net logistic regression modeling was employed (R Statistical Software, v4.2.2). Clinical variables for inclusion in the models were restricted to age and BMI due to practical usability and accessibility. Repeated or nested cross-validation was performed (glmnet package v4.1-6; caret v6.0-93; nestedcv.glmnet package v0.7.4). During the nested cross-validation approach, variables were filtered using a Wilcoxon U-test with a significance threshold of 0.2. A series of multivariate logistic regression models containing both clinical factors and biomarker concentrations were developed to distinguish: (i) endometriosis cases from general population controls and (ii) endometriosis cases (stages II–IV) from symptomatic controls. To further evaluate the complex interactions and non-linear relationships between predictors, a random forest classifier was employed using the predictors identified during elastic-net logistic regression modeling. This third model was constructed by comparing stage IV endometriosis and symptomatic controls. The performance of Model 3 was then tested across the stages of endometriosis (stages I–IV, i.e. minimal to severe) to assess its effectiveness in diagnosing endometriosis at different disease levels. The randomForest package v4.6-14 was used with 5-fold cross-validation and hyper-parameter tuning (mtry = 2, 3, 4, ntree = 100). Only participants with complete data were included in each model. To assess the discriminative performance of each model, the area under the receiver operating characteristic curve (AUC) was computed. DeLong’s test was used to compare the AUC between biomarker models with and without clinical variables. The optimal predicted probability threshold was determined at the maximum Youden Index. Diagnostic performance metrics were computed at this optimal threshold, including sensitivity (Sn), specificity (Sp), positive predictive value (PPV), and negative predictive value (NPV). A power analysis was conducted to assess the study’s power for subgroup analysis in different stages of endometriosis. The power analysis was performed using the pwr package version 1.3-0 in R. The parameters for the power analysis included a sample size for the subgroup (stage I: n = 241, stage II: n = 65, stage III: n = 58, and stage IV: n = 89; only participants with complete data were included in this analysis), an effect size of 0.5 (Cohen’s d), and a significance level of 0.05. The target statistical power was set at 0.8.
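A minimal sketch of this modeling workflow on simulated stand-in data is shown below, using cv.glmnet for the elastic-net logistic regression and caret for the random forest tuning with the reported mtry grid and ntree. The variable names, effect sizes, and fixed alpha are assumptions; the published analysis additionally tuned the penalty within repeated or nested cross-validation with Wilcoxon-based filtering.

    library(glmnet)
    library(caret)
    library(randomForest)

    # Simulated stand-in data: outcome y (1 = endometriosis) with age, BMI
    # and ten log-transformed biomarker ratios; names/effects are hypothetical
    set.seed(1)
    n  <- 200
    df <- data.frame(age = rnorm(n, 32, 6), bmi = rnorm(n, 26, 4))
    for (j in 1:10) df[[paste0("marker", j)]] <- rnorm(n)
    df$y <- rbinom(n, 1, plogis(0.8 * df$marker1 - 0.6 * df$marker2))

    x <- as.matrix(df[, setdiff(names(df), "y")])
    y <- df$y

    # Elastic-net logistic regression with a cross-validated penalty;
    # alpha = 0.5 is a fixed mixing value chosen for illustration only
    fit_en <- cv.glmnet(x, y, family = "binomial", alpha = 0.5)
    coef(fit_en, s = "lambda.min")    # predictors retained by the penalty

    # Random forest with 5-fold CV over mtry = 2, 3, 4 and ntree = 100,
    # mirroring the hyper-parameters reported above
    yf   <- factor(y, levels = c(0, 1), labels = c("control", "case"))
    ctrl <- trainControl(method = "cv", number = 5)
    fit_rf <- train(x, yf, method = "rf", trControl = ctrl,
                    tuneGrid = data.frame(mtry = c(2, 3, 4)), ntree = 100)
    pred_prob <- predict(fit_rf, x, type = "prob")[, "case"]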
The interaction pathways of the proteins identified in the diagnostic models were examined to provide insights into the biological processes and molecular functions associated with these proteins (STRING database, v12.0; ). Only interactions above a score of 4.0 (medium) were included in the predicted network.
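A sketch of this network step using the Bioconductor STRINGdb package is shown below. Note that STRING reports combined scores on a 0 to 1000 scale, so the medium-confidence cut-off corresponds to score_threshold = 400; the gene symbols for the named proteins, and support for version "12.0" in the installed package, are assumptions.

    library(STRINGdb)   # Bioconductor package

    # Human (taxon 9606), medium-confidence interactions (combined score
    # >= 400 on STRING's 0-1000 scale, i.e. 0.4)
    string_db <- STRINGdb$new(version = "12.0", species = 9606,
                              score_threshold = 400)

    # Assumed gene symbols for five of the proteins named in the results
    markers <- data.frame(gene = c("F12", "C9", "PROS1", "AFM", "PON1"))
    mapped  <- string_db$map(markers, "gene", removeUnmappedRows = TRUE)

    # Retrieve interactions among the mapped proteins and plot the network
    string_db$get_interactions(mapped$STRING_id)
    string_db$plot_network(mapped$STRING_id)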
Participant demographics and clinical characteristics presents the demographics and clinical characteristics of the participants (n = 805) in both the discovery and clinical validation cohorts. Age was the only characteristic available for the discovery cohort and no significant difference was observed across the clinical groups. In the clinical validation cohort, BMI, gravidity, and live births were significantly different between endometriosis patients and symptomatic controls.
Additionally, age, smoking status, family history of endometriosis, pain characteristics, cycle length, cycle stage, exogenous hormone medication use, and ethnicity were significantly different between the endometriosis patients and the general population. The significant differences in the cycle stage between endometriosis patients and general population women may be largely explained by the higher proportion of endometriosis patients in the ‘unknown or on hormones’ group. A common management option for symptomatic endometriosis is hormone therapy aimed at inducing amenorrhea; hormone effects are therefore visualized on histology, which does not permit grouping into menstrual/proliferative/secretory phases. It should also be noted that for the general population controls, the menstrual phase was calculated from self-reported data, whereas this was not the case for endometriosis participants (where menstrual dating was carried out using histological assessment by a pathologist). Biomarker identification The proteomics discovery experiment identified 48 candidate plasma protein biomarkers that were differentially expressed between endometriosis cases and both symptomatic controls and general population controls ( and ). Targeted mass spectrometry assays were then built against all candidates, and well-defined assays were successfully developed for 39 of these, plus 12 putative biomarkers taken from the literature. Analytical validation was successful if analytically acceptable levels of reproducibility and signal to noise were achieved. For the clinical validation phase, 51 protein biomarkers were analyzed. During two-way comparisons using a Mann–Whitney U-test, significant (P ≤ 0.05) differences were observed for 41 of the 51 candidate proteins across one or both clinical group comparisons. Ten protein biomarkers were found to be independently associated with endometriosis after adjusting for age and BMI. These biomarkers were assessed for any correlation with the other available clinical information (e.g. menstrual cycle length), and no significant strong or moderate correlations were observed (maximum correlation coefficient of 0.26). Model development and validation Regression models were developed to discriminate between endometriosis cases and the general population (Model 1) or symptomatic controls (Model 2), as shown in . A random forest model (Model 3) was subsequently developed using the same biomarkers as Model 2 and constructed by comparing severe endometriosis and symptomatic controls, before being applied to all stages of endometriosis. No proteins provided utility in both models 1 and 2/3. For each model, the predicted probabilities for an endometriosis diagnosis were significantly higher (P < 0.0001) in the endometriosis group compared to the general population and symptomatic control groups. The receiver operating characteristic (ROC) curves compare the outcomes predicted by the models against the observed diagnosis of endometriosis, along with the performance metrics (AUC, Sn, Sp, PPV, NPV) for each model. Three of the 10 protein biomarkers demonstrated excellent utility in distinguishing between the two clinical groups in Model 1 (AUC = 0.993, 95% CI 0.988–0.998) compared to age and BMI alone (P < 0.001). In Model 2, age and BMI were significant independent associates of endometriosis (stages II–IV) (AUC = 0.649, 95% CI 0.589–0.709).
After adjusting for age and BMI, the remaining seven biomarkers provided significant incremental value to Model 2 (AUC = 0.729, 95% CI 0.676–0.783, P < 0.01). The same seven biomarkers demonstrated significant diagnostic accuracy in Model 3, with an AUC of 0.997 (95% CI 0.994–1.000) for discriminating stage IV endometriosis from symptomatic controls. Critically for clinical usage, Model 3 also showed strong diagnostic performance when applied to all stages of endometriosis (AUC for stage I: 0.852 (95% CI 0.811–0.893); stage II: 0.903 (95% CI 0.853–0.953); stage III: 0.908 (95% CI 0.852–0.965); stage IV: 0.997 (95% CI 0.994–1.000), respectively). Power analysis indicates that the study is well-powered for subgroup analysis in the stage I, II, and IV endometriosis groups, with power levels of 100%, 80.8%, and 91.3%, respectively; however, the power for the stage III endometriosis subgroup was below the desired threshold at 76.1%. Functional enrichment in the network of protein biomarkers for endometriosis A network analysis of the 10 protein biomarkers associated with endometriosis revealed that most can be broadly categorized into three groups: coagulation cascade, complement system, and protein–lipid complex. Specific associations include: Coagulation factor XII, Complement component C9, and Vitamin K-dependent protein S with the complement and coagulation cascades (P < 0.01); and Afamin and Serum paraoxonase/arylesterase 1 with protein–lipid complex (P < 0.001).
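As a sketch of how the reported performance and power figures can be derived, the code below uses pROC to compute the AUC, compare nested models with DeLong's test, and read off Sn, Sp, PPV, and NPV at the Youden-optimal threshold, and uses pwr to reproduce the subgroup power values. The predicted probabilities are simulated stand-ins, and the equal-group-size assumption in the power calculation is our reading of the methods (it reproduces the reported values to within rounding).

    library(pROC)
    library(pwr)

    # Simulated stand-ins for observed status and the predicted
    # probabilities of two nested models (clinical-only vs full)
    set.seed(1)
    status <- rbinom(300, 1, 0.5)
    p_clin <- plogis(0.5 * status + rnorm(300))
    p_full <- plogis(2.0 * status + rnorm(300))

    roc_clin <- roc(status, p_clin)
    roc_full <- roc(status, p_full)
    auc(roc_full)

    # DeLong's test for the difference in AUC between the two models
    roc.test(roc_clin, roc_full, method = "delong")

    # Operating characteristics at the maximum Youden index
    coords(roc_full, x = "best", best.method = "youden",
           ret = c("threshold", "sensitivity", "specificity", "ppv", "npv"))

    # Power of a two-sample t-test at d = 0.5 and alpha = 0.05, taking n as
    # the per-stage sample size and assuming equal-sized comparison groups
    n_stage <- c(I = 241, II = 65, III = 58, IV = 89)
    sapply(n_stage, function(n)
      pwr.t.test(n = n, d = 0.5, sig.level = 0.05, type = "two.sample")$power)
    # approx. 1.00, 0.81, 0.76, 0.91 vs the reported 100%, 80.8%, 76.1%, 91.3%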
This study sought to develop a diagnostic blood test for endometriosis. A proteomics discovery workflow was used to identify and validate a novel panel of plasma protein biomarkers for the disease.
This study sought to develop a diagnostic blood test for endometriosis. A proteomics discovery workflow was used to identify and validate a novel panel of plasma protein biomarkers for the disease. Utilization of a large, clinically well-defined, independent cohort (n = 749) led to the development of three multivariate models, which demonstrated good to excellent performance for distinguishing endometriosis from both the general population and symptomatic controls. The models contained a panel of 10 protein biomarkers, which added significant value to clinical factors. This research contributes to the development of non-invasive diagnostic tools for endometriosis, which will have significant implications in reducing diagnostic delay and providing screening tools for surveillance of disease recurrence.

The primary objective of developing a diagnostic test for endometriosis was to distinguish symptomatic controls from endometriosis patients. To facilitate this, general population controls were included to allow investigation of the extremes of disease, thereby identifying potential protein biomarkers for the disease. A model distinguishing healthy women from those with endometriosis also has both biological and clinical relevance. The biology enables an understanding of disease pathophysiology, while clinically there is relevance in the context of fertility, where a 3-fold increased incidence of endometriosis in women undergoing fertility treatments is observed. Infertility is a common consequence of endometriosis, and a test to rule in or rule out endometriosis could help guide clinical decisions for assisted reproductive treatments. Model 1 (endometriosis vs the general population) demonstrates the biological association of these protein biomarkers with the disease state. Model 2 (stage II–IV endometriosis vs symptomatic controls) extends the utility of the test into a real-world scenario: differentiating the presence of endometriosis (lesions) from symptomatic pelvic pain in the absence of lesions. The inferior performance of Model 2 may reflect common symptom attributions between groups and the marginal differences between patients with stage II endometriosis and symptomatic patients in whom no endometriosis is observed. Individuals with stage I endometriosis were specifically excluded from Model 2 to mitigate this, and further work is required to examine it. Nonetheless, Model 3, which applied alternative statistical modeling to allow for the complex interactions and non-linear relationships between predictors in Model 2 (and was built by comparing the extremes of disease, namely stage IV endometriosis vs symptomatic controls), demonstrates strong performance for discriminating disease across all stages of endometriosis, suggesting a clear association of the biomarkers with disease state.

Laparoscopy is the gold standard for diagnosing endometriosis, but it is invasive and costly, carries risks, and is not readily accessible to all patients. Of known plasma biomarkers, CA-125 is sometimes used as a single biomarker for endometriosis. However, CA-125 has limited Sn and Sp, and elevated CA-125 levels can occur in multiple conditions such as ovarian cancer, pelvic inflammatory disease, and menstruation. A recent multicenter study showed that CA-125 differentiated endometriosis from symptomatic controls with Sn 61% at a pre-defined Sp of 60%, with better performance for stage III/IV endometriosis compared to stage I/II (AUC 0.795 vs 0.583, respectively). A 2016 systematic review and meta-analysis reported a pooled Sn of 52% and Sp of 93% for CA-125, with significantly higher Sn for stage III/IV compared to stage I/II endometriosis (Sn 63% vs 25%, respectively).
CA-125 can be effective for diagnosing stage IV endometriosis cases, such as those with dense pelvic adhesions or ovarian endometriomas, but is less reliable for other forms of endometriosis, and its use may lead to potential false positives due to the presence of other conditions. Consequently, it is not widely recommended as a diagnostic or screening tool by major guidelines such as those of the ESHRE.

The diagnostic models distinguishing endometriosis patients from symptomatic controls are particularly relevant for clinical practice. In comparison to known biomarkers, the multivariate biomarker models developed in the present study to distinguish endometriosis from symptomatic controls have sensitivities of 73% and 98% and specificities of 67% and 96% for Models 2 and 3, respectively. Importantly for improving patient outcomes through earlier and more accurate diagnosis, the results indicate that Model 3 has potential utility across earlier stages of the disease, with AUCs of ≥ 0.85 for stage I–III endometriosis. These results compare favorably to the performance of known biomarkers in terms of AUC. In the present study, model cut-offs used to assess performance metrics such as Sn and Sp were set, by default, at the maximum Youden Index, but further optimization should be considered before use in a clinical setting. By providing a non-invasive diagnostic method to differentiate endometriosis from other causes of pelvic pain, such tools can help clinicians make more informed decisions about which patients should undergo invasive procedures like laparoscopy, and facilitate more targeted and effective treatment plans, enhancing overall patient care.
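As a concrete illustration of the cut-off selection mentioned above, the short sketch below picks the threshold that maximises Youden's J (J = Sn + Sp − 1, equivalently TPR − FPR) along an ROC curve. The labels and scores are synthetic placeholders, not study data.

import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 1, 0, 1, 1, 0, 1, 1, 0])    # observed diagnosis (placeholder)
y_score = np.array([0.12, 0.30, 0.35, 0.41, 0.58,
                    0.66, 0.22, 0.80, 0.91, 0.49])   # model probabilities (placeholder)

fpr, tpr, thresholds = roc_curve(y_true, y_score)
j = tpr - fpr                       # Youden's J at each candidate threshold
best = int(np.argmax(j))
print(f"cut-off={thresholds[best]:.2f}, Sn={tpr[best]:.2f}, Sp={1 - fpr[best]:.2f}")

Choosing the maximum-J threshold weights sensitivity and specificity equally; as the text notes, a clinical deployment might instead fix one of the two at a minimum acceptable level and optimise the other.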
The discrepancies observed between bivariate and multivariate results for four of the biomarkers (Complement component C9, Inter-alpha-trypsin inhibitor light chain, Selenoprotein P, and Proteoglycan 4) can be attributed to two key factors: unmeasured confounders and the suppressor effect. Bivariate analysis assesses the association between two variables without considering other confounding factors. A suppressor variable, unlike a typical confounder, does not directly affect the outcome but interacts with the predictor, altering the strength or direction of the association. Inclusion of a suppressor variable in a multivariate model can reveal the true relationship between the predictor and the outcome.

Biologically, each of the 10 protein biomarkers identified in the diagnostic models for endometriosis plays a role relevant to disease pathophysiology, including in the coagulation and complement cascades, lipid metabolism, oxidative defense, immune regulation, and tissue homeostasis and morphogenesis. Of the 10 proteins listed, only three (Selenoprotein P, Neuropilin-1, and Serum paraoxonase/arylesterase 1) have previously been directly linked with endometriosis, as discussed below. In the complement cascade, Complement component C9 is required for target cell lysis during complement activation. In Model 2, Complement component C9 has a positive association with endometriosis. Complement dysregulation has been implicated in the pathophysiology of endometriosis. For the coagulation cascade, Coagulation factor XII is crucial for fibrin clot formation. Similarly, Vitamin K-dependent protein S has a role in regulating coagulation. In Model 1, Vitamin K-dependent protein S showed a negative association with endometriosis, whereas Coagulation factor XII had a negative association in Model 2. A previous small study failed to find a statistically significant difference in blood Vitamin K-dependent protein S levels between endometriosis patients and controls. Hemoglobin subunit beta, a component of hemoglobin, contributes indirectly to coagulation through its role in overall blood function. In Model 1, Hemoglobin subunit beta exhibited a positive association with endometriosis.

Afamin, Selenoprotein P, and Serum paraoxonase/arylesterase 1 play roles in lipid metabolism and oxidative defense. While earlier work found no significant difference in mean serum Afamin concentrations between endometriosis and control groups using ELISA, Afamin correlated negatively with endometriosis in Model 2. A positive correlation was observed for Selenoprotein P in Model 2, and the bivariate results for Selenoprotein P are consistent with previous findings from a small study (n = 8), where downregulated gene expression was reported in tissue samples from patients with endometriosis. Additionally, a positive correlation exists between Serum paraoxonase/arylesterase 1 and endometriosis in Model 1. Interestingly, another study reported reduced Serum paraoxonase/arylesterase 1 activity (not concentration) in women with endometriosis compared to controls. This apparent discrepancy may arise from assessing different aspects of Serum paraoxonase/arylesterase 1 within distinct biological contexts or from using varied measurement methods.

Neuropilin-1, Inter-alpha-trypsin inhibitor light chain, and Proteoglycan 4 each contribute significantly to immune regulation. Neuropilin-1 also plays important roles in vascular development, while Inter-alpha-trypsin inhibitor light chain contributes to extracellular matrix organization and Proteoglycan 4 acts as a boundary lubricant. The negative association observed for Neuropilin-1 in Model 2 contrasts with previous findings of elevated serum Neuropilin-1 in endometriosis patients compared with healthy controls by ELISA. Both Inter-alpha-trypsin inhibitor light chain and Proteoglycan 4 also showed negative associations with endometriosis in Model 2. Further investigations are warranted to unravel the precise mechanisms underlying the associations of these protein biomarkers with endometriosis and their potential therapeutic implications.

The results presented here that contrast with the literature highlight the challenges of biomarker analysis in small cohorts and across different laboratories. Differences between published studies and the results in this manuscript may also be due to pre-analytical and analytical factors. Varying gene expression profiles across different tissues affected by endometriosis, along with post-transcriptional regulation and translation efficiency, may contribute to the divergent results.

The strengths of this study are its robust sample size, independent cohorts, a well-defined clinical validation cohort, and high-performing models using simple components. The use of clinical variables in the models was deliberately limited to age and BMI because this information can be easily and precisely determined. In concert, the exclusion from the models of other clinical information, such as menstrual cycle stage, exogenous hormone use, or family history of endometriosis, avoids potentially imprecise variables whilst also ensuring the test can be widely used. The robust sample size enhances statistical power, leading to more reliable conclusions.
The utilization of independent cohorts strengthens the findings by validating the biomarkers across different populations, thereby minimizing bias and increasing generalizability. This study also benefits from a diagnosis method grounded in laparoscopy and histopathology, ensuring a more reliable assessment of disease absence or presence. The use of both elastic-net logistic regression and random forest algorithms in the analysis underscores the robustness and versatility of the models, providing a comprehensive evaluation of the diagnostic potential of the identified biomarkers. The random forest approach confirms the importance of the biomarkers but may be dataset specific, and further validation is warranted.

The study also has potential limitations. The participants were mostly of European ethnicity, and the study was not powered to detect differences across ethnic groups. The use of minimal clinical variables in the models may reduce diagnostic performance; however, information on more complex clinical variables may not always be available or consistently measured. It is also possible that some general population controls might have endometriosis, which could potentially skew results. Given the nature of the condition, the prevalence of asymptomatic endometriosis at the general population level is difficult to ascertain, but it has been reported to be as high as 11%. While the study was not specifically designed to stratify patients based on the stage of endometriosis, it is well-powered for subgroup analysis of stage I, II, and IV endometriosis. The experimental design used matched samples within each cohort; however, the delay between sample collection and processing and differences in sample storage could affect the biomarker concentrations observed in this study. Further analysis is required to enable generalizability of the findings to other populations or settings, including stratification of patients by type or stage of endometriosis.

This study represents an advancement toward precise non-invasive endometriosis diagnosis and personalized care, achieved through the integration of proteomics and clinical expertise. A panel of novel plasma protein biomarkers was identified that enabled the development of diagnostic models demonstrating strong discriminatory capabilities. The reported functions of these protein biomarkers offer potential insights into endometriosis pathogenesis. Further validation of these biomarkers will fortify the robustness and reliability of this diagnostic tool and enable its integration into clinical practice, benefiting individuals affected by endometriosis and paving the way for improved patient care.
Decentralising paediatric hearing services through district healthcare screening in Western Cape province, South Africa | bbe9c928-da92-42ca-8d98-ae2cbebefc08 | 8252164 | Pediatrics[mh] | Hearing loss is the second most prevalent developmental disability, affecting approximately 15.5 million children under the age of 5 years globally. Approximately 95% of children with developmental disabilities reside in low- and middle-income countries (LMICs). Sub-Saharan Africa has one of the highest prevalence rates of hearing loss, with an estimated 10.3 million children under the age of 10 years who suffer from permanent disabling hearing loss. Undetected and untreated hearing loss has a major negative impact on a child’s speech, language, cognitive, educational and socio-emotional development. Hearing healthcare services in LMICs are not prioritised by health systems overwhelmed by life-threatening diseases. Identification of hearing loss in children is often impeded in LMICs because of the absence of well-managed hearing screening programmes, the impact of poverty and malnutrition on hearing and the lack of public and professional awareness of hearing loss and its devastating effects in children. In addition, poor hearing health infrastructure and resources (personnel and equipment) and geographical barriers such as distance, lead to limited accessibility of hearing healthcare services. , Children born into a lower socioeconomic status have considerably less access to non-emergency health resources. , , Furthermore, the risk of poor follow-up rates for hearing assessments and timely intervention is higher in families who need to travel greater distances. , Compared with high-income countries, LMICs have an unequal proportion of hearing loss burden and a limited number of well-trained hearing healthcare professionals. The number of audiologists and Ear–Nose–Throat (ENT) specialists are reported to be lowest in African countries, with an average estimate of one audiologist for every 0.8 million people and one ENT specialist for every 1.2 million people in sub-Saharan Africa. Over a 10-year period, between 2005 and 2015, there has been no substantial improvement in these numbers. In LMICs such as South Africa, healthcare facilities are typically tiered into three main levels of care: primary such as point-of-entry clinics, secondary that includes district and regional hospitals and tertiary which encompasses specialised services. As a result of the limited number of primary-level hearing screening sites in these settings, children are often referred directly to a centralised tertiary-level hospital for initial hearing screening, when available. Referrals for primary care services such as hearing screening at central tertiary-level hospitals add to growing waiting lists for specialised care such as diagnostic hearing assessments and hearing aid fittings. Direct referrals to a central tertiary hospital often imply that parents and caregivers must travel further to access hearing healthcare infrastructure, which may in turn lead to poor follow-up rates, late diagnoses and late access to hearing technology. Childhood hearing loss impedes speech, language and academic development, and early auditory stimulation is crucial to minimise the adverse effects of hearing loss in children. Access to sustainable hearing healthcare services in LMICs is an important public health priority. 
Innovative service delivery models, with an emphasis on decentralisation, are required to develop sustainable services in these settings. Decentralisation is the transfer of responsibility for planning, management and financing from central to peripheral levels of government and has been a key health sector reform in a wide range of LMICs over the past decade. Despite being implemented as a strategy across many health systems, the impact of decentralisation on health equity is still unclear. However, it has been suggested that in order to minimise such inequity, government, health sectors and communities must address socio-economic and financial barriers and implement complementary mechanisms alongside decentralisation.

The growing burden of hearing loss in LMICs is disproportionate to the limited hearing healthcare services available, and current efforts to reach underserved communities are inadequate. If hearing healthcare services are not available at primary-level healthcare clinics, many communities in LMICs do not have access to these services at all, and tertiary-level services are burdened with screening that should be conducted at a lower level of care. Therefore, approaches that incorporate the delivery of community-based hearing care in order to decentralise hearing healthcare services are a priority.

This study aimed to compare a centralised tertiary model of hearing healthcare to a decentralised model through district hearing screening for children in the Western Cape province, South Africa. The effects of a decentralised model of hearing healthcare were measured in terms of attendance rates for initial hearing screening, patient travelling distance, number of referrals to a tertiary-level hospital and hearing outcomes.
Study design

A pragmatic quasi-experimental study design was implemented, with a 7-month control group receiving standard hearing service provision at a tertiary hospital (from June 2018 to December 2018), compared with a 7-month intervention group where hearing screening was offered at a district hospital (from June 2019 to December 2019).

Setting

The Cape Town metropole has a population of 4 067 774 and is situated in the Southern Peninsula of the Western Cape province, South Africa. The metropole incorporates eight health subdistricts with eight district-level hospitals, of which only three have audiology services. Victoria Hospital is a district hospital with 159 beds in the South Peninsula health district of the metropolitan region and currently has no audiology services. No audiological services are available at any of the primary healthcare clinics or maternity and obstetric units (MOUs) in this area, which results in referrals for initial hearing screening of older children based on risk factors or concerns for hearing loss. All patients aged 0–13 years who are from the district hospital catchment area and who need audiology services are referred directly to Red Cross War Memorial Children’s Hospital, which is a central tertiary-level hospital in Cape Town. The Western Cape has three tertiary academic hospitals. Red Cross War Memorial Children’s Hospital is one of two dedicated paediatric tertiary-level academic hospitals in sub-Saharan Africa and serves as a central referral hospital for paediatric patients across the entire Western Cape who require specialised healthcare services. The Department of Audiology at this tertiary facility assesses and provides hearing rehabilitation for approximately 300 children per month. Referrals are received from district hospitals, primary-level clinics and MOUs. Both the district and tertiary hospitals in this study are situated in an LMIC and serve mostly children from the public healthcare sector who do not have access to private medical insurance.

Study population and sampling strategy

Consecutive sampling was used to select participants for both the tertiary and district groups.

Tertiary group sampling

All patients who were referred to the tertiary hospital via email for initial hearing screening from the district hospital catchment area during the control period (June 2018 to December 2018), and who attended their hearing screening appointment at the tertiary hospital, were included in the tertiary group, regardless of the reason for referral. These patients were retrospectively selected from the audiology departmental electronic database at the tertiary hospital to form the tertiary group of 315 paediatric patients.

District group sampling

All consecutive referrals for initial hearing screening from facilities that fell within the district hospital catchment area were sent via email to the tertiary hospital during the intervention period (from June 2019 to December 2019). These referrals were selected for the decentralised hearing screening project at the district hospital. Only referrals who met the specified inclusion criteria for the district hearing screening project were included in the district group. The primary method of hearing screening for the district group utilised otoacoustic emissions (OAEs), which assess cochlear function; therefore, referrals for initial screening of high-risk patients who presented with risk factors for retro-cochlear pathology or auditory neuropathy spectrum disorder (e.g.
prematurity < 34 weeks gestation, low birthweight, hyperbilirubinaemia and congenital syndromes associated with hearing loss) were excluded and booked at the tertiary hospital. Patients with known middle ear pathology such as otitis media or otorrhoea were also excluded from the district group, as they were likely to fail screening because of middle ear abnormality and would have been better served at the tertiary hospital with a diagnostic hearing assessment. As a result of the limited time and space available at the district hospital, only 10–15 paediatric patients were booked per afternoon twice per month for the 7-month intervention period, which equated to a sample size of 190 referred patients. Parents of referred children were contacted telephonically by the tertiary hospital’s audiology clerk to arrange an appointment for a hearing screening at the district hospital during the intervention period (from June 2019 to December 2019). Children who attended their initial hearing screening appointment at the district hospital were included and formed the district group of 158 patients. The hearing screening at the district hospital was conducted by two audiologists from the tertiary hospital. Most of the hearing screening appointments coincided with routine follow-up paediatrician visits at the district hospital.

Data collection

An electronic patient database from the Department of Audiology at the tertiary hospital was used to retrospectively review data of the patients from the district hospital catchment area who were referred to the tertiary hospital for initial hearing screening during the control period (from June 2018 to December 2018). Data included demographic information, reason for referral, initial hearing screening results and the number of children from the district hospital catchment area who were referred directly to the tertiary hospital. Only initial OAE hearing screening results were included for the tertiary group, as diagnostic testing was carried out on the same day at the tertiary hospital if a patient referred OAE screening unilaterally or bilaterally, instead of scheduling a rescreen 2 weeks later at the tertiary hospital. Diagnostic assessment results were also included for those children who referred initial OAE screening unilaterally or bilaterally in the tertiary group. The same electronic patient database was used to review the number of children from the district hospital catchment area who were referred to the tertiary hospital for initial hearing screening during the 7-month intervention period at the district hospital (from June 2019 to December 2019). A hearing screening data sheet for the 7-month intervention period at the district hospital (from June to December 2019) was used to record patient data in terms of demographics, geographical area of residence, reason for referral, OAE screening results and need for further diagnostic testing. Patients in the district group who referred the initial screening unilaterally or bilaterally underwent tympanometry to check their middle ear status and were referred to the paediatrician at the district hospital on the same day as the initial hearing screening in order to treat any middle ear pathology. These patients were rescreened at the district hospital after 2 weeks, and if another unilateral or bilateral refer result was obtained on the rescreen, they were referred for diagnostic hearing assessment at the tertiary hospital.
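The referral logic just described can be summarised in a short, illustrative sketch. The function below is a hypothetical encoding of the pathway (discharge on bilateral pass, same-day tympanometry plus a two-week rescreen on an initial refer, tertiary referral on a repeat refer), not code used in the study.

from typing import Optional

def district_pathway(initial_pass_both: bool,
                     rescreen_pass_both: Optional[bool] = None) -> str:
    """Return the next step for a child in the district screening pathway."""
    if initial_pass_both:
        return "discharge: passed initial OAE screening bilaterally"
    if rescreen_pass_both is None:
        # Unilateral/bilateral refer: same-day tympanometry and paediatrician
        # review at the district hospital, then rescreen after 2 weeks.
        return "rescreen at district hospital in 2 weeks"
    if rescreen_pass_both:
        return "discharge: passed rescreen bilaterally"
    return "refer to tertiary hospital for diagnostic hearing assessment"

# Example: a child who refers initially and again on rescreen
print(district_pathway(False, rescreen_pass_both=False))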
Equipment

The Maico Eroscan® OAE test system was used for initial hearing screening during both the control and intervention periods. The system incorporates a screening function with a four-frequency (2000 hertz [Hz] – 5000 Hz) low-to-high distortion-product OAE testing protocol and conducts a fast, automatic test showing a pass or refer result. The signal-to-noise ratio criterion is set at 6 decibels [dB], and a pass result is obtained if three frequencies pass. The reliability and validity of OAEs for use in a screening setting are well established.
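For illustration only, the screener's pass/refer rule described above can be written out as follows. The 6 dB signal-to-noise criterion and the three-of-four-frequencies pass rule come from the text; the specific intermediate test frequencies are an assumption made for the example.

SNR_CRITERION_DB = 6.0
TEST_FREQUENCIES_HZ = (2000, 3000, 4000, 5000)  # assumed four-frequency protocol

def oae_screen_result(snr_db_by_freq: dict) -> str:
    """Pass if at least three of the four frequencies meet the 6 dB SNR criterion."""
    n_pass = sum(snr_db_by_freq.get(f, float("-inf")) >= SNR_CRITERION_DB
                 for f in TEST_FREQUENCIES_HZ)
    return "pass" if n_pass >= 3 else "refer"

print(oae_screen_result({2000: 8.2, 3000: 5.1, 4000: 9.7, 5000: 6.3}))  # -> pass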
Data analysis

Data were entered into Microsoft Excel 2016 (Microsoft Corp, Washington) and descriptive analysis was performed. Data were imported into the Statistical Package for the Social Sciences (SPSS) (version 26.0, New York, IBM Corp.) for inferential analysis. Pearson’s chi-square test was utilised for categorical data, whereas Student’s t-test was utilised for parametric numerical data. A p-value of ≤ 0.05 was considered significant.

Ethical considerations

The study was approved by the University of Pretoria Research Ethics Committee of the Faculty of Humanities (HUM024/0419), the University of Cape Town Human Research Ethics Committee (365/2019), Red Cross War Memorial Children’s Hospital Ethics Committee (RCC203) and the Western Cape Health Research sub-directorate (WC_201906_023). The tertiary hospital in this study has an Outreach Policy Agreement with all Western Cape health facilities, which was used in conjunction with a letter requesting institutional permission from the district hospital to conduct an outreach OAE-screening service there twice per month for 7 months. A letter of informed consent was issued to the caregivers of participants prior to data collection. Informed assent was obtained from children over the age of 7 years.
Demographics

The mean age of patients at the time of initial hearing screening was 48.4 months (39.0 standard deviation [s.d.]; range: 1–156) and 52.3 months (35.1 s.d.; range: 1–144) in the tertiary and district groups, respectively. The tertiary and district groups were similar in terms of age, gender and language distribution.

Attendance rates

An attendance rate of 83.2% (158/190) was found during the 7-month intervention period for patients attending the district hearing screening project, which was significantly higher than the attendance rate of 70.2% (315/449) for patients from the district hospital catchment area who were seen for initial hearing screening at the tertiary hospital during the control period (p < 0.001).

Travel distance

The mean travel distance for patients in the district group commuting from home to the district hospital was 12.6 km (7.7 s.d.; range: 1.2–36.8). This distance was significantly shorter than the travel distance of 19.1 km (9.1 s.d.; range: 5.1–37.6) which patients would have had to travel from home to the tertiary hospital (p < 0.001).

Number of initial hearing screening referrals to the tertiary hospital

A total of 1729 patients were referred from facilities across the Western Cape to the tertiary hospital during the control period (from June 2018 to December 2018), of which 449 (26.0%) were referrals for initial hearing screening from the district hospital catchment area. Throughout the intervention period (from June 2019 to December 2019), during which the district screening project was being conducted, the tertiary hospital received a total of 1601 referrals from facilities across the Western Cape province, with a significant decrease to 114 (7.1%) referrals for initial hearing screening from the district hospital catchment area (p < 0.001).

Reasons for referral

The reasons for referral for initial hearing screening are depicted in the accompanying figure. During the control period (n = 315), 115 referrals (36.5%) were received for reasons that were excluded from the intervention period analysis. When excluding these 115 referrals, the most common reasons for referral in the tertiary group were speech delay (35.0%) and behavioural or school-related concerns (28.5%) (n = 200). In the district group, speech delay (33.5%) and meningitis (33.5%) were the most common reasons for referral (n = 158).

Hearing screening outcomes for the control and intervention period

Outcomes of the initial OAE hearing screenings for the tertiary group, and diagnostic assessment results for patients who referred initial OAE screening unilaterally or bilaterally from June 2018 to December 2018, are presented in the accompanying figure. For the tertiary group, most patients (n = 248/315, 78.7%) passed the initial OAE screening bilaterally. The number of patients who required diagnostic assessment in the tertiary group was 67 (21.3%). Of the 67 patients who required diagnostic assessment, 54 (80.6%) attended their appointments. Half of these patients (n = 27/54, 50%) were diagnosed with mild conductive hearing loss. Outcomes of the initial OAE screenings from the intervention period at the district hospital, and the diagnostic assessment results for patients referred to the tertiary hospital after a unilateral or bilateral refer result on rescreening at the district hospital, are also presented. For the district group, most patients (n = 127/158, 80.4%) passed OAE screening bilaterally, whilst less than 10% referred OAE screening in both ears.
The follow-up attendance rate for rescreening at the district hospital 2 weeks after the initial screening was 80.8% (n = 21/26). The total number of patients in the district group who needed referral to the tertiary hospital for specialised diagnostic assessment was 15 (n = 15/158, 9.5%), of whom 11 (n = 11/15, 73.3%) attended the diagnostic hearing assessment appointment. Of these 11 patients, nearly half (n = 5/11, 45.5%) presented with mild conductive hearing loss.
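As a rough, illustrative check of the two headline comparisons above, the sketch below applies the test families named in the Data analysis section (chi-square for categorical data, Student's t-test for numerical data) to the reported summary figures. The t-test here is computed from summary statistics under an independent-samples assumption, which may not match the authors' exact procedure.

import numpy as np
from scipy import stats

# Attendance: 158/190 attended (district) vs 315/449 attended (tertiary)
attendance = np.array([[158, 190 - 158],
                       [315, 449 - 315]])
chi2, p, dof, expected = stats.chi2_contingency(attendance)
print(f"attendance: chi-square={chi2:.1f}, p={p:.4f}")  # consistent with p < 0.001

# Travel distance for the district group: 12.6 (7.7) km to the district
# hospital vs 19.1 (9.1) km to the tertiary hospital (n = 158 in each arm)
t, p_t = stats.ttest_ind_from_stats(mean1=12.6, std1=7.7, nobs1=158,
                                    mean2=19.1, std2=9.1, nobs2=158)
print(f"travel distance: t={t:.2f}, p={p_t:.1e}")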
This study explored the effect of decentralising hearing healthcare services from a tertiary-level hospital to a district-level hospital in the Western Cape province, South Africa. Decentralised hearing screening resulted in increased attendance rates for initial hearing screening, shorter travelling distances for patients and decreased referral rates to a tertiary-level hospital.

Attendance rates were significantly higher for initial hearing screening at the district hospital when compared with initial screening at the tertiary hospital. Non-attendance can result in underutilisation of healthcare provider time and can lead to longer appointment waiting times for patients. Furthermore, especially in the severely resource-constrained settings typical of LMICs, non-attendance delays the identification, diagnosis and timeous intervention of healthcare conditions. The Health Professions Council of South Africa Early Hearing Detection and Intervention guidelines suggest that a follow-up return rate of 70% or higher for hearing screening is considered ideal, but that the feasibility of attaining a high follow-up rate is influenced by various factors such as access to healthcare facilities and personal constraints such as poverty. The follow-up attendance rate for rescreening at the district hospital 2 weeks after the initial screening was high (80.8%). This could be attributed to the fact that the second screening was also conducted at a community level and coincided with a paediatrician visit to follow up on middle ear pathology for the majority of patients who referred OAE screening bilaterally. A high follow-up attendance rate (89.4%) for hearing screening was also found in a recent South African community-based study when the rescreening was conducted at a community level as opposed to a public healthcare institution. Patients who needed referral to the tertiary hospital for specialised diagnostic assessment had an attendance rate of 73.3%, which is in line with a previous South African community-based hearing screening study that found an attendance rate for diagnostic assessments of 75.8%.

Patient travelling distance was significantly shorter to the district hospital as opposed to the tertiary hospital. Access to services is one of the leading barriers to hearing healthcare in underserved communities. The costs involved in attending healthcare appointments, both in terms of time taken off from work and travel costs for patients with limited resources, remain a further challenge in accessing healthcare in LMICs. Therefore, primary healthcare is an important strategy employed in South Africa to provide more accessible patient-centred services closer to home. Community-delivered hearing healthcare models have been identified as an important strategy to increase the accessibility and affordability of hearing healthcare in underserved communities.

The inaccessibility of hearing healthcare services at a primary or district level, which adds severe strain on tertiary-level specialised services, may be alleviated by decentralising services. The results of this study corroborate this. The number of direct referrals for initial hearing screening from the district hospital catchment area to the tertiary hospital decreased significantly after implementation of the decentralised hearing screening project at the district hospital.
The decreased number of referrals to the tertiary hospital for initial hearing screening supports decreased waiting times and improved capacity to provide specialised diagnostic hearing assessments and intervention to patients requiring tertiary-level care. More than 80% of children who attended the initial hearing screening during the intervention period at the district hospital passed initial OAEs bilaterally. This high pass rate is a positive outcome for the premise of decentralising hearing screening services to a more appropriate level of care. The majority of patients (78.7%) in the tertiary group also passed initial OAE screening, which supports the premise that hearing outcomes are similar for initial hearing screening regardless of the level of care at which screening is conducted. Telehealth applications are available for hearing assessment of older children; however, utilising OAEs in a screening setting is advantageous in terms of the time taken to conduct testing and the minimal training that is required.

The referral rate for diagnostic hearing assessment at the tertiary hospital for the children who attended hearing screening during the intervention period at the district hospital was 9.5%. This percentage is higher than the referral rate of 5.4% reported by a South African community-based hearing and vision screening study, which utilised smartphone-based pure-tone audiometry screening. A possible reason for the higher referral rate is the method of screening: otoacoustic emissions screening is sensitive to middle ear pathology and is more likely to fail in the presence of abnormal middle ear function. Referral for diagnostic testing in the tertiary group (21.3%) was twice as high as in the district group (9.5%). The higher number of diagnostic assessments in the tertiary group was because no opportunity for rescreening after 2 weeks was provided, as all patients who referred initial screening unilaterally or bilaterally, or those for whom OAE screening results could not be elicited, underwent diagnostic assessment on the same day in order to minimise follow-up appointments at the tertiary hospital.

Providing hearing screening at a district level increased access to medical treatment for all children who presented with middle ear pathology, as evidenced by abnormal tympanometry results on the day of initial OAE screening. These children were assessed and treated by the paediatrician on the same day, instead of waiting for months to get an ENT appointment at the tertiary hospital. Thus, middle ear pathology was treated timeously and effectively at a more appropriate level of care, decreasing the added burden on long tertiary waiting lists. Early identification of middle ear pathology is a primary-level healthcare service, and it would be more appropriate to refer children even closer to home to their nearest community healthcare centres for treatment. This would in turn minimise the burden on district-level staff and address the problem of preventable hearing loss in children at grassroots level.

A limitation of this study was that tertiary-level audiologists conducted the hearing screening at the district hospital during the intervention period. Future studies should assess the training needs of community healthcare workers and nurses to conduct hearing screening at district hospital facilities. The premise of task-shifting through community-based hearing screening programmes has been proposed as a way to improve access to hearing healthcare.
Community healthcare workers and nurses can be trained to screen for hearing loss using mobile health technology via home-based visits to reach vulnerable communities in LMICs, thereby improving access to hearing healthcare services and reducing the demands on the limited number of hearing healthcare professionals in South Africa. In addition, no sample size calculation was conducted, and group size was pragmatically determined by the number of patients seen over the specified time periods.
Decentralised hearing screening programmes conducted at the appropriate level of care can increase access to hearing healthcare, reduce patient travelling distances and associated costs and reduce the burden on tertiary-level hospitals. Accessible hearing screening yields higher attendance rates, leading to more effective and timeous treatment of the adverse effects of childhood hearing loss.
|
An overview of the challenges and key initiatives in hepatology practice in the UK in 2022: a cautionary tale, but reasons for optimism – British Association for the Study of the Liver (BASL) Annual Meeting 2022 Conference Report | 030c5191-e42b-4f76-a466-e8788858a6b6 | 11046520 | Internal Medicine[mh] | Morbidity and mortality from chronic liver disease have significantly increased in the UK over the past 50 years, in stark contrast to other common conditions, such as heart disease. There is a need to develop liver services to meet this increasing demand while also developing early detection strategies aligned with public health policies to prevent the development of significant liver disease. This report highlights the major themes in work presented at the 2022 British Association for the Study of the Liver (BASL) Annual Meeting and summarises the challenges facing the specialty. We discuss innovative work relating to sustainable hepatology, telemedicine, hepatology training and the growing role of allied health professionals (AHPs) in the care of hepatology patients as the specialty continues to develop. These challenges and innovations are ubiquitously encountered in other primary and secondary care specialties across the UK. Access to specialist hepatology services A major theme across the conference was regional variability in liver services across the UK. The Trainee Collaborative for Research and Audit in Hepatology UK (ToRcH-UK) presented a subgroup analysis from their recently completed UK audit of decompensated cirrhosis admissions, comparing patients presenting with hepatic encephalopathy (HE) with those presenting with ascites and variceal haemorrhage. Patients presenting with HE to non-specialist centres were less likely to survive their admission compared with other patients with decompensated cirrhosis. This was not the case at specialist centres. There were also significant differences in care provision for patients with HE at non-specialist centres compared with specialist centres, where they were more likely to be looked after by a gastroenterologist/hepatologist on a dedicated specialist ward. Variation between different geographical areas and between specialist and non-specialist centres was also demonstrated in the likelihood of patients with end-stage liver disease being referred for transplant assessment. Additionally, the UK-PBC audit highlighted variability between specialist and non-specialist centres in prescribing second-line drugs for patients with inadequate response to first-line medication. It was noted that there was no significant variation in the prescribing practices in England and Wales compared with Scotland. These data highlight the need to standardise care delivered across the UK while improving access to specialist services to deliver better outcomes for patients with liver disease. The Royal College of Physicians Improving Quality in Liver Services (IQILS) accreditation process represents a possible strategy to reduce variability between hospitals and regions by establishing requisite standards for hepatology services and supporting hospitals to achieve these. It is encouraging that many hospitals have already signed up to IQILS and have become accredited. NHS Trusts will need to commit time and funding to service development to meet these standards and improve outcomes for their patients.
The association between deprivation and liver disease Liver disease is strongly associated with deprivation, which might explain some of the regional variations in hospital admissions relating to liver disease. In Leeds, a strong association was demonstrated between areas of deprivation, high alcohol use, obesity and severe liver disease. Interestingly, however, healthcare utilisation by those in the most deprived cohorts was lower than that of those from less deprived areas. This suggests that current strategies to prevent liver disease are targeting the wrong cohorts of patients. Developing services and increasing the workforce within areas of high deprivation could improve healthcare delivery and maximise the impact of early detection pathways. Sustainability Liver disease is associated with significant economic ramifications and is set to become the largest cause of working-years lost in Europe. Similarly, this has enormous implications for the environmental cost of hospital admissions, and there is an urgent need for UK healthcare to become more environmentally sustainable. Within our own specialty, it is clear that an inpatient admission of a patient with decompensated liver disease is associated with significant carbon emissions. In particular, admissions involving specific clinical deteriorations that require emergency interventions, such as variceal bleeding or intensive care admission, have substantial ecological and resource implications. Not only does this highlight the importance of preventing liver disease, but it also emphasises the need to utilise strategies to reduce both the associated carbon footprint and the financial cost to the NHS. Telemedicine is increasingly being utilised to reduce patient travel time and is associated with increased patient satisfaction. Risk-stratifying patients using non-invasive modalities to avoid carbon-generating procedures, such as endoscopy, combined with regular medication review, should reduce carbon emissions. Another novel approach is the use of implanted long-term ascitic drains (LTAD) rather than ambulatory paracentesis services. The REDUCe study demonstrated feasibility with preliminary evidence of LTAD effectiveness, safety, acceptability and reduced health resource utilisation. This is now the basis of a National Institute for Health and Care Research (NIHR) trial aiming to establish definitive evidence to support this development. These progressive approaches will improve not only the carbon footprint of the NHS, but possibly also patient care and satisfaction. Shape of training The changes to gastroenterology higher specialist training are a welcome step in hepatology training. Although the changes will see a shortened training scheme, significant efforts have gone into curriculum design to ensure that trainees are well equipped to work across all aspects of hepatology, including transplant hepatology, at the time of Certificate of Completion of Training (CCT). A survey of UK trainees found that over half of gastroenterology trainees intended to specialise in hepatology. However, it also demonstrated that trainees were more likely to prefer a consultant job in a specialist centre rather than in a non-specialist centre. How do we ensure that all patients with liver disease have the same access to services? It is impractical to expect patients to travel large distances to see a specialist from a convenience, cost and environmental perspective. The potential solutions to this problem were discussed in the ‘Variations in UK Liver disease’ panel session.
Among the solutions discussed was the potential for job plans split across specialist and non-specialist centres. However, given that nearly 50% of advertised UK consultant gastroenterologist/hepatologist posts were unfilled in 2020, split-site job plans, particularly over significant geographical areas, might not be attractive to trainees applying for consultant posts. Developing formal regional networks of specialist centres and non-specialist centres has the potential to improve access to specialist services. Accessible multidisciplinary meetings across the network will increase dialogue between centres and likely improve patient care. Combining this approach with ‘levelling up’ of non-specialist services through IQILS and innovative job plans will hopefully attract future consultants to work in areas of need. Additionally, it was noted that future efforts to develop a sustainable hepatology workforce should focus on the current body of consultants within district general hospitals (DGHs) who deliver the majority of care to patients with decompensated liver disease in the UK. These consultants have ongoing educational needs, which should be recognised, in addition to access to collective research opportunities and collaborative engagement with referral centres as required. Embracing technology to improve patient care Multiple presentations highlighted the potential role for telemedicine in the care of hepatology patients. Examples included: • electronic mental health screening questionnaires, which allow prompt communication and resolution of concerns within outpatient clinics for patients with primary sclerosing cholangitis • the emerging role for app technology in early detection of complications associated with cirrhosis, with one group demonstrating promising results for the early identification of overt hepatic encephalopathy • the use of accelerometers and virtual follow-up calls to monitor exercise engagement within the ExaLT trial • the use of big data to improve/inform our detection programmes. For example, a group in Somerset demonstrated that changes in alanine aminotransferase and platelet count over time could predict advanced chronic liver disease with high sensitivity and specificity. Embracing such technology should not only improve patient care, but also reduce costs to the NHS and the associated carbon footprint. Allied health professionals BASL 2022 had the largest attendance of AHPs on record. The main programme featured many elements of their involvement in liver disease care, demonstrating the integral role that they have within the multidisciplinary team. The prevalence of physical frailty is high among those with liver disease and is associated with poor outcomes. Regular assessment of physical frailty utilising quick, easy-to-use measures, such as the Liver Frailty Index and Duke Activity Status Index, was highlighted as a way of identifying those requiring access to AHP care. Although preventative medicine has been central to NHS initiatives for a long time, it is evident that this needs to be moved to the forefront of liver care. Lifestyle modification, including engaging patients in physical activity, has the potential to reduce the development and progression of liver disease as well as liver-related mortality.
Investment in AHPs to engage and support patients with liver disease in these lifestyle modifications, as well as to treat those at risk of becoming physically frail, will not only likely improve patient care, but also reduce the NHS carbon footprint. Research There was significant focus on the inequity in research and research delivery in hepatology across the UK. Research in hepatology is not reflective of the diseases most likely to cause cirrhosis, with relatively few studies investigating alcohol-related liver disease compared with autoimmune diseases. Additionally, most research is conducted in a small number of specialist centres. A paradigm shift is required to deliver clinically relevant studies that improve outcomes for all of our patients. Developing research networks that focus on inclusivity, such as the Trainee Collaborative for Hepatology Research and Audit UK, is vital to achieve this goal. The recent NIHR funding call focused on proposals that aimed to develop relationships with less research-active institutions. We are hopeful that this will start to address this inequity.
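To make the Somerset big-data example from the telemedicine section above more concrete, the following is a purely hypothetical Python sketch of a longitudinal lab-trend flag. It is not the Somerset group's algorithm: the real work combined alanine aminotransferase and platelet trajectories, whereas this simplification flags only a falling platelet count, and the trend model, threshold and function names are all assumptions made for illustration.

```python
# Hypothetical sketch of a longitudinal lab-trend flag; NOT the Somerset
# group's method. Fits a least-squares slope to serial results and flags
# a falling platelet count (a recognised correlate of advancing fibrosis).

from statistics import mean

def slope(times: list[float], values: list[float]) -> float:
    """Least-squares slope of values over times (units per year)."""
    t_bar, v_bar = mean(times), mean(values)
    num = sum((t - t_bar) * (v - v_bar) for t, v in zip(times, values))
    den = sum((t - t_bar) ** 2 for t in times)
    return num / den

def flag_for_review(years: list[float], platelets: list[float],
                    falling_threshold: float = -10.0) -> bool:
    """Flag if platelets fall faster than an assumed threshold of
    10 x 10^9/L per year, across at least three measurements."""
    if len(years) < 3:
        return False
    return slope(years, platelets) < falling_threshold

if __name__ == "__main__":
    # Hypothetical patient: platelets drifting down over four years.
    years = [0.0, 1.0, 2.5, 4.0]
    platelets = [230.0, 210.0, 185.0, 160.0]  # x 10^9/L
    print(flag_for_review(years, platelets))  # True
```

Any real-world implementation would need validated thresholds and, as the conference work suggests, combined markers over time rather than a single analyte.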
Despite the undoubted challenges facing the hepatology community, there is a clear focus on reducing healthcare inequities and improving outcomes for our patients. Emphasis has been placed on finding solutions to these inequities and providing quality, cost-effective, multidisciplinary and environmentally friendly care for our patients. Similarly, it is recognised that the far-reaching effects of liver disease will necessitate closer working with colleagues across all specialisms to deliver comprehensive, holistic, patient-centred liver care to those who are often marginalised and less likely to encounter healthcare services in a timely manner. PNB has received educational honoraria from Takeda. ODT has received educational honoraria from Gilead Sciences. He is also a former BASL trainee representative. |
"A qualitative study exploring strategies to improve the inter-professional management of diabetes a(...TRUNCATED) | 5c7d14df-699f-4d54-9639-3a542ae6d8ba | 7059110 | Health Communication[mh] | "Introduction Diabetes has been recognised as a risk factor for periodontitis (advanced gum disease)(...TRUNCATED) |
"Bone marrow embolism: should it result from traumatic bone lesions? A histopathological human autop(...TRUNCATED) | a1a0e28c-2a4a-499e-a225-946c0d9000a1 | 11297083 | Pathology[mh] | "Non-thrombotic pulmonary embolism is a less common cause of morbidity and mortality when compared (...TRUNCATED) |
The Pathology according to p53 Pathway | 8e06cd03-d0de-4b15-96d3-e2ebee480752 | 11313058 | Anatomy[mh] | "What is the basis for pathological diagnosis? In the case of tumors, the site of origin and histolo(...TRUNCATED) |
A Lightweight Drive Implant for Chronic Tetrode Recordings in Juvenile Mice | 6c346b54-0cff-4125-9c51-028ad67cd0e3 | 10903788 | Physiology[mh] | "The brain undergoes large-scale changes during the critical developmental windows of childhood and (...TRUNCATED) |
An Evaluation of Dental Caries Status in Children with Oral Clefts: A Cross-Sectional Study | 5013850a-04e1-4c2b-9540-ff7d9cb602cc | 11854929 | Dentistry[mh] | "Oral cleft (OC) is a common congenital craniofacial anomaly with a global prevalence of 0.15% per l(...TRUNCATED) |