dataset | id | messages |
---|---|---|
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.10108 | [
{
"content": "Read the provided biomedical research paper and create a concise abstract. Ensure the abstract captures the main points of the paper.\n\nFull Paper:\nit is well - documented in the international literature that elevated levels of cholesterol , triglycerides , and glucose are significant risk factors , among others , for coronary heart disease ( chd ) .\nconsiderable efforts were made in the past to inform people of the critical consequences their lifestyle could have on their metabolic profile . as a result of alterations in the dietary habits in western europe , usa , and canada , in the past 30 years , there has been a decline in cholesterol levels as well as in chd mortality in these countries .\nthe ingestion of saturated and trans fatty acids and dietary cholesterol and consumption of fruit and vegetables highly influence low - density and high - density lipoprotein cholesterol levels and consequently , the occurrence of chd .\na rational hypothesis is that in rural populations of countries around the mediterranean sea , the incidence of high glucose , cholesterol , and triglyceride levels in the blood should be significantly lower , compared to urban populations , because of the diet pattern similar to the mediterranean model .\nthe primary purpose of this study was to investigate the mean levels of serum lipids and glucose , compare them within different age and sex groups , in a rural population located in the ormos korthiou municipality in the southern region of andros island , greece [ figure 1 ] .\nthis study included measurements recorded in the regional surgery unit ( a primary care setting ) of ormos , the capital city of korthi municipality in the southern region of andros .\nthe examinations were conducted during a screening program for community health promotion supported by the municipal authorities .\nindividuals that had not measured the levels of glucose and cholesterol for at least 1 year prior to the study were invited through public announcement to participate in the study .\nthe exclusion criteria were age < 30 years and > 95 years , body mass index between 18.5 and 30 ( normal weight and preobese individuals , according to who classification ) , current diagnosis of coronary disease or diabetes , current use of lipid - and / or glucose - lowering drugs and of course denial of participation .\nall individuals presented in the regional surgery unit and did not meet any of the exclusion criteria were included in the study .\nafter acquiring the complete medical history of each participant , blood samples were obtained between 8.00 a.m. and 10.00 a.m. after an overnight fast .\nthe physician obtained blood samples from finger pricks made using disposable lancets and determined the plasma concentrations of total cholesterol and glucose with accutrend gct ( roche diagnostics , basel , switzerland ) .\nthe device used for the measurements did not precisely quantify cholesterol values below 150 mg / dl and above 300 mg / dl . in these cases , the values 150 mg / dl and 300 mg / dl were used for statistical analysis .\ntwo consecutive measurements were taken , and the mean value was used for the study .\ndata summary ( 242 participants ) for the purposes of the study , participants were arbitrarily divided into the following age groups : \n group 1 , comprising individuals who were between 30 and 49 years of agegroup 2 , 50 - 59 yearsgroup 3 , 60 - 69 yearsgroup 4 , 70 - 79 years ; andgroup 5 , 80 - 99 years . 
\n \ngroup 1 , comprising individuals who were between 30 and 49 years of age group 4 , 70 - 79 years ; and group 5 , 80 - 99 years .\nthe completed data forms were analyzed using microsoft excel 2007 for windows ( microsoft corporation , redmond , wa , usa ) and spss version 16.0 for windows ( spss inc . ,\nthe differences in the mean values in terms of age groups were assessed by the analysis of variance method and bonferroni post hoc test .\nthe work that was undertaken conforms to the provisions of the declaration of helsinki ( as revised in tokyo 2004 ) .\ntwo hundred and forty - two individuals , 90 men and 152 women ( 37% and 63% respectively ) , were included in the study .\ndata on the levels of glucose and cholesterol were available for 186 and 170 individuals , respectively .\nthe mean value of serum glucose level was 125.1 42.4 mg / dl with no significant difference found between men and women ( men 122.38 42.43 and women 126.61 42.59 .\np = 0.645 ) . a statistically positive correlation was established between age and glucose values ( p < 0.01 ) .\nthe mean value of total serum cholesterol level was 225.2 44.9 mg / dl with a statistically significant difference between men and women ( men 207.03 44.57 , women 236.09 41.89 .\nin contrast , no correlation was found between cholesterol values and age ( p = 0.694 ) .\ncomparison between the different groups did not reveal statistically significant differences even if for the case of glucose , the result was only marginal ( p = 0.051 ) [ figure 2 ] .\nabout glucose levels , the results emphasized the need for intensive monitoring of glucose levels in people who are unaware of their metabolic status .\nmost of the participants stated that they had not had their metabolic blood values measured for several years .\nthe total cholesterol values were quite elevated in all the participants irrespective of age ; moreover , the values were higher in women than in men .\nthis difference in values could be attributed to the fact that women underwent blood examinations very rarely compared to men , who were more aware of the threat of chd .\nthe study had a primary focus on the relatively younger population and an observation of special interest is that the total cholesterol concentration values were not very different between the age groups .\nthe cholesterol level in group 1 ( 30 - 49 years ) was 220 mg / dl while that in group 5 ( 80 - 99 years ) was 208 mg / dl .\nyounger men and women mentioned that cholesterol value was not an issue of major concern for them , although they admitted that it was widely known that even younger adults are in danger of chd .\nthe active population of the area is 2547 people ( 2001 census ) and consequently , approximately 10% of the population participated in the study .\nthe results of the study show an increase in the prevalence of hyperglycemia and hypercholesterolemia in the island population .\nthe findings of this study corroborate results from previous studies where the importance of screening control was demonstrated and highlight the absolute necessity for the regular investigation of the metabolic profile of populations inhabiting greece .",
"role": "user"
},
{
"content": "background : this study conducted as part of a screening program for the promotion of community health in the primary care setting of ormos korthi in andros , greece . \n the objective of this study was to identify the levels of glucose and total cholesterol in individuals without major cardiovascular problems in order to identify cases of undetected dyslipidemia and high blood glucose levels , in a rural population located in the southern part of andros island , greece.methods:in total , 242 individuals ( 152 women , mean age 65.1 12.3 and range 33 - 91 years ) were examined for the determination of serum glucose and total cholesterol levels . \n participants were arbitrarily divided into 5 age groups and statistical differences between groups investigated.results:the mean serum glucose level was 125.1 42.4 mg / dl and the mean total serum cholesterol level was 225.2 44.9 mg / dl . \n statistical differences were not observed between the age groups in either the glucose or cholesterol levels . \n difference between men and women was not significant regarding glucose levels while women had significantly elevated levels of total cholesterol ( p < 0.01 ) . \n age was significantly correlated with glucose , but statistical analysis did not show significant correlation between age and total cholesterol levels.conclusions:the present study confirms that dyslipidemia and high blood glucose levels are prevalent among the rural populations of greece and therefore informative campaigns and structured screening programs are required to promote preventive health care .",
"role": "assistant"
}
]
|
science.healthver_entailment | science.healthver_entailment.913 | [
{
"content": "Evaluate the given public health statement using the provided research abstract. Your goal is to determine the relationship between the claim and the abstract content.\n\n- \"decision\": Indicate whether the abstract \"SUPPORTS\", \"CONTRADICTS\", or provides \"INSUFFICIENT\" information regarding the claim.\n- \"justification\": List sentences from the abstract that back your decision. If the decision is \"INSUFFICIENT\", leave this list empty.\n\nStatement: Consuming alcohol can destroy the new coronavirus\n\nResearch Abstract:\nAre people drinking more now that they are locked in a house with their nearest and dearest, facing job loss or having lost a job, bored and stressed? Probably.\nIs drinking a healthy way of coping?\nNo.",
"role": "user"
},
{
"content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"Is drinking a healthy way of coping?\", \"No.\"]}",
"role": "assistant"
}
]
|
science.data_reco_mcq_sc | science.data_reco_mcq_sc.9610 | [
{
"content": "Explore the following research question and associated terms. Identify the most suitable dataset from the list provided. Use the dataset descriptions to guide your choice.\n\nInquiry: I want to train a super deep residual network for image recognition.\n\nKeywords: image recognition images\n\nDataset Information:\nBSD: BSD is a dataset used frequently for image denoising and super-resolution. Of the subdatasets, BSD100 is aclassical image dataset having 100 test images proposed by Martin et al..\nUNITOPATHO: Histopathological characterization of colorectal polyps allows to tailor patients' management and follow up with the ultimate aim of avoiding or promptly detecting an invasive carcinoma. Colorectal polyps characterization relies on the histological analysis of tissue samples to determine the polyps malignancy and dysplasia grade.\nIndian Pines: Indian Pines is a Hyperspectral image segmentation dataset. The input data consists of hyperspectral bands over a single landscape in Indiana, US, (Indian Pines data set) with 145×145 pixels.\nCIFAR-10: The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck).\n\nAvailable Datasets: A: BSD B: UNITOPATHO C: Indian Pines D: CIFAR-10",
"role": "user"
},
{
"content": "CIFAR-10",
"role": "assistant"
}
]
|
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.5714 | [
{
"content": "Transform the following sections from a biomedical study into a simple summary for the general public. Focus on making the information accessible and easy to understand.\n\nResearch Heading: dTULP, the Drosophila melanogaster Homolog of Tubby, Regulates Transient Receptor Potential Channel Localization in Cilia\nContent:\nAbstract:\nMechanically gated ion channels convert sound into an electrical signal for the sense of hearing. In Drosophila melanogaster, several transient receptor potential( TRP) channels have been implicated to be involved in this process. TRPN( NompC) and TRPV( Inactive) channels are localized in the distal and proximal ciliary zones of auditory receptor neurons, respectively. This segregated ciliary localization suggests distinct roles in auditory transduction. However, the regulation of this localization is not fully understood. Here we show that the Drosophila Tubby homolog, King tubby( hereafter called dTULP) regulates ciliary localization of TRPs. dTULP-deficient flies show uncoordinated movement and complete loss of sound-evoked action potentials. Inactive and NompC are mislocalized in the cilia of auditory receptor neurons in the dTulp mutants, indicating that dTULP is required for proper cilia membrane protein localization. This is the first demonstration that dTULP regulates TRP channel localization in cilia, and suggests that dTULP is a protein that regulates ciliary neurosensory functions.\nIntroduction:\nThe auditory system allows animals to communicate and obtain information about their environment. The hearing organs transform sound into an electrical signal through a process called mechanotransduction, the conversion of a mechanical force impinging on a cell into an intracellular signal[1]. Although the recent discovery of several molecules involved in mechanotransduction allows interpretation of the biophysical properties of the mechanotransduction process for hearing[2], many additional molecular players in auditory development and function are waiting to be unveiled. Drosophila melanogaster has been suggested as a model organism to study the fundamental process of hearing[3],[4]. Hearing in the fly is necessary for the detection of courtship songs[5]–[7]. Male-generated courtship song causes females to reduce locomotion and enhances female receptivity, whereas it causes males to chase each other[8]. The ability to hear courtship songs is ascribed to Johnston's organ( JO) in the second antennal segment. Near-field sounds rotate the sound receiver; the third antennal segment and the arista and this rotation of the antennal receiver transmits mechanical forces to the JO in the second antennal segment, which is connected to the third antennal segment by a thin stalk[9]. Each JO sensilla consists of two or three chordotonal neurons and several supporting cells. The outer dendritic segments of the JO neurons are compartmentalized cilia which are directly connected to the antennal sound receiver via extracellular caps. The distortion of the junction between the second and third segment stretches the cilia and stimulates the JO neurons. Several transient receptor potential( TRP) channels have been shown to be required for Drosophila hearing transduction and amplification[4],[10]–[14]. Mutation in nompC, the Drosophila TRPN channel, resulted in substantial reduction of sound-evoked potentials[4]. Reports showing that NompC and TRP-4( the C. 
elegans ortholog of NompC) are bona fide mechanotransduction channels support the idea that NompC is the Drosophila hearing transducer[15],[16]. Two Drosophila TRPV channel, inactive( iav) and nanchung( nan), mutants showed complete loss of sound-evoked action potentials[11]. However, they have not been considered to be the hearing transduction complex per se; rather they are thought to be required to amplify the electric signal generated by the hearing transduction complex, since Iav and Nan reside in the proximal cilia which are distant from the distal cilia where NompC is localized and mechanical force is directly transmitted[17],[18]. A recent study which employed a new method to measure subthreshold signals from the JO neurons suggested the opposite possibility that the TRPV( Iav and Nan) complex is the hearing transduction complex modulated by TRPN( NompC)[14]. Although the exact roles of each TRP in Drosophila hearing are still controversial, it is clear that TRPN and TRPV have essential and distinct roles in Drosophila hearing. Several attempts have been made to identify molecular players regulating the function of the ciliated mechanoreceptor neurons. Gene expression profiling identified chordotonal organ-enriched genes from campaniform mechanoreceptors, developing embryo chordotonal neurons, and the second antennal segment[19]–[21]. Alternatively, chordotonal neuron-specific genes were identified by searching for regulatory factor X( RFX)-binding sites, because ciliogenesis of the chordotonal neurons mainly depends on the RFX transcription factor[22]. However, so far only a limited number of genes involved in TRP channel localization in the JO neuron cilia have been identified and characterized, including axonemal components and intraflagellar transports( IFTs)[17],[23]. IFTs are indispensable for the formation and maintenance of cilia as well as for the transport of proteins along the microtubules in and out of the cilia[24]–[26]. Therefore, mutation of many of the characterized genes results in not only delocalization of the TRPs but also profound structural abnormality in cilia, rendering it difficult to delineate the gene functions specific to TRP localization. Tubby is the founding member of Tubby-like proteins( TULPs)[27]. Loss-of-function of the Tubby gene exhibits adult-onset obesity, retinal degeneration, and hearing loss in mice. The Drosophila genome encodes one Tubby homolog called King tubby( hereafter designated dTULP), which shares approximately 43% amino acid identity with mouse Tubby( Figure S1A)[28]. At the embryonic stage, dTULP is expressed in various types of neuronal cells including the chordotonal neurons. Although previous expression analyses and bioinformatic approaches detected dTulp in the chordotonal organs, its presence did not attract much interest because of its distribution in various neuronal cell types[22]. In this study, we aimed to investigate the novel molecular function of dTULP in Drosophila hearing. dTULP is localized to the well-defined ciliary structure of Drosophila auditory organs. Loss of dTULP has no effect on the ciliary structure of the JO neurons, but NompC and Iav localization in cilia was severely altered. These data demonstrate a new role of dTULP as a regulator of TRP localization in the hearing organs.\nDiscussion:\nIn the current study, we demonstrate that dTULP is a cilia trafficking regulator in the Drosophila hearing system. 
Mutation of dTulp results in hearing loss due to the mislocalization of two TRP channels, Iav and NompC, which are ciliary membrane proteins. In addition, Spam, whose localization is dependent on the IFT machinery, is also mislocalized in dTulp mutants. How does dTULP regulate the ciliary distribution of TRPs in the JO neurons? Several studies have shown that mutations in IFT machinery or cilia components result in mislocalization of Iav, NompC, and Spam, along with abnormal axonemal structure[17],[23]. It is notable that, in contrast to IFT or cilia component mutants, ciliogenesis and maintenance appear normal in dTULP-missing flies. Furthermore, the altered distribution of Iav, NompC, and Spam in dTulp mutants was not due to the mislocalization of IFT proteins, since the localization of two IFTs( NompB and RempA) was normal in dTulp mutants( Figure S6). These data suggest that dTULP acts downstream of the IFTs to regulate TRP localization. Even though the mutation of dTulp affected the trafficking of both Iav and NompC, the compartmentalized ciliary localization of Iav and NompC is differentially regulated by dTULP. An individual mutation in either the putative IFT- or PIP-binding domain reduced Iav expression levels in cilia, whereas NompC localization was not altered until both domains were mutated. Even after the double mutations in both domains of dTULP, NompC is still situated inside the cilia, but in abnormal locations. These findings demonstrate that ciliary entry of NompC is not dependent on dTULP while the distal ciliary localization of NompC is dependent on dTULP. One possibility is that dTULP allows NompC to disengage from the IFT complex at the distal cilia so that NompC is enriched in the distal cilia through the mechanism that required both IFT- and PIP-binding domains. It is also possible that the distal ciliary localization of NompC is regulated by an unidentified factor( s) whose ciliary localization is dTULP-dependent as is Iav. Both the putative IFT- and PIP-binding domains play important roles in the proper Iav distribution in cilia, but they appear to have different roles. Even though the IFT- or PIP-binding mutant forms of dTULP could only partially rescue the ciliary levels of Iav to the similar extent, the mutation of the IFT-binding domain reduced the ciliary levels of dTULP while disruption of the PIP-binding domain had no effect on the ciliary levels of dTULP. These findings suggest that two domains play distinct roles in the regulation of the ciliary localization of Iav. The IFT-binding domain is the motif required for the ciliary entry for dTULP, and the PIP-binding domain is not related to dTULP ciliary entry itself, rather it affects recruitment of Iav-containing preciliary vesicles to dTULP. By these two linked steps, Iav localization to cilia would be facilitated by dTULP. In mammals, IFT-A directs the ciliary localization of TULP3 through physical interaction between TULP3 and the IFT-A core complex( WDR19, IFT122, and IFT140), and in turn, promotes trafficking of GPCR to the cilia. Indeed, the depletion of individual IFT-A core complex components affects the ciliary localization of TULP3, which results in the inhibition of GPCR trafficking to the cilia[37]. It appears that dTULP and TULP3 have the similar molecular mechanisms to regulate ciliary membrane proteins. However, unlike TULP3, dTULP ciliary access is not dependent on IFT-A. 
dTULP ciliary trafficking was not affected by the mutation of Oseg1( an ortholog of human IFT-A, IFT122) or rempA( an ortholog of human IFT-A, IFT140). Furthermore, the presence of dTULP in cilia did not determine the normal localization of Iav. For example, in the rempA mutant, even when dTULP was localized to the cilia( Figure S6B), Iav was not found in cilia[23]. Taken together, dTULP facilitates the relay of preciliary vesicles to the IFT complex at the base of cilia rather than moving together with ciliary membrane proteins into the cilia as an adaptor between IFT and cargo. dTULP may have other additional roles in cilia, which needs to be explored in the future. Based on our finding that dTULP but not Iav could be found in cilia of IFT mutants, it is also possible that recruitment of Iav-containing preciliary vesicles requires dTULP and additional unknown factors, whose function is altered in IFT mutants. Thus, Iav-containing preciliary vesicles may not be able to form stable interactions with dTULP and IFTs. After the cloning of the Tubby gene two decades ago, one promising hypothesis has been that Tubby is a transcription factor, since Tubby translocates to the nucleus upon GPCR activation and the N-terminal region of Tubby has transactivation potentials[32],[39]. However, candidate target genes for Tubby have not been identified. Tubby is thought to have additional functions including vesicular trafficking, insulin signaling, endocytosis, or phagocytosis[40]–[43]. It is still not clear how these molecular functions lead to the in vivo phenotypes observed in the tubby mouse. Meanwhile, several studies have hinted at possible connections between the phenotypes of tubby mutant mice and ciliary dysfunction. Tubby mice phenotypes comprise syndromic manifestations that are commonly observed in ciliopathies such as Bardet-Biedle syndrome[44] and Usher syndrome[45],[46]. Recently, GPCR trafficking into neuronal cilia was reported to be misregulated in tubby mice[47]. Mutation of Tulp1, a member of the TULPs, in human and mice, exhibits retinal degeneration due to the mislocalization of rhodopsin[48]. TULP3 represses Hedgehog signalling, which is a crucial signalling cascade in cilia, via the regulation of the ciliary localization of GPCRs[49]. Our current study provides additional supports for the idea that TULPs play an important role in ciliary signalling and that the tubby mouse syndrome might be due to the ciliary defects. In contrast to mammalian cells, only specialized cell types have the ciliary structure in Drosophila, and the expression of dTULP is not restricted to organs with the ciliary structure, which suggested that dTULP may have other roles not related to the ciliary function[28]. For example, dTULP mediates rhodopsin endocytosis in Drosophila photoreceptor cells which do not have cilium in contrast to its mammalian counterpart[50]. In summary, we demonstrate an intriguing role of dTULP in governing the ciliary localization of TRP proteins. This is the first in vivo evidence showing that dTULP may have important roles in the maintenance of ciliary functions by regulating the localization of ciliary proteins, thereby maintaining sensory functions.",
"role": "user"
},
{
"content": "Tubby is a member of the Tubby-like protein ( TULP ) family . Tubby mutations in mice ( tubby mice ) cause late-onset obesity and neurosensory deficits such as retinal degeneration and hearing loss . However , the exact molecular mechanism of Tubby has not been determined . Here we show that Drosophila Tubby homolog , King tubby ( dTULP ) , regulates ciliary localization of transient receptor potential protein ( TRP ) . dTULP-deficient flies showed uncoordinated movement and complete loss of sound-evoked action potentials . dTULP was localized in the cilia of chordotonal neurons of Johnston's organ . Two TRP channels essential for auditory transduction , Inactive and NompC , were mislocalized in the cilia of chordotonal neurons in the dTulp mutants , indicating that dTULP is required for proper cilia membrane protein localization . This is the first demonstration that dTULP regulates TRP channel localization in cilia , and thus provides novel insights into the pathogenic mechanism of tubby mice .",
"role": "assistant"
}
]
|
science.genia_ner | science.genia_ner.739 | [
{
"content": "Analyze the provided text excerpt from a scientific document. Identify and extract distinct entities based on predefined categories.\n\nDeliver the results as a JSON structure where each key represents an entity category, and the associated value is a list of identified entities. If no entities are found for a category, return an empty list.\n\nText Excerpt:\n\nGlucocorticoid receptors in anorexia nervosa and Cushing 's disease. BACKGROUND: Patients with anorexia nervosa do not display cushingoid features in spite of elevated cortisol plasma levels. Whether a cortisol resistance or a reduced availability of the metabolic substrates necessary to develop the effect of glucocorticoids is responsible for this has not been established. METHODS: Twenty-two patients with severe restrictive anorexia nervosa, 10 patients with active Cushing 's disease, and 24 healthy volunteers without psychiatric disorders or mood alterations were investigated. Glucocorticoid receptor characteristics were examined on mononuclear leukocytes by measuring [3H]dexamethasone binding and the effect of dexamethasone on [3H]thymidine incorporation, which represents an index of DNA synthesis. RESULTS: The number of glucocorticoid receptors on mononuclear leukocytes (MNL) was comparable in patients with anorexia nervosa, patients with active Cushing 's disease, and normal subjects (binding capacity 3. 3 + /-0. 23 vs. 3. 7 + /-0. 30 and 3. 5 + /-0. 20 fmol / 10 (6) cells). Conversely, glucocorticoid receptor affinity was significantly decreased in anorexia nervosa as well as in Cushing 's patients compared to control subjects (dissociation constant 4. 0 + /-0. 31 and 4. 1 + /-0. 34 vs. 2. 9 + /-0. 29 nmol / L, p <. 001) and inversely correlated with the levels of urinary free cortisol in both groups of patients. Basal [3H]thymidine incorporation in MNL was significantly reduced in anorexia nervosa as well as in Cushing 's patients compared to control subjects (p <. 001) and was diminished by dexamethasone to an extent similar to control subjects in patients with anorexia nervosa, but significantly (p <. 001) less in those with Cushing 's disease. In patients with anorexia nervosa, the incorporation of [3H]thymidine into the MNL was inversely correlated with urinary free cortisol levels. CONCLUSIONS: These data indicate that the lack of cushingoid features in patients with anorexia nervosa is not ascribable to a reduced sensitivity to glucocorticoids but is more likely due to the paucity of metabolic substrates.",
"role": "user"
},
{
"content": "{\"CellLine\": [], \"CellType\": [\"mononuclear leukocytes\", \"mononuclear leukocytes (MNL)\", \"the MNL\", \"MNL\"], \"DNA\": [], \"Protein\": [\"Glucocorticoid receptors\", \"glucocorticoid receptor\", \"glucocorticoid receptors\"], \"RNA\": []}",
"role": "assistant"
}
]
|
science.scientific_papers_summarization_single_doc_arxiv | science.scientific_papers_summarization_single_doc_arxiv.31968 | [
{
"content": "Write an abstract for the research paper below. Your response should only include the abstract and no additional text.\n\nArticle text:\nalthough the fractional quantum hall effect@xcite ( fqhe ) has not yet been observed in graphene , several papers have studied this possibility theoretically both in the su(2 ) limit@xcite ( when the zeeman splitting is not small but the valleys are degenerate ) and in the su(4 ) limit@xcite ( when both spins and valleys are degenerate ) .\nthese studies show that , as for gaas , the fqhe states are well described by the composite fermion ( cf ) theory@xcite and occur at the filling factors @xmath2 where @xmath3 is the partial filling of electrons ( corresponding to total filling of @xmath4 ) or holes@xcite ( at @xmath5 ) in the graphene landau level with index @xmath6 , @xmath7 is the cf vorticity , and @xmath8 is the number of filled @xmath9 levels ( also known as cf landau levels ) .\nthere are differences , however .\nwhile fqhe in gaas is much stronger in the lowest landau level ( ll ) , the fqhe in graphene is expected to be as strong in the @xmath10 landau level as in the @xmath0 ll@xcite .\nmore interestingly , many new incompressible cf states become possible because of the su(4 ) symmetry@xcite .\nthis work addresses the nature of the state at @xmath11 .\nif the model of weakly interacting composite fermions remains valid in the limit of @xmath12 , then we expect a fermi sea of composite fermions . in gaas\n, the fully spin polarized fermi sea of composite fermions has been extensively studied@xcite and confirmed@xcite at @xmath13 , and good evidence exists for a spin - singlet cf fermi sea ( cffs ) in the limit of vanishing zeeman energy@xcite . at @xmath14 in the second ( @xmath1 ) landau level , it is currently believed , although not confirmed , that the residual interactions between composite fermions produce a p - wave paired state of composite fermions , described by a pfaffian wave function@xcite . in still higher landau levels an anisotropic stripe phase is believed to occur .\ncf fermi sea is an obvious candidate at half fillings in graphene , although it will have a richer structure associated with it . in the su(4 ) symmetric limit , the mean field model of composite fermions predicts an su(4 ) singlet cf fermi sea , which has no analog in gaas .\nthe p - wave paired state of composite fermions is also a promising candidate , especially at @xmath15 in the @xmath1 ll , and it is interesting to ask if the graphene coulomb matrix elements can make it more stable than the standard gaas coulomb matrix elements . for completeness , we also consider a so - called hollow - core state@xcite describing the spin - singlet pairing of composite fermions , and , as in gaas@xcite , find it not to be relevant .\nwe note that our @xmath0 landau level results below , as well as in ref . , also apply to the cf physics in valley degenerate semiconductor systems @xcite .\nthe low - energy states of graphene are described in the continuum approximation by a massless dirac hamiltonian@xcite @xmath16 that acts on a 4-spinor hilbert space . here\n@xmath17 denotes the spin and @xmath18 the pseudospin associated with the valley degree of freedom , @xmath19 m / s is the fermi velocity , @xmath20 , and @xmath21 is the on - site energy difference between the two sublattices .\nthe single particle spectrum of @xmath22 is @xmath23 where @xmath24 are the eigenvalues of @xmath25 and @xmath26 , respectively , and @xmath6 is the landau level index . 
in the limit @xmath27\neach landau level is 4-fold degenerate , giving rise to an su(4 ) internal symmetry .\nwe consider below only the su(4 ) symmetric part of the hamiltonian explicitly ; from these results , the energy of any given wave function in the presence of certain kinds of symmetry breaking terms ( for example , the zeeman coupling ) can be obtained straightforwardly , and level crossing transitions as a function of @xmath28 and @xmath21 can be obtained . the conditions for su(4 )\nsymmetry have been discussed in refs . and .\nbecause we are interested in bulk properties , we will use the spherical geometry , in which electrons move on the surface of a sphere and a radial magnetic field is produced by a magnetic monopole of strength @xmath29 at the center.@xcite here @xmath30 is the magnetic flux through the surface of the sphere ; @xmath31 , and @xmath32 is an integer according to dirac s quantization condition . the interelectron interaction is conveniently parametrized in terms of pseudopotentials@xcite @xmath33 , where @xmath33 is the energy of two electrons in relative angular momentum state @xmath8 .\nthe problem of interacting electrons in the @xmath6-th ll of graphene can be mapped into a problem of electrons in the @xmath0 ll with an effective interaction that has pseudopotentials@xcite @xmath34 where the form factor @xmath35 is @xmath36 for an evaluation of the energies of various variational wave functions by the monte carlo method , we need the real - space interaction . in the @xmath0 ll this interaction\nis simply @xmath37 , where @xmath38 is taken as the chord distance in the spherical geometry . in other landau levels\nwe use an effective real - space interaction in the lowest landau level that reproduces the higher landau level pseudopotentials in eq .\n( [ effective ] ) .\nwe determine such an effective real space interaction in the _ planar _ geometry , and use it on the sphere .\nthis procedure is exact in the thermodynamic limit , and it is usually reasonable also for finite systems . following ref .\n, in the @xmath10 ll we use the form @xmath39 the coefficients @xmath40 are given in ref .\n. we will assume parameters such that the finite thickness of the 2deg and landau level mixing have negligible effect . to build composite fermion trial wave functions\n, we will use the following consequence of fock s cyclic condition@xcite .\nthe orbital part of one member of the su(@xmath6 ) , namely the highest weight state , can be constructed as @xmath41 where @xmath42 s are slater determinants such that any state @xmath43 in @xmath42 is also filled in @xmath44 ( conversely , if @xmath43 is empty in @xmath42 , then it is also empty in @xmath45 ) ; @xmath46 is the projection into the lowest ( @xmath0 ) landau level@xcite ; and the last factor , the jastrow factor , attaches @xmath7 vortices to each fermion .\nhere @xmath47 , and @xmath48 .\nthe complete wave function is @xmath49 where @xmath50 is a basis of the ( @xmath6-dimensional ) fundamental representation of su(@xmath6 ) , @xmath51 is the number of particles in the @xmath52 state , @xmath53 , and @xmath54 is the antisymmetrizer .\nwe define the cf fermi sea as the thermodynamic limit of an integral number of filled landau levels at an effective monopole strength @xmath55 for composite fermions .\nclearly , if @xmath56 are identical , then eq . ( [ orbital ] ) yields a legitimate trial wave function .\nwe will label this state \ncffs @xmath57 $ ] . 
\" as the effective monopole strength of composite fermions @xmath58 is related to the real monopole strength @xmath29 as @xmath59 the filling factor is , assuming @xmath60 , @xmath61 the pfaffian wave function @xcite , which is one of the candidates for the fqhe state at @xmath62 in gaas samples@xcite , has the form @xmath63 on the sphere . by assumption , the pfaffian wave function uses one spin band only .\nwe also consider the hollow - core state @xcite @xmath64 where @xmath65 is an @xmath66 matrix .\nthis state is a spin singlet in the system with su(2 ) symmetry ; its symmetry becomes su(2)@xmath67su(2 ) in the su(4 ) symmetric limit . because of the last factor in eqs .\n( [ pfaff1 ] ) and ( [ hollow1 ] ) , which converts electrons into composite fermions , these wave functions describe paired states of composite fermions .\nwe have studied cf fermi sea states containing as many as 256 composite fermions ( 64 particles per landau band ) , and our principal results@xcite are given in fig .\n[ fs ] and table [ energies ] .\nthese pertain to fillings @xmath68 ( @xmath69 ) ; @xmath70 ( @xmath71 ) ; @xmath15 ( @xmath72 ) ; @xmath73 ( @xmath74 ) .\n( in relating @xmath3 to @xmath75 , we have included the possibility of forming the state from either electrons or holes in a given landau level . )\nwhen the spin or valley degeneracy is broken , the above study applies to many other half integral states also . to obtain the energy of the cf fermi sea , we consider finite systems at @xmath76 and extrapolate the energy to the thermodynamic limit .\nthe energies at @xmath15 have a complicated dependence on @xmath77 , which makes extrapolation to the thermodynamic limit difficult .\nthe following conclusions can be drawn .\n\\(i ) for all fractions shown in fig . [ fs ] , the hollow - core state has a very high energy and is therefore not relevant .\n\\(ii ) the pfaffian wave function also has higher energy than all of the cf fermi sea states for all filling factors studied . in particular\n, it has higher energy than the fully polarized cf fermi sea ( @xmath78 $ ] ) in the @xmath1 ll , in contrast to gaas where the fully polarized cf fermi sea has higher energy@xcite .\nwe therefore conclude that the pfaffian state is not stabilized in either @xmath0 or @xmath10 landau level in graphene .\ninterestingly , for the fully polarized state , the overlaps given in table [ pfaff ] indicate the pfaffian wave function is actually a better representation of the exact coulomb ground state at @xmath15 in the @xmath1 ll of graphene than it is of the 5/2 state in gaas ( for the latter , the overlaps are 0.87 and 0.84 for 8 and 10 particles , respectively@xcite ) ; nonetheless , energetic considerations rule out the pfaffian state at @xmath15 in graphene .\n\\(iii ) the overlaps given in table [ pfaff ] show that the pfaffian is significantly worse at @xmath79 , indicating that it is not stabilized in the @xmath80 ll of graphene either .\n\\(iv ) we have considered cf fermi sea wave functions of four distinct symmetries , ranging from su(4 ) singlet to fully polarized .\nall of these have lower energies than either the pfaffian or the hollow - core state . 
without any symmetry breaking term ,\nthe su(4 ) singlet cf fermi sea has the lowest energy at @xmath68 , as expected from the model of non - interacting composite fermions .\nwhen the zeeman and the pseudo - zeeman energies are turned on , we expect a partially - polarized \" cf fermi sea , and eventually a fully polarized cf fermi sea .\n\\(v ) the cf fermi sea is also favored for @xmath15 and @xmath81 , but the energy differences between the various cf fermi sea states are very small , less than the statistical error in our monte carlo evaluations .\n.[energies ] the thermodynamic limit of the energy per particle , in units of @xmath82 , for various cf fermi sea ( cffs ) states as well as the pfaffian and the hollow - core wave functions at @xmath83 and @xmath84 for @xmath85 .\nthe notation for the cffs state is explained in the text .\n[ cols=\"^,^,^,^\",options=\"header \" , ] other authors@xcite have considered a cf fermi sea state at @xmath86 , where the fourfold degenerate @xmath0 ll is half full . here , the electron ( or hole ) density in the @xmath0 landau level is @xmath87 , which , upon composite fermionization of _ all _ electrons , gives an effective field of @xmath88 for composite fermions , which should be contrasted with @xmath76 at @xmath89 .\nkhveschenko@xcite considers a state in which each of the four degenerate landau bands is half filled forming a cf fermi sea ; the flux attachment does not introduce correlations between different bands in this approach . finally , we comment on some of the approximations made in the model considered above .\nwe have neglected ll mixing in our calculation ; given that the energy difference between the cffs and the pfaffian states is fairly large ( 3 - 5% ) , we believe that ll mixing will not cause a phase transition into a pfaffian ground state , which is known to become worse with ll mixing@xcite .\nwe thank the high performance computing ( hpc ) group at penn state university aset ( academic services and emerging technologies ) for assistance and computing time on the lion - xo cluster , and the center for scientic computing at j. w. goethe - universitt for computing time on cluster iii .\nwillett _ et al .\n_ , phys .\nlett . * 71 * , 3846 ( 1993 ) ; w. kang _ et al .\n_ , phys .\nlett . * 71 * , 3850 ( 1993 ) ; v.j .\ngoldman _ et al .\n_ , phys .\nlett . * 72 * , 2065 ( 1994 ) ; j.h .\nsmet _ et al .\n_ , phys .\n* 77 * , 2272 ( 1996 ) .\ng. w. semenoff , phys .\nlett . * 53 * , 2449 ( 1984 ) ; f. d. m. haldane , phys .\nlett . * 61 * , 2015 ( 1988 ) ; d. p. divincenzo and e. j. mele , phys .\nrev . b * 29 * , 1685 ( 1984 ) ; n. h. shon and t. ando , j. phys .\n* 67 * , 2421 ( 1998 ) .\nthe physically relevant energy is obtained by extrapolation of the finite system energies to the thermodynamic limit .\na uniformly charged positive background is assumed for a meaningful extrapolation , so that , in the spherical geometry , the total energy is given by @xmath91 for all trial wave functions @xmath92 , where @xmath93 is the radius of the sphere .\nthe energy differences between various states do not depend on the details of the background subtraction .\nnote that the graphene sheet in itself is not neutral , and part of the neutralizing charge resides on the backgate in the experimental setup ; the capacitive energy of the graphene sheet - backgate system is not taken into account , but , again , it is the same for all states , so does not change their energy ordering .",
"role": "user"
},
{
"content": "the physics of the state at even denominator fractional fillings of landau levels depends on the coulomb pseudopotentials , and produces , in different gaas landau levels , a composite fermion fermi sea , a stripe phase , or , possibly , a paired composite fermion state . \n we consider here even denominator fractions in graphene , which has different pseudopotentials as well as a possible four fold degeneracy of each landau level . \n we test various composite fermion fermi sea wave functions ( fully polarized , su(2 ) singlet , su(4 ) singlet ) as well as the paired composite fermion states in the @xmath0 and @xmath1 landau levels and predict that ( i ) the paired states are not favorable , ( ii ) cf fermi seas occur in both landau levels , and ( iii ) an su(4 ) singlet composite fermion fermi sea is stabilized in the appropriate limit \n . the results from detailed microscopic calculations are generally consistent with the predictions of the mean field model of composite fermions .",
"role": "assistant"
}
]
|
science.chemdner_ner | science.chemdner_ner.2756 | [
{
"content": "Analyze the given text to identify specific entities. Focus on extracting relevant data.\n\nProvide your findings in JSON format. Each category should list the entities found.\n\nText:\n\nBiocompatible Polylactide-block-Polypeptide-block-Polylactide Nanocarrier. Polypeptides are successfully incorporated into poly(l-lactide) (PLLA) chains in a ring-opening polymerization (ROP) of l-lactide by using them as initiators. The resulting ABA triblock copolymers possess molecular weights up to 11000 g·mol(-1) and polydispersities as low as 1.13, indicating the living character of the polymerization process. In a nonaqueous emulsion, peptide-initiated polymerization of l-lactide leads to well-defined nanoparticles, consisting of PLLA-block-peptide-block-PLLA copolymer. These nanoparticles are easily loaded by dye-encapsulation and transferred into aqueous media without aggregation (average diameter of 100 nm) or significant dye leakage. Finally, internalization of PLLA-block-peptide-block-PLLA nanoparticles by HeLa cells is demonstrated by a combination of coherent anti-Stokes Raman spectroscopy (CARS) and fluorescence microscopy. This demonstrates the promise of their utilization as cargo delivery vehicles.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"l-lactide\", \"poly(l-lactide)\", \"Polylactide\", \"PLLA\"]}",
"role": "assistant"
}
]
|
science.scitldr_aic | science.scitldr_aic.196 | [
{
"content": "Review the provided sections from a computer science paper and distill the primary contribution into a single sentence. Provide only the summary sentence.\n\nSections:\n\nIt is important to detect anomalous inputs when deploying machine learning systems.\nThe use of larger and more complex inputs in deep learning magnifies the difficulty of distinguishing between anomalous and in-distribution examples.\nAt the same time, diverse image and text data are available in enormous quantities.\nWe propose leveraging these data to improve deep anomaly detection by training anomaly detectors against an auxiliary dataset of outliers, an approach we call Outlier Exposure (OE).\nThis enables anomaly detectors to generalize and detect unseen anomalies.\nIn extensive experiments on natural language processing and small- and large-scale vision tasks, we find that Outlier Exposure significantly improves detection performance.\nWe also observe that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images; we use OE to mitigate this issue.\nWe also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance.\nMachine Learning systems in deployment often encounter data that is unlike the model's training data.\nThis can occur in discovering novel astronomical phenomena, finding unknown diseases, or detecting sensor failure.\nIn these situations, models that can detect anomalies (Liu et al., 2018; Emmott et al., 2013) are capable of correctly flagging unusual examples for human intervention, or carefully proceeding with a more conservative fallback policy.Behind many machine learning systems are deep learning models (Krizhevsky et al., 2012) which can provide high performance in a variety of applications, so long as the data seen at test time is similar to the training data.\nHowever, when there is a distribution mismatch, deep neural network classifiers tend to give high confidence predictions on anomalous test examples (Nguyen et al., 2015) .\nThis can invalidate the use of prediction probabilities as calibrated confidence estimates (Guo et al., 2017) , and makes detecting anomalous examples doubly important.Several previous works seek to address these problems by giving deep neural network classifiers a means of assigning anomaly scores to inputs.\nThese scores can then be used for detecting outof-distribution (OOD) examples (Hendrycks & Gimpel, 2017; Lee et al., 2018; Liu et al., 2018) .\nThese approaches have been demonstrated to work surprisingly well for complex input spaces, such as images, text, and speech.\nMoreover, they do not require modeling the full data distribution, but instead can use heuristics for detecting unmodeled phenomena.\nSeveral of these methods detect unmodeled phenomena by using representations from only in-distribution data.In this paper, we investigate a complementary method where we train models to detect unmodeled data by learning cues for whether an input is unmodeled.\nWhile it is difficult to model the full data distribution, we can learn effective heuristics for detecting out-of-distribution inputs by exposing the model to OOD examples, thus learning a more conservative concept of the inliers and enabling the detection of novel forms of anomalies.\nWe propose leveraging diverse, realistic datasets for this purpose, with a method we call Outlier Exposure (OE).\nOE provides a simple and effective way to consistently improve 
existing methods for OOD detection.Through numerous experiments, we extensively evaluate the broad applicability of Outlier Exposure.\nFor multiclass neural networks, we provide thorough results on Computer Vision and Natural Language Processing tasks which show that Outlier Exposure can help anomaly detectors generalize to and perform well on unseen distributions of outliers, even on large-scale images.\nWe also demonstrate that Outlier Exposure provides gains over several existing approaches to out-of-distribution detection.\nOur results also show the flexibility of Outlier Exposure, as we can train various models with different sources of outlier distributions.\nAdditionally, we establish that Outlier Exposure can make density estimates of OOD samples significantly more useful for OOD detection.\nFinally, we demonstrate that Outlier Exposure improves the calibration of neural network classifiers in the realistic setting where a fraction of the data is OOD.\nOur code is made publicly available at https://github.com/hendrycks/outlier-exposure.\nExtensions to Multilabel Classifiers and the Reject Option.\nOutlier Exposure can work in more classification regimes than just those considered above.\nFor example, a multilabel classifier trained on CIFAR-10 obtains an 88.8% mean AUROC when using the maximum prediction probability as the OOD score.\nBy training with OE to decrease the classifier's output probabilities on OOD samples, the mean AUROC increases to 97.1%.\nThis is slightly less than the AUROC for a multiclass model tuned with OE.\nAn alternative OOD detection formulation is to give classifiers a \"reject class\" (Bartlett & Wegkamp, 2008) .\nOutlier Exposure is also flexible enough to improve performance in this setting, but we find that even with OE, classifiers with the reject option or multilabel outputs are not as competitive as OOD detectors with multiclass outputs.\nIn addition to size and realism, we found diversity of D OE out to be an important factor.\nConcretely, a CIFAR-100 classifier with CIFAR-10 as D OE out hardly improves over the baseline.\nA CIFAR-10 classifier exposed to ten CIFAR-100 outlier classes corresponds to an average AUPR of 78.5%.\nExposed to 30 such classes, the classifier's average AUPR becomes 85.1%.\nNext, 50 classes corresponds to 85.3%, and from thereon additional CIFAR-100 classes barely improve performance.\nThis suggests that dataset diversity is important, not just size.\nIn fact, experiments in this paper often used around 1% of the images in the 80 Million Tiny Images dataset since we only briefly fine-tuned the models.\nWe also found that using only 50,000 examples from this dataset led to a negligible degradation in detection performance.\nAdditionally, D OE Improves Calibration.\nWhen using classifiers for prediction, it is important that confidence estimates given for the predictions do not misrepresent empirical performance.\nA calibrated classifier gives confidence probabilities that match the empirical frequency of correctness.\nThat is, if a calibrated model predicts an event with 30% probability, then 30% of the time the event transpires.\nIn this paper, we proposed Outlier Exposure, a simple technique that enhances many current OOD detectors across various settings.\nIt uses out-of-distribution samples to teach a network heuristics to detect new, unmodeled, out-of-distribution examples.\nWe showed that this method is broadly applicable in vision and natural language settings, even for large-scale image tasks.\nOE can improve model 
calibration and several previous anomaly detection techniques.\nFurther, OE can teach density estimation models to assign more plausible densities to out-of-distribution samples.\nFinally, Outlier Exposure is computationally inexpensive, and it can be applied with low overhead to existing systems.\nIn summary, Outlier Exposure is an effective and complementary approach for enhancing out-of-distribution detection systems.",
"role": "user"
},
{
"content": "OE teaches anomaly detectors to learn heuristics for detecting unseen anomalies; experiments are in classification, density estimation, and calibration in NLP and vision settings; we do not tune on test distribution samples, unlike previous work",
"role": "assistant"
}
]
|
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.3369 | [
{
"content": "Analyze the given paragraph from COVID-19 research. Determine the relevant topics discussed in the text. Your response should be a JSON array containing the identified topics. Ensure the output is a valid JSON array with no additional commentary.\n\nText:\n\nMany methods have been used to model epidemic spreading. They include ordinary differential equation systems for globally homogeneous environments and partial differential equation systems to take into account spatial localisation and inhomogeneity. Stochastic differential equations systems have been used to model the inherent stochasticity of epidemic spreading processes. In our case study, we wanted to model the numbers of individuals in different states of the disease, and their locations in the country. Among the many existing methods we used our own variant of the well known Gillespie stochastic algorithm, along with the sub-volumes method to take into account the spatial localisation. Our algorithm allows us to easily switch from stochastic discrete simulation to continuous deterministic resolution using mean values. We applied our approaches on the study of the Covid-19 epidemic in France. The stochastic discrete version of Pandaesim showed very good correlations between the simulation results and the statistics gathered from hospitals, both on day by day and on global numbers, including the effects of the lockdown. Moreover, we have highlighted interesting differences in behaviour between the continuous and discrete methods that may arise in some particular conditions.",
"role": "user"
},
{
"content": "[\"Prevention\", \"Epidemic Forecasting\"]",
"role": "assistant"
}
]
|
science.bioasq_factoid_qa | science.bioasq_factoid_qa.314 | [
{
"content": "Given the following biomedical passages, answer the question succinctly. Ensure your response is precise and limited to the answer only.\n\nQuestion: Which histone modification is primarily linked to elongating transcription?\n\nPassages: Similarly, H3K36 trimethylation, a mark associated with transcription elongation, was specifically increased at the HD locus in the striatum and not in the cerebellum.\ny expressing Myc protein fused with the estrogen receptor (Myc-ER) in fibroblasts, we observed that Myc, binding to the regulatory elements of Suz12, Ezh2, and Eed, induces the acetylation of histones H3 and H4 and the recruitment of elongating RNA polymerase II at their promoters\nA basal level of JIL-1 binding can be defined that correlates best with the methylation of histone H3 at lysine 36, a mark that is placed co-transcriptionally\nRNA polymerase II signals for deacetylation through the methylation of histone H3 lysine 36 (H3K36), which provides the recruitment signal for the Rpd3S histone deacetylase complex (HDAC)\nRecent studies reviewed here demonstrate that histone deacetylation on the body of a transcribed gene is regulated via Set2-mediated methylation of histone H3-K36\nH3K56 acetylation: a chromatin mark associated with the elongating RNA polymerase II\nFurthermore, Rtt109 and H3K56 acetylation appear to correlate with actively transcribed genes and associate with the elongating form of polymerase II in yeast\nIn addition to coordinating the processing of the nascent transcript, elongating RNA polymerase II recruits histone methyltransferases to methylate lysines 4 and 36 of histone H3 in nucleosomes in the body of actively transcribed genes\nSet1, the yeast histone H3-lysine 4 (H3-K4) methylase, is recruited by the Pol II elongation machinery to a highly localized domain at the 5' portion of active mRNA coding regions.",
"role": "user"
},
{
"content": "H3K36 trimethylation",
"role": "assistant"
}
]
|
science.chemdner_ner | science.chemdner_ner.102 | [
{
"content": "Analyze the given text to identify specific entities. Focus on extracting relevant data.\n\nProvide your findings in JSON format. Each category should list the entities found.\n\nText:\n\nNeonatal exposure to low dose corticosterone persistently modulates hippocampal mineralocorticoid receptor expression and improves locomotor/exploratory behaviour in a mouse model of Rett syndrome. Rett syndrome (RTT) is a pervasive neurodevelopmental disorder, primarily affecting girls. RTT causes a wide variety of debilitating symptoms and no cure currently exists. Mouse models bearing mutations in the Mecp2 gene recapitulate most physiological and behavioural RTT-related abnormalities. Stimulating neonatal environments (e.g. brief maternal separations or maternal low-dose corticosterone supplementation) reduce stress and fear responses at adulthood. The present study investigated whether impacting early in development the hypothalamic-pituitary-adrenal axis, by exposing Mecp2-308 mutant pups to a low dose of corticosterone (50 µg/ml, during the 1st week of life) may contrast RTT-related abnormalities in neuroendocrine regulation and behavioural adaptation at adulthood. In line with previous reports, when fully symptomatic, MeCP2-308 mice showed a reduction in the regular nocturnal hyperactivity in the home-cage and increased anxiety-like behaviours and plasma corticosterone (CORT) levels in response to restraint stress. An abnormal elevation in mRNA levels of mineralocorticoid receptors (MR) and BDNF gene was also evident in the hippocampus of fully symptomatic mutant mice. Neonatal CORT modulated MR gene expression and behavioural reactivity towards a novel object, also restoring wt-like levels of locomotor/exploratory behaviour in mutant mice. Enhanced sensitivity to the neonatal treatment (in terms of increase in GR and MR mRNA levels), was also evident in the hippocampus of MeCP2-308 mice compared to wt littermates. Present results corroborate the hypothesis that targeting the glucocorticoid system may prove valid in contrasting at least some of the RTT-related symptoms and provide evidence that pharmacological interventions during critical early time windows can persistently improve the behavioural phenotype of RTT mice. Current data also support the emerging role played by Mecp2 in mediating the epigenetic programming induced by early life events and indicate that, in the absence of functional MeCP2, programming of the central nervous system in response to early environmental stimuli is abnormally regulated. This article is part of the Special Issue entitled 'Neurodevelopmental Disorders'.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"CORT\", \"corticosterone\"]}",
"role": "assistant"
}
]
|
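Several rows in this dump (the CHEMDNER and CRAFT entries) answer with a JSON object mapping entity types to surface mentions. One cheap sanity check on such outputs is that every listed mention occurs verbatim in the source passage. The helper below is a hypothetical validation utility of mine, not part of any official tooling for these datasets.

```python
import json

def missing_mentions(text: str, output_json: str) -> dict:
    """Per entity type, return the listed mentions that never occur in the text."""
    missing = {}
    for etype, mentions in json.loads(output_json).items():
        absent = [m for m in mentions if m not in text]
        if absent:
            missing[etype] = absent
    return missing

passage = ("Neonatal exposure to low dose corticosterone ... plasma "
           "corticosterone (CORT) levels in response to restraint stress.")
print(missing_mentions(passage, '{"Chemical": ["CORT", "corticosterone"]}'))
# {} -> every mention from the record above is found verbatim
```

A span-level check (character offsets rather than substrings) would be stricter, but verbatim containment already catches hallucinated entities.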
science.craftchem_ner | science.craftchem_ner.1016 | [
{
"content": "Analyze the following sentence to identify chemical entities of interest. List these entities in a JSON array format. If no entities are found, return an empty array.\n\nText to Analyze:\nAnnexinA7ispresentinbothvesicletypeswhereitismoreenrichedinnanovesicles(Fig.2).",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
]
|
science.chemdner_ner | science.chemdner_ner.255 | [
{
"content": "Analyze the given text to identify specific entities. Focus on extracting relevant data.\n\nProvide your findings in JSON format. Each category should list the entities found.\n\nText:\n\nHepatic ATGL knockdown uncouples glucose intolerance from liver TAG accumulation. Adipose triglyceride lipase (ATGL) is the predominant triacylglycerol (TAG) hydrolase in mammals; however, the tissue-specific effects of ATGL outside of adipose tissue have not been well characterized. Hence, we tested the contribution of hepatic ATGL on mediating glucose tolerance and insulin action. Glucose or insulin tolerance tests and insulin signaling were performed in C57BL/6 mice administered control (nongene specific shRNA) or Atgl shRNA adenoviruses. Glucose and lipid metabolism assays were conducted in primary hepatocytes isolated from mice transduced with control or Atgl shRNA adenoviruses. Knocking down hepatic ATGL completely abrogated the increase in serum insulin following either 1 or 12 wk of feeding a high-fat (HF) diet despite higher hepatic TAG content. Glucose tolerance tests demonstrated that ATGL knockdown normalized glucose tolerance in HF-diet-fed mice. The observed improvements in glucose tolerance were present despite unaltered hepatic insulin signaling and increased liver TAG. Mice with suppressed hepatic ATGL had reduced hepatic glucose production in vivo, and hepatocytes isolated from Atgl shRNA-treated mice displayed a 26% decrease in glucose production and a 38% increase in glucose oxidation compared to control cells. Taken together, these data suggest that hepatic ATGL knockdown enhances glucose tolerance by increasing hepatic glucose utilization and uncouples impairments in insulin action from hepatic TAG accumulation.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"Glucose\", \"TAG\", \"triglyceride\", \"triacylglycerol\", \"glucose\"]}",
"role": "assistant"
}
]
|
science.scicite_classification | science.scicite_classification.1977 | [
{
"content": "Examine the citation provided below. Determine the primary purpose of this citation within the context of the research paper. \n\nCitation:\n\nThis negative correlation, such that more positive mean PBR occurred concurrently with a more negative mean NBR, is in agreement with previous studies (Klingner et al., 2010; Shmuel et al., 2002), suggesting that the magnitude of ipsilateral inhibition increases with the magnitude of increasing contralateral excitation.",
"role": "user"
},
{
"content": "Result",
"role": "assistant"
}
]
|
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.223 | [
{
"content": "Craft a cohesive chapter for a literature review using the abstracts provided. Focus on the chapter's theme, weaving together insights and findings from the abstracts. Highlight key points and note any research gaps.\nLiterature Review Title: Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey \nChapter Title: C. ADS-B \n1. Abstract of Cited Paper (BIB001): The National Airspace System (NAS) has experienced rapid growth in unmanned aircraft system (UAS) use and demand for airspace access. Expanding UAS military and civilian applications result in increasing demand for NAS access. It is driving the need for researches on UAS operations and air traffic management in a safe and effective manner. The ability for an unmanned aircraft to “sense and avoid” is the primary and pressing technical challenge. Sense and Avoid Systems (SAAS) have been perceived as potential solution for conflict detection and air traffic collisions avoidance, but the technical implementations for UAS have been far from satisfaction. One possible method for the SAAS to continuously localize surrounding aircraft and obtain traffic information necessary for integrating UAS may be through Automatic Dependent Surveillance-Broadcast (ADS-B). It is one of the most potential tracking technologies in Next Generation Air Transportation System (NextGen) to improve situational awareness for pilots. This paper examines the use of ADS-B system in future SAAS, possible implementations for UAS towards infrastructures, data link communication capabilities and satellite communication abilities. The technical challenges and safety concerns associated with UAS integration are presented. Finally, a practical approach of ADS-B unit for UAS based on soft defined radio in support with satellite relay and dynamic data link switch is discussed and developed.\n\nReview Chapter:",
"role": "user"
},
{
"content": "Automatic Dependent Surveillance-Broadcast (ADS-B) is a standard developed by the FAA. Current ADS-B systems will evolve to a next generation, called \"ADS-B Next.\" The current systems work on 978/1090 MHz, but the next generation will be centered initially at 1030 MHz for more robustness and better efficiency. Complete maturity and equipping the ADS-B Next systems are expected to happen in 2025 as a part of the FAA's NextGen program . Upon deploying ADS-B Next, additional bandwidth would be provided through spectrum reallocation of the 1030 MHz band. In the ADS-B sense and avoid systems, there are two communication links with different frequencies. Aircraft operating below 6 km use 978 MHz Universal Access Transceiver (UAT), and aircraft operating above that height deploy 1090 MHz Extended Squitter (1090ES) data link. Currently, ADS-B systems mostly operate in a single frequency link, because operating in two different frequencies will cause compatibility issues for communication between different aircraft. Further, trying to address this problem may not be cost-efficient due to the SWaP and budget limitation. However, in manned aircraft, dual frequency ADS-B systems are widely used. The dual frequency scheme can be implemented in a switching manner BIB002 . ADS-B systems suffer from the data loss for distances above 280 km between the aircraft and the ground base, and this data loss starts increasing almost linearly after 50 km. Hence, the aircraft might not be able to get too far from the nearest base station BIB001 . UAT is planned to be implemented on all aviation aircraft operating at or below Class A altitudes in the NAS. UAT supplies the aircraft with traffic information, called Traffic Information Service-Broadcast (TIS-B) , and weather and aeronautical information, called Flight Information Service-Broadcast (FIS-B) . This multi-purpose data link architecture reduces significantly the operating costs. In addition, it increases the flight safety by providing traffic situational awareness, conflict detection, and alerts. An ADS-B enabled aircraft can send circumstantial information to other aircraft; this advantage allows unique situational awareness . As stated before, the UAS civil aviation is still under regulation process. ADS-B Next is a part of FAA's NextGen program and will replace radars on all aircraft by 2020. ADS-B systems employ existing GPS hardware and software using real-time satellite-based Internet communications. Conclusion: This automatic satellite-based standard provides a great coverage range. The provided services such as situational awareness is a bonus to be employed in UASs. Even though ADS-B does not offer enough flexibility and adaptability like other standards, this standard has great potential to be used in the UAVs' communication system.",
"role": "assistant"
}
]
|
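The ADS-B chapter above reports that data loss grows almost linearly beyond 50 km and becomes severe past 280 km between aircraft and ground station. A free-space path loss calculation makes the link-budget gap between those two ranges concrete. The 1090 MHz carrier and both distances come from the chapter; treating the channel as pure free space (no fading, antenna gains, or interference) is a simplifying assumption on my part.

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss: 20*log10(d_km) + 20*log10(f_MHz) + 32.44 dB."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

for d_km in (50, 280):
    print(f"1090ES link at {d_km:>3} km: {fspl_db(d_km, 1090.0):.1f} dB")
# ~127.2 dB at 50 km vs ~142.1 dB at 280 km
```

The roughly 15 dB gap is exactly 20*log10(280/50), since free-space loss grows with the square of distance; real links degrade faster than this, which is consistent with the chapter's reported range limit.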
science.acl_arc_intent_classification | science.acl_arc_intent_classification.608 | [
{
"content": "Examine the given text from a research paper in the field of NLP. Your task is to identify the role of the citation within the context provided. Focus on understanding the intent behind the citation and classify it accordingly.\n\nSection Title:\nexperiments\n\nText Before Citation:\nraw length value as a feature, we follow our previous work (Rubino et al., 2013; Wagner et al., 2014) and create multiple features for length using a decision tree (J48). Length of words (L): Instead of using the 3.\n\nCitation Text:\nraw length value as a feature , we follow our previous work ( Rubino et al. , 2013 ; Wagner et al. , 2014 ) and create multiple features for length using a decision tree ( J48 ) .\n\nText After Citation:\nWe use length as the only feature to train a decision tree for each fold and use the nodes obtained from the tree to create boolean features. 4. Capitalization (C): We use 3 boolean features to encode capitalization information: whether any letter in the word is capitalized, whether all letters in the word are capitalized and whether the first letter is capitalized. We perform experiments with an SVM classifier (linear kernel) for different combination of these features.3 Parameter optimizations (C range 2-15 to 210) for SVM are performed for each feature 3According to (Hsu et al., 2010) the SVM linear kernel with parameter C optimization is good enough when dealing with a large number of features.",
"role": "user"
},
{
"content": "Extends",
"role": "assistant"
}
]
|
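The ACL-ARC record above quotes a feature recipe — three boolean capitalization cues plus a linear-kernel SVM with C swept from 2^-15 to 2^10 — that is easy to reproduce in miniature. The sketch below mirrors that quoted setup with scikit-learn on a four-word toy dataset; the words and labels are purely illustrative and are not from the cited paper's data.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

def capitalization_features(word: str) -> list:
    """The three boolean capitalization cues described in the record."""
    return [any(c.isupper() for c in word),  # any letter capitalized
            word.isupper(),                  # all letters capitalized
            word[:1].isupper()]              # first letter capitalized

X = [capitalization_features(w) for w in ["NASA", "Paris", "apple", "banana"]]
y = [1, 1, 0, 0]  # toy labels, purely illustrative

# C swept over 2^-15 .. 2^10, mirroring the range quoted in the record.
grid = GridSearchCV(LinearSVC(), {"C": [2.0 ** k for k in range(-15, 11)]}, cv=2)
grid.fit(X, y)
print(grid.best_params_)
```

With such tiny, linearly separable data almost any C works; on real folds the grid search matters because the optimal margin softness depends on feature count and noise.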
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.952 | [
{
"content": "Examine the abstracts provided from a main scientific article and its cited works. Craft a narrative that synthesizes the key insights and contributions, emphasizing their interconnections. Employ numerical citations such as [1], [2], etc., to reference the cited works.\n\nMain Article Summary:\nWe present a decentralized algorithm to achieve segregation into an arbitrary number of groups with swarms of autonomous robots. The distinguishing feature of our approach is in the minimalistic assumptions on which it is based. Specifically, we assume that (i) Each robot is equipped with a ternary sensor capable of detecting the presence of a single nearby robot, and, if that robot is present, whether or not it belongs to the same group as the sensing robot; (ii) The robots move according to a differential drive model; and (iii) The structure of the control system is purely reactive, and it maps directly the sensor readings to the wheel speeds with a simple 'if' statement. We present a thorough analysis of the parameter space that enables this behavior to emerge, along with conditions for guaranteed convergence and a study of non-ideal aspects in the robot design.\n\nCited Works Summary:\n[1]: There are several examples in natural systems that exhibit the self-organizing behavior of segregation when different types of units interact with each other. One of the best examples is a system of biological cells of heterogeneous types that has the ability to self-organize into specific formations, form different types of organs and, ultimately, develop into a living organism. Previous research in this area has indicated that such segregations in biological cells and tissues are made possible because of the differences in adhesivity between various types of cells or tissues. Inspired by this differential adhesivity model, this technical note presents a decentralized approach utilizing differential artificial potential to achieve the segregation behavior in a swarm of heterogeneous robotic agents. The method is based on the proposition that agents experience different magnitudes of potential while interacting with agents of different types. Stability analysis of the system with the proposed approach in the Lyapunov sense is carried out in this technical note. Extensive simulations and analytical investigations suggest that the proposed method would lead a population of two types of agents to a segregated configuration.",
"role": "user"
},
{
"content": "Kumar [1] introduced the concept of differential potential'', whereby two robots experience a different artificial potential depending on their being part of the same class or not. The convergence of this approach is guaranteed for two classes, but when more classes are employed local minima prevent segregation from emerging.",
"role": "assistant"
}
]
|
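The differential-potential idea summarized above — same-class robot pairs experience a different artificial potential than cross-class pairs — can be sketched with a Lennard-Jones-style pair term whose equilibrium distance depends on class membership. The functional form, exponents, and constants below are illustrative assumptions of mine, not the controller analyzed in [1].

```python
def pair_potential(dist: float, same_class: bool,
                   eps: float = 1.0, d_same: float = 1.0, d_diff: float = 2.0) -> float:
    """Lennard-Jones-style pair potential whose minimum sits at a
    class-dependent equilibrium distance (illustrative, not from [1])."""
    d_eq = d_same if same_class else d_diff
    u = d_eq / dist
    return eps * (u ** 4 - 2 * u ** 2)

# Each pair type reaches its energy minimum at its own equilibrium distance:
print(pair_potential(1.0, same_class=True))   # -1.0, same-class pairs settle close
print(pair_potential(2.0, same_class=False))  # -1.0, cross-class pairs settle farther
```

Gradient descent on the summed pairwise energy then pulls like robots into tight clusters while holding unlike robots apart — segregation — with the caveat noted above that three or more classes can strand such schemes in local minima.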
science.chemdner_ner | science.chemdner_ner.2566 | [
{
"content": "Analyze the given text to identify specific entities. Focus on extracting relevant data.\n\nProvide your findings in JSON format. Each category should list the entities found.\n\nText:\n\nSynthesis of a DOTA (Gd(3+))-conjugate of proton-pump inhibitor pantoprazole for gastric wall imaging studies. Magnetic resonance imaging (MRI) is used to evaluate gastrointestinal (GI) structure and functions in humans. Despite filling the viscus lumen with a contrast agent, visualization of the viscus wall is limited. To overcome this limitation, we de novo synthesized a conjugate that covalently combines a Gd-based MRI contrast agent, encaged with a chelating agent (DOTA), with pantoprazole, which is a widely used proton pump inhibitor that binds to proton pumps in the stomach and colon. The DOTA linkage was installed at a mechanism-based strategic location in the pantoprazole molecule to minimize a possible negative effect of the structural modification on the drug. It is anticipated that by defining the wall of the stomach and colon, this compound will facilitate functional MRI of the GI tract in humans.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"pantoprazole\", \"Gd(3+)\", \"Gd\", \"DOTA\"]}",
"role": "assistant"
}
]
|
science.mslr2022_ms2_multidoc_summarization | science.mslr2022_ms2_multidoc_summarization.7440 | [
{
"content": "Analyze the following collection of medical RCT summaries. Your goal is to craft a concise synthesis of the findings. Focus on the core insights and present them succinctly. Only include the synthesis in your response. Below are the RCT summaries:\n\n\n\nReport 1:\nTitle: Immunotherapy with a standardized Dermatophagoides pteronyssinus extract. VI. Specific immunotherapy prevents the onset of new sensitizations in children.\nSummary: BACKGROUND The natural history of allergic sensitization is complex and poorly understood . A prospect i ve nonr and omized study was carried out in a population of asthmatic children younger than 6 years of age whose only allergic sensitivity was to house dust mites ( HDMs ) . OBJECTIVES The study was design ed to determine whether specific immunotherapy ( SIT ) with st and ardized allergen extracts could prevent the development of new sensitizations over a 3-year follow-up survey . METHODS We studied 22 children monosensitized to HDM who were receiving SIT with st and ardized allergen extracts and 22 other age-matched control subjects who were monosensitized to HDM . The initial investigation included a full clinical history , skin tests with a panel of st and ardized allergens , and the measurement of allergen-specific IgE , depending on the results of skin tests . Children were followed up on an annual basis for 3 years , and the development of new sensitizations in each group was recorded . RESULTS Ten of 22 children monosensitized to HDM who were receiving SIT did not have new sensitivities compared with zero of 22 children in the control group ( p = 0.001 , chi square test ) . CONCLUSIONS This study suggests that SIT in children monosensitized to HDM alters the natural course of allergy in preventing the development of new sensitizations\n\n\nReport 2:\nTitle: Factors associated with the development of peanut allergy in childhood.\nSummary: BACKGROUND The prevalence of peanut allergy appears to have increased in recent decades . Other than a family history of peanut allergy and the presence of atopy , there are no known risk factors . METHODS We used data from the Avon Longitudinal Study of Parents and Children , a geographically defined cohort study of 13,971 preschool children , to identify those with a convincing history of peanut allergy and the subgroup that reacted to a double-blind peanut challenge . We first prospect ively collected data on the whole cohort and then collected detailed information retrospectively by interview from the parents of children with peanut reactions and of children from two groups of controls ( a r and om sample from the cohort and a group of children whose mothers had a history of eczema and who had had eczema themselves in the first six months of life ) . RESULTS Forty-nine children had a history of peanut allergy ; peanut allergy was confirmed by peanut challenge in 23 of 36 children tested . There was no evidence of prenatal sensitization from the maternal diet , and peanut-specific IgE was not detectable in the cord blood . Peanut allergy was independently associated with intake of soy milk or soy formula ( odds ratio , 2.6 ; 95 percent confidence interval , 1.3 to 5.2 ) , rash over joints and skin creases ( odds ratio , 2.6 ; 95 percent confidence interval , 1.4 to 5.0 ) , and oozing , crusted rash ( odds ratio , 5.2 ; 95 percent confidence interval , 2.7 to 10.2 ) . 
Analysis of interview data showed a significant independent relation of peanut allergy with the use of skin preparations containing peanut oil ( odds ratio , 6.8 ; 95 percent confidence interval , 1.4 to 32.9 ) . CONCLUSIONS Sensitization to peanut protein may occur in children through the application of peanut oil to inflamed skin . The association with soy protein could arise from cross-sensitization through common epitopes . Confirmation of these risk factors in future studies could lead to new strategies to prevent sensitization in infants who are at risk for subsequent peanut allergy\n\n\nReport 3:\nTitle: Serum ovalbumin‐specific immunoglobulin G responses during pregnancy reflect maternal intake of dietary egg and relate to the development of allergy in early infancy\nSummary: Background The value of allergen elimination diets during pregnancy for primary prevention of infant allergy has been questioned . However , dietary compliance may influence effectiveness\n\n\nReport 4:\nTitle: Pollen immunotherapy reduces the development of asthma in children with seasonal rhinoconjunctivitis (the PAT-study).\nSummary: BACKGROUND Children with allergic rhinitis are likely to develop asthma . OBJECTIVE The purpose of this investigation was to determine whether specific immunotherapy can prevent the development of asthma and reduce bronchial hyperresponsiveness in children with seasonal allergic rhinoconjunctivitis . METHODS From 6 pediatric allergy centers , 205 children aged 6 to 14 years ( mean age , 10.7 years ) with grass and/or birch pollen allergy but without any other clinically important allergy were randomized either to receive specific immunotherapy for 3 years or to an open control group . All subjects had moderate to severe hay fever symptoms , but at inclusion none reported asthma requiring daily treatment . Symptomatic treatment was limited to loratadine , levocabastine , sodium cromoglycate , and nasal budesonide . Asthma was evaluated clinically and by peak flow . Methacholine bronchial provocation tests were carried out during the season(s) and during the winter . RESULTS Before the start of immunotherapy , 20 % of the children had mild asthma symptoms during the pollen season(s) . Among those without asthma , the actively treated children had significantly fewer asthma symptoms after 3 years as evaluated by clinical diagnosis ( odds ratio , 2.52 ; P < .05 ) . Methacholine bronchial provocation test results improved significantly in the active group ( P < .05 ) . CONCLUSION Immunotherapy can reduce the development of asthma in children with seasonal rhinoconjunctivitis\n\n\nReport 5:\nTitle: Presentation of allergen in different food preparations affects the nature of the allergic reaction – a case series\nSummary: Background Characterization of fatal and non‐fatal reactions to food indicates that the majority of reactions are due to the ingestion of prepared foods rather than the non‐processed allergen . In an ongoing study that used a double‐blind placebo‐controlled food challenge to investigate peanut allergy and clinical symptoms , the observed reaction severity in four of the first six subjects was greater than anticipated . We hypothesized that this was due to differences in the composition of the challenge vehicle",
"role": "user"
},
{
"content": "Initially there were anecdotal reports from communities where peanut ingestion was high .\nThese population s had an extremely low incidence of peanut allergy .\nThis has been followed by a recent formal study of Jewish children in Israel where early peanut exposure is associated with an approximate 10-fold lower incidence of peanut allergy compared with their UK Jewish counterparts .\nAt present it is believed that there is a bellshaped curve underpinning the relationship between allergen dose during pregnancy and early infancy , and the development of clinical sensitisation .\nThus , at very low doses of allergen the threshold for sensitisation is not achieved and the adaptive immune system remains naive to the presence of the allergen .\nAs the allergen dose is raised , sensitisation occurs via Th2dependent mechanisms with the consequent development of clinical manifestations .\nThis is supported by murine studies showing that egg immunisation during pregnancy reduces the susceptibility of offspring to egg sensitisation .\nHowever , not all studies have supported this concept as no difference was demonstrated in the development of asthma/rhinitis or skin prick test positivity to pollen allergens , between offspring whose mothers received immunotherapy compared with sibling controls .",
"role": "assistant"
}
]
|
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.1197 | [
{
"content": "Analyze the given paragraph from COVID-19 research. Determine the relevant topics discussed in the text. Your response should be a JSON array containing the identified topics. Ensure the output is a valid JSON array with no additional commentary.\n\nText:\n\nWe report the first successful treatment with the IL-1 receptor antagonist anakinra, in association with the most promising and available antiviral therapy, of a severe case of novel coronavirus disease 2019 (COVID-19). We describe the diagnosis, clinical course, and management of the case, including the respiratory failure at presentation, the progression to a scenario characterized by profound inflammatory dysregulation similar to that observed during macrophage activation syndrome, and the clinical improvement after treatment with the IL-1 receptor antagonist anakinra. This case highlights the high tolerability and the interesting immunomodulatory profile of the IL-1 receptor antagonist anakinra in the setting of severe COVID-19 associated with remdesivir therapy. Further studies are needed to confirm the safety and efficacy of this combination strategy in the treatment of this emerging infection.",
"role": "user"
},
{
"content": "[\"Case Report\"]",
"role": "assistant"
}
]
|
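Both LitCovid rows in this dump ask for "a valid JSON array with no additional commentary", which invites a mechanical output check. The snippet below verifies that an answer parses as a JSON array of strings drawn from a fixed topic inventory; the seven labels listed are my reading of the BioCreative LitCovid topic set and should be treated as an assumption rather than an official definition.

```python
import json

# Assumed label inventory -- the official LitCovid topic set may differ.
ALLOWED_TOPICS = {"Treatment", "Diagnosis", "Prevention", "Mechanism",
                  "Transmission", "Epidemic Forecasting", "Case Report"}

def is_valid_topic_array(raw: str) -> bool:
    """True iff raw is a JSON array of strings from ALLOWED_TOPICS."""
    try:
        topics = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return (isinstance(topics, list)
            and all(isinstance(t, str) and t in ALLOWED_TOPICS for t in topics))

print(is_valid_topic_array('["Case Report"]'))   # True
print(is_valid_topic_array('Case Report'))       # False: not JSON at all
```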
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.3708 | [
{
"content": "Examine the abstracts provided from a main scientific article and its cited works. Craft a narrative that synthesizes the key insights and contributions, emphasizing their interconnections. Employ numerical citations such as [1], [2], etc., to reference the cited works.\n\nMain Article Summary:\nOnline video advertising gives content providers the ability to deliver compelling content, reach a growing audience, and generate additional revenue from online media. Recently, advertising strategies are designed to look for original advert(s) in a video frame, and replacing them with new adverts. These strategies, popularly known as product placement or embedded marketing, greatly help the marketing agencies to reach out to a wider audience. However, in the existing literature, such detection of candidate frames in a video sequence for the purpose of advert integration, is done manually. In this paper, we propose a deep-learning architecture called ADNet, that automatically detects the presence of advertisements in video frames. Our approach is the first of its kind that automatically detects the presence of adverts in a video frame, and achieves state-of-the-art results on a public dataset.\n\nCited Works Summary:\n[1]: Billboards are placed on the sides of a soccer field for advertisement during match telecast. Unlike regular commercials, which are introduced during a break, on-field billboards appear on the TV screen at uncertain time instances, in different sizes, and also for different durations. Automated processing of soccer telecasts for detection and analysis of such billboards can provide important information on the effectiveness of this mode of advertising. We propose a method in which shot boundaries are first identified and the type of each shot is determined. Frames within each shot are then segmented to locate possible regions of interests (ROIs) - locations in a frame where billboards are potentially present. Finally, we use a combination of local and global features for detecting individual billboards by matching with a set of given templates.\n\n[2]: In this paper we present a system for the localisation and tracking of billboards in streamed soccer matches. The application area for this research is the delivery of customised content to end users. When international soccer matches are broadcast, the diversity of the audience is very large and advertisers would like to be able to adapt the billboards to the different audiences. By replacing the billboards in the video stream this can be achieved. In order to build a more robust system, photometric invariant features are used. These colour features are less susceptible to the changes in illumination. Sensor noise is dealt with through variable kernel density estimation.\n\n[3]: Precise visibility measuring of billboard advertising is a key element for organizers and broadcasters to make cost effective their sport live relay. However, this activity currently is very manpower and time consuming as it is manually processed for the moment. In this paper we describe a technique for detection of commercial advertisement in sport TV. Based on some a priori knowledge of sport field and commercial advertisement, our technique makes use of fast Hough transform and text's geometry features in order to extract advertisement from sport TV images. Our experiments show that our technique achieves more than 90 accuracy rate.",
"role": "user"
},
{
"content": "In the area of sport analytics, a few works were done to detect the billboards in the soccer fields. attempted to localize the position of on-field billboards using template matching techniques [1] . in [3] used Hough transform for advertising detection in sport TV. used photometric invariant features for billboard track in soccer video scenes [2] . These techniques mainly used techniques from photogrammetry and traditional signal processing -- neural networks were not fully exploited. Owing to the recent development in the area of artificial intelligence, we propose a AI-driven advert detection neural network in this paper.",
"role": "assistant"
}
]
|
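The billboard-detection lineage that the ADNet summary contrasts itself with leans on classical template matching, which OpenCV exposes directly. Below is a minimal sketch of normalized cross-correlation matching; the file names and the 0.8 score threshold are hypothetical placeholders, not values from [1].

```python
import cv2
import numpy as np

# Hypothetical inputs -- any frame/template pair would do.
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("billboard_template.png", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation over the whole frame; peaks mark candidate matches.
scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
h, w = template.shape
for y, x in zip(*np.where(scores >= 0.8)):  # arbitrary illustrative threshold
    pt = (int(x), int(y))
    cv2.rectangle(frame, pt, (pt[0] + w, pt[1] + h), 255, 2)
cv2.imwrite("detections.png", frame)
```

Such matchers are brittle to scale, perspective, and lighting changes — exactly the limitations that motivated the photometric-invariant features of [2] and, ultimately, learned detectors like ADNet.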
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.1989 | [
{
"content": "Analyze the given paragraph from COVID-19 research. Determine the relevant topics discussed in the text. Your response should be a JSON array containing the identified topics. Ensure the output is a valid JSON array with no additional commentary.\n\nText:\n\nPURPOSE: The aim of this study was to clarify the surgical supply situation of oncological colorectal patients in Germany during limitations of the OR caseload due to the COVID-19 pandemic. METHODS: Between 11th and 19th April 2020, all members of a consortium of German colorectal cancer centers were invited to participate in a web-based survey on the current status of surgical care situation of colorectal cancer patients in Germany. RESULTS: A total of 112 colorectal surgeons of 101 German hospitals participated in the survey. Eighty-seven percent of the participating hospitals had to reduce their total surgical caseload and 34% their surgical volume for oncological colorectal patients during COVID-19 pandemic. Restrictions of the surgical caseload were independent of the size of the hospital and the number of cases of COVID-19 in the federal state of the hospital. Sixteen percent of colorectal surgeons consider surgical limitations to be not justified and 78% to be justified only if the care of oncological patients is ensured. Ninety-five percent of the colorectal surgeons interviewed stated that all oncological colorectal patients with an indication for surgery should be operated in time, despite the current reservations for COVID-19 patients. For the majority of the respondents (63% and 51%, respectively), an extended waiting time for surgery of up to 2 weeks was acceptable for non-metastatic and metastatic patients, respectively. CONCLUSION: In Germany, there is a temporarily relevant reduction of surgical volume in oncological colorectal patients. Most colorectal surgeons stated that oncological colorectal surgery should not be compromised despite the measures taken during the COVID-19 pandemic.",
"role": "user"
},
{
"content": "[\"Prevention\"]",
"role": "assistant"
}
]
|
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.6312 | [
{
"content": "Read the provided biomedical research paper and create a concise abstract. Ensure the abstract captures the main points of the paper.\n\nFull Paper:\ngeographically and time - limited , they aim to build learning and capacity to inform country - wide implementation . between 2007 and 2011 ,\ndemonstration projects were the main approach to delivering new vaccines against human papillomavirus ( hpv ) to prevent cervical cancer in low - income and lower - middle - income countries ( llmics ) .\ninitial pilots , with vaccines donated by merck & co. , inc . or glaxosmithkline biologicals to governments and external partners , allowed countries to gain experience vaccinating adolescent girls , who were not routinely targeted for immunisation . with the exception of rwanda and bhutan , vaccine donations for national delivery\nwere not generally available , so demonstration projects enabled resource - poor countries to gain hpv vaccination experience . from late 2012 , gavi ,\nthe vaccine alliance , began supporting hpv vaccination demonstration projects or national programmes if countries had prior experience vaccinating girls aged 913 .\nlearn by doing. gavi provided 2-year funding for vaccines and operational costs to allow time to assess delivery strategies and potential integration with other adolescent services , develop tools , and prepare applications to gavi for national program funding . by mid-2015\n, gavi had approved demonstration project funding for 25 countries , of which 6 ( 24% ) had already conducted at least one pilot , while another 3 were approved for national support . despite 55 pilots or demonstration projects being completed before 2015 , only 7 countries and 3 more in 2015 - 2016 - transitioned from demonstration to national provision .\nthis article aims to examine the value of hpv vaccination demonstration projects to date , including potential drawbacks and how value for informing national scale - up might be increased , drawing from a review of published and unpublished documents from 37 countries and key informant interviews from 23 countries conducted in 2015 .\ncountries with data in the public domain are specifically named , while others were anonymised with an identification number .\nlessons are relevant for countries intending to introduce hpv vaccination and more broadly for funders supporting introduction of new health interventions in llmics .\ndemonstration projects allowed both national and external partners to gain valuable experience in planning and budgeting for a new intervention , enumerating target populations , developing community acceptability and consent procedures for adolescent services , designing and piloting new reporting forms and systems , coordinating with the ministry of education for school - based vaccination , and using standardised evaluation tools . 
while countries had nuanced experiences and gained from the learning by doing process , lessons from later demonstration projects generally repeated those reported earlier , indicating that while individual countries might require first - hand experience , sufficient collective learning has been generated .\nfirst , the limited scale of projects did not allow assessment of potential health system integration , particularly as demonstration projects were often implemented outside routine services .\nsecond , communication and social mobilisation activities had to be carefully restricted to avoid perceived inequity among those not receiving vaccination .\nthird , demonstration project was an unclear term for many stakeholders with some communities concluding that the vaccine rather than delivery method was being piloted .\nfourth , many were conducted in areas primarily selected for convenience ( e.g. with higher routine vaccination coverage , more extensive infrastructure , and better education levels than national averages ) , potentially providing few lessons applicable to national scale - up .\nfifth , the resource - intensive delivery strategies used in demonstration projects were potentially unsustainable once vaccine donations and external support for operational costs ended .\na focus on project evaluation and demonstrating at least 50% vaccination coverage to secure gavi support for national introduction , led some countries to choose safe options ( e.g. , school - based delivery ) known for achieving good coverage but potentially too costly to be sustainably expanded nationally .\nprojects in only 3 countries simultaneously or sequentially tested different vaccination venues , only 2 simultaneously tested different eligibility criteria , and only one tested different vaccination timings . as approximately 13 projects tested hpv vaccination integration ( e.g. with tetanus toxoid vaccination , deworming , vitamin a supplementation , health education ) , learning around combination interventions was largely missed .\nvalue would increase substantially if projects were designed to learn specific lessons and inform realistic national scale - up .\nfor example , the prevailing learning by doing objective appeared disconnected from learning for national scale - up , while attaining good coverage, as a performance indicator for funding allocation , could create distortions similarly unsupportive of learning for scale - up .\ninstead , countries could test and cost different delivery strategies in different districts simultaneously , e.g. 
, school - based and health facility - based , to inform sustainable national expansion .\nalternatively , donors could fund further economic modeling of different strategies to inform both demonstration projects and national scale - up .\nevaluation is critical for any testing approach and cost and coverage evaluations were included in a few demonstration projects .\nhowever , evaluations often lacked sufficient evidence to inform decision - making , with data missing or derived from pre - set grants that generally magnified operational costs .\ntwenty completed demonstration projects before 2015 without announcing plans to scale - up , 5 of which ceased implementing hpv vaccination without attempting to co - finance vaccine and operational costs , 2 conducted further demonstration projects , and the remainder have not indicated a decision .\nimplementation often stalled due to vaccine donations ending , funding constraints , or lack of ownership by national immunisation programmes .\ndemonstration projects may thus delay decision - making or even discourage national scale - up due to the high - cost strategies tested during the demonstration phase .\nlessons from demonstration projects were not always relevant to scale - up . of seven llmics that scaled - up from demonstration to national implementation , only one reported that project lessons were useful for expansion , while another reported national expansion as more valuable for testing possible implementation strategies .\nseveral indicated that demonstration projects were too small to inform national expansion or selected a different delivery strategy than had been tested .\nof these , 3 used lessons from gardasil access program ( gap ) projects to test different delivery strategies as part of gavi - funded demonstration projects while 6 conducted multiple , sequential non - gavi projects for reasons unclear from the data . while some scale - up delays are probably due to insufficient national immunisation program engagement with early demonstration projects , an overall discontinuity exists between implementing demonstration projects and national expansion .\nlonger - term comprehensive planning of projects and expansion , or changing to phased national introduction approaches , could accelerate scale - up .\nthe pathway from demonstration to scale - up is not straightforward and major challenges to sustainability persist .\ninterviewees indicated that the expensive campaign - style delivery and reduced external support during scale - up challenged national expansion efforts .\nhowever , the question remains whether demonstration projects may have become a way to delay or discourage commitment to scale - up . 
with many lessons already from 55 demonstration projects ,\nlearning may be saturated and further learning likely to be most effective during national implementation , or in the context of phased national expansion .\nthis would maintain political commitment to scale - up and avoid loss of integration with national health systems .\nwell - designed demonstration projects can test multiple delivery strategies , implementation for challenging areas and populations , and integration with national systems .\ncountries implementing new interventions benefit through learning by doing and may need initial experiences to hone social mobilisation , delivery strategies , and reporting .\nhowever , demonstrations can distract momentum from national introduction and designs that test alternative scalable options are thus crucial .\ndemonstration projects were valuable when hpv vaccine was first offered to llmics before national funding was available\n. however , with many lessons already documented , few new lessons observed , and additional funding available , their value has decreased .\nprojects were designed to demonstrate whether llmics could implement hpv vaccination rather than whether they could implement it sustainably and at scale . while the latter is crucial , countries were reportedly reluctant to risk experimenting and potentially lose funding . in the example of hpv vaccination , initial demonstration projects quickly provided consistent lessons .\nhowever , scale - up is critical to maximise health impact , and further demonstrations could distract momentum and decision - making .\nany new demonstration projects should have guidelines that maximise value for national implementation , and flexibility for phased transition to national scale - up , without repeated funding applications .\nthus , introduction of vaccines or other interventions , particularly those involving new target groups or delivery strategies , needs flexible funding approaches so that pilots can address specific questions of scalability and sustainability .\nlessons from hpv vaccine introduction are relevant for other potential interventions that may be introduced in the coming years ( e.g. , menacwy vaccine , rts , s vaccine ) .\nnh , smj , kg and sk contributed to data collection , analysis , and interpretation .\ndwj , dsl , ug , hb and smj designed the study and contributed , along with mf , to data interpretation and critical review .\nthe findings and conclusions contained within are those of the authors and do not necessarily reflect positions or policies of the bill & melinda gates foundation or path .",
"role": "user"
},
{
"content": "abstractdemonstration projects or pilots of new public health interventions aim to build learning and capacity to inform country - wide implementation . \n authors examined the value of hpv vaccination demonstration projects and initial national programmes in low - income and lower - middle - income countries , including potential drawbacks and how value for national scale - up might be increased . \n data from a systematic review and key informant interviews , analyzed thematically , included 55 demonstration projects and 8 national programmes implemented between 2007 - 2015 ( 89 years ' experience ) . \n initial demonstration projects quickly provided consistent lessons . \n value would increase if projects were designed to inform sustainable national scale - up . \n well - designed projects can test multiple delivery strategies , implementation for challenging areas and populations , and integration with national systems . \n introduction of vaccines or other health interventions , particularly those involving new target groups or delivery strategies , needs flexible funding approaches to address specific questions of scalability and sustainability , including learning lessons through phased national expansion .",
"role": "assistant"
}
]
|
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.2879 | [
{
"content": "Transform the following sections from a biomedical study into a simple summary for the general public. Focus on making the information accessible and easy to understand.\n\nResearch Heading: Modeling Recursive RNA Interference\nContent:\nAbstract:\nAn important application of the RNA interference( RNAi) pathway is its use as a small RNA-based regulatory system commonly exploited to suppress expression of target genes to test their function in vivo. In several published experiments, RNAi has been used to inactivate components of the RNAi pathway itself, a procedure termed recursive RNAi in this report. The theoretical basis of recursive RNAi is unclear since the procedure could potentially be self-defeating, and in practice the effectiveness of recursive RNAi in published experiments is highly variable. A mathematical model for recursive RNAi was developed and used to investigate the range of conditions under which the procedure should be effective. The model predicts that the effectiveness of recursive RNAi is strongly dependent on the efficacy of RNAi at knocking down target gene expression. This efficacy is known to vary highly between different cell types, and comparison of the model predictions to published experimental data suggests that variation in RNAi efficacy may be the main cause of discrepancies between published recursive RNAi experiments in different organisms. The model suggests potential ways to optimize the effectiveness of recursive RNAi both for screening of RNAi components as well as for improved temporal control of gene expression in switch off–switch on experiments.\nIntroduction:\nRNA interference( RNAi) is an RNA-mediated pathway of gene silencing mediated by small RNA molecules[1],[2]. During RNAi, introduction of double-stranded RNA( dsRNA) encoding a sub-sequence of a gene leads to reduction in expression of the corresponding gene. The heart of the RNAi process involves two key steps. First, the dsRNA is cleaved into small RNA fragments by an enzyme called Dicer, and then these small fragments are used as a template by a complex called RISC which identifies matching sequences in target messages and leads to their degradation. RNAi technology has emerged as a powerful tool for artificially controlling gene expression, but it only works because cells have evolved small RNA based regulatory pathways in the first place. Natural regulatory pathways taking advantage of small RNAs include not only classical RNAi, which probably acts in host defense against viruses and transposons, but also microRNA-based( miRNA) regulatory pathways that regulate endogenous genes[3]. It is interesting to speculate that such pathways may have evolved in part because of unique aspects of regulation mediated by RNA. Compared to more classical regulatory networks based on transcription factors or kinases, the signal-processing properties of small RNA-based regulatory systems have not been extensively investigated at a theoretical level. One advantage of having a theoretical understanding of such pathways is that one could potentially predict the performance and response of systems that have been altered in defined ways, thus facilitating a “synthetic biology” of small RNA-mediated regulatory circuits[4],[5]. For a more short-term application, one might hope that a predictive level of understanding of RNAi pathway behavior could allow improved design of experiments using RNAi as a tool. 
In this report the RNAi system is explored theoretically by considering its behavior following addition of an artificial negative feedback loop. It is well known in electronics that when the output of a circuit is fed back into one of its inputs, the resulting closed-loop circuit can have dramatically different behaviors than the open-loop circuit before the feedback loop was added. A key challenge for systems biology is to be able to predict the effect of feedback loops on biological circuits, either naturally occurring feedback or synthetic feedback produced by adding new linkages from output to input[6]. In the case of naturally occurring small RNA-mediated regulatory loops based on micro-RNAs, feedback loops are sometimes seen in which components of the RNAi/miRNA machinery such as Dicer or Argonaute are themselves targets of miRNA-mediated inhibition[7],[8]. Being able to quantitatively or even qualitatively predict the effect of such feedback linkages would therefore seem crucial to developing a circuit theory for small RNA based signaling[9]. In the case of the RNAi pathway, synthetic feedback loops have been constructed by workers attempting to use RNAi to turn off the RNAi pathway. This is done simply by adding dsRNA molecules that target genes encoding components of the RNAi machinery. In such a situation, the feedback can be considered as arising from the output of the RNAi machinery( that is, degradation of target message) being applied as an input to the system in the form of message encoding RNAi components. This “recursive” RNAi has been used in genome-wide screens to discover new RNAi components[10]–[14]. In such screens, a reporter gene such as green fluorescent protein( GFP) or luciferase is silenced by RNAi, and then reporter activity is measured in the presence of a second dsRNA molecule targeting a candidate gene. Increased reporter expression indicates that the candidate gene is involved in the RNAi process. By using libraries of dsRNA molecules corresponding to all predicted genes in the genome, it is in principle possible to identify all components of the RNAi machinery. In order for screens of this type to be successful, the reporter activity must be significantly increased over the level seen when the reporter alone is targeted. Recursive RNAi has also been used as a way to reactivate genes previously silenced by RNAi. Such “switch-off/switch-on” experiments employ a procedure in which a dsRNA is introduced targeting a gene of interest, and then, following a period of inactivation, the RNAi is alleviated by adding a second dsRNA that targets the RNAi machinery itself[15]. This allows temporal control of gene expression during animal development, and has the advantage that it can be applied to any gene without having to engineer new inducible constructs for each experiment. In order for switch-off/switch-on experiments to work, the level of restoration of the targeted gene must be enough to restore approximately normal gene function. For strictly recessive genes this would probably require restoration to approximately half normal levels, while for haploinsufficient genes it would require a greater degree of restoration, to near wild-type levels. Recursive RNAi can thus potentially be a very powerful tool both for studying RNAi itself and also for controlling gene expression during development, provided a sufficient level of restoration can be achieved once the RNAi machinery is targeted. 
Despite the great potential of recursive RNAi, and the multiple published successes of the method, one cannot help but feel that the use of RNAi to inactive RNAi seems potentially self-defeating. Specifically, one might imagine that as the pathway is shut down, its ability to further shut itself down would be reduced, resulting potentially in a restoration of activity. Recursive RNAi presents the same difficulty as attempting to commit suicide by holding one's breath—even if one could hold one's breath to the point of passing out, the unconscious patient would at that point begin breathing again. The quantitative question thus arises as to whether introduction of recursive RNAi would provide a restoration of gene expression level that would be measurable or detectable relative to control levels. Indeed, in actual practice recursive RNAi doesn't always work. For instance, although some studies have reported that RNAi of genes encoding Dicer protein restores reporter gene expression[16], other studies failed to observe significant restoration following RNAi of Dicer[11]. One possible explanation for the variability in results between different systems is the efficacy of RNAi at knocking down gene expression. Some cell types such as S2 cells can achieve extremely high levels of knockdown to a few percent of wild-type expression levels[11] while other systems such as C. elegans RNAi-by-feeding seem to produce a more moderate degree of knockdown. Might such variation make recursive RNAi possible in some systems and impossible in others? This report investigates the conditions under which recursive RNAi can be effective, by constructing a mathematical model for recursive RNAi and predicting how its performance varies as a function of the efficacy of RNAi in a given system. The main prediction of the model is that increasing the efficacy of RNAi-mediated knockdown should make recursive RNAi less efficient and potentially impossible.\nDiscussion:\nThis report uses a mathematical model to predict the steady-state levels of reporter gene expression in recursive RNAi experiments. This model indicates that recursive RNAi is indeed possible, but that the level of restoration of a reporter gene, and therefore the ability to observe the effect of restoration, depends on the intrinsic efficacy of RNAi knockdown. Systems with more complete RNAi mediated knockdown are predicted to be less susceptible to RNAi. For screens in which the goal is simply to determine whether or not restoration has occurred in order to identify new RNAi components, the level of restoration only needs to be large enough relative to the measurement noise so that a reliable detection can be made. A much more stringent application is when recursive RNAi is used to restore expression of a gene previously inactivated by RNAi, as has been demonstrated in C. elegans[15]. For such switch-off/switch-on applications of recursive RNAi, the level of restoration needs to be sufficiently high to restore essentially wild-type levels of gene function. Restoration of the targeted gene to fully wild-type levels would correspond to a restoration ratio R = 1, which according to Figure 4 is impossible to attain. In many cases, for example genes that are not haplo-insufficient, it may not be necessary to restore gene expression levels all the way to wild-type to rescue the phenotype. 
However, the results of the model suggest that in many cases, even a more moderate restoration, say to one half or one quarter normal expression levels, would also not be possible if the efficacy of RNAi-mediated knockdown in the organism is too high. One could, in such cases, conduct the experiment in a mutant background with a partial defect in one or more components of the RNAi machinery, so that the value of γ is reduced enough to allow a high level of restoration. Of course, this would entail a design tradeoff because decreased γ would lead to less repression during the switch-off phase of the experiment. In practice, the value of γ might need to be tuned quite carefully to achieve desired results. Moreover, genetic manipulation of the RNAi machinery may lead to undesirable side-effects due to alteration of endogenous small RNA mediated regulatory pathways. A preferable strategy, therefore, may be to carefully tune the relative targeting efficiency[22]–[28] of the reporter versus the RNAi component, so as to reduce the efficacy of targeting of the reporter, which as shown in Figure 6 can produce improved restoration. It is also worth pointing out that inducible systems for turning on and off production of siRNA have been demonstrated[36]–[38]. Recursive-RNAi based switch-off/switch-on has only been documented in nematodes where RNAi constructs can be easily introduced by soaking or feeding, and may be much harder in other types of animals, representing a distinct advantage for inducible systems. Overall, it remains to be seen whether switch-on experiments using recursive RNAi would have any advantages over these chemically inducible approaches. Switch-on by RNAi of RISC components might yield faster dynamics as it would not be limited by the degradation or dilution rate of the siRNA molecules. In comparing the predictions of the model to experimentally measured levels of reporter gene restoration( see above), it was found that published values for the degree of restoration seen when Argonaute-2 is targeted are much higher than predicted by the model. This does not represent a discrepancy between the model and the data so much as a discrepancy between the experimentally observed behavior of Argonaute-2 and other RNAi components. Indeed, dramatically higher levels of reporter restoration have consistently been reported for Argonaute-2 compared to other RNAi components including Dcr-1, Dcr-2, R2D2, Tudor-SN, FMRp, Drosha, Aubergine, and Piwi[11],[39],[40]. The fact that this protein seems to consistently show a distinctly different behavior in recursive RNAi experiments compared to all other known RNAi components[11] suggests that Argonaute-2 acts somehow differently from the RNAi components described within the model. Perhaps Argonaute-2 might be involved within additional control loops not included in the present model. Consistent with the notion that Ago-2 is somehow unique in its functions and interactions, it has recently been reported that Ago-2 depletion has a distinct and specific effect on RNAi competition that is not seen when other RNAi components are targeted[40]. These considerations suggest that the model used here, in its present form, must not fully represent the range of behavior of Argonaute-2. The results of Equation 4 indicate that by some measures, the RNAi system may operate more reliably when operated in a closed-loop recursive mode. 
This result, together with the main result concerning the susceptibility of the RNAi machinery to inhibition by RNAi, indicates that the RNAi pathway can demonstrate interesting properties when operated in a closed-loop “recursive” mode, even when represented by a fairly simple model. The favorable comparison with published levels of restoration versus efficacy suggests that the model may have predictive value. Other models of the RNAi pathway have previously been developed which model the system at varying levels of complexity[41]–[44], and it would be interesting to see whether these different models give similar predictions when adapted to represent recursive RNAi experiments. It is also feasible to extend the approach described here to an analysis of the dynamic properties of other types of small RNA-mediated control systems such as micro-RNA networks.",
"role": "user"
},
{
"content": "RNA interference is a gene regulatory system in which small RNA molecules turn off genes that have similar sequences to the small RNAs . This has become a powerful tool because a researcher can use RNA interference to turn off any gene of interest in order to test its function . There is great interest in identifying the genes required for the RNA interference pathway , and one approach to identifying such genes has been to use RNA interference to turn off potential RNA interference genes and to ask whether RNA interference still functions when these genes are turned off . The goal of our report is to ask how it is possible for RNA interference to turn itself off , using a mathematical model of the system . The results show that RNA interference cannot turn itself off if the RNA interference pathway is too effective to start with , so that experiments in which RNA interference acts on itself will only work in systems having a low efficiency . The results of our model suggest possible ways to improve the self-inactivation of RNA interference .",
"role": "assistant"
}
]
|
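The recursive RNAi record above argues that restoration can never reach wild-type levels and shrinks as knockdown efficacy grows. A minimal steady-state sketch makes the point concrete; the model below (unit production, a silencing factor 1/(1 + γ·c), and all variable names) is an illustrative assumption, not the equations from the paper.

```python
# Toy steady-state model of recursive RNAi (illustrative assumptions only;
# this is NOT the published model). The machinery mRNA level c, normalized to
# a wild-type level of 1, is silenced by its own product with efficacy gamma:
#     c = 1 / (1 + gamma * c)
# which has the closed-form fixed point solved below. A reporter silenced by
# the same machinery then sits at 1 / (1 + gamma * c).

import math

def machinery_level(gamma: float) -> float:
    """Fixed point of c = 1 / (1 + gamma * c), i.e. the self-silenced level."""
    return (-1.0 + math.sqrt(1.0 + 4.0 * gamma)) / (2.0 * gamma)

def reporter_level(gamma: float) -> float:
    """Reporter expression under the self-silenced machinery (wild-type = 1)."""
    return 1.0 / (1.0 + gamma * machinery_level(gamma))

for gamma in (1.0, 10.0, 100.0, 1000.0):
    plain = 1.0 / (1.0 + gamma)        # ordinary RNAi, machinery fully active
    recursive = reporter_level(gamma)  # machinery itself targeted by RNAi
    print(f"gamma={gamma:7.1f}  plain knockdown={plain:.4f}  "
          f"restored={recursive:.4f}")
```

With this toy model the restored reporter level is about 0.62 of wild-type at γ = 1 but only about 0.03 at γ = 1000, while plain knockdown falls even faster; restoration is real but never complete, matching the record's main prediction.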
science.data_reco_mcq_sc | science.data_reco_mcq_sc.6746 | [
{
"content": "Explore the following research question and associated terms. Identify the most suitable dataset from the list provided. Use the dataset descriptions to guide your choice.\n\nInquiry: We propose a novel method for high-quality face hallucination using a Siamese GAN. The proposed\n\nKeywords: face hallucination image\n\nDataset Information:\nMulti-Modal CelebA-HQ: Multi-Modal-CelebA-HQ is a large-scale face image dataset that has 30,000 high-resolution face images selected from the CelebA dataset by following CelebA-HQ. Each image has high-quality segmentation mask, sketch, descriptive text, and image with transparent background.\nDeeperForensics-1.0: DeeperForensics-1.0 represents the largest face forgery detection dataset by far, with 60,000 videos constituted by a total of 17.6 million frames, 10 times larger than existing datasets of the same kind. The full dataset includes 48,475 source videos and 11,000 manipulated videos.\nMaskedFace-Net: Proposes three types of masked face detection dataset; namely, the Correctly Masked Face Dataset (CMFD), the Incorrectly Masked Face Dataset (IMFD) and their combination for the global masked face detection (MaskedFace-Net).\nCelebA: CelebFaces Attributes dataset contains 202,599 face images of the size 178×218 from 10,177 celebrities, each annotated with 40 binary labels indicating facial attributes like hair color, gender and age.\n\nAvailable Datasets: A: Multi-Modal CelebA-HQ B: DeeperForensics-1.0 C: MaskedFace-Net D: CelebA",
"role": "user"
},
{
"content": "CelebA",
"role": "assistant"
}
]
|
science.pico_ner | science.pico_ner.2375 | [
{
"content": "Examine the following abstract and identify distinct entities. Focus on extracting entities related to participants, interventions, and outcomes.\n\nPresent your findings in a JSON format like: {\"Participant\": [\"example entity\", ...], \"Intervention\": [\"example entity\", ...], \"Outcome\": [\"example entity\", ...]}. If no entities are found for a category, use an empty list.\n\nAbstract Content:\n\nEffects of cognitive-behavioral therapy on anxiety in children with autism spectrum disorders : a randomized controlled trial . We compared the effects of a 16-week Cognitive-Behavioral Therapy ( CBT ) program and a Social Recreational ( SR ) program on anxiety in children with Autism Spectrum Disorders ( ASD ) . Seventy children ( 9-16 years old ) were randomly assigned to either of the programs ( n ( CBT ) = 36 ; n ( SR ) = 34 ) . Measures on child 's anxiety using the Spence Child Anxiety Scale -- Child ( SCAS-C ) and the Clinical Global Impression-Severity scale ( CGI -- S ) were administered at pre- , post-treatment , and follow-ups ( 3- and 6-month ) . Children in both programs showed significantly lower levels of generalized anxiety and total anxiety symptoms at 6-month follow-up on SCAS-C. Clinician ratings on the CGI-S demonstrated an increase in the percentage of participants rated as \" Normal \" and \" Borderline \" for both programs . Findings from the present study suggest factors such as regular sessions in a structured setting , consistent therapists , social exposure and the use of autism-friendly strategies are important components of an effective framework in the management of anxiety in children and adolescents with ASD .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"Cognitive-Behavioral Therapy ( CBT )\", \"program\", \"CBT\", \"autism-friendly strategies\", \"cognitive-behavioral therapy\", \"regular sessions in a structured setting , consistent therapists , social exposure\", \"Social Recreational ( SR ) program\"], \"Outcome\": [\"Clinical Global Impression-Severity scale ( CGI -- S )\", \"generalized anxiety and total anxiety symptoms\", \"Spence Child Anxiety Scale -- Child ( SCAS-C )\", \"CGI-S\"], \"Participant\": []}",
"role": "assistant"
}
]
|
science.medmentions_ner | science.medmentions_ner.2013 | [
{
"content": "Analyze the provided paragraph from a scientific document. Your goal is to identify and extract relevant entities.\n\nFormat your findings as a JSON object, where each key represents a category of entities, and the corresponding value is a list of entities identified in that category. If no entities are found for a category, return an empty list.\n\nFocus on precision and ensure that the output is strictly in JSON format.\n\nDocument:\n\nPlatelet rich plasma versus steroid on lateral epicondylitis: meta-analysis of randomized clinical trials Lateral epicondylitis (LE) is a common tendinopathy for which an effective treatment is still unknown. The purpose of this meta-analysis was to compare the effectiveness of platelet rich plasma (PRP) vs steroid in reducing pain and improving function of the elbow in the treatment of LE. A systematic search of the literature was conducted to identify related articles published from January 1980 to September 2016 in Pubmed, Embase, the Cochrane Library and SpringerLink. All studies that compared PRP with steroid administration on LE were included. Main outcomes were collected and analyzed by the Review Manager 5.1. Eight randomized controlled trials (RCTs) that involved 511 patients met the criteria. This meta-analysis showed that there was no significant difference in pain relief in the short-term (2 to 4 weeks: SMD = 1.02, P = .03; 6 to 8 weeks: SMD = .73, P = .24) and the intermediate-term (12 weeks: SMD = -0.28, P = .35). Steroid exhibited a better efficacy of function in the short-term (2 to 4 weeks: SMD = .61, P < .001; 6 to 8 weeks: SMD = .53, P < .001). However, PRP was superior to steroid for pain relief in the long-term (half year: SMD = -1.6, P < .001; one year: SMD = -1.45, P < .001), and also for function improving in the intermediate-term (12 weeks: SMD = -0.53, P < .001) and the long-term (half year: SMD = -0.56, P < .001; one year: SMD = -0.7, P < .001). No serious adverse effects of treatment were observed in the two groups. Treatment of patients with LE by steroid could slightly relieve pain and significantly improve function of elbow in the short-term (2 to 4 weeks, 6 to 8 weeks). PRP appears to be more effective in relieving pain and improving function in the intermediate-term (12 weeks) and long-term (half year and one year). Considering the long-term effectiveness of PRP, we recommend PRP as the preferred option for LE.",
"role": "user"
},
{
"content": "{\"AnatomicalStructure\": [], \"Bacterium\": [], \"BiologicFunction\": [\"LE\", \"lateral epicondylitis\", \"adverse effects\", \"tendinopathy\", \"Lateral epicondylitis\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [\"Platelet rich plasma\", \"PRP\", \"platelet rich plasma\"], \"BodySystem\": [], \"Chemical\": [\"Steroid\", \"steroid\"], \"ClinicalAttribute\": [], \"Eukaryote\": [], \"Finding\": [\"pain\"], \"Food\": [], \"HealthCareActivity\": [\"treatment\", \"pain relief\", \"relieve pain\", \"Treatment\", \"relieving pain\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [\"articles\", \"SpringerLink\", \"Embase\", \"Cochrane Library\", \"Pubmed\", \"literature\", \"Review Manager 5.1\"], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"meta-analysis\", \"analyzed\", \"studies\", \"randomized clinical trials\", \"randomized controlled trials\", \"RCTs\"], \"SpatialConcept\": [\"elbow\"], \"Virus\": []}",
"role": "assistant"
}
]
|
science.scicite_classification | science.scicite_classification.3418 | [
{
"content": "Examine the citation provided below. Determine the primary purpose of this citation within the context of the research paper. \n\nCitation:\n\n[3] showed that additivity of entanglement of formation holds for a family of mixed states by utilizing the additivity of Holevo capacity for unital qubit channels [4] via Stinespring dilation [5].",
"role": "user"
},
{
"content": "Method",
"role": "assistant"
}
]
|
science.drug_combo_extraction_re | science.drug_combo_extraction_re.1128 | [
{
"content": "Analyze the given paragraph from a biomedical study. Focus on identifying drug pairings within a specified sentence. Your goal is to determine the nature of these pairings.\n\nPresent your findings as a JSON array. Each element should be an array containing the drug names and their interaction type. Use the exact names from the sentence.\n\nOnly consider the specified sentence for your analysis. The paragraph is for context only.\n\nReturn solely the JSON array. If no pairings are found, return an empty array.\n\nContext Paragraph: Immunomodulatory activity of polysaccharide from Acanthopanax obovatus roots. The effects of the Acanthopanax obovatus polysaccharide ( AOPS ) as well as its combination with cyclophosphamide ( CY ) or prednisolone on immune responses were investigated in mice . AOPS (250 mg/kg i.p. x 5) increased the spleen weight and the number of spleen cells, and augmented the phagocytosis of peritoneal macrophages both in normal mice and in immunosuppressed mice. In a haemagglutinin assay AOPS increased the production of specific antibodies and antagonized the suppressive effect of CY. AOPS not only enhanced the degree of in vitro spleen cell-mediated red blood cells (SRBC) hemolysis (quantitative hemolysis of SRBC) but also restored the suppressive effect of CY completely. From these results, AOPS was shown to have an enhancing and a modulating activity on immune responses.\n\nTarget Sentence: The effects of the Acanthopanax obovatus polysaccharide ( AOPS ) as well as its combination with cyclophosphamide ( CY ) or prednisolone on immune responses were investigated in mice .",
"role": "user"
},
{
"content": "[[\"Acanthopanax obovatus polysaccharide\", \"cyclophosphamide\", \"POS\"], [\"Acanthopanax obovatus polysaccharide\", \"prednisolone\", \"POS\"]]",
"role": "assistant"
}
]
|
science.scicite_classification | science.scicite_classification.2755 | [
{
"content": "Examine the citation provided below. Determine the primary purpose of this citation within the context of the research paper. \n\nCitation:\n\nThe gradient of VS with pressure becomes less steep above 40 GPa in contrast to its trend at lower pressures, which appears similar to the behavior of SiO2 glass (Murakami and Bass 2010) and MgSiO3 glass (Murakami and Bass 2011) (Fig.",
"role": "user"
},
{
"content": "Background",
"role": "assistant"
}
]
|
science.data_reco_mcq_sc | science.data_reco_mcq_sc.736 | [
{
"content": "Explore the following research question and associated terms. Identify the most suitable dataset from the list provided. Use the dataset descriptions to guide your choice.\n\nInquiry: An analysis of asthma treatment in the airway epithelium using gene expression data.\n\nKeywords: asthma treatment gene expression data\n\nDataset Information:\nBioGRID: BioGRID is a biomedical interaction repository with data compiled through comprehensive curation efforts. The current index is version 4.2.192 and searches 75,868 publications for 1,997,840 protein and genetic interactions, 29,093 chemical interactions and 959,750 post translational modifications from major model organism species.\nADAM: ADAM is organized as a half day Challenge, a Satellite Event of the ISBI 2020 conference in Iowa City, Iowa, USA. The ADAM challenge focuses on the investigation and development of algorithms associated with the diagnosis of Age-related Macular degeneration (AMD) and segmentation of lesions in fundus photos from AMD patients.\nBengali Hate Speech: Introduces three datasets of expressing hate, commonly used topics, and opinions for hate speech detection, document classification, and sentiment analysis, respectively.\nPGR: Phenotype-Gene Relations (PGR) is a corpus that consists of 1712 abstracts, 5676 human phenotype annotations, 13835 gene annotations, and 4283 relations.\n\nAvailable Datasets: A: BioGRID B: ADAM C: Bengali Hate Speech D: PGR",
"role": "user"
},
{
"content": "BioGRID",
"role": "assistant"
}
]
|
science.craftchem_ner | science.craftchem_ner.2281 | [
{
"content": "Analyze the following sentence to identify chemical entities of interest. List these entities in a JSON array format. If no entities are found, return an empty array.\n\nText to Analyze:\nResults",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
]
|
science.scitldr_aic | science.scitldr_aic.771 | [
{
"content": "Review the provided sections from a computer science paper and distill the primary contribution into a single sentence. Provide only the summary sentence.\n\nSections:\n\nThis paper introduces a novel method to perform transfer learning across domains and tasks, formulating it as a problem of learning to cluster.\nThe key insight is that, in addition to features, we can transfer similarity information and this is sufficient to learn a similarity function and clustering network to perform both domain adaptation and cross-task transfer learning.\nWe begin by reducing categorical information to pairwise constraints, which only considers whether two instances belong to the same class or not (pairwise semantic similarity).\nThis similarity is category-agnostic and can be learned from data in the source domain using a similarity network.\nWe then present two novel approaches for performing transfer learning using this similarity function.\nFirst, for unsupervised domain adaptation, we design a new loss function to regularize classification with a constrained clustering loss, hence learning a clustering network with the transferred similarity metric generating the training inputs.\nSecond, for cross-task learning (i.e., unsupervised clustering with unseen categories), we propose a framework to reconstruct and estimate the number of semantic clusters, again using the clustering network.\nSince the similarity network is noisy, the key is to use a robust clustering algorithm, and we show that our formulation is more robust than the alternative constrained and unconstrained clustering approaches.\nUsing this method, we first show state of the art results for the challenging cross-task problem, applied on Omniglot and ImageNet.\nOur results show that we can reconstruct semantic clusters with high accuracy.\nWe then evaluate the performance of cross-domain transfer using images from the Office-31 and SVHN-MNIST tasks and present top accuracy on both datasets. \nOur approach doesn't explicitly deal with domain discrepancy.\nIf we combine with a domain adaptation loss, it shows further improvement.\nSupervised learning has made significant strides in the past decade, with substantial advancements arising from the use of deep neural networks.\nHowever, a large part of this success has come from the existence of extensive labeled datasets.\nIn many situations, it is not practical to obtain such data due to the amount of effort required or when the task or data distributions change dynamically.\nTo deal with these situations, the fields of transfer learning and domain adaptation have explored how to transfer learned knowledge across tasks or domains.\nMany approaches have focused on cases where the distributions of the features and labels have changed, but the task is the same (e.g., classification across datasets with the same categories).\nCross-task transfer learning strategies, on the other hand, have been widely adopted especially in the computer vision community where features learned by a deep neural network on a large classification task have been applied to a wide variety of other tasks (Donahue et al., 2014) .Most\nof the prior cross-task transfer learning works, however, require labeled target data to learn classifiers for the new task. If labels\nof the target data are absent, there is little choice other than to apply unsupervised approaches such as clustering on the target data with pre-trained feature representations. 
In this paper\n, we focus on the question of what can be transferred (besides features) to support both cross-domain and cross-task transfer learning. We address it\nwith a learned similarity function as the fundamental component of clustering. Clustering can\nthen be realized using a neural network trained using the output of the similarity function, which can be successfully used to achieve both cross-task and cross-domain transfer. The key idea is to formulate the clustering objective to use a learnable (and transferable) term, which in our proposed work is a similarity prediction function. Our proposed objective\nfunction can be easily combined with deep neural networks and optimized end-to-end. The features and clustering\nare optimized jointly, hence taking advantage of such side information in a robust way. Using this method, we show\nthat unsupervised learning can benefit from learning performed on a distinct task, and demonstrate the flexibility of further combining it with a classification loss and domain discrepancy loss. In summary, we make several contributions. First, we propose to use predictive\npairwise similarity as the knowledge that is transferred and formulate a learnable objective function to utilize the pairwise information in a fashion similar to constrained clustering. We then provide the methodologies to\ndeploy the objective function in both cross-task and cross-domain scenarios with deep neural networks. The experimental results for cross-task\nlearning on Omniglot and ImageNet show that we can achieve state of the art clustering results with predicted similarities. On the standard domain adaptation benchmark\nOffice-31 dataset, we demonstrate improvements over the state-of-the-art even when not performing any explicit domain adaptation, and further improvements if we do. Finally, on another domain adaptation task,\nSVHN-to-MNIST, our approach using Omniglot as the auxiliary dataset achieves top performance by a large margin.\nWe report the average performance over the 20 alphabets in table 1.\nOur approach achieved the top performance on both metrics.\nThe CCN demonstrates strong robustness on the challenging scenario of unknown K. 
It achieved 78.1% average accuracy.\nCompared with 82.4% when K is known, CCN has a relatively small drop.\nCompared to the second best algorithm, CSP, which is 65.4%, CCN outperforms it with a large gap.\nThe classical approach MPCK-means works surprisingly well when the number of clusters is known, but its performance dropped dramatically from 81.9% to 53.9% when K = 100.\nIn the performance breakdown for the 20 individual alphabets, CCN achieved 94% clustering accuracy on Old Church Slavonic Cyrillic, which has 45 characters (appendix TAB4 ).\nTherefore the results show the feasibility of reconstructing semantic clusters using only noisy similarity predictions. When to use the semantic similarity?\nThe experiments in table 1 show a clear trend that utilizing the pairwise constraints jointly for both metric learning and minimizing the clustering loss achieves the best performance, including both MPCK-means and CCN.\nIn the case of an unknown number of clusters, where we set K = 100, the algorithms that use constraints to optimize the clustering loss have better robustness, for example, CSP and CCN.\nThe group that only uses constraints for metric learning (ITML, SKMS, SKKm, and SKLR) significantly outperforms the group that does not use them (K-means, LPNMF, LSC).\nHowever, their performance is still far behind CCN.\nOur results confirm the importance of jointly optimizing the metric and clustering. The robustness against noisy similarity prediction is the key factor enabling the cross-task transfer framework.\nTo the best of our knowledge, table 1 is the first comprehensive robustness comparison using predicted constraints learned from real data instead of converting from ground-truth labels.\nThe accuracy of G in our experiment is shown in appendix table 7 and demonstrates the reasonable performance of G, which is on par with Matching-Net BID34 .\nAfter binarizing the prediction at 0.5 probability, the similar pair precision, similar pair recall, dissimilar pair precision, and dissimilar pair recall among the 659 characters are (0.392, 0.927, 0.999, 0.995), respectively.\nThe binarized predictions are better than a uniform random guess (0.002, 0.500, 0.998, 0.500), but are still noisy.\nTherefore it is very challenging for constrained clustering.\nThe visualization of the robustness range of CCN is provided in appendix D, and shows that the robustness is related to the density of pairs involved in a mini-batch.\nWe hypothesize that during the optimization, the gradients from wrongly predicted pairs are canceled out by each other or by the correctly predicted pairs.\nTherefore the overall gradient still moves the solution towards a better clustering result. How to predict K?\nInferring the number of clusters (NC) is a hard problem, but with the pairwise similarity information, it becomes feasible.\nFor evaluation, we compute the difference between the number of dominant clusters (NDC) and the true number of categories (NC_gt) in a dataset.\nWe use a naive definition for NDC, which is the number of clusters that have a size larger than the expected size when data is sampled uniformly (see appendix TAB5).\nWe compare this with the baseline approach SKMS BID1 , which does not require a given K and supports a pipeline to estimate K automatically (therefore we only put it into the column K = 100 in table 1).\nSKMS gets 16.3.\nFurthermore, 10 out of 20 datasets from CCN's prediction have a difference between NDC_d and NC_gt_d smaller than or equal to 3, which shows the feasibility of estimating K with 
predicted similarity.\nThe results are summarized in table 2.\nOur approach (CCN + ) demonstrates a strong performance boost for the unsupervised cross-domain transfer problem.\nIt reaches 77.5% average accuracy, a gain of 6.2 points over the 71.3% source-only baseline.\nAlthough our approach merely transfers more information from the auxiliary dataset, it outperforms the strong approach DANN (75.7%), and state-of-the-art JAN (76.9%).\nWhen combining ours with DANN (CCN ++ ), the performance is further boosted.\nThis indicates that LCO helps mitigate the transfer problem in a certain way that is orthogonal to minimizing the domain discrepancy.\nWe observe the same trend when using a deeper backbone network, i.e., ResNet-34.\nIn such a case the average accuracy achieved is 77.9%, 81.1% and 82% for source-only, CCN + and CCN ++ , respectively, though we used exactly the same G as before (with ResNet-18 backbone for G).\nThis indicates that the information carried in the similarity predictions is not equivalent to transferring features with deeper networks.\nMore discussions are in appendix C and the performance of G is provided in appendix table 11 to show that although the prediction has low precision for similar pairs (∼ 0.2), our approach still benefits from the dense similarity predictions.\nIn this paper, we demonstrated the usefulness of transferring information in the form of pairwise similarity predictions.\nSuch information can be transferred as a function and utilized by a loss formulation inspired by constrained clustering, but implemented more robustly within a neural network that can jointly optimize both features and clustering outputs based on these noisy predictions.\nThe experiments for both cross-task and cross-domain transfer learning show strong benefits of using the semantic similarity predictions, resulting in new state of the art results across several datasets.\nThis is true even without explicit domain adaptation for the cross-domain task, and if we add a domain discrepancy loss the benefits increase further. There are two key factors that determine the performance of the proposed framework.\nThe first is the robustness of the constrained clustering and the second is the performance of the similarity prediction function.\nWe show the robustness of CCN empirically, but we do not explore situations where learning the similarity function is harder.\nFor example, such cases arise when there are a small number of categories in the source or a large domain discrepancy between source and target.\nOne idea to deal with such a situation is learning G with domain adaptation strategies.\nWe leave these aspects for future work.\nThe resulting performance w.r.t. different values of recall, density, and number of clusters is visualized in FIG7 .\nA bright color means a high NMI score and is desired.\nThe larger the bright region, the more robust the clustering is against the noise of similarity prediction.\nThe ACC score shows almost the same trend and is thus not shown here. How does similarity prediction affect clustering?\nLooking at the top-left heat map in FIG7 , which has D = 1 and 10 clusters, it can be observed that the NMI score is very robust to low similar pair recall, even lower than 0.5.\nFor recall of dissimilar pairs, the effect of recall is divided at the 0.5 value: the clustering performance can be very robust to noise in dissimilar pairs if the recall is greater than 0.5; however, it can completely fail if recall is below 0.5.\nFor similar pairs, the clustering 
works on a wide range of recalls when the recall of dissimilar pairs is high. In practical terms, robustness to the recall of similar pairs is desirable because it is much easier to predict dissimilar pairs than similar pairs in real scenarios.\nIn a dataset with 10 categories, e.g. Cifar-10, we can easily get 90% recall for dissimilar pairs with a purely random guess if the number of classes is known, while the recall for similar pairs will be 10%. How\ndoes the density of the constraints affect clustering? We\nargue that the density of pairwise relationships is the key factor in improving the robustness of clustering. The\ndensity D = 1 means that every pair in a mini-batch is utilized by the clustering loss. For\ndensity D = 0.1, it means only 1 out of 10 possible constraints is used. We\ncould regard the higher density as better utilization of the pairwise information in a mini-batch, thus more learning instances contribute to the gradients at once. Consider\na scenario where there is one sample associated with 5 true similar pairs and 3 false similar pairs. In such\na case, the gradients introduced by the false similar pairs have a higher chance to be overridden by true similar pairs within the mini-batch, thus the loss can converge faster and is less affected by errors. In FIG7\n, we can see that when density decreases, the size of the bright region shrinks significantly. In our implementation, enumerating the full pairwise relationships introduces negligible overhead in computation time using a GPU. Although\nthere is overhead for memory consumption, it is limited because only the vector of predicted distributions has to be enumerated for calculating the clustering loss. The effect of varying the number of clusters: In the MNIST experiments, the number of categories is 10. We augment\nthe softmax output number up to 100. The rows of\nFIG7 show that even when the number of output categories is significantly larger than the number of true object categories, e.g. 100 > 10, the clustering performance NMI score only degrades slightly.",
"role": "user"
},
{
"content": "A learnable clustering objective to facilitate transfer learning across domains and tasks",
"role": "assistant"
}
]
|
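The CCN record above centers on a clustering loss driven by noisy pairwise similarity predictions. The sketch below shows one common way to write such a pairwise objective on softmax cluster assignments; the exact loss used in the paper may differ, and the batch labels here are made up for the demo.

```python
# One common pairwise clustering objective of the kind described above:
# similar pairs should have agreeing cluster-assignment distributions
# (inner product near 1), dissimilar pairs should not (near 0). This is an
# illustrative formulation, not necessarily the paper's exact loss.

import torch

def pairwise_cluster_loss(probs: torch.Tensor, sim: torch.Tensor) -> torch.Tensor:
    """probs: (n, k) softmax cluster assignments; sim: (n, n) 0/1 similarity."""
    agree = probs @ probs.t()               # pairwise agreement in [0, 1]
    agree = agree.clamp(1e-7, 1.0 - 1e-7)   # numerical safety for the logs
    loss = -(sim * agree.log() + (1.0 - sim) * (1.0 - agree).log())
    return loss.mean()

# Dense usage: every pair in the mini-batch contributes (density D = 1 in the
# record's terms), which is the basis of its robustness argument.
labels = torch.randint(0, 10, (8,))                      # hypothetical labels
sim = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
probs = torch.softmax(torch.randn(8, 10), dim=1)
print(pairwise_cluster_loss(probs, sim))
```

Because all n² pairs in a batch contribute to each update, gradients from mispredicted pairs tend to be outweighed by correctly predicted ones, which is the mechanism the record hypothesizes for its robustness to noisy similarity predictions.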
science.scicite_classification | science.scicite_classification.1036 | [
{
"content": "Examine the citation provided below. Determine the primary purpose of this citation within the context of the research paper. \n\nCitation:\n\nWhile many genetic correlations between body parts in Drosophila are positive (Cowley and Atchley 1990; Norry et al. 1997), an exception appears to exist for the eyes and head.",
"role": "user"
},
{
"content": "Background",
"role": "assistant"
}
]
|
science.chemdner_ner | science.chemdner_ner.1661 | [
{
"content": "Analyze the given text to identify specific entities. Focus on extracting relevant data.\n\nProvide your findings in JSON format. Each category should list the entities found.\n\nText:\n\nLeft, right, or both? On the configuration of the phenanthroindolizidine alkaloid tylophorine from Tylophora indica. The alkaloid (-)-tylophorine was isolated from a sample of Tylophora indica, and the crude extract was analyzed by HPLC/MS(n) and chiral HPLC/MS. While the literature states that the naturally occurring form of this alkaloid is the R-enantiomer and that its S-antipode is usually not found in nature, we confirmed the hypothesis of Govindachari and Nagarajan that natural levorotatory tylophorine is indeed a nearly racemic mixture with a slight excess of the R-enantiomer.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"(-)-tylophorine\", \"levorotatory tylophorine\", \"phenanthroindolizidine alkaloid\"]}",
"role": "assistant"
}
]
|
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.7845 | [
{
"content": "Read the provided biomedical research paper and create a concise abstract. Ensure the abstract captures the main points of the paper.\n\nFull Paper:\nvaporization enthalpies are used frequently in adjusting enthalpies of formation of liquids to the standard state and in evaluating environmental transport properties .\naccurate thermodynamic correlations are required to enhance the reliability of such simulations . of the thermodynamic properties ,\nheat of vaporization is one of the most important parameters for a multi - component multistage vapor \nliquid equilibrium process as it is the one which controls the temperature as well as liquid and vapor profiles in a column .\nmoreover , this property is sometimes used in the prediction or correlation of other thermodynamic properties .\nthere is thus engineering and theoretical interest in the measurement and correlation of values of this property [ 212 ] .\nthe normal boiling enthalpy can be calculated using either equations of state applied to the liquid and vapor phases or more simply by means of empirical correlations that allow calculating the enthalpy of vaporization of pure fluids [ 622 ] .\nsome of them are general analytical expressions that only require as input parameters certain properties of the fluid , such as the critical temperature , critical pressure , normal boiling point temperature , and molecular weight . in this study ,\nan accurate empirical correlation was presented by incorporating the normal boiling temperature and critical points of the pure substances .\nthis equation can predict the heat of vaporizations for pure substances over the entire range of normal boiling point temperatures of practical interest .\nin this research , we considered some of well known analytical models that do not require specific adjustable coefficients for each substance , but rather are based on a knowledge of some properties of the liquid vapor equilibrium ( critical properties mainly ) or on molecular properties\n. in particular , we selected seven specific expressions that are valid only for the calculation of the vaporization enthalpy .\n( zny ) , the simplest method defined as trouton rule , two models presented by vetere and a more recent proposal of liu .\nriedel model \n ( 1)hb=1.093rtblnpc-1.0130.93-tb / tcwhere hvb is vaporization enthalpy ( j mol ) , r is universal gas constant ( 8.3145 j mol k ) , tb is normal boiling point ( k ) , tc is critical temperature ( k ) , and pc is critical pressure ( bar ) .\nchen model \n ( 2)hb = rtb3.978(tb / tc)-3.958 + 1.555lnpc1.07-tb / tc trouton rule \n\n( 3)hb=88tbjmol-1 zhao et al . model ( zny ) \n ( 4)hb = tb(36.6 + 8.314lntb ) vetere model ( v-79 ) \n ( 5)hb = rtb(1-tb / tc)0.38[ln(pc-0.513 + 0.5066tc2/(pctb2)]1-tb / tc+[1-(1-tb / tc)0.38]ln(tb / tc ) vetere model ( v-95 ) \n for hydrocarbons:(6)hb=4.1868tb9.08 + 4.36log10tb+0.0068tbm+0.0009tb2 m for alcohols:(7)hb=4.1868tb18.82 + 3.34log10tb-6.37tbm+0.036tb2m-5.210 - 5tb3mwhere m is molecular weight ( kg / kmol ) .\nliu \n ( 8)hb = rtbtb2200.0627(1-tb / tc)0.38ln(pc / pa)1-tb / tc+0.38(tb / tc)ln(tb / tc ) where pa is atmospheric pressure in bar . in this study , we tried to find a more accurate and rapid model to calculate vaporization enthalpies of pure substances based on experimental data [ 14,2426 ] .\nthermophysical properties of compounds are obtained from the literatures . 
by investigation of more than 452 data points of vaporization enthalpy of pure substances and using 352 points of them in a multiple regression analysis , a new empirical correlation is suggested for accurate prediction of vaporization enthalpy over a wide range of normal boiling temperatures ( 20.3 - 722 k ) .\nthe new presented model has three dependent variables ( pc , tc , and tb ) and 10 independent variables as follows :\n( 9 ) hvb = r tb ( a + b tbr + c tbr^2 + d tbr^3 )\n( 10 ) b = b1 + b2 pc + b3 ln ( pc )\n( 11 ) c = c1 + c2 pc + c3 ln ( pc )\n( 12 ) d = d1 + d2 pc + d3 ln ( pc )\nin this equation , hvb is vaporization enthalpy ( kj mol^-1 ) , r is the universal gas constant and equals 8.3145 j mol^-1 k^-1 , tb ( k ) is normal boiling temperature , tc ( k ) is critical temperature , tbr is reduced temperature defined as tb / tc , and pc ( bar ) is critical pressure . also , the tuned coefficients that have been determined by minimizing the sum of square errors of the model are presented in table 1 .\nwe carried out regression analysis for 352 pure substances and also for 100 other substances which did not participate in the fitting procedure .\nthe values of the critical pressure , critical temperature , normal boiling temperature , and molecular weight ( for comparison with other models ) were taken from the literature [ 14 , 24 - 26 ] . to compare the accuracy of the presented empirical model , calculated enthalpies of vaporization for 352 pure substances versus experimentally measured enthalpies are shown in fig . 1 . in table 2 , the aard% of enthalpies calculated from the proposed and other models for each substance ( including one or more isomers ) with respect to the values given by experimental measurements are presented .\nit showed that the presented model was more accurate than other empirical correlations for all types of compounds considered in this study .\ndata points with aard of more than 40% did not participate in the statistical parameter calculations .\ntable 3 presents the statistical parameters including average absolute relative deviation percentage ( aard% ) , average relative deviation ( ard% ) , and root mean square deviation ( rmsd ) of the considered models and the new proposed correlation .\nfig . 2 also shows the accuracy of different empirical methods in prediction of vaporization enthalpies of 352 pure substances . 
as shown in fig . 2 , the new proposed model is more accurate than the seven commonly used correlations .\nthe new method has successfully predicted 75% of all the measurements with aard less than 3% and 84% of the data with aard less than 4% .\nonly 2% of the enthalpy measurements were predicted with aard of more than 10% by the new method .\nthe liu model , the second most accurate empirical method , predicted 65% of the enthalpy measurements with aard less than 3% and 75% of the measurements with aard less than 4% . for a real comparison and to\nestimate the applicability of the presented method to calculate vaporization enthalpy of pure substances , some independent data for more than 100 pure substances which were not employed in the regression analysis of the new proposed correlation were studied [ 24 - 26 ] .\nfinally , aard of the new method and the other mentioned models for these substances are presented in table 4 .\ntable 5 presents the statistical parameters including average absolute relative deviation percentage ( aard% ) , average relative deviation ( ard% ) , and root mean square deviation ( rmsd ) of the considered models and the new proposed correlation for the 100 new data points .\nfig . 3 shows calculated enthalpies of vaporization versus experimentally measured enthalpies and fig . 4 indicates the cumulative frequency of different empirical correlations versus average absolute relative deviations for the 100 new substances . as shown in fig . 4 , the new presented model estimated 85% of all 100 measurements with aard less than 4% , while the riedel model , the second most accurate empirical method in this comparison , predicted 77% of the 100 measurements with aard less than 4% . hence , the superiority of this new empirical method over the other empirical methods has been verified for all experimental data .\nhowever , our presented correlation was fitted with more experimental data and more constant parameters than other models , which helps to generalize the equation to calculate both the fitting data and other independent data which were not employed in the regression analysis with lower deviations .\nthe new correlation is potentially valid for the calculation of vaporization enthalpy for acetates , alcohols , aldehydes , alkanes , alkenes , alkyl and multi - alkyl benzenes , alkynes , amines , anhydrides , anilines , carboxylic acids , ketones , cyclo alkanes , dimethyl alkanes , esters , halo alkanes , halo alkenes , halo benzenes , methyl alkanes , naphthalenes , nitriles , nitro alkanes , pyridines , sulfides and sulfoxides , xylenes , and some other hydrocarbons .\nin this study , a new empirical method was presented to estimate the vaporization enthalpy of pure substances at their normal boiling temperature . to estimate the accuracy of this correlation\n, comparisons were done between the presented model and seven commonly used empirical methods including vetere ( v-95 ) , vetere ( v-79 ) , riedel , chen , zhao et al . ( zny ) , trouton rule , and liu .\nresults indicate the superiority of the new presented correlation over all other methods used to calculate vaporization enthalpies , with an average absolute relative deviation percent ( aard% ) of 2.28 . also , to estimate the applicability of the new method ,\nsome data for more than 100 pure substances which did not participate in the regression analysis were examined , and the results showed again the superiority of the presented correlation with lower deviation .",
"role": "user"
},
{
"content": "the heat of vaporization of a pure substance at its normal boiling temperature is a very important property in many chemical processes . in this work \n , a new empirical method was developed to predict vaporization enthalpy of pure substances . \n this equation is a function of normal boiling temperature , critical temperature , and critical pressure . \n the presented model is simple to use and provides an improvement over the existing equations for 452 pure substances in wide boiling range . \n the results showed that the proposed correlation is more accurate than the literature methods for pure substances in a wide boiling range ( 20.3722 k ) .",
"role": "assistant"
}
]
|
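The correlations reconstructed in the record above are straightforward to check numerically. The sketch below implements the Riedel and Chen forms as reconstructed, plus the AARD% statistic used in the record; the water constants are standard literature values and the printed results are approximate.

```python
# Numerical sanity check of two vaporization-enthalpy correlations from the
# record above (equations as reconstructed there; treat this as illustrative).

import math

R = 8.3145  # universal gas constant, J mol^-1 K^-1

def riedel(tb: float, tc: float, pc_bar: float) -> float:
    """Eq. (1): enthalpy of vaporization at the normal boiling point, J/mol."""
    return 1.093 * R * tb * (math.log(pc_bar) - 1.013) / (0.93 - tb / tc)

def chen(tb: float, tc: float, pc_bar: float) -> float:
    """Eq. (2): enthalpy of vaporization at the normal boiling point, J/mol."""
    tbr = tb / tc
    return R * tb * (3.978 * tbr - 3.958 + 1.555 * math.log(pc_bar)) / (1.07 - tbr)

def aard_percent(predicted, measured) -> float:
    """Average absolute relative deviation (%), as reported in the record."""
    pairs = list(zip(predicted, measured))
    return 100.0 * sum(abs(p - m) / m for p, m in pairs) / len(pairs)

# Water: tb = 373.15 K, tc = 647.1 K, pc = 220.6 bar; the experimental
# enthalpy of vaporization at tb is roughly 40.7 kJ/mol.
tb, tc, pc = 373.15, 647.1, 220.6
print(riedel(tb, tc, pc) / 1000.0)                    # ~42.1 kJ/mol
print(chen(tb, tc, pc) / 1000.0)                      # ~42.3 kJ/mol
print(aard_percent([riedel(tb, tc, pc)], [40660.0]))  # ~3.5 %
```

Both forms land within a few percent of the measured value for water, consistent in magnitude with the AARD figures quoted in the record.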
science.chemdner_ner | science.chemdner_ner.478 | [
{
"content": "Analyze the given text to identify specific entities. Focus on extracting relevant data.\n\nProvide your findings in JSON format. Each category should list the entities found.\n\nText:\n\nA novel mechanism for the promotion of quercetin glycoside absorption by megalo α-1,6-glucosaccharide in the rat small intestine. The presence of an α-1,6-glucosaccharide enhances absorption of water-soluble quercetin glycosides, a mixture of quercetin-3-O-β-d-glucoside (Q3G, 31.8%), mono (23.3%), di (20.3%) and more d-glucose adducts with α-1,4-linkage to a d-glucose moiety of Q3G, in a ligated small intestinal loop of anesthetized rats. We prepared α-1,6-glucosaccharides with different degrees of polymerization (DP) enzymatically and separated them into a megalo-isomaltosaccharide-containing fraction (M-IM, average DP=11.0) and an oligo-isomaltosaccharide-containing fraction (O-IM, average DP=3.6). Luminal injection of either saccharide fraction promoted the absorption of total quercetin-derivatives from the small intestinal segment and this effect was greater for M-IM than O-IM addition. M-IM also increased Q3G, but not the quercetin aglycone, concentration in the water-phase of the luminal contents more strongly than O-IM. The enhancement of Q3G solubilization in the luminal contents may be responsible for the increases in the quercetin glucoside absorption promoted by α-1,6-glucosaccharides, especially that by M-IM. These results suggest that the ingestion of α-1,6-glucosaccharides promotes Q3G bioavailability.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"Q3G\", \"quercetin-3-O-\\u03b2-d-glucoside\", \"quercetin glucoside\", \"saccharide\", \"quercetin glycoside\", \"d-glucose\", \"quercetin\", \"quercetin glycosides\"]}",
"role": "assistant"
}
]
|
science.chemprot_ner | science.chemprot_ner.525 | [
{
"content": "Analyze the following text to identify specific entities. Focus on the categories provided: [\"Chemical\", \"Protein\"].\n\nDeliver your findings as a JSON structure: {\"Chemical\": [\"ExampleChemical\", ...], \"Protein\": [\"ExampleProtein\", ...]}. Each key represents an entity type, and the associated list contains the entities found. \n\nEnsure the response is strictly the JSON format without any extra text.\n\nSource:\n\nDoes aspirin acetylate multiple cellular proteins? (Review).\nAspirin is a salicylate drug that is extensively used for its anti-inflammatory, antipyretic, analgesic and anti-thrombotic effects. More recently, it has been shown to decrease the incidence of cancers of epithelial origin. In most cases, aspirin is relatively safe. However, it does cause a host of adverse effects and toxicities, including gastrointestinal bleeding, ulcerations, nephrotoxicity and hypersensitivity reactions. Although the inhibition of cyclooxygenases by aspirin, which leads to its anti-inflammatory/analgesic properties, has been well studied, the mechanisms involved in its chemopreventive effects as well as some of its adverse effects are as yet ill-defined. Studies over the past decades suggest that, besides cyclooxygenases, aspirin acetylates other cellular proteins. These studies used radiolabeled 3H or 14C aspirin, the only approach used to date for the detection of proteins acetylated by aspirin. In a recent study using protein-specific anti-acetyl lysine antibodies and immunological methods, we demonstrated the ability of aspirin to acetylate the tumor suppressor protein p53. In this review, we present current research from the literature on the aspirin-induced acetylation of proteins. We also describe an immunological approach to detecting acetylated proteins in aspirin-treated cells, and demonstrate that multiple proteins are acetylated. Since post-translational modification of proteins, such as acetylation, may lead to the alteration of their function, it is possible that some of the hitherto unexplained beneficial or adverse effects of aspirin could occur as a result of these modifications. The identification of these novel acetylation targets of aspirin represents a new area for investigation.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"lysine\", \"aspirin\", \"14C\", \"Aspirin\", \"salicylate\", \"3H\"], \"Protein\": [\"cyclooxygenases\", \"tumor suppressor protein p53\"]}",
"role": "assistant"
}
]
|
science.healthver_entailment | science.healthver_entailment.23 | [
{
"content": "Evaluate the given public health statement using the provided research abstract. Your goal is to determine the relationship between the claim and the abstract content.\n\n- \"decision\": Indicate whether the abstract \"SUPPORTS\", \"CONTRADICTS\", or provides \"INSUFFICIENT\" information regarding the claim.\n- \"justification\": List sentences from the abstract that back your decision. If the decision is \"INSUFFICIENT\", leave this list empty.\n\nStatement: long-term exposure to PM2.5 air pollution, produced by sources including wildfires, power plants and vehicles, may make the coronavirus particularly deadly.\n\nResearch Abstract:\nBackground: Understanding and projecting the spread of COVID-19 requires reliable estimates of how weather components are associated with the transmission of the virus.\nPrior research on this topic has been inconclusive.\nIdentifying key challenges to reliable estimation of weather impact on transmission we study this question using one of the largest assembled databases of COVID-19 infections and weather.\nMethods: We assemble a dataset that includes virus transmission and weather data across 3,739 locations from December 12, 2019 to April 22, 2020.\nUsing simulation, we identify key challenges to reliable estimation of weather impacts on transmission, design a statistical method to overcome these challenges, and validate it in a blinded simulation study.\nUsing this method and controlling for location-specific response trends we estimate how different weather variables are associated with the reproduction number for COVID-19.\nWe then use the estimates to project the relative weather-related risk of COVID-19 transmission across the world and in large cities.\nResults: We show that the delay between exposure and detection of infection complicates the estimation of weather impact on COVID-19 transmission, potentially explaining significant variability in results to-date.\nCorrecting for that distributed delay and offering conservative estimates, we find a negative relationship between temperatures above 25 degrees Celsius and estimated reproduction number ([R]), with each degree Celsius associated with a 3.1% (95% CI, 1.5% to 4.8%) reduction in [R].\nHigher levels of relative humidity strengthen the negative effect of temperature above 25 degrees.\nMoreover, one millibar of additional pressure increases [R] by approximately 0.8 percent (95% CI, 0.6% to 1%) at the median pressure (1016 millibars) in our sample.\nWe also find significant positive effects for wind speed, precipitation, and diurnal temperature on [R].\nSensitivity analysis and simulations show that results are robust to multiple assumptions.\nDespite conservative estimates, weather effects are associated with a 43% change in [R] between the 5th and 95th percentile of weather conditions in our sample.\nConclusions: These results provide evidence for the relationship between several weather variables and the spread of COVID-19.\nHowever, the (conservatively) estimated relationships are not strong enough to seasonally control the epidemic in most locations.",
"role": "user"
},
{
"content": "{\"verdict\": \"NEI\", \"evidence\": []}",
"role": "assistant"
}
]
|
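The abstract in the healthver record above quotes per-unit weather effects on the estimated reproduction number (about -3.1% per degree Celsius above 25 and +0.8% per millibar of pressure). A tiny sketch combining the two is below; treating the quoted percentages as independent multiplicative factors is purely an illustrative assumption, not the paper's statistical model.

```python
# Illustrative combination of two weather coefficients quoted in the abstract
# above. Multiplicative, independent effects are an assumption for this demo.

def relative_r(temp_c: float, pressure_mb: float) -> float:
    """Reproduction number relative to 25 deg C and the 1016 mb median."""
    r = 1.0
    if temp_c > 25.0:
        r *= (1.0 - 0.031) ** (temp_c - 25.0)     # -3.1% per deg C above 25
    r *= (1.0 + 0.008) ** (pressure_mb - 1016.0)  # +0.8% per millibar
    return r

print(relative_r(30.0, 1016.0))  # ~0.85: five hot degrees suppress R modestly
print(relative_r(20.0, 1020.0))  # ~1.03: higher pressure nudges R upward
```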
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.4239 | [
{
"content": "Analyze the given paragraph from COVID-19 research. Determine the relevant topics discussed in the text. Your response should be a JSON array containing the identified topics. Ensure the output is a valid JSON array with no additional commentary.\n\nText:\n\nOBJECTIVES: To assess the effect of anticoagulation with bivalirudin administered intravenously on gas-exchange in patients with COVID-19 and respiratory failure using invasive mechanical ventilation. TRIAL DESIGN: This is a single centre parallel group, superiority, randomized (1:1 allocation ratio) controlled trial. PARTICIPANTS: All patients admitted to the Hamad Medical Corporation -ICU in Qatar for COVID-19 associated respiratory distress and in need of mechanical ventilation are screened for eligibility. INCLUSION CRITERIA: all adult patients admitted to the ICU who test positive for COVID-19 by PCR-test and in need for mechanical ventilation are eligible for inclusion. Upon crossing the limit of D-dimers (1.2 mg/L) these patients are routinely treated with an increased dose of anticoagulant according to our local protocol. This will be the start of randomization. EXCLUSION CRITERIA: pregnancy, allergic to the drug, inherited coagulation abnormalities, no informed consent. INTERVENTION AND COMPARATOR: The intervention group will receive the anticoagulant bivalirudin intravenously with a target aPTT of 45-70 sec for three days while the control group will stay on the standard treatment with low-molecular-weight heparins /unfractionated heparin subcutaneously (see scheme in Additional file 1). All other treatment will be unchanged and left to the attending physicians. MAIN OUTCOMES: As a surrogate parameter for clinical improvement and primary outcome we will use the PaO2/FiO2 (P/F) ratio. RANDOMISATION: After inclusion, the patients will be randomized using a closed envelope method into the conventional treatment group, which uses the standard strategy and the experimental group which receives anticoagulation treatment with bivalirudin using an allocation ratio of 1:1. BLINDING (MASKING): Due to logistical and safety reasons (assessment of aPTT to titrate the study drug) only the data-analyst will be blinded to the groups. NUMBERS TO BE RANDOMISED (SAMPLE SIZE): We performed a sample size calculation and assumed the data for P/F ratio (according to literature) is normally distributed and used the mean which would be: 160 and SD is 80. We expect the treatment will improve this by 30%. In order to reach a power of 80% we would need 44 patients per group (in total 88 patients). Taking approximately 10% of dropout into account we will include 100 patients (50 in each group). TRIAL STATUS: The local registration number is MRC-05-082 with the protocol version number 2. The date of approval is 18th June 2020. Recruitment started on 28(th) June and is expected to end in November 2020. TRIAL REGISTRATION: The protocol is registered before starting subject recruitment under the title: \"Anticoagulation in patients suffering from COVID-19 disease. The ANTI-CO Trial\" in ClinicalTrials.org with the registration number: NCT04445935 . Registered on 24 June 2020. FULL PROTOCOL: The full protocol is attached as an additional file, accessible from the Trials website (Additional file 2). In the interest in expediting dissemination of this material, the familiar formatting has been eliminated; this Letter serves as a summary of the key elements of the full protocol.",
"role": "user"
},
{
"content": "[\"Treatment\"]",
"role": "assistant"
}
]
|
science.chemprot_ner | science.chemprot_ner.787 | [
{
"content": "Analyze the following text to identify specific entities. Focus on the categories provided: [\"Chemical\", \"Protein\"].\n\nDeliver your findings as a JSON structure: {\"Chemical\": [\"ExampleChemical\", ...], \"Protein\": [\"ExampleProtein\", ...]}. Each key represents an entity type, and the associated list contains the entities found. \n\nEnsure the response is strictly the JSON format without any extra text.\n\nSource:\n\nThe effects of labetalol and dilevalol on isolated cardiovascular preparations of the guinea-pig and rat.\nDiffering effects of labetalol and dilevalol on cardiovascular preparations have been reported. I have studied the effects of labetalol and dilevalol on the contractile responses of the rat and guinea-pig left atria and rat portal vein. On the guinea-pig left atria low concentrations of labetalol (> or = 10(-8) M) and of dilevalol (> or = 10(-7) M) inhibited to a small extent the responses to electrical cardiac stimulation, which is indicative of membrane stabilizing activity. Labetalol (> or = 3 x 10(-8) M) and dilevalol (> or = 10(-8) M) caused surmountable antagonism of the isoprenaline responses of the atria and the pA2 values were 8.60 and 8.98 at the beta 1-adrenoceptors of the rat left atria and 7.90 and 8.31, respectively, on the guinea-pig left atria which has functional beta 1- and beta 2-adrenoceptors. Labetalol and dilevalol (both at > or = 10(-7) M) attenuated the spontaneous contractile activity of the rat portal vein and the attenuation to labetalol at 10(-6) M was abolished by ICI 118,551 which illustrates that the labetalol-induced attenuation is beta 2-adrenoceptor mediated. The isoprenaline attenuation responses of the portal vein were inhibited by labetalol and dilevalol (both at > or = 10(-7) M) and the pA2 value for the labetalol at beta 2-adrenoceptors was 7.59. It is concluded that labetalol and dilevalol are beta 1-adrenoceptor selective antagonists.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"ICI 118,551\", \"Labetalol\", \"labetalol\", \"isoprenaline\", \"dilevalol\"], \"Protein\": [\"beta 1- and beta 2-adrenoceptors\", \"beta 2-adrenoceptors\", \"beta 1-adrenoceptors\", \"beta 2-adrenoceptor\", \"beta 1-adrenoceptor\"]}",
"role": "assistant"
}
]
|
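The pharmacology record above quotes pA2 values (e.g. 8.60 and 8.98 at the beta 1-adrenoceptors). For context, pA2 can be derived from a dose ratio with the classical Schild relation; the sketch below uses made-up inputs, not data from the study.

```python
# pA2 from a dose ratio via the classical Schild relation (textbook formula):
#     K_B = [B] / (DR - 1),   pA2 = -log10(K_B)
# where DR is the agonist dose ratio produced by antagonist concentration [B].
# The example numbers are hypothetical, not taken from the record above.

import math

def pa2_from_dose_ratio(dose_ratio: float, antagonist_conc_m: float) -> float:
    """pA2 for a competitive antagonist at concentration [B] (molar)."""
    if dose_ratio <= 1.0:
        raise ValueError("dose ratio must exceed 1 for a rightward shift")
    kb = antagonist_conc_m / (dose_ratio - 1.0)
    return -math.log10(kb)

print(pa2_from_dose_ratio(10.0, 3e-8))  # ~8.5 for a 10-fold shift at 30 nM
```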
science.data_reco_mcq_sc | science.data_reco_mcq_sc.10289 | [
{
"content": "Explore the following research question and associated terms. Identify the most suitable dataset from the list provided. Use the dataset descriptions to guide your choice.\n\nInquiry: The dataset of incomplete sentences.\n\nKeywords: sentence-level revision text\n\nDataset Information:\nFSDnoisy18k: The FSDnoisy18k dataset is an open dataset containing 42.5 hours of audio across 20 sound event classes, including a small amount of manually-labeled data and a larger quantity of real-world noisy data. The audio content is taken from Freesound, and the dataset was curated using the Freesound Annotator.\nJFLEG: JFLEG is for developing and evaluating grammatical error correction (GEC). Unlike other corpora, it represents a broad range of language proficiency levels and uses holistic fluency edits to not only correct grammatical errors but also make the original text more native sounding.\nVRD: The Visual Relationship Dataset (VRD) contains 4000 images for training and 1000 for testing annotated with visual relationships. Bounding boxes are annotated with a label containing 100 unary predicates.\nIIRC: Contains more than 13K questions over paragraphs from English Wikipedia that provide only partial information to answer them, with the missing information occurring in one or more linked documents. The questions were written by crowd workers who did not have access to any of the linked documents, leading to questions that have little lexical overlap with the contexts where the answers appear.\n\nAvailable Datasets: A: FSDnoisy18k B: JFLEG C: VRD D: IIRC",
"role": "user"
},
{
"content": "JFLEG",
"role": "assistant"
}
]
|
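Dataset-recommendation records like the one above pair a query and keywords with short candidate descriptions. A plain lexical-overlap ranker is a reasonable first baseline for this format; the sketch below is illustrative only: the scoring rule is an assumption of ours, not the method behind the gold answer.

```python
def recommend(query: str, candidates: dict) -> str:
    """Rank candidate datasets by bag-of-words overlap between the query
    and each description; return the best-scoring dataset name."""
    query_words = set(query.lower().split())
    def score(name):
        return len(query_words & set(candidates[name].lower().split()))
    return max(candidates, key=score)

candidates = {
    "JFLEG": "grammatical error correction corpus with holistic fluency edits to sentences",
    "VRD": "images annotated with visual relationships and bounding boxes",
}
print(recommend("incomplete sentences sentence-level revision text", candidates))  # JFLEG
```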
science.chemdner_ner | science.chemdner_ner.315 | [
{
"content": "Analyze the given text to identify specific entities. Focus on extracting relevant data.\n\nProvide your findings in JSON format. Each category should list the entities found.\n\nText:\n\nMechanisms in endocrinology. Role of emotional stress in the pathophysiology of Graves' disease. The role of stress in the pathophysiology of Graves' disease is suggested by several clinical observations, by recent advances in immunology and by better understanding of autoimmune diseases which provides new insights into potential effects of stress hormones on T helper cell imbalance involved in the pathogenesis of autoimmune diseases. Stress management should therefore be an important part of the treatment of Graves' disease, as stress reduction may improve the effect of therapy. However, this field still requires interventional data to support stress management in the treatment of Graves' disease.",
"role": "user"
},
{
"content": "{\"Chemical\": []}",
"role": "assistant"
}
]
|
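Chemical-NER records in this format expect an empty list when nothing matches, exactly as in the gold answer above. A gazetteer lookup is one minimal way to produce such output; the tiny lexicon below is a placeholder assumption, not the dataset's actual annotation method.

```python
import json
import re

# Illustrative lexicon only; a real system would use a large curated resource.
CHEMICAL_LEXICON = {"labetalol", "isoprenaline", "cortisol"}

def tag_chemicals(text: str) -> str:
    tokens = re.findall(r"[A-Za-z][A-Za-z0-9-]*", text)
    found = sorted({t for t in tokens if t.lower() in CHEMICAL_LEXICON})
    return json.dumps({"Chemical": found})

abstract = ("The role of stress in the pathophysiology of Graves' disease is "
            "suggested by several clinical observations.")
print(tag_chemicals(abstract))  # {"Chemical": []}, matching the gold answer above
```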
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.2361 | [
{
"content": "Examine the abstracts provided from a main scientific article and its cited works. Craft a narrative that synthesizes the key insights and contributions, emphasizing their interconnections. Employ numerical citations such as [1], [2], etc., to reference the cited works.\n\nMain Article Summary:\nBy borrowing the wisdom of human in gaze following, we propose a two-stage solution for gaze point prediction of the target persons in a scene. Specifically, in the first stage, both head image and its position are fed into a gaze direction pathway to predict the gaze direction, and then multi-scale gaze direction fields are generated to characterize the distribution of gaze points without considering the scene contents. In the second stage, the multi-scale gaze direction fields are concatenated with the image contents and fed into a heatmap pathway for heatmap regression. There are two merits for our two-stage solution based gaze following: i) our solution mimics the behavior of human in gaze following, therefore it is more psychological plausible; ii) besides using heatmap to supervise the output of our network, we can also leverage gaze direction to facilitate the training of gaze direction pathway, therefore our network can be more robustly trained. Considering that existing gaze following dataset is annotated by the third-view persons, we build a video gaze following dataset, where the ground truth is annotated by the observers in the videos. Therefore it is more reliable. The evaluation with such a dataset reflects the capacity of different methods in real scenarios better. Extensive experiments on both datasets show that our method significantly outperforms existing methods, which validates the effectiveness of our solution for gaze following. Our dataset and codes are released in this https URL.\n\nCited Works Summary:\n[1]: Five important trends have emerged from recent work on computational models of focal visual attention that emphasize the bottom-up, image-based control of attentional deployment. First, the perceptual saliency of stimuli critically depends on the surrounding context. Second, a unique 'saliency map' that topographically encodes for stimulus conspicuity over the visual scene has proved to be an efficient and plausible bottom-up control strategy. Third, inhibition of return, the process by which the currently attended location is prevented from being attended again, is a crucial element of attentional deployment. Fourth, attention and eye movements tightly interplay, posing computational challenges with respect to the coordinate system used to control attention. And last, scene understanding and object recognition strongly constrain the selection of attended locations. Insights from these five key areas provide a framework for a computational and neurobiological understanding of visual attention.\n\n[2]: Recent results suggest that state-of-the-art saliency models perform far from optimal in predicting fixations. This lack in performance has been attributed to an inability to model the influence of high-level image features such as objects. Recent seminal advances in applying deep neural networks to tasks like object recognition suggests that they are able to capture this kind of structure. However, the enormous amount of training data necessary to train these networks makes them difficult to apply directly to saliency prediction. 
We present a novel way of reusing existing neural networks that have been pretrained on the task of object recognition in models of fixation prediction. Using the well-known network of Krizhevsky et al. (2012), we come up with a new saliency model that significantly outperforms all state-of-the-art models on the MIT Saliency Benchmark. We show that the structure of this network allows new insights in the psychophysics of fixation selection and potentially their neural implementation. To train our network, we build on recent work on the modeling of saliency as point processes.\n\n[3]: Following the gaze of people inside videos is an important signal for understanding people and their actions. In this paper, we present an approach for following gaze in video by predicting where a person (in the video) is looking even when the object is in a different frame. We collect VideoGaze, a new dataset which we use as a benchmark to both train and evaluate models. Given one frame with a person in it, our model estimates a density for gaze location in every frame and the probability that the person is looking in that particular frame. A key aspect of our approach is an end-to-end model that jointly estimates: saliency, gaze pose, and geometric relationships between views while only using gaze as supervision. Visualizations suggest that the model learns to internally solve these intermediate tasks automatically without additional supervision. Experiments show that our approach follows gaze in video better than existing approaches, enabling a richer understanding of human activities in video.\n\n[4]: For many applications in graphics, design, and human computer interaction, it is essential to understand where humans look in a scene. Where eye tracking devices are not a viable option, models of saliency can be used to predict fixation locations. Most saliency approaches are based on bottom-up computation that does not consider top-down image semantics and often does not match actual eye movements. To address this problem, we collected eye tracking data of 15 viewers on 1003 images and use this database as training and testing examples to learn a model of saliency based on low, middle and high-level image features. This large database of eye tracking data is publicly available with this paper.\n\n[5]: \n\n[6]: In this paper we introduce a novel Depth-Aware Video Saliency approach to predict human focus of attention when viewing videos that contain a depth map (RGBD) on a 2D screen. Saliency estimation in this scenario is highly important since in the near future 3D video content will be easily acquired yet hard to display. Despite considerable progress in 3D display technologies, most are still expensive and require special glasses for viewing, so RGBD content is primarily viewed on 2D screens, removing the depth channel from the final viewing experience. We train a generative convolutional neural network that predicts the 2D viewing saliency map for a given frame using the RGBD pixel values and previous fixation estimates in the video. To evaluate the performance of our approach, we present a new comprehensive database of 2D viewing eye-fixation ground-truth for RGBD videos. Our experiments indicate that it is beneficial to integrate depth into video saliency estimates for content that is viewed on a 2D display. 
We demonstrate that our approach outperforms state-of-the-art methods for video saliency, achieving a 15% relative improvement.\n\n[7]: A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented. Multiscale image features are combined into a single topographical saliency map. A dynamical neural network then selects attended locations in order of decreasing saliency. The system breaks down the complex problem of scene understanding by rapidly selecting, in a computationally efficient manner, conspicuous locations to be analyzed in detail.\n\n[8]: We introduce SalGAN, a deep convolutional neural network for visual saliency prediction trained with adversarial examples. The first stage of the network consists of a generator model whose weights are learned by back-propagation computed from a binary cross entropy (BCE) loss over downsampled versions of the saliency maps. The resulting prediction is processed by a discriminator network trained to solve a binary classification task between the saliency maps generated by the generative stage and the ground truth ones. Our experiments show how adversarial training allows reaching state-of-the-art performance across different metrics when combined with a widely-used loss function like BCE. Our results can be reproduced with the source code and trained models available at https://imatge-upc.github.io/saliency-salgan-2017 .",
"role": "user"
},
{
"content": "Saliency detection and gaze following are two different tasks [5] [3] even though they were closely related. Saliency detection predicts fixation map from observers out of the original images [1] [4] [6] . Gaze following in image predicts the position that people in a scene were looking at. Previous works about saliency prediction considered the low-level features and saliency maps at different scales [7] . Subsequently, the features from different levels were combined to model a bottom-up, top-down architecture [4] . Recently, deep neural networks have been applied to saliency prediction and achieve great success [2] [8] . However, the object in the gaze point region may be not salient, which reveals that it is hard to find the gaze point through a saliency algorithm directly.",
"role": "assistant"
}
]
|
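The gold summary above traces saliency prediction from multiscale low-level features [7] to deep models [2] [8]. As a concrete illustration of the classical starting point, the sketch below computes a toy center-surround intensity saliency map; the scale choices and normalization are our own simplifications in the spirit of [7], not a faithful reimplementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def intensity_saliency(image: np.ndarray, scales=(2, 4, 8)) -> np.ndarray:
    """Toy center-surround saliency for a grayscale image: blur at several
    scales, take absolute center-surround differences, and sum them into a
    single map, loosely in the spirit of multiscale saliency models."""
    image = image.astype(float)
    saliency = np.zeros_like(image)
    for sigma in scales:
        center = gaussian_filter(image, sigma)
        surround = gaussian_filter(image, 2 * sigma)
        saliency += np.abs(center - surround)
    span = saliency.max() - saliency.min()
    return (saliency - saliency.min()) / span if span > 0 else saliency

# A bright blob on a dark background should dominate the map.
img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0
peak = np.unravel_index(intensity_saliency(img).argmax(), img.shape)
print(peak)  # close to the blob centre (around row 31, column 31)
```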
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.10653 | [
{
"content": "Read the provided biomedical research paper and create a concise abstract. Ensure the abstract captures the main points of the paper.\n\nFull Paper:\nvascular leiomyoma ( vl ) or angioleiomyoma comparable to the one described in this study is extremely rare .\nonly 8.5% to 10% of vl are found in the head and neck area and , among these , only 3% occur in the nasal cavity [ 1 , 2 ] .\nwe describe the clinical and histological findings and the feasibility of surgical treatment using a radiofrequency instrument .\na 45-year - old man reported to his general practitioner with recurrent and strong , but self - limiting , left - sided epistaxis over the preceding 2 to 3 months , each time lasting up to 10 minutes .\nhe also felt an intermittent , dull pain on the left side of his nose and the strange sensation of a slowly growing mass inside his nose , but neither nasal obstruction nor hypesthesia of the face .\nthe effort was abandoned because of strong bleeding , a nosepad was applied and then the patient was referred as an emergency to our clinic . upon arrival\nthe personal history showed an appropriately treated arterial hypertension and no regular alcohol or nicotine consumption or substance abuse .\na clinical examination showed a healthy albeit obese man with a marginally elevated blood pressure level .\nafter removal of the nosepad and suction cleaning , the nasal inspection showed a well - defined tumor of the lateral nasal wall on the anterior face of the inferior turbinate , approximately 10 mm in size .\nanterior and posterior rhinoscopy revealed a slight deviation of the nasal septum , but no other intranasal pathology .\nthe external nose showed a slight , though distinctly palpable swelling in the area between the lateral cartilage and the nasal bone on the left side .\nto further investigate the repeatedly bleeding endonasal mass a ct scan with contrast was conducted .\nit showed a tumor of the inferior turbinate , only marginally contrast enhancing , 24 mm in the largest diameter .\nno bony erosion was reported , and an infiltration of the lateral nasal cartilage could not be ruled out ( figure 1 ) .\na biopsy under local anesthesia revealed the diagnosis of angioleiomyoma . to differentiate the tumor growth and the involvement of nasal cartilage , a mri scan was ordered ( figure 2 ) .\nthe scan showed a 9 11 8 mm ( w h d ) sized , nodular , contrast enhancing lesion emanating from the lateral nasal wall and extending to the anterior face of the inferior nasal turbinate on the level of the piriform aperture without any sign of infiltrative growth .\na transnasal endoscopic tumor resection was performed under general anesthesia using a radiofrequency instrument ( ellman surgitron ) ( figure 3 ) .\nthe histological examination confirmed the diagnosis of angioleiomyoma ( figure 4 ) . during the clinical follow - ups conducted at six and twelve months after the surgery , a nasal endoscopy showed no sign of tumor recurrence and the patient remained symptom - free .\nvascular leiomyomas are benign neoplasms that are rarely found in the head and neck region .\nthe presenting symptoms are nonspecific and usually characterized by nasal obstruction , epistaxis , and nasal discharge .\nvascular leiomyomas occur as small , painless masses in the superficial layer of the skin or mucosa .\n. these tumors can affect persons of any age , but they are more common in people between 30 to 60 years of age with a female predominance ( female to male ratio of 2 to 1 ) . 
while the etiology of sinonasal leiomyoma remains uncertain , there are three dominant hypotheses to explain the origin of sinonasal leiomyoma .\nthey could stem from aberrant undifferentiated mesenchymal or smooth muscle elements in the blood vessel wall . in the nasal vestibule\nthe observation of many tumors located in nasal turbinates seems to support this hypothesis ; these areas are relatively abundant with blood vessels and smooth muscles .\nsome authors suggest that vascular leiomyoma could be a vascular malformation or a progressive development of smooth muscle proliferation from hemangioma , to angioma with much muscle , to leiomyoma with many vessels , and to solid leiomyoma .\nthe inferior nasal turbinate , the nasal septum , and the nasal vestibule are reported as the most common sites of occurrence .\nhypothesized that the tumor growth might be hormone - dependent due to the reported female predominance and the increased pain during pregnancy or the menstrual cycle in patients described elsewhere .\nmore studies are needed to clarify the influence and mechanism of action of sex hormones on this tumor type . the world health organization ( who ) classified leiomyoma into three groups : nonvascular leiomyoma , vascular leiomyoma , and epithelioid leiomyoma .\nmorimoto divided angiomyoma into three histologic subtypes in 1973 : solid or capillary , cavernous , and venous .\nthe vascular subtype is the most common one and angiomyoma of the head and neck usually occurs as a venous or cavernous type .\nthe solid type has smooth muscle bundles that intertwine and surround the vascular slit - like channels .\nthe venous type has vascular channels with thick muscular walls that are easily discerned from smooth muscle bundles .\nthe histopathological differential diagnosis of angioleiomyoma includes hemangioma , angiofibroma , myopericytoma , fibromyoma , and leiomyosarcoma .\nepithelial tumors are the most common and originate from the epithelial lining , accessory salivary glands , neuroendocrine tissue , and olfactory epithelium .\ninverted papillomas and osteomas are the most common benign tumors followed by fibrous dysplasia and neurogenic tumors , for example , schwannomas .\nsquamous cell carcinomas are the most common malignant sinonasal tumors ( 80% ) , followed by adenocarcinoma .\nchondrosarcoma , lymphoma , salivary gland tumors , neuroendocrine tumors , and mucosal malignant melanoma represent a further differential diagnosis . radiological imaging , for example , mri and/or ct scans ,\nrepresents an helpful tool for visualizing the extension of the tumor and for the proper planning of the surgical approach , but there are no particular imaging techniques that specifically characterize the vl .\nnevertheless , images may be helpful in differentiating deeply seated vascular leiomyoma from malignant or other benign tumors , such as lipoma or fibroma [ 12 , 13 ] .\nthe surgical approach depends on the size , the location , the extension of the tumor , and the experience of the surgeon .\ndifferent surgical techniques are well known from the surgery of the turbinates ( e.g. , resectioning , laser surgical procedures , and coagulation procedures ) . 
in most patients ,\ntransnasal endoscopic excision as shown in our case can be performed successfully because many of the tumors are limited to the sinonasal cavity .\nthe presented technique of excision with a radiofrequency instrument has many advantages , for example , clear surgical field and less hemorrhage , and can be safely performed by an experienced surgeon in a two - hand technique .\nthis was probably due to an incomplete excision of the tumor margin and the adjacent bony shell .\nthus , the chance of local recurrence is mainly related to incomplete excision at the time of the initial surgery .\nit appears that the absence of mitosis is the most useful histological indicator of a benign lesion .\nsurgical excision accompanied by a histological study is the only way to confirm the diagnosis .\nspecial stains for smooth muscle cells are sometimes necessary to differentiate vl from other spindle cell tumors such as hemangioma , angiofibroma , fibroma , and angiomyolipoma .\nangioleiomyoma in the nasal cavity is extremely rare but an important differential diagnosis in a patient with a nasal tumor .\nattention should be paid to intra- and postoperative hemorrhage due to the hypervascularity of this neoplasm . in our case of a vl originating from the anterior border of the inferior turbinate\n, we demonstrated that the endoscopic approach with a radiofrequency instrument resulted in a complete and safe removal of the mass with no recurrence during a period of 12 months .",
"role": "user"
},
{
"content": "vascular leiomyomas or angioleiomyomas are rare tumors that can be found in the nasal cavity . \n the etiology of angioleiomyoma remains poorly understood and there are several hypotheses to explain the origin of sinonasal leiomyoma . \n we here describe the clinical and histological findings in a case study along with the feasibility of surgical treatment using a radiofrequency instrument . \n in particular , we describe the case of an adult patient with recurrent epistaxis because of a nasal angioleiomyoma and the performed treatment in the form of complete surgical excision . \n radiological imaging is a helpful tool to give an indication of the extension of the tumor , as well as for the proper planning of the surgical approach . \n either mri or ct scans are found to be best suited for this purpose . \n this case report recommends the complete surgical excision of the angioleiomyoma , by either an endoscopic or an open procedure . \n this can be safely performed using a radiofrequency instrument as shown in this case with no recurrence during a follow - up of 12 months .",
"role": "assistant"
}
]
|
science.craftchem_ner | science.craftchem_ner.997 | [
{
"content": "Analyze the following sentence to identify chemical entities of interest. List these entities in a JSON array format. If no entities are found, return an empty array.\n\nText to Analyze:\nFurthermore,theycontainsorcinandannexinA7attachedtothelipidrafts.",
"role": "user"
},
{
"content": "[\"lipid\"]",
"role": "assistant"
}
]
|
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.3483 | [
{
"content": "Analyze the given paragraph from COVID-19 research. Determine the relevant topics discussed in the text. Your response should be a JSON array containing the identified topics. Ensure the output is a valid JSON array with no additional commentary.\n\nText:\n\nThe goal of this column is to help mental health care professionals understand coronavirus disease 2019 (COVID-19) so that they can better explain the complexities of the current crisis to their patients. The bottom-line of this column is that, while COVID-19 can infect virtually everyone in the human population, only about 5% are susceptible to severe infection requiring admission to an intensive care unit and/or causing a fatal outcome and this population can be identified on the basis of comorbid medical illness and/or age. These numbers are based on experience in China, the United States, and Europe. Table 1 presents an analysis conducted by the US Centers for Disease Control and Prevention (CDC), which is further supported by several other sources reviewed in the article. The population at risk for severe infection are individuals with comorbid medical illness and those 85 years of age and older. The comorbid medical illnesses identified as risk factors are preexisting respiratory and cardiovascular disease, immunocompromised status, morbid obesity (ie, body mass index >/=40), diabetes mellitus, and possibly significant kidney or liver impairment. Parenthetically, news reports and the literature sometimes cite age 60 years and older as a risk factor but age between 60 and 85 years is likely a surrogate for having 1 or more of these comorbid medical conditions. While 5% may initially seem like a small number, it nevertheless potentially represents 16.5 million people, given the United States population of 330 million. That is a tremendous number of people requiring intensive care unit admission and/or potentially dying, and individuals in this population have overwhelmed the US health care system in some hotspots. For this reason, this column suggests taking this at-risk population into account in mitigation strategies when attempting to open the US economy. The column addresses the following questions: (1) What are the 3 aspects of the race to minimize the damage caused by COVID-19? (2) What data are currently available to help guide decisions to be made? (3) What strategies have been employed to date and how successful have they been? and (4) Might risk stratification of exposure be a viable strategy to minimize the damage caused by the virus? The race to minimize the damage caused by COVID-19 requires that we obtain knowledge about the disease and its treatment or prevention, how to best safeguard public health and avoid overwhelming the health care system, and how to minimize the societal damage caused by substantial disruption of the economy. Data gathered over the past 4 months since the COVID-19 virus emerged as a human pathogen have provided guidance for our decisions going forward. The most widely adopted strategies for dealing with the COVID-19 pandemic to date have involved the epidemiological approach of encouraging good hygiene practices and social distancing, including orders to \"shelter in place,\" quarantine of high-risk individuals, and isolation of infected individuals. 
The goal of this epidemiological approach has been to \"flatten the curve\" by reducing the height of the peak of the infection to avoid overwhelming the health care system and society in general, while buying time to learn more about the disease and find more effective ways to deal with it. However, now that more is known about COVID-19 and the portion of the population that is most at risk for serious adverse outcomes including death, it may be possible to move from a shelter-in-place approach for the entire population to focus on those at most risk and thus facilitate a gradual and rational phased reduction of social restrictions to reopen the economy. Such a graduated opening would be based on regions of countries meeting specific criteria in terms of being able to contain the virus, coupled with vigorous monitoring to look for outbreaks, followed by case monitoring, isolation of infected individuals and quarantine of exposed individuals, and increased use of testing for active disease as well as for immunity. Taking the data on high-risk individuals into account would allow for a gradual lifting of restrictions on the majority of the population while maintaining more stringent safeguards to protect the vulnerable portion of the population. Nevertheless, the entire population would need to continue to practice good hygiene and social distancing while simultaneously-and perhaps even more vigorously-focusing on sheltering the vulnerable population until adequate community immunity has been achieved to prevent the spread of the virus, whether that is accomplished through natural exposure alone or with the addition of safe and effective vaccine(s) which may not be available for a year. Continued widespread testing for antibodies will help determine how far or close this country is-and other countries are-from developing effective community immunity.",
"role": "user"
},
{
"content": "[\"Prevention\"]",
"role": "assistant"
}
]
|
science.qasper_extractive_qa | science.qasper_extractive_qa.445.0 | [
{
"content": "Analyze the given research paper excerpts and address the question provided. Extract the necessary information directly from the text. Your answer should be formatted as a JSON object with the following structure:\n\n\"answer\": A list of text segments from the paper that directly respond to the question.\n\"evidence\": A list of text segments from the paper where the answer can be found.\n\nIf the question cannot be answered with the given text, return \"null\".\n\nDocument: A Simple Approach to Multilingual Polarity Classification in Twitter\n\nRecently, sentiment analysis has received a lot of attention due to the interest in mining opinions of social media users. Sentiment analysis consists in determining the polarity of a given text, i.e., its degree of positiveness or negativeness. Traditionally, Sentiment Analysis algorithms have been tailored to a specific language given the complexity of having a number of lexical variations and errors introduced by the people generating content. In this contribution, our aim is to provide a simple to implement and easy to use multilingual framework, that can serve as a baseline for sentiment analysis contests, and as starting point to build new sentiment analysis systems. We compare our approach in eight different languages, three of them have important international contests, namely, SemEval (English), TASS (Spanish), and SENTIPOLC (Italian). Within the competitions our approach reaches from medium to high positions in the rankings; whereas in the remaining languages our approach outperforms the reported results.\n\nIntroduction\nSentiment analysis is a crucial task in opinion mining field where the goal is to extract opinions, emotions, or attitudes to different entities (person, objects, news, among others). Clearly, this task is of interest for all languages; however, there exists a significant gap between English state-of-the-art methods and other languages. It is expected that some researchers decided to test the straightforward approach which consists in, first, translating the messages to English, and, then, use a high performing English sentiment classifier (for instance, see BIBREF0 and BIBREF1 ) instead of creating a sentiment classifier optimized for a given language. However, the advantages of a properly tuned sentiment classifier have been studied for different languages (for instance, see BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 ).\nThis manuscript focuses on the particular case of multilingual sentiment analysis of short informal texts such as Twitter messages. Our aim is to provide an easy-to-use tool to create sentiment classifiers based on supervised learning (i.e., labeled dataset) where the classifier should be competitive to those sentiment classifiers carefully tuned by some given languages. Furthermore, our second contribution is to create a well-performing baseline to compare new sentiment classifiers in a broad range of languages or to bootstrap new sentiment analysis systems. Our approach is based on selecting the best text-transforming techniques that optimize some performance measures where the chosen techniques are robust to typical writing errors.\nIn this context, we propose a robust multilingual sentiment analysis method, tested in eight different languages: Spanish, English, Italian, Arabic, German, Portuguese, Russian and Swedish. 
We compare our approach ranking in three international contests: TASS'15, SemEval'15-16 and SENTIPOLC'14, for Spanish, English and Italian respectively; the remaining languages are compared directly with the results reported in the literature. The experimental results locate our approach in good positions for all considered competitions; and excellent results in the other five languages tested. Finally, even when our method is almost cross-language, it can be extended to take advantage of language dependencies; we also provide experimental evidence of the advantages of using these language-dependent techniques.\nThe rest of the manuscript is organized as follows. Section SECREF2 describes our proposed Sentiment Analysis method. Section SECREF3 describes the datasets and contests used to test our approach; whereas, the experimental results, and, the discussion are presented on Section SECREF4 . Finally, Section SECREF5 concludes.\n\nOur Approach: Multilingual Polarity Classification\nWe propose a method for multilingual polarity classification that can serve as a baseline as well as a framework to build more complex sentiment analysis systems due to its simplicity and availability as an open source software. As we mentioned, this baseline algorithm for multilingual Sentiment Analysis (B4MSA) was designed with the purpose of being multilingual and easy to implement. B4MSA is not a naïve baseline which is experimentally proved by evaluating it on several international competitions.\nIn a nutshell, B4MSA starts by applying text-transformations to the messages, then transformed text is represented in a vector space model (see Subsection SECREF13 ), and finally, a Support Vector Machine (with linear kernel) is used as the classifier. B4MSA uses a number of text transformations that are categorized in cross-language features (see Subsection SECREF3 ) and language dependent features (see Subsection SECREF9 ). It is important to note that, all the text-transformations considered are either simple to implement or there is a well-known library (e.g. BIBREF6 , BIBREF7 ) to use them. It is important to note that to maintain the cross-language property, we limit ourselves to not use additional knowledge, this include knowledge from affective lexicons or models based on distributional semantics.\nTo obtain the best performance, one needs to select those text-transformations that work best for a particular dataset, therefore, B4MSA uses a simple random search and hill-climbing (see Subsection SECREF14 ) in space of text-transformations to free the user from this delicate and time-consuming task. Before going into the details of each text-transformation, Table TABREF2 gives a summary of the text-transformations used as well as their parameters associated.\n\nParameter Optimization\nThe model selection, sometimes called hyper-parameter optimization, is essential to ensure the performance of a sentiment classifier. In particular, our approach is highly parametric; in fact, we use such property to adapt to several languages. Table TABREF2 summarizes the parameters and their valid values. The search space contains more than 331 thousand configurations when limited to multilingual and language independent parameters; while the search space reaches close to 4 million configurations when we add our three language-dependent parameters. 
Depending on the size of the training set, each configuration needs several minutes on a commodity server to be evaluated; thus, an exhaustive exploration of the parameter space can be quite expensive, making the approach useless in practice. To tackle the efficiency problems, we perform the model selection using two hyper-parameter optimization algorithms.\nThe first corresponds to Random Search, described in depth in BIBREF14 . Random search consists of randomly sampling the parameter space and selecting the best configuration among the sample. The second algorithm consists of a Hill Climbing BIBREF15 , BIBREF16 implemented with a memory to avoid testing a configuration twice. The main idea behind hill climbing H+M is to take a pivoting configuration, explore the configuration's neighborhood, and greedily move to the best neighbor. The process is repeated until no improvement is possible. The configuration neighborhood is defined as the set of configurations such that these differ in just one parameter's value. This rule is strengthened for tokenizer (see Table TABREF2 ) to differ in a single internal value, not in the whole parameter value. More precisely, let INLINEFORM0 be a valid value for tokenizer and INLINEFORM1 the set of valid values for neighborhoods of INLINEFORM2 , then INLINEFORM3 and INLINEFORM4 for any INLINEFORM5 .\nTo guarantee a performance better than or equal to random search, the H+M process starts with the best configuration found in the random search. By using H+M, the sample size can be set to 32 or 64, as a rule of thumb, and still reach improvements in most cases (see § SECREF4 ). Nonetheless, this simplification and performance boost comes along with possibly higher optimization times. Finally, the performance of each configuration is obtained using a cross-validation technique on the training data, and the metrics are those usually used in classification, such as accuracy, score INLINEFORM0 , and recall, among others.\n\nDatasets and contests\nNowadays, there are several international competitions related to text mining, which include diverse tasks such as polarity classification (at different levels), subjectivity classification, entity detection, and irony detection, among others. These competitions are relevant to measure the potential of different proposed techniques. In this case, we focused on the polarity classification task; hence, we developed a baseline method with an acceptable performance achieved in three different contests, namely, TASS'15 (Spanish) BIBREF17 , SemEval'15-16 (English) BIBREF18 , BIBREF19 , and SENTIPOLC'14 (Italian) BIBREF20 . In addition, our approach was tested with other languages (Arabic, German, Portuguese, Russian, and Swedish) to show that it is feasible to use our framework as a basis for building more complex sentiment analysis systems. From these languages, datasets and results can be seen in BIBREF21 , BIBREF3 and BIBREF2 .\nTable TABREF15 presents the details of each of the competitions considered as well as the other languages tested. The table shows the number of examples as well as the number of instances for each polarity level, namely, positive, neutral, negative and none. The training and development (only in SemEval) sets are used to train the sentiment classifier, and the gold set is used to test the classifier. In the cases where the dataset was not split into training and gold sets (Arabic, German, Portuguese, Russian, and Swedish), a cross-validation (10 folds) technique is used to test the classifier. 
The performance of the classifier is presented using different metrics depending on the competition. SemEval uses the average of score INLINEFORM0 of positive and negative labels, TASS uses the accuracy and SENTIPOLC uses a custom metric (see BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 ).\n\nPerformance on sentiment analysis contests\nFigure FIGREF17 shows the performance on four contests, corresponding to three different languages. The performance corresponds to the multilingual set of features, i.e., we do not use language-dependent techniques.\nFigures UID18 - UID21 illustrate the results on each challenge; all competitors are ordered in descending order of score (higher is better). The achieved performance of our approach is marked with a horizontal line on each figure. Figure UID22 briefly describes each challenge and summarizes our performance on each contest; also, we added three standard measures to help the reader draw insights.\nThe winning method in SENTIPOLC'14 (Italian) is reported in BIBREF22 . This method uses three groups of features: keyword and micro-blogging characteristics, Sentiment Lexicons, SentiWordNet and MultiWordNet, and a Distributional Semantic Model (DSM), with a SVM classifier. In contrast with our method, in BIBREF22 three external sentiment lexicon dictionaries were employed; that is, external information.\nIn the TASS'15 (Spanish) competition, the winning method reported was BIBREF23 , which proposed an adaptation based on the tweet tokenizer Tweetmotif BIBREF24 , Freeling BIBREF25 as lemmatizer, entity detector and morphosyntactic labeler, and a translation of the Afinn dictionary. In contrast with our method, BIBREF23 employs several complex and expensive tools. In this task we reached the fourteenth position with an accuracy of INLINEFORM0 . Figure UID19 shows B4MSA performing better than two thirds of the competitors.\nThe remaining two contests correspond to SemEval'15-16. The B4MSA performance in SemEval is depicted in Figures UID20 and UID21 ; here, B4MSA does not perform as well as in other challenges, mainly because, contrary to other challenges, the SemEval rules promote the enrichment of the official training set. To be consistent with the rest of the experiments, B4MSA uses only the official training set. The results can be significantly improved using larger training datasets; for example, joining the SemEval'13 and SemEval'16 training sets, we can reach INLINEFORM0 for SemEval'16, which improves B4MSA's performance (see Table FIGREF17 ).\nIn SemEval'15, the winning method is BIBREF26 , which combines three approaches from the participants of SemEval'13 (teams NRC-Canada, GU-MLT-LT and KLUE) and, from SemEval'14, the participant TeamX, all of them employing external information. In SemEval'16, the winning method BIBREF27 is composed of an ensemble of two subsystems based on convolutional neural networks; the first subsystem is created using 290 million tweets, and the second one is fed with 150 million tweets. All these tweets were selected from a very large unlabeled dataset through distant supervision techniques.\nTable TABREF23 shows the multilingual set of techniques and the set with language-dependent techniques; for each, we optimized the set of parameters through Random Search and INLINEFORM0 (see Subsection SECREF14 ). The reached performance is reported using both cross-validation and the official gold-standard. Please notice how INLINEFORM1 consistently reaches better performances, even on small sampling sizes. 
The sampling size is indicated with subscripts in Table TABREF23 . Note that, in the SemEval challenges, the cross-validation performances are higher than those reached by evaluating the gold-standard, mainly because the gold-standard does not follow the distribution of the training set. This can be understood because the rules of SemEval promote the use of external knowledge.\nTable TABREF24 compares our performance on five different languages; we do not apply language-dependent techniques. For each comparison, we took a labeled corpus from BIBREF3 (Arabic) and BIBREF21 (the remaining languages). According to the authors' reports, all tweets were manually labeled by native speakers as pos, neg, or neu. The Arabic dataset contains INLINEFORM0 items; the other datasets contain from 58 thousand tweets to more than 157 thousand tweets. We were able to fetch a fraction of the original datasets; so, we dropped the necessary items to preserve the original class-population ratio. The ratio of tweets in our training dataset, with respect to the original dataset, is indicated beside the name. As before, we evaluate our algorithms through a 10-fold cross validation.\nIn BIBREF3 , BIBREF2 , the authors study the effect of translation on sentiment classifiers; they found it better to use native Arabic speakers as annotators than fine-tuned translators plus fine-tuned English sentiment classifiers. In BIBREF21 , the idea is to measure the effect of the agreement among annotators on the production of a sentiment-analysis corpus. On the technical side, both papers use fine-tuned classifiers plus a variety of pre-processing techniques to prove their claims. Table TABREF24 supports the idea of choosing B4MSA as a bootstrapping sentiment classifier because, overall, B4MSA reaches superior performance regardless of the language. Our approach achieves those performance levels since it optimizes a set of parameters carefully selected to work across a variety of languages and to be robust to informal writing. The latter problem is not properly tackled in many cases.\n\nAcknowledgements\nWe would like to thank Valerio Basile, Julio Villena-Roman, and Preslav Nakov for kindly giving us access to the gold-standards of SENTIPOLC'14, TASS'15 and SemEval 2015 & 2016, respectively. The authors also thank Elio Villaseñor for the helpful discussions in early stages of this research.\n\nTable 3: Dataset details from each competition tested in this work\n\nTable 4: B4MSA’s performance on cross-validation and gold standard. The subscript at the right of each score indicates the random-search parameter (sampling size) needed to find that value.\n\nTable 5: Performance on multilingual sentiment analysis (not challenges). B4MSA was restricted to use only the multilingual set of parameters.\n\nInquiry: What eight languages are reported on?",
"role": "user"
},
{
"content": "{\"answer\": [\"Spanish, English, Italian, Arabic, German, Portuguese, Russian and Swedish\"], \"evidence\": [\"In this context, we propose a robust multilingual sentiment analysis method, tested in eight different languages: Spanish, English, Italian, Arabic, German, Portuguese, Russian and Swedish. We compare our approach ranking in three international contests: TASS\\u002715, SemEval\\u002715-16 and SENTIPOLC\\u002714, for Spanish, English and Italian respectively; the remaining languages are compared directly with the results reported in the literature. The experimental results locate our approach in good positions for all considered competitions; and excellent results in the other five languages tested. Finally, even when our method is almost cross-language, it can be extended to take advantage of language dependencies; we also provide experimental evidence of the advantages of using these language-dependent techniques.\"]}",
"role": "assistant"
}
]
|
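The B4MSA record above spells out a concrete pipeline: text transformations feeding a linear-kernel SVM, with configurations chosen by random search followed by memoized hill climbing over a space where neighbors differ in one parameter. The sketch below mirrors that two-stage loop on a toy configuration space; the space, the toy data, and all helper names are our assumptions, not the authors' code.

```python
import itertools
import random

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["great movie", "awful plot", "loved it", "hated it", "fine film", "terrible acting"] * 5
labels = [1, 0, 1, 0, 1, 0] * 5

# Toy configuration space standing in for B4MSA's text transformations.
SPACE = {"lowercase": [True, False], "ngram_max": [1, 2, 3], "min_df": [1, 2]}

def evaluate(cfg, _memo={}):
    """Cross-validated score of one configuration; the memo dict is the
    'memory' that avoids testing a configuration twice."""
    key = tuple(sorted(cfg.items()))
    if key not in _memo:
        pipe = make_pipeline(
            TfidfVectorizer(lowercase=cfg["lowercase"],
                            ngram_range=(1, cfg["ngram_max"]),
                            min_df=cfg["min_df"]),
            LinearSVC(),
        )
        _memo[key] = cross_val_score(pipe, texts, labels, cv=3).mean()
    return _memo[key]

def neighbors(cfg):
    """Configurations differing from cfg in exactly one parameter value."""
    for param, values in SPACE.items():
        for v in values:
            if v != cfg[param]:
                yield {**cfg, param: v}

random.seed(0)
all_cfgs = [dict(zip(SPACE, combo)) for combo in itertools.product(*SPACE.values())]
sample = random.sample(all_cfgs, 6)          # stage 1: random search
best = max(sample, key=evaluate)
improved = True
while improved:                              # stage 2: hill climbing from the winner
    improved = False
    for cand in neighbors(best):
        if evaluate(cand) > evaluate(best):
            best, improved = cand, True
print(best, round(evaluate(best), 3))
```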
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.5099 | [
{
"content": "Craft a cohesive chapter for a literature review using the abstracts provided. Focus on the chapter's theme, weaving together insights and findings from the abstracts. Highlight key points and note any research gaps.\nLiterature Review Title: Detection and Mitigation of Node Replication Attacks in Wireless Sensor Networks: A Survey \nChapter Title: A New Protocol for Securing Wireless Sensor Networks against Node Replication \n1. Abstract of Cited Paper (BIB001): Wireless Sensor Networks (WSN) are subject to several kind of threats and attacks, like eavesdropping, intrusion, battery exhausting, packets reply and nodes compromising. While most of threats can be dealt with them through cryptographic materials provided by key management protocols, some other threats, like nodes replication attacks, can still go undetectable. Nodes replication attacks are one of the most redoubtable attacks, where an attacker compromising a node, uses its secret cryptographic key materials to successfully populate the network with clones of it. Few nodes replication defending protocols were proposed in the literature, but unfortunately, they require either a high computation, transmission and energy overheads, or that nodes know their exact location coordinates, which limits their usability in most WSN scenarios. In this paper, we present a replication detection protocol for static WSN, which requires no knowledge of nodes deployment locations, and introduces no significant overhead on the resource-constrained sensors\n\nReview Chapter:",
"role": "user"
},
{
"content": "Attacks. This technique falls into the category of generation-or group-based techniques. Bekara and Laurent-Maknavicius BIB001 BIB002 have proposed a new protocol for securing WSN against node replication attack by limiting the order of deployment using symmetric polynomial for pair-wise key establishment and defined groupbased deployment model. Their scheme requires sensors to be deployed progressively in successive generations (or group). Each node belongs to a unique generation. In their scheme, only newly deployed nodes are able to establish pairwise keys with their neighbors, and all nodes in the network know the number of the highest deployed generation. Therefore, the clone nodes will fail to establish pair-wise keys with their neighbors since the clone nodes belong to an old deployed generation.",
"role": "assistant"
}
]
|
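The review chapter above relies on symmetric-polynomial pair-wise key establishment: each node i holds the share f(i, y) of a secret symmetric bivariate polynomial f(x, y), so nodes i and j can both compute the key f(i, j) = f(j, i). The toy sketch below demonstrates the mechanism; the prime, the degree, and the node IDs are invented for illustration and are far too small for real use.

```python
import random

P = 2_147_483_647  # a Mersenne prime; real deployments use much larger fields
T = 3              # polynomial degree, i.e. the collusion threshold

random.seed(1)
# Symmetric coefficient matrix C (C[a][b] == C[b][a]) defining
# f(x, y) = sum over a, b of C[a][b] * x^a * y^b (mod P).
C = [[0] * (T + 1) for _ in range(T + 1)]
for a in range(T + 1):
    for b in range(a, T + 1):
        C[a][b] = C[b][a] = random.randrange(P)

def share(node_id: int) -> list:
    """Share preloaded on a node: coefficients of g(y) = f(node_id, y)."""
    return [sum(C[a][b] * pow(node_id, a, P) for a in range(T + 1)) % P
            for b in range(T + 1)]

def pairwise_key(my_share: list, peer_id: int) -> int:
    """Evaluate g(peer_id) to derive the key shared with that peer."""
    return sum(c * pow(peer_id, b, P) for b, c in enumerate(my_share)) % P

alice, bob = 17, 42  # node IDs
assert pairwise_key(share(alice), bob) == pairwise_key(share(bob), alice)
print("shared key:", pairwise_key(share(alice), bob))
```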
science.data_reco_mcq_sc | science.data_reco_mcq_sc.2569 | [
{
"content": "Explore the following research question and associated terms. Identify the most suitable dataset from the list provided. Use the dataset descriptions to guide your choice.\n\nInquiry: A system for assisting biocurators to assess the correctness of biological assertions.\n\nKeywords: biological assertion assessment text\n\nDataset Information:\nWMT 2016: WMT 2016 is a collection of datasets used in shared tasks of the First Conference on Machine Translation. The conference builds on ten previous Workshops on statistical Machine Translation.\nCoNLL-2014 Shared Task: Grammatical Error Correction: CoNLL-2014 will continue the CoNLL tradition of having a high profile shared task in natural language processing. This year's shared task will be grammatical error correction, a continuation of the CoNLL shared task in 2013.\nBioGRID: BioGRID is a biomedical interaction repository with data compiled through comprehensive curation efforts. The current index is version 4.2.192 and searches 75,868 publications for 1,997,840 protein and genetic interactions, 29,093 chemical interactions and 959,750 post translational modifications from major model organism species.\nCOPA: The Choice Of Plausible Alternatives (COPA) evaluation provides researchers with a tool for assessing progress in open-domain commonsense causal reasoning. COPA consists of 1000 questions, split equally into development and test sets of 500 questions each.\n\nAvailable Datasets: A: WMT 2016 B: CoNLL-2014 Shared Task: Grammatical Error Correction C: BioGRID D: COPA",
"role": "user"
},
{
"content": "BioGRID",
"role": "assistant"
}
]
|
science.drug_combo_extraction_re | science.drug_combo_extraction_re.1305 | [
{
"content": "Analyze the given paragraph from a biomedical study. Focus on identifying drug pairings within a specified sentence. Your goal is to determine the nature of these pairings.\n\nPresent your findings as a JSON array. Each element should be an array containing the drug names and their interaction type. Use the exact names from the sentence.\n\nOnly consider the specified sentence for your analysis. The paragraph is for context only.\n\nReturn solely the JSON array. If no pairings are found, return an empty array.\n\nContext Paragraph: Cellular drug resistance profiles in childhood acute myeloid leukemia: differences between FAB types and comparison with acute lymphoblastic leukemia. Determining in vitro drug resistance may reveal clinically relevant information in childhood leukemia. Using the methyl-thiazol-tetrazolium assay, the resistance of untreated leukemic cells to 21 drugs was compared in 128 children with acute myeloid leukemia (AML) and 536 children with acute lymphoblastic leukemia (ALL). The differences between 3 French-American-British (FAB) types (M1/M2, M4, and M5) were also compared. AML was significantly more resistant than ALL to the following drugs, as noted by the median resistance: glucocorticoids (greater than 85-fold), vincristine (4.4-fold), L-asparaginase (6.9-fold), anthracyclines (1.8- to 3.4-fold), mitoxantrone (2.6-fold), etoposide (4.9-fold), platinum analogues (2.4- to 3.4-fold), ifosfamide (3.5-fold), and thiotepa (3.9-fold). For cytarabine and thiopurines, the median LC50 values (the drug concentration that kills 5% of the cells) were equal. Also, busulfan, amsacrine, teniposide, and vindesine showed no significant differences, but the numbers were smaller, and the median LC50 values were 1.3- to 5.2-fold higher in AML. None of the drugs demonstrated greater cytotoxicity in AML. FAB M5 was significantly more sensitive than FAB M4 to most drugs frequently used in AML , as indicated by the following ratios of median sensitivities : the anthracyclines ( 2.6- to 3.2-fold ) , mitoxantrone ( 12.5-fold ) , etoposide ( 8.7-fold ) , and cytarabine ( 2.9-fold ) . For etoposide and cytarabine (5.4- and 3.4-fold, respectively) FAB M5 was also significantly more sensitive than FAB M1/M2. FAB M5 was equally sensitive to L-asparaginase and vincristine as ALL. Only 15% of the AML samples were \"intermediately\" sensitive to glucocorticoids, mainly in FAB M1/M2. The poorer prognosis of childhood AML is related to resistance to a large number of drugs. Within AML, FAB M5 had a distinct resistance pattern. These resistance profiles may be helpful in the rational design of further treatment protocols. (Blood. 2000;96:2879-2886)\n\nTarget Sentence: FAB M5 was significantly more sensitive than FAB M4 to most drugs frequently used in AML , as indicated by the following ratios of median sensitivities : the anthracyclines ( 2.6- to 3.2-fold ) , mitoxantrone ( 12.5-fold ) , etoposide ( 8.7-fold ) , and cytarabine ( 2.9-fold ) .",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
]
|
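Relation-extraction records like the one above return an empty array when the target sentence contains no interacting drug pair. One way to sketch the pre-processing stage is to enumerate candidate pairs from a drug lexicon and leave the final interaction label to a downstream classifier; the lexicon and cue list below are illustrative assumptions, and a rejected or empty candidate list is what leads to the empty gold answer.

```python
from itertools import combinations

# Illustrative lexicon and cue words; real systems learn these signals.
DRUG_LEXICON = {"mitoxantrone", "etoposide", "cytarabine", "vincristine"}
COMBINATION_CUES = {"plus", "combined", "combination", "concurrent"}

def candidate_pairs(sentence: str) -> list:
    """Enumerate drug pairs co-occurring in the sentence, flagged with
    whether a combination cue word is present; an empty result means no
    candidates for the downstream interaction classifier."""
    words = {w.strip(".,()").lower() for w in sentence.split()}
    drugs = sorted(words & DRUG_LEXICON)
    flag = "cue" if words & COMBINATION_CUES else "no-cue"
    return [(a, b, flag) for a, b in combinations(drugs, 2)]

sentence = ("FAB M5 was significantly more sensitive than FAB M4 to "
            "mitoxantrone , etoposide , and cytarabine .")
print(candidate_pairs(sentence))
```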
science.covid_deepset_qa | science.covid_deepset_qa.1050 | [
{
"content": "Analyze the following excerpt from a biomedical text. Identify the relevant information needed to respond to the inquiry. Provide only the necessary details in your answer.\n\nContext: TIV efficacy is measured by induction of humoral responses to the hemagglutinin (HA) protein, the major surface and attachment glycoprotein on influenza. Serum antibody responses to HA are measured by the hemagglutination-inhibition (HI) assay, and the strain-specific HI titer is considered the gold-standard correlate of immunity to influenza where a four-fold increase in titer post-vaccination, or a HI titer of ≥1:40 is considered protective [4, 14] . Protection against clinical disease is mainly conferred by serum antibodies; however, mucosal IgA antibodies also may contribute to resistance against infection. Split virus inactivated vaccines can induce neuraminidase (NA)-specific antibody responses [15] [16] [17] , and anti-NA antibodies have been associated with protection from infection in humans [18] [19] [20] [21] [22] . Currently, NA-specific antibody responses are not considered a correlate of protection [14] . LAIV is administered as a nasal spray and contains the same three or four influenza virus strains as inactivated vaccines but on an attenuated vaccine backbone [4] . LAIV are temperature-sensitive and cold-adapted so they do not replicate effectively at core body temperature, but replicate in the mucosa of the nasopharynx [23] . LAIV immunization induces serum antibody responses, mucosal antibody responses (IgA), and T cell responses. While robust serum antibody and nasal wash (mucosal) antibody responses are associated with protection from infection, other immune responses, such as CD8 + cytotoxic lymphocyte (CTL) responses may contribute to protection and there is not a clear correlate of immunity for LAIV [4, 14, 24] . Currently licensed influenza virus vaccines suffer from a number of issues. The inactivated vaccines rely on specific antibody responses to the HA, and to a lesser extent NA proteins for protection. The immunodominant portions of the HA and NA molecules undergo a constant process of antigenic drift, a natural accumulation of mutations, enabling virus evasion from immunity [9, 25] . Thus, the circulating influenza A and B strains are reviewed annually for antigenic match with current vaccines, Replacement of vaccine strains may occur regularly, and annual vaccination is recommended to assure protection [4, 26, 27] . For the northern hemisphere, vaccine strain selection occurs in February and then manufacturers begin production, taking at least six months to produce the millions of vaccine doses required for the fall [27] . If the prediction is imperfect, or if manufacturers have issues with vaccine production, vaccine efficacy or availability can be compromised [28] . LAIV is not recommended for all populations; however, it is generally considered to be as effective as inactivated vaccines and may be more efficacious in children [4, 9, 24] . While LAIV relies on antigenic match and the HA and NA antigens are replaced on the same schedule as the TIV [4, 9] , there is some suggestion that LAIV may induce broader protection than TIV due to the diversity of the immune response consistent with inducing virus-neutralizing serum and mucosal antibodies, as well as broadly reactive T cell responses [9, 23, 29] . While overall both TIV and LAIV are considered safe and effective, there is a recognized need for improved seasonal influenza vaccines [26] . 
Moreover, improved understanding of immunity to conserved influenza virus antigens has raised the possibility of a universal vaccine, and these universal antigens will likely require novel vaccines for effective delivery [30] [31] [32] . Virus-vectored vaccines share many of the advantages of LAIV, as well as those unique to the vectors. Recombinant DNA systems exist that allow ready manipulation and modification of the vector genome. This in turn enables modification of the vectors to attenuate the virus or enhance immunogenicity, in addition to adding and manipulating the influenza virus antigens. Many of these vectors have been extensively studied or used as vaccines against wild type forms of the virus. Finally, each of these vaccine vectors is either replication-defective or causes a self-limiting infection, although like LAIV, safety in immunocompromised individuals still remains a concern [4, 13, [33] [34] [35] . Table 1 summarizes the benefits and concerns of each of the virus-vectored vaccines discussed here. There are 53 serotypes of adenovirus, many of which have been explored as vaccine vectors. A live adenovirus vaccine containing serotypes 4 and 7 has been in use by the military for decades, suggesting adenoviruses may be safe for widespread vaccine use [36] . However, safety concerns have led the majority of adenovirus-based vaccine development to focus on replication-defective vectors. Adenovirus 5 (Ad5) is the most-studied serotype, having been tested for gene delivery and anti-cancer agents, as well as for infectious disease vaccines. Adenovirus vectors are attractive as vaccine vectors because their genome is very stable and there are a variety of recombinant systems available which can accommodate up to 10 kb of recombinant genetic material [37] . Adenovirus is a non-enveloped virus which is relatively stable and can be formulated for long-term storage at 4 °C, or even storage up to six months at room temperature [33] . Adenovirus vaccines can be grown to high titers, exceeding 10^10 plaque forming units (PFU) per mL when cultured on 293 or PER.C6 cells [38] , and the virus can be purified by simple methods [39] . Adenovirus vaccines can also be delivered via multiple routes, including intramuscular injection, subcutaneous injection, intradermal injection, oral delivery using a protective capsule, and by intranasal delivery. Importantly, the latter two delivery methods induce robust mucosal immune responses and may bypass preexisting vector immunity [33] . Even replication-defective adenovirus vectors are naturally immunostimulatory and effective adjuvants to the recombinant antigen being delivered. Adenovirus has been extensively studied as a vaccine vector for human disease. The first report using adenovirus as a vaccine vector for influenza demonstrated immunogenicity of recombinant adenovirus 5 (rAd5) expressing the HA of a swine influenza virus, A/Swine/Iowa/1999 (H3N2). Intramuscular immunization of mice with this construct induced robust neutralizing antibody responses and protected mice from challenge with a heterologous virus, A/Hong Kong/1/1968 (H3N2) [40] . Replication defective rAd5 vaccines expressing influenza HA have also been tested in humans. A rAd5-HA expressing the HA from A/Puerto Rico/8/1934 (H1N1; PR8) was delivered to humans epicutaneously or intranasally and assayed for safety and immunogenicity. 
The vaccine was well tolerated and induced seroconversion, with the intranasal administration yielding a higher conversion rate and higher geometric mean HI titers [41]. While clinical trials with rAd vectors have overall been successful, demonstrating safety and some level of efficacy, rAd5 as a vector has been negatively overshadowed by two clinical trial failures. The first trial was a gene therapy examination where high-dose intravenous delivery of an Ad vector resulted in the death of an 18-year-old male [42, 43]. The second clinical failure involved an Ad5-vectored HIV vaccine tested as part of the Step Study, a phase 2B clinical trial. In this study, individuals were vaccinated with the Ad5 vaccine vector expressing HIV-1 gag, pol, and nef genes. The vaccine induced HIV-specific T cell responses; however, the study was stopped after interim analysis suggested the vaccine did not achieve efficacy and individuals with high preexisting Ad5 antibody titers might have an increased risk of acquiring HIV-1 [44] [45] [46]. Subsequently, the rAd5 vaccine-associated risk was confirmed [47]. While these two instances do not suggest Ad-vector vaccines are unsafe or inefficacious, the shadow cast by these clinical trial failures has dampened interest in all adenovirus vaccines, although interest still remains. Immunization with adenovirus vectors induces potent cellular and humoral immune responses that are initiated through toll-like receptor-dependent and -independent pathways which induce robust pro-inflammatory cytokine responses. Recombinant Ad vaccines expressing HA antigens from pandemic H1N1 (pH1N1), H5 and H7 highly pathogenic avian influenza (HPAI) virus (HPAIV), and H9 avian influenza viruses have been tested for efficacy in a number of animal models, including chickens, mice, and ferrets, and been shown to be efficacious and provide protection from challenge [48, 49]. Several rAd5 vectors have been explored for delivery of non-HA antigens, influenza nucleoprotein (NP) and matrix 2 (M2) protein [29] [50] [51] [52]. The efficacy of non-HA antigens has led to their inclusion with HA-based vaccines to improve immunogenicity and broaden the breadth of both humoral and cellular immunity [53, 54]. However, as both CD8+ T cell and neutralizing antibody responses are generated by the vector and vaccine antigens, immunological memory to these components can reduce efficacy and limit repeated use [48]. One drawback of an Ad5 vector is the potential for preexisting immunity, so alternative adenovirus serotypes have been explored as vectors, particularly non-human and uncommon human serotypes. Non-human adenovirus vectors include those from non-human primates (NHP), dogs, sheep, pigs, cows, birds and others [48, 55]. These vectors can infect a variety of cell types, but are generally attenuated in humans, avoiding concerns of preexisting immunity. Swine, NHP and bovine adenoviruses expressing H5 HA antigens have been shown to induce immunity comparable to human rAd5-H5 vaccines [33, 56]. Recombinant, replication-defective adenoviruses from low-prevalence serotypes have also been shown to be efficacious. Low-prevalence serotypes such as adenovirus types 3, 7, 11, and 35 can evade anti-Ad5 immune responses while maintaining effective antigen delivery and immunogenicity [48, 57]. Prime-boost strategies, using DNA or protein immunization in conjunction with an adenovirus vaccine booster immunization, have also been explored as a means to avoid preexisting immunity [52].
Adeno-associated viruses (AAV) were first explored as gene therapy vectors. Like rAd vectors, rAAVs have broad tropism, infecting a variety of hosts, tissues, and proliferating and non-proliferating cell types [58]. AAVs were generally not considered as vaccine vectors because they were widely believed to be poorly immunogenic. A seminal study using AAV-2 to express an HSV-2 glycoprotein showed this virus vaccine vector effectively induced potent CD8+ T cell and serum antibody responses, thereby opening the door to other rAAV vaccine-associated studies [59, 60]. AAV vector systems have a number of engaging properties. The wild-type viruses are non-pathogenic and replication incompetent in humans, and the recombinant AAV vector systems are even further attenuated [61]. As members of the parvovirus family, AAVs are small non-enveloped viruses that are stable and amenable to long-term storage without a cold chain. While there is limited preexisting immunity, availability of non-human strains as vaccine candidates eliminates these concerns. Modifications to the vector have increased immunogenicity, as well [60]. There are limited studies using AAVs as vaccine vectors for influenza. An AAV expressing an HA antigen was first shown to induce protective immunity in 2001 [62]. Later, a hybrid AAV derived from two non-human primate isolates (AAVrh32.33) was used to express influenza NP and protect against PR8 challenge in mice [63]. Following the 2009 H1N1 influenza virus pandemic, rAAV vectors were generated expressing the HA, NP and matrix 1 (M1) proteins of A/Mexico/4603/2009 (pH1N1), and in murine immunization and challenge studies, the rAAV-HA and rAAV-NP were shown to be protective; however, mice vaccinated with rAAV-HA + NP + M1 had the most robust protection. Mice vaccinated with rAAV-HA + rAAV-NP + rAAV-M1 were also partially protected against heterologous (PR8, H1N1) challenge [63]. Most recently, an AAV vector was used to deliver passive immunity to influenza [64, 65]. In these studies, AAV (AAV8 and AAV9) was used to deliver an antibody transgene encoding a broadly cross-protective anti-influenza monoclonal antibody for in vivo expression. Both intramuscular and intranasal delivery of the AAVs was shown to protect against a number of influenza virus challenges in mice and ferrets, including H1N1 and H5N1 viruses [64, 65]. These studies suggest that rAAV vectors are promising vaccine and immunoprophylaxis vectors. To this point, while approximately 80 phase I, I/II, II, or III rAAV clinical trials are open, completed, or being reviewed, these have focused upon gene transfer studies and so there is as yet limited safety data for use of rAAV as vaccines [66]. Alphaviruses are positive-sense, single-stranded RNA viruses of the Togaviridae family. A variety of alphaviruses have been developed as vaccine vectors, including Semliki Forest virus (SFV), Sindbis (SIN) virus, Venezuelan equine encephalitis (VEE) virus, as well as chimeric viruses incorporating portions of SIN and VEE viruses. The replication-defective vaccines, or replicons, do not encode viral structural proteins, having these portions of the genome replaced with transgenic material. The structural proteins are provided in cell culture production systems. One important feature of the replicon systems is the self-replicating nature of the RNA. Despite the partial viral genome, the RNAs are self-replicating and can express transgenes at very high levels [67].
SIN, SFV, and VEE have all been tested for efficacy as vaccine vectors for influenza virus [68] [69] [70] [71] .\n\nInquiry: Which is the major surface and attachment glycoprotein on influenza virus?",
"role": "user"
},
{
"content": "hemagglutinin (HA) protein,",
"role": "assistant"
}
]
|
science.healthver_entailment | science.healthver_entailment.1192 | [
{
"content": "Evaluate the given public health statement using the provided research abstract. Your goal is to determine the relationship between the claim and the abstract content.\n\n- \"decision\": Indicate whether the abstract \"SUPPORTS\", \"CONTRADICTS\", or provides \"INSUFFICIENT\" information regarding the claim.\n- \"justification\": List sentences from the abstract that back your decision. If the decision is \"INSUFFICIENT\", leave this list empty.\n\nStatement: Diarrhea, vomiting and abdominal pain may be early symptoms of coronavirus.\n\nResearch Abstract:\nCoronavirus disease (COVID-19) caused by severe acute respiratory syndrome coronavirus 2 (SARS CoV-2) has turned out to be a formidable pandemic.\nUpcoming evidence from confirmed cases of COVID-19 suggests an anticipated incursion of patients with neurological manifestations in the weeks to come.\nAn expression of the angiotensin-converting enzyme 2 (ACE 2), the cellular receptor for SARS-CoV-2 over the glial cells and neurons have made the brain a potential target.\nNeurotoxicity may occur as a result of direct, indirect and post-infectious complications.\nAttention to neurological deficits in COVID-19 is fundamental to ensure appropriate, timely, beneficial management of the affected patients.\nMost common neurological manifestations seen include dizziness, headache, impaired consciousness, acute cerebrovascular disease, ataxia, and seizures.\nAnosmia and ageusia have recently been hinted as significant early symptoms in COVID-19.\nAs cases with neurological deficits in COVID-19 emerge, the overall prognosis is yet unknown.",
"role": "user"
},
{
"content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"Anosmia and ageusia have recently been hinted as significant early symptoms in COVID-19.\"]}",
"role": "assistant"
}
]
|
science.mslr2022_ms2_multidoc_summarization | science.mslr2022_ms2_multidoc_summarization.7408 | [
{
"content": "Analyze the following collection of medical RCT summaries. Your goal is to craft a concise synthesis of the findings. Focus on the core insights and present them succinctly. Only include the synthesis in your response. Below are the RCT summaries:\n\n\n\nReport 1:\nTitle: Open hearts build lives: positive emotions, induced through loving-kindness meditation, build consequential personal resources.\nSummary: B. L. Fredrickson 's ( 1998 , 2001 ) broaden- and -build theory of positive emotions asserts that people 's daily experiences of positive emotions compound over time to build a variety of consequential personal re sources . The authors tested this build hypothesis in a field experiment with working adults ( n = 139 ) , half of whom were r and omly-assigned to begin a practice of loving-kindness meditation . Results showed that this meditation practice produced increases over time in daily experiences of positive emotions , which , in turn , produced increases in a wide range of personal re sources ( e.g. , increased mindfulness , purpose in life , social support , decreased illness symptoms ) . In turn , these increments in personal re sources predicted increased life satisfaction and reduced depressive symptoms . Discussion centers on how positive emotions are the mechanism of change for the type of mind-training practice studied here and how loving-kindness meditation is an intervention strategy that produces positive emotions in a way that outpaces the hedonic treadmill effect\n\n\nReport 2:\nTitle: Positive psychology progress: empirical validation of interventions.\nSummary: Positive psychology has flourished in the last 5 years . The authors review recent developments in the field , including books , meetings , courses , and conferences . They also discuss the newly created classification of character strengths and virtues , a positive complement to the various editions of the Diagnostic and Statistical Manual of Mental Disorders ( e. g. , American Psychiatric Association , 1994 ) , and present some cross-cultural findings that suggest a surprising ubiquity of strengths and virtues . Finally , the authors focus on psychological interventions that increase individual happiness . In a 6-group , r and om-assignment , placebo-controlled Internet study , the authors tested 5 purported happiness interventions and 1 plausible control exercise . They found that 3 of the interventions lastingly increased happiness and decreased depressive symptoms . Positive interventions can supplement traditional interventions that relieve suffering and may someday be the practical legacy of positive psychology\n\n\nReport 3:\nTitle: Don't hide your happiness! Positive emotion dissociation, social connectedness, and psychological functioning.\nSummary: It is now clear that positive emotion leads to enhanced psychological functioning . What is less clear , however , is just why this is so . Drawing on a social-functional perspective , we argue that positive emotional behavior that accurately signals to others the individual 's internal state will enhance social connectedness . Positive emotional behavior that does not accurately signal a person 's experience -- such as a smile that is not felt -- may impede social connectedness and , in turn , psychological functioning . 
This perspective suggests that (a) the degree to which experience and behavior are dissociated during positive emotional episodes, over and above level of positive behavior, should predict worse psychological functioning and (b) the effect of dissociation should be mediated by social connectedness. To test these hypotheses, we conducted a short-term prospective longitudinal study, with a baseline assessment of depressive symptoms and well-being at Time 1. Six months later, at Time 2, we used a novel within-individual laboratory paradigm to measure the degree to which positive emotional behavior was dissociated from (vs. coherent with) a participant's positive emotional experience. We also assessed level of positive behavior and experience. Then, another 6 months later, we assessed social connectedness as a mediator and depressive symptoms and well-being as outcomes at Time 3. Even when controlling for baseline functioning and for level of positive emotion behavior and experience, we found that greater positive experience-behavior dissociation at Time 2 predicted higher levels of depressive symptoms and lower levels of well-being at Time 3. As predicted, these associations were mediated by social connectedness\n\n\nReport 4:\nTitle: Life events, coping and depressive symptoms among young adolescents: a one-year prospective study.\nSummary: BACKGROUND To investigate prospectively over one year, the extent to which greater exposure to negative life events, greater use of more negative coping strategies, and less use of positive coping strategies and an optimistic thinking style, predicts the onset of depressive symptoms among adolescents. METHODS A prospective longitudinal study of 5,634 adolescents (Mean Age = 13.1, SD = 0.5) enrolled in Year 8 at secondary school. Standard questionnaires were used to assess depressive symptoms and the predictor variables. RESULTS Over a one-year period, there was an independent and statistically significant relationship between transition to a CES-D score above the recommended cut-off score and i) greater exposure to negative life events and use of negative coping strategies, and ii) less use of positive coping strategies and an optimistic thinking style. Among males who experienced a higher number of negative life events, the impact on depressive symptoms was greater among those who made more use of negative coping strategies. LIMITATIONS Self-report questionnaires completed by adolescents were employed to evaluate all the variables in the study. Only two assessment points were available. Ten percent of adolescents did not complete the follow-up assessment. CONCLUSIONS Particularly among females, early adolescence is marked by increased susceptibility to depressive symptoms. Helping young adolescents to adopt more positive coping strategies and optimistic thinking styles may reduce their risk for the onset of depressive symptoms. This may be particularly important for young males who experience high levels of adverse life events and utilise negative coping strategies\n\n\nReport 5:\nTitle: Depressive symptoms and momentary affect: the role of social interaction variables\nSummary: Background: Interpersonal functioning may be one important factor in the development and course of depression symptomatology. This study used ecological momentary assessment to test the associations among depressive symptoms, social experiences and momentary affect in women.
Methods: Middle-aged women (N = 108, M age: 41.6 years, 81% White) completed diary questions on handheld computers for 2 days. Diary items assessed social (conflictive versus supportive) and affective (negative versus positive) experiences at random times during the day. Women also completed a self-report measure of recent depressive symptoms. Results: Multilevel modeling analyses showed that higher levels of symptoms of depression were related to higher negative affect and lower positive affect both directly and indirectly, through experiences of social conflict. Depressive symptoms were not significantly related to socially supportive interactions. In an alternative model testing the reverse association, neither positive nor negative affect significantly predicted social experiences. Generalizability is limited by the homogenous small sample and strict inclusionary criteria (working full-time or part-time, cohabitating or married, healthy). Due to the cross-sectional nature of the data as well as the manner in which social and affective experiences were assessed, definitive conclusions regarding the temporal associations among depression symptoms, social functioning, and affect are not possible. Results are consistent with prior reports suggesting the salience of socially conflictive experiences, and the role of affect, in the etiology and maintenance of depression symptoms. Conclusions: Interventions that attempt to decrease socially conflictive experiences via cognitive-behavioral skills training, while concomitantly targeting positive and negative affect, could help prevent the development of full-blown depressive episodes in vulnerable individuals. Depression and Anxiety, 2009. © 2009 Wiley-Liss,",
"role": "user"
},
{
"content": "The studies have important relations between humor and positive emotion as well as a significant improvement in signs and symptoms of depression using differents strategies of positive psychology .\nAnother relevant aspect is the preventative character of the proposed interventions by positive psychology by the fact that increase well-being and produce elements such as resilience and coping re sources that reduce the recurrent relapses in the treatment of depression .\nThe strategies of positive psychology , such as increasing positive emotions , develop personal strengths : seeking direction , meaning and engagement for the day-to-day life of the patients , appear as potentially tools for the prophylaxis and treatment of depression , helping to reduce signs and symptoms as well as for prevention of relapses",
"role": "assistant"
}
]
|
science.mslr2022_cochrane_multidoc_summarization | science.mslr2022_cochrane_multidoc_summarization.1509 | [
{
"content": "Review the provided collection of medical RCT summaries. Your objective is to distill the essential insights into a brief conclusion. Focus solely on the synthesis of findings without additional commentary. The RCT summaries are as follows:\n\n\n\nReport Title 1:\nEffects of a partially home-based exercise program for women with gestational diabetes.\nReport Abstract 1:\nTo examine the effectiveness of a partially home-based, moderate-intensity aerobic exercise program for women with gestational diabetes.\n This was a randomized experimental design. Thirty-three women with gestational diabetes were randomly assigned to the exercise or the no-exercise group. Subjects underwent hemoglobin A1C assay and submaximal cycle ergometer fitness tests at baseline and at study conclusion. Subjects kept diaries of home fasting and 2-hour postprandial blood glucose determinations. Exercise subjects were asked to exercise for 30 minutes three to four times weekly at 70% of estimated maximal heart rate for the weeks of study participation. Two exercise sessions weekly were supervised by the investigator, and two were unsupervised at home. Control-group subjects were asked to maintain their current activity level.\n Daily fasting and postprandial blood glucose levels, hemoglobin A1C, incidence of exogenous insulin therapy, and incidence of newborn hypoglycemia were not different between the groups. There was a training effect in the exercise group (P = .005) but not in the control group (P = .25). A significant decline in daily grams of carbohydrate consumed was observed in the control group (P = .03), but not in the exercise group (P = .97). No complications were found in the subjects who exercised.\n A partially home-based exercise program did not reduce blood glucose levels, but did result in a modest increase in cardiorespiratory fitness. The intervention appeared safe.\n\n\nReport Title 2:\nEffect of exercise on blood pressure in pregnant women with a high risk of gestational hypertensive disorders.\nReport Abstract 2:\nTo prospectively determine whether moderate exercise during pregnancy lowers blood pressure.\n A randomized, controlled trial with one test group and one control group. All subjects have a history of mild hypertension, gestational hypertensive disorders or a family history of hypertensive disorders. Subjects were recruited before 14 weeks' gestation. After four weeks of observation, the subjects were randomly assigned to either the exercise or control group. The exercise group visited the laboratory three times a week for 10 weeks (18-28 gestation weeks) to perform 30 minutes of exercise at Rating of Perceived Exertion level 13.\n A total of 16 pregnant women (mean age, 30 years) participated. The mean metabolic equivalent during exercise sessions was 4.7 (SD = 0.8). Blood pressure measurements were compared before and after the 10-week exercise period in the two groups. Systolic blood pressures did not change significantly, but diastolic blood pressure (DBP) in the exercise group decreased by 3.5 mm Hg, while that in the control group increased by 1.1 mm Hg. Thus, the pre-post change in DBP differed by 4.6 mm Hg between groups. Exercise treatment reduced the diastolic blood pressure to a near-significant level in the exercise group (t = 2.34, df = 7, P = .052). Percent body fat did not differ between the exercise and control groups either before or after exercise treatment. 
ANOVA revealed that pregnancy had a significant effect (F(1, 14) = 5.7, P = .03) on increasing the percentage of fat, but exercise treatment did not (F(1, 14) = .18, P = .68). Estimated energy expenditure in overall daily physical activities during the intervention did not differ between the two groups despite the inclusion of exercise.\n This study detected a strong trend that 10 weeks of moderate exercise lowered the diastolic blood pressure among pregnant women at risk of hypertensive disorders. The reductions were probably due to the effect of exercise itself, not to weight or overall daily physical activity levels.",
"role": "user"
},
{
"content": "There is insufficient evidence for reliable conclusions about the effects of exercise on prevention of pre-eclampsia and its complications.\n[Note: The four citations in the awaiting classification section of the review may alter the conclusions of the review once assessed.]",
"role": "assistant"
}
]
|
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.1452 | [
{
"content": "Examine the abstracts provided from a main scientific article and its cited works. Craft a narrative that synthesizes the key insights and contributions, emphasizing their interconnections. Employ numerical citations such as [1], [2], etc., to reference the cited works.\n\nMain Article Summary:\nAccurately estimating the orientation of pedestrians is an important and challenging task for autonomous driving because this information is essential for tracking and predicting pedestrian behavior. This paper presents a flexible Virtual Multi-View Synthesis module that can be adopted into 3D object detection methods to improve orientation estimation. The module uses a multi-step process to acquire the fine-grained semantic information required for accurate orientation estimation. First, the scene's point cloud is densified using a structure preserving depth completion algorithm and each point is colorized using its corresponding RGB pixel. Next, virtual cameras are placed around each object in the densified point cloud to generate novel viewpoints, which preserve the object's appearance. We show that this module greatly improves the orientation estimation on the challenging pedestrian class on the KITTI benchmark. When used with the open-source 3D detector AVOD-FPN, we outperform all other published methods on the pedestrian Orientation, 3D, and Bird's Eye View benchmarks.\n\nCited Works Summary:\n[1]: Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But pyramid representations have been avoided in recent object detectors that are based on deep convolutional networks, partially because they are slow to compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.\n\n[2]: Detecting pedestrian has been arguably addressed as a special topic beyond general object detection. Although recent deep learning object detectors such as Fast Faster R-CNN have shown excellent performance for general object detection, they have limited success for detecting pedestrian, and previous leading pedestrian detectors were in general hybrid methods combining hand-crafted and deep convolutional features. In this paper, we investigate issues involving Faster R-CNN for pedestrian detection. We discover that the Region Proposal Network (RPN) in Faster R-CNN indeed performs well as a stand-alone pedestrian detector, but surprisingly, the downstream classifier degrades the results. We argue that two reasons account for the unsatisfactory accuracy: (i) insufficient resolution of feature maps for handling small instances, and (ii) lack of any bootstrapping strategy for mining hard negative examples. 
Driven by these observations, we propose a very simple but effective baseline for pedestrian detection, using an RPN followed by boosted forests on shared, high-resolution convolutional feature maps. We comprehensively evaluate this method on several benchmarks (Caltech, INRIA, ETH, and KITTI), presenting competitive accuracy and good speed. Code will be made publicly available.\n\n[3]: \n\n[4]: We present a fast inverse-graphics framework for instance-level 3D scene understanding. We train a deep convolutional network that learns to map image regions to the full 3D shape and pose of all object instances in the image. Our method produces a compact 3D representation of the scene, which can be readily used for applications like autonomous driving. Many traditional 2D vision outputs, like instance segmentations and depth-maps, can be obtained by simply rendering our output 3D scene model. We exploit class-specific shape priors by learning a low dimensional shape-space from collections of CAD models. We present novel representations of shape and pose, that strive towards better 3D equivariance and generalization. In order to exploit rich supervisory signals in the form of 2D annotations like segmentation, we propose a differentiable Render-and-Compare loss that allows 3D shape and pose to be learned with 2D supervision. We evaluate our method on the challenging real-world datasets of Pascal3D+ and KITTI, where we achieve state-of-the-art results.\n\n[5]: We present a system that transforms a monocular video of a soccer game into a moving 3D reconstruction, in which the players and field can be rendered interactively with a 3D viewer or through an Augmented Reality device. At the heart of our paper is an approach to estimate the depth map of each player, using a CNN that is trained on 3D player data extracted from soccer video games. We compare with state of the art body pose and depth estimation techniques, and show results on both synthetic ground truth benchmarks, and real YouTube soccer footage.\n\n[6]: In Convolutional Neural Network (CNN)-based object detection methods, region proposal becomes a bottleneck when objects exhibit significant scale variation, occlusion or truncation. In addition, these methods mainly focus on 2D object detection and cannot estimate detailed properties of objects. In this paper, we propose subcategory-aware CNNs for object detection. We introduce a novel region proposal network that uses subcategory information to guide the proposal generating process, and a new detection network for joint detection and subcategory classification. By using subcategories related to object pose, we achieve state-of-the-art performance on both detection and pose estimation on commonly used benchmarks.\n\n[7]: \n\n[8]: \n\n[9]: In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales.
ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-the-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.\n\n[10]: We present AVOD, an Aggregate View Object Detection network for autonomous driving scenarios. The proposed neural network architecture uses LIDAR point clouds and RGB images to generate features that are shared by two subnetworks: a region proposal network (RPN) and a second stage detector network. The proposed RPN uses a novel architecture capable of performing multimodal feature fusion on high resolution feature maps to generate reliable 3D object proposals for multiple object classes in road scenes. Using these proposals, the second stage detection network performs accurate oriented 3D bounding box regression and category classification to predict the extents, orientation, and classification of objects in 3D space. Our proposed architecture is shown to produce state of the art results on the KITTI 3D object detection benchmark [1] while running in real time with a low memory footprint, making it a suitable candidate for deployment on autonomous vehicles. Code is available at",
"role": "user"
},
{
"content": "Previous works have recognized that accurate orientation estimation requires a feature extraction process that captures fine-grained semantic information of objects. [2] identify that standard Faster R-CNN feature maps are too low resolution for pedestrians and instead use atrous convolutions [9] and pool features from shallower layers. Pyramid structures including image pyramids [7] and feature pyramids [1] have also been leveraged to obtain information from multiple scales. SubCNN [6] use image pyramids to handle scale changes of objects, and [10] highlight the importance of a pyramid structure for small classes such as pedestrians. Moreover, [5] [2]3 [3] have shown the importance in how methods crop ROI features. [2]2 note that the standard ROI crop can warp the appearance of shape and pose, and propose the use of a virtual ROI camera to address this. We instead obtain fine-grained details by using multiple virtual ROI cameras placed canonically around each object's detected centroid. As compared to [2]2 , we not only use 2D RGB data, but also use a 3D point cloud to produce realistic novel viewpoints that maintain consistent object appearance.",
"role": "assistant"
}
]
|
science.chemsum_single_document_summarization | science.chemsum_single_document_summarization.28024 | [
{
"content": "Craft a concise summary for a chemistry research paper using the provided title and selected text segments.\n\nHeading:\nRemote Loading of Small Molecule Therapeutics into Cholesterol-Enriched Cell Membrane-Derived Vesicles\n\nContent Extracts:\n\n<p>The field of nanomedicine has grown significantly over the past several decades.[1] Starting with the U.S. Food and Drug Administration's approval of Doxil, a liposomal formulation of doxorubicin (Dox), numerous other nanoparticulate drugs have since been translated for clinical use.[2] Nanodelivery has distinct advantages compared with the administration of free drug, including improved blood residence, sustained release, and enhanced tumor targeting. All of these benefits ultimately lead to improved efficacy and patient compliance, better enabling clinicians to care for those with life-threatening diseases.</p><p>More recently, researchers have begun focusing on next-generation platforms that can build upon the initial successes of the drug delivery field. Along these lines, there has been a significant shift towards bioinspired or biomimetic design principles.[3] This is motivated by the fact that biological systems have been carefully cultivated over time by the process of evolution. Despite the significant advances made in bottom-up, synthetic nanofabrication, it is still incredibly difficult to replicate the complex functionalities found in nature. One such biomimetic strategy involves the combination of cell-derived membranes with artificial nanocarriers, leading to the generation of natural–synthetic hybrids with unique functionalities.[4] Benefits can include enhanced biocompatibility, lowered immunogenicity, heightened affinity for toxins, and the ability to target diseased tissues.[5] This approach has been proven highly generalizable and has utility across a wide range of different applications.</p><p>The remote loading of drugs into liposomal carriers represents one of the most highly used and effective means of nanoencapsulation.[6] In this work, we explored a strategy to adapt this approach for use with red blood cell (RBC) membrane-derived vesicles (Figure 1). It was demonstrated that membrane vesicles supplemented with exogenous cholesterol could more effectively retain a pH gradient, which was used to drive the successful loading of two model drugs: the chemotherapeutic Dox and the antibiotic vancomycin (Vanc). The Dox formulation (Dox-RBC) was used to treat a murine model of breast cancer, significantly controlling tumor growth, while the Vanc formulation (Vanc-RBC) was used to reduce bacterial load in a methicillin-resistant Staphylococcus aureus (MRSA) skin infection model.</p><p>To fabricate cholesterol-enriched RBC vesicles, membrane ghosts were derived from purified RBCs by hypotonic lysis. Free cholesterol was then incubated with the ghosts in 300 mM ammonium sulfate, followed by physical disruption to induce vesiculation. Finally, the outer phase was exchanged with phosphate buffered saline (PBS) to create a pH gradient. At increasing cholesterol input, the final nanovesicles exhibited a corresponding increase in size up to approximately 150 nm (Figure 2a). To characterize the membrane sidedness, which is important for retention of the membrane's biological functions,[7] non-denaturing immunoblotting was employed (Figure 2b). 
When probed for either the intracellular or extracellular domains of the transmembrane protein CD47, ammonium sulfate-loaded vesicles had a predominantly right-side-out orientation, which was retained after transferring the vesicles to PBS. In contrast, vesicles made in water, which lacks the ions necessary to preserve orientation,[8] displayed equivalent amounts of both signals. Co-loading of the pH-sensitive dye LysoSensor Green was used to assess membrane integrity (Figure 2c). With increasing cholesterol, higher fluorescence was observed, indicating the retention of ammonium sulfate within the vesicles. This effect plateaued at a 5% input, and it was demonstrated that the enriched vesicles were also impermeable to calcein dye (Figure 2d). Fluorescence microscopy showed that the signal of dye loaded into the vesicles colocalized well with dye labeling the membrane, further confirming successful encapsulation and retention (Figure 2e). Over time, the enriched vesicles showed little change in size when incubated in serum-containing media (Figure 2f). Overall, the improved stability of the cholesterol-enriched vesicles likely stems from the ability of the molecule to positively regulate membrane fluidity.[9]</p><p>We next sought to demonstrate remote loading into the RBC vesicles. Dox, which represents the most classical example of a remotely loaded drug,[6a] was first employed. After fabricating the vesicles at a 5% cholesterol input, loading yield was quantified using various Dox input concentrations (Figure 3a). With increasing inputs, it was observed that more Dox could be loaded, up to approximately 10%. This process was also dependent on both temperature (Figure 3b) and time (Figure 3c), with optimal loading occurring at 37 °C and saturating at 40 minutes of reaction time. Observation of Dox-RBC negatively stained with uranyl acetate by transmission electron microscopy revealed vesicular structures (Figure 3d). Notably, elongated crystals were visualized, consistent with reports on Dox-loaded synthetic liposomes.[10] Drug release was also evaluated over a period of 2 days at different pH values (Figure 3e). Regardless of pH, a burst release occurred in the first 12 hours, followed by more gradual, sustained release. Lowering the pH from a physiological value of 7.4 down to 5.0 facilitated drug release, likely due to the instability of RBC membrane under acidic conditions.</p><p>Dox-RBC was incubated with murine 4T1 breast cancer cells for 24 hours to evaluate its potency in vitro. The nanoformulation showed better efficacy compared to free Dox (Figure 4a). This may be explained by the fact that nanoparticulate delivery enables cellular uptake via endocytic pathways,[11] possibly via a clathrin-mediated mechanism as reported previously for membrane-derived nanoparticles.[12] Enhanced uptake was confirmed by flow cytometric analysis (Figure 4b,c). When visualized by fluorescence microscopy, it was revealed that Dox-RBC facilitated retention of drug across the entirety of the cell; the free drug was only found in the nuclei, clearly delineated as large round objects during imaging (Figure 4d). This ability to facilitate uptake and enhance intracellular retention is a unique advantage of nanodelivery and could help to overcome drug-resistance by lessening drug efflux.[13]</p><p>Therapeutic efficacy was evaluated using an orthotopic 4T1 breast cancer model. Tumors were implanted in the mammary fat pads of female BALB/c mice. 
Once palpable, treatment was initiated using either Dox-RBC or free Dox. The nanoformulation had a marked effect, significantly slowing tumor growth compared to free Dox (Figure 4e). Median survival was also extended from 25.5 days for the untreated control group to 39 days for the Dox-RBC group (Figure 4f). Notably, 33% of the mice receiving nanoparticles had complete regressions and remained tumor-free. Dox-RBC was also better at preventing the formation of lung metastases, reducing the number of nodules after 21 days of treatment (Figure 4g). With regards to safety, mice subjected to a full course of Dox-RBC treatment did not experience any weight loss (Figure 4h). It should be noted that rapid tumor growth likely counteracted any weight loss for the free Dox group. The difference in safety could more readily be seen upon histopathological examination of the cardiac tissue (Figure 4i). Chronic myocardial toxicity with free Dox is a well-known occurrence in the clinic.[14] For this group, we observed that the myocardial fibers became thin and loose, with varying degrees of rupture. On the other hand, the hearts of mice treated with Dox-RBC did not show any significant alterations in fiber architecture. In all, it was shown that Dox-RBC could more effectively control tumor growth compared with free drug while also displaying evidence of increased safety. The improved performance of the nanoformulation could likely be attributed to its ability to preferentially deliver drug to the tumors (Figure 4j).</p><p>Following the antitumor studies, we explored if the remote loading strategy could be generalized to additional payloads. For this purpose, the antibiotic Vanc, another amine-containing small molecule drug, was employed. At increasing input concentrations, a loading yield of up to 40% was achieved (Figure 5a). The higher yield compared with Dox may be explained by the higher molecular weight of Vanc. Like Dox loading, the optimal temperature was 37 °C (Figure 5b), whereas saturation took longer and occurred at 1 hour (Figure 5c). Drug release kinetics followed a similar trend, with an initial burst observed in the first several hours and lower pH accelerating the process (Figure 5d). The absolute rate of release was slower when compared with Dox, and it took approximately 48 hours for all the drug to be released at pH 5.0.</p><p>Subsequently, we sought to test the nanoformulation against MRSA. A Förster resonance energy transfer (FRET) study was performed to demonstrate the interaction of the bacteria with the membrane vesicles (Figure 6a). As increasing amounts of bacteria were incubated with FRET pair-doped RBC vesicles, more donor signal was recovered, signaling that fusion was occurring. This effect could potentially be leveraged to facilitate delivery of loaded drugs to bacteria upon contact.[15] Vanc-RBC or free Vanc was then incubated with MRSA at increasing concentrations for 24 hours, and the results confirmed that the remotely loaded drug still retained its activity (Figure 6b). No effect was observed from RBC vesicles alone.</p><p>Therapeutic efficacy of the Vanc-RBC formulation was tested using a murine skin infection model. Following subcutaneous challenge with MRSA, mice were administered either free Vanc or Vanc-RBC. Over 5 days, the nanoformulation proved to be exceptionally potent, completely inhibiting the formation of skin lesions (Figure 6c,e). The free Vanc group experienced subtle lesion formation, which was a large improvement over the untreated controls. 
While the difference in physical effect between the treatment groups was slight, bacterial enumeration revealed that Vanc-RBC outperformed the free drug by 3 orders of magnitude (Figure 6d). Histological analysis showed that MRSA-induced lesions were marked by the loss of cilia and columnar morphology, characteristic of acanthosis and apoptosis (Figure 6f). These symptoms were decreased with free Vanc, although hyperkeratosis with aberrant hair shafts could still be observed. Samples from the Vanc-RBC group appeared normal. Ultimately, the enhanced performance of the nanoformulation could likely be attributed to its retention at the treatment site, preventing the Vanc from diffusing away.</p><p>In conclusion, we have demonstrated a method for the remote loading of drugs into naturally derived RBC vesicles. The process is greatly facilitated by supplementing the cell membrane with additional cholesterol, which enables the maintenance of a pH gradient. This concept was demonstrated using two small molecule model drugs, a chemotherapeutic and an antibiotic. In both cases, the nanoformulations were able to outperform the corresponding drugs in the free form using appropriate animal disease models. By employing a naturally derived nanocarrier, it is possible to leverage the benefits of nanoparticulate delivery, such as improved localization to diseased tissue, with fewer concerns regarding safety and immunogenicity. The work presented here can help to provide guiding principles for the development of next-generation, remote-loaded drug platforms.</p>",
"role": "user"
},
{
"content": "The increasing popularity of biomimetic design principles in nanomedicine has led to therapeutic platforms with enhanced performance and biocompatibility. This includes the use of naturally derived cell membranes, which can bestow nanocarriers with cell-specific functionalities. Herein, we report on a strategy enabling efficient encapsulation of drugs via remote loading into membrane vesicles derived from red blood cells. This is accomplished by supplementing the membrane with additional cholesterol, stabilizing the nanostructure and facilitating the retention of a pH gradient. We demonstrate the loading of two model drugs: the chemotherapeutic doxorubicin and the antibiotic vancomycin. The therapeutic implications of these natural, remote-loaded nanoformulations are studied both in vitro and in vivo using animal disease models. Ultimately, this approach could be used to design new biomimetic nanoformulations with higher efficacy and improved safety profiles.",
"role": "assistant"
}
]
|
science.scicite_classification | science.scicite_classification.3848 | [
{
"content": "Examine the citation provided below. Determine the primary purpose of this citation within the context of the research paper. \n\nCitation:\n\nRelated work in this stream conceptualizes the factors that make some services more amenable for offshore BPO (Apte and Mason 1995; Mithas and Whitaker 2005; Venkatraman 2004), and examines supplier capabilities, best practices, and lessons from BPO using case-based research (Cullen et al. 2005; Feeny et al. 2005; Lacity et al. 2003).",
"role": "user"
},
{
"content": "Background",
"role": "assistant"
}
]
|
science.pico_ner | science.pico_ner.1452 | [
{
"content": "Examine the following abstract and identify distinct entities. Focus on extracting entities related to participants, interventions, and outcomes.\n\nPresent your findings in a JSON format like: {\"Participant\": [\"example entity\", ...], \"Intervention\": [\"example entity\", ...], \"Outcome\": [\"example entity\", ...]}. If no entities are found for a category, use an empty list.\n\nAbstract Content:\n\nProtein ingestion further augments S6K1 phosphorylation in skeletal muscle following resistance type exercise in males . Our objective was to determine the impact of carbohydrate and/or protein ingestion before and after exercise on ribosomal protein S6 kinase ( S6K1 ) and S6 phosphorylation status in human skeletal muscle tissue . Seven healthy , untrained men ( 22.5 +/- 0.9 y ) were randomly assigned to 2 cross-over experiments . Before , immediately after , and 1 h after a single bout of resistance exercise , subjects consumed 0.3 g x kg ( -1 ) carbohydrate with or without 0.3 g x kg ( -1 ) protein hydrolysate ( CHO+PRO and CHO , respectively ) . Muscle biopsies were taken before and immediately after exercise and after 1 and 4 h of postexercise recovery to determine 4E-BP1 , S6K1 ( both T ( 421 ) /S ( 424 ) and T ( 389 ) ) , and S6 phosphorylation status . Following resistance exercise , 4E-BP1 phosphorylation was reduced to a greater extent in the CHO treatment ( -48 +/- 7 % ) than in the CHO+PRO treatment ( -15 +/- 14 % , P < 0.01 ) . During recovery , 4E-BP1 phosphorylation increased in both experiments ( P < 0.01 ) , and tended to be higher in the CHO+PRO test ( P = 0.08 ) . S6K1 phosphorylation at T ( 421 ) /S ( 424 ) substantially increased following exercise and remained elevated during recovery with no differences between treatments . In contrast to the CHO treatment ( -4 +/- 2 % ) , S6K1 phosphorylation at T ( 389 ) was higher following exercise in the CHO+PRO treatment only ( +78 +/- 2 % , P < 0.01 ) . During recovery , S6K1 phosphorylation at T ( 389 ) remained higher in CHO+PRO than in CHO ( P < 0.05 ) . S6 phosphorylation was substantially higher following exercise in the CHO+PRO ( 1.69 +/- 0.35 ) than in the CHO experiment ( 0.45 +/- 0.07 , P < 0.01 ) and remained elevated during recovery ( P < 0.05 ) . We conclude that the availability of dietary protein further enhances phosphorylation of S6K1 during recovery from resistance type exercise .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"protein\", \"4E-BP1 phosphorylation\", \"CHO+PRO\", \"carbohydrate\", \"Protein ingestion\", \"resistance exercise\", \"protein hydrolysate\", \"Muscle biopsies\", \"CHO\", \"exercise\"], \"Outcome\": [\"4E-BP1 phosphorylation\", \"During recovery , S6K1 phosphorylation at T ( 389 )\", \"S6K1 phosphorylation at T ( 389 )\", \"S6K1 phosphorylation at T ( 421 ) /S ( 424 )\", \"S6 phosphorylation\"], \"Participant\": [\"men\", \"Seven\", \"( 22.5 +/- 0.9 y )\", \"males\"]}",
"role": "assistant"
}
]
|
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.7088 | [
{
"content": "Transform the following sections from a biomedical study into a simple summary for the general public. Focus on making the information accessible and easy to understand.\n\nResearch Heading: A stress-response-related inter-compartmental signalling pathway regulates embryonic cuticle integrity in Arabidopsis\nContent:\nAbstract:\nThe embryonic cuticle is necessary for normal seed development and seedling establishment in Arabidopsis. Although mutants with defective embryonic cuticles have been identified, neither the deposition of cuticle material, nor its regulation, has been described during embryogenesis. Here we use electron microscopy, cuticle staining and permeability assays to show that cuticle deposition initiates de novo in patches on globular embryos. By combining these techniques with genetics and gene expression analysis, we show that successful patch coalescence to form a continuous cuticle requires a signalling involving the endosperm-specific subtilisin protease ALE1 and the receptor kinases GSO1 and GSO2, which are expressed in the developing embryonic epidermis. Transcriptome analysis shows that this pathway regulates stress-related gene expression in seeds. Consistent with these findings we show genetically, and through activity analysis, that the stress-associated MPK6 protein acts downstream of GSO1 and GSO2 in the developing embryo. We propose that a stress-related signalling pathway has been hijacked in some angiosperm seeds through the recruitment of endosperm-specific components. Our work reveals the presence of an inter-compartmental dialogue between the endosperm and embryo that ensures the formation of an intact and functional cuticle around the developing embryo through an “auto-immune” type interaction.\nIntroduction:\nThe Arabidopsis seed is a complex structure composed of three genetically distinct compartments, the maternally-derived seed coat, the embryo, and the endosperm. After fertilization the expansion of the endosperm drives the growth of the seed. However, during later developmental stages the endosperm breaks down, leaving space for the growing embryo. By the end of seed development, only a single endosperm cell layer envelops the embryonic tissues( reviewed in[1]). The endosperm is an angiosperm innovation, thought to have arisen through the sexualisation of the central cell of the female gametophyte[2]. The ancestors of angiosperms probably had seeds more similar to those of gymnosperms, in which tissues of the female gametophyte proliferate independently of egg cell fertilization to produce a nutrient rich storage tissue. However, the endosperm plays not only a nutritional role, but also a role in regulating embryo development. For example, the peptide CLAVATA3/EMBRYO SURROUNDING REGION-RELATED8( CLE8) may act non-cell autonomously to regulate early Arabidopsis embryogenesis[3]. Recently, maternally-expressed peptides present in the central cell pre-fertilization, and subsequently in the early EMBRYO SURROUNDING REGION( ESR), were shown to regulate Arabidopsis suspensor development. Genetic analysis suggests that this regulation could be mediated by a pathway involving the Receptor-Like Cytoplasmic Kinase SHORT SUSPENSOR[4, 5], although the receptor involved remains unidentified. 
In previous work we showed genetically that the ESR-specific subtilisin protease Abnormal LEaf-shape1 (ALE1) acts in the same genetic pathway as two embryonically-expressed receptor kinases, GASSHO1 [(GSO1), also known as SCHENGEN3 [6]] and GASSHO2 (GSO2), to control the formation of the embryonic cuticle in developing seeds [7–10]. Our results indicate that a seed-specific inter-tissue signalling event is necessary for the formation of a functional embryonic cuticle [7]. The results of genetic studies have led us to speculate that the role of this pathway is to ensure the robust elimination of apoplastic continuity between the developing embryo and the surrounding endosperm, thus gating molecular movement between the two compartments [11, 12]. The cuticle is the outermost layer of the aerial parts of the plant. It is a highly complex structure mainly composed of a lipid polymer (cutin) and waxes, either associated with the polymer (intracuticular waxes) or deposited on the top of it (epicuticular waxes) (recently reviewed in [13, 14]). Cutin and waxes are composed of complex mixtures of hydroxylated and very long-chain fatty acid derivatives, respectively. Cuticle structure and composition are highly regulated not only at the tissue level, but also in response to environmental stimuli such as drought, radiation and pollution [13, 14]. In addition, several reports have highlighted the important role played by the cuticle in biotic interactions, and particularly in protecting plants from attack by bacterial pathogens (reviewed in [13, 15, 16]). In Arabidopsis, although little, if any, evidence exists for the presence of cutin-like substances in the wall between the mature egg cell and the central cell, by the end of embryogenesis the hypocotyl and cotyledons of the embryo are covered with a continuous cuticle which renders the germinating seedling impermeable to hydrophilic dyes, and resistant to water loss [17]. Cuticle biogenesis is considered to be a unique property of epidermal cells [18]. During plant development, epidermal cells are generated by anticlinal divisions of pre-existing epidermal cells so that each cell inherits an intact external cuticularised cell wall. In this respect the embryonic cuticle is atypical as it is deposited de novo at the interface between the developing embryo and endosperm. Although mutants with defective embryonic cuticles have been described [7–10, 17], only very fragmentary evidence about when the embryonic cuticle appears is present in the literature. Furthermore, the structure of the embryonic cuticle, its composition, the mechanisms via which it is deposited and its function during seed development remain unexplored. In this study we aimed to elucidate how the embryonic cuticle is formed, and to investigate how the ALE1 GSO1 GSO2 signalling pathway impacts its biosynthesis and deposition.\nDiscussion:\nIn this study, consistent with the similarity between GSO1/2 and PEPR1/2 proteins, we found that the stress-associated kinase MPK6, which has been shown to act downstream of PEPR signalling [56], shows constitutive phosphorylation in developing seeds, and that this phosphorylation is partially dependent upon GSO1 and GSO2. In addition, we showed that GSO1/GSO2 are required for the expression of a set of stress-related genes during early seed development. Our results suggest that GSO1/GSO2-dependent stress response-related signalling pathways are active in developing seeds.
Because of the conserved transcriptional targets expressed downstream of GSO1/GSO2 dependent signalling, and in defence responses, this scenario is distinct from previously reported situations in which single pathway components, such as the co-receptor BAK1, play distinct roles in developmental and defence-related signalling cascades through interaction with multiple receptors[57, 58]. However, the role of the transcriptional targets of GSO1/GSO2 signalling in seeds remains to be elucidated. Our work also shows that GSO1/GSO2, ALE1 and MPK6 act in a genetic pathway involved in ensuring embryonic cuticle integrity. We show for the first time that embryonic cuticle biogenesis involves the coalescence of discontinuous patches of cutin-like material that appear on the embryo surface at the globular stage, and that pathway mutants are either incapable of completing, or retarded in the completion of, “gap closure” during this process. Interestingly, GSO1( also known as SCHENGEN3[6]) was recently shown to be involved in ensuring the continuity of another apoplastic diffusion barrier, the Casparian strip, which prevents the apoplastic movement of solutes from the cortex to the stele of the root[6]. GSO1 may therefore form part of a general mechanism employed by plants for monitoring the “integrity” of apoplastic barriers formed during plant development. The role of GSO1 and GSO2 in the closure of gaps in the nascent cuticle implies spatial regulation of signalling outputs at the subcellular level. Cytoplasmic signalling components which, like MPK6, might not be uncovered by transcriptome analysis but instead be modified post-translationally are therefore likely to be of critical importance in GSO1 GSO2 signalling in the embryonic epidermis. Indeed, although MPK6-mediated signalling has most often been implicated in the control of transcription, particularly via the modulation of the activity of WRKY transcription factors, evidence for potential roles in cytoplasmic responses, for example during funicular guidance of pollen tubes[46] and control of cell division planes[50], exists in the literature. Cytoplasmic responses downstream of receptor-like kinases include the local production of apoplastic Reactive Oxygen Species( ROS) and/or calcium influxes, and indeed localized ROS production has been implicated in Casparian strip formation[6, 59–61]. However, although a plausible model has proposed that ROS release could mediate Casparian strip polymerisation through the polymerisation of monolignols[59], it is less obvious how ROS could directly affect the biosynthesis of an aliphatic cutin-based barrier, although a possible role for ROS in linking the cuticle to the cell wall has been evoked[62]. ROS production has been shown to directly modulate the activation of MAPK signalling, providing a mechanism permitting the reinforcement of localised signalling events[63, 64]. Another, potentially linked, possibility is that GSO1/GSO2 activity in the embryo could spatially direct the secretion of either cuticle components or enzymes and cell wall components necessary for their integration into the cutin polymer, in a system analogous to the rapid and highly localized deposition of callose observed upon hyphal penetration into epidermal cells( reviewed in[65, 66]). Interestingly, MPK6 has also been shown to be involved in phragmoplast formation during root cell division and therefore could be involved in the localised production/secretion of apoplastic compounds[50]. 
However, observing these processes in situ, within the living seeds, would require developments in microscopy which are not yet available. Our work highlights several questions which merit further discussion. A first important question is whether the GSO1/GSO2 signalling pathway could play a role in protecting seeds, or more generally plants, against pathogen attack. Cuticle integrity in adult plants has been shown to be required for resistance to Pseudomonas pathovars[67, 68]. The action of the ALE1 GSO1/GSO2 signalling pathway in ensuring embryonic cuticle integrity is therefore likely to have a significant influence on embryo and seedling susceptibility to bacterial pathogens. However, we have also shown that GSO1/GSO2, ALE1 and MPK6 are necessary for the expression of known defence marker genes in seeds. Cuticle permeability phenotypes have not been reported in the literature for mutants affected in the defence markers identified in our transcriptome studies. This raises the question of whether the ALE1, GSO1/GSO2, MPK6 signalling pathway, in addition to mediating localised apoplastic modifications, could act at a more global level either to protect developing seeds from the ingress of bacterial pathogens( thus affecting vertical pathogen transmission), or to “prime” embryos against pathogen attack upon germination. Exploring this possibility would necessitate functionally separating susceptibility caused by cuticle defects from lack of immune priming, and will be technically very challenging, but could ultimately inform strategies aiming to reduce vertical transmission of plant pathogens. The second question concerns how signalling via GSO1 and GSO2 is triggered in the seed in the absence of pathogens. In this study we consolidate data supporting the function of ALE1 in the same pathway as GSO1 and GSO2. We previously proposed that the function of ALE1, GSO1 and GSO2 in ensuring the apoplastic separation of the embryo and endosperm became necessary in angiosperms due to developmental constraints imposed by the sexualisation of the female gametophyte, which led to the simultaneous development of the embryo and surrounding nutritive tissues post-fertilization, rather than their sequential development[11, 12]. ALE1 expression is endosperm specific and, as previously suggested[69], the recruitment of ALE1 to a function in reinforcing the embryonic cuticle may have occurred during the emergence of the angiosperm lineage. The fact that the phenotype of ale1 mutants is weaker than that of gso1 gso2 double mutants, particularly at later stages of embryogenesis, may be due to redundancy with other members of the subtilase family in Arabidopsis, several of which are encoded by genes expressed in the developing endosperm[19, 20]. Subtilases have been shown to be involved in defence responses and immune priming in plants[70, 71]. It is thus possible that ALE1 acts to produce an as yet unidentified ligand for the GSO1 and GSO2 receptors. In such a scenario the function of ALE1 in the seed could be analogous to the “immune priming” function previously reported for the subtilase SBT3. 3[71]. Such a scenario naturally raises a third, and important question, around the identity of the ligand of GSO1 and GSO2. Two sulfated peptides, CIF1 and CIF2, which can act as ligands for GSO1 during Casparian strip formation, have recently been identified[72, 73]. Testing the role of these molecules in developing seeds will be an obvious priority. 
However Nakayama and colleagues specifically reported that no cuticle defects( as gauged by cotyledon fusion phenotypes) were observed in cif1 cif2 double mutants and the possibility that other signalling molecules could be involved in ensuring embryonic cuticle integrity therefore cannot be excluded. In summary, we propose that endosperm-localised factors( like ALE1) may have been recruited to hijack a defence-signalling pathway involving the ancestor( s) of GSO1 and GSO2, and downstream signalling components including MPK6, and trigger an “auto-immune” type response in the embryo to ensure cuticle integrity. The future identification of further pathway components, and in particular the substrates of ALE1 and ligands of GSO1 and GSO2, will help to confirm this hypothesis.",
"role": "user"
},
{
"content": "Plant embryogenesis occurs deep within the tissues of the developing seed , and leads to the production of the mature embryo . In Arabidopsis and many other plant species embryo-derive structure ( such as the cotyledons ) are suddenly exposed to environmental stresses such as low humidity . In these species the embryonic cuticle provides a primary defence against environmental stress , and particularly dehydration , at germination . The formation of an intact and functional cuticle during embryogenesis is thus of key importance for seedling survival . Our work shows that a signalling pathway involving receptor-kinases expressed in the embryo epidermis , and a protease expressed in the endosperm tissue surrounding the embryo , is critical for ensuring the production of an intact cuticle . Furthermore , we show that a component of stress-related MAP-Kinase signalling in plants acts downstream in this pathway , possibly to mediate transcriptional responses characteristic of responses to stress . We propose that plants have redeployed a signalling pathway associated with stress resistance to ensure the formation of an intact embryonic cuticle prior to germination , and thus ensure seedling survival at germination .",
"role": "assistant"
}
]
|
science.mslr2022_cochrane_multidoc_summarization | science.mslr2022_cochrane_multidoc_summarization.780 | [
{
"content": "Review the provided collection of medical RCT summaries. Your objective is to distill the essential insights into a brief conclusion. Focus solely on the synthesis of findings without additional commentary. The RCT summaries are as follows:\n\n\n\nReport Title 1:\nEffects of cervical traction and exercise therapy in cervical spondylosis.\nReport Abstract 1:\nA randomised clinical trial was conducted in the Department of Physical Medicine, Chittagong Medical College Hospital from July, 2001 to June, 2002. The objectives of the study were to find out the effects of cervical traction (CT) and exercise on the patients with chronic cervical spondylosis. A total of 199 patients with cervical spondylosis were included in the clinical trial. One hundred patients were treated with cervical traction plus exercise and 99 patients were treated with non-steroidal anti-inflammatory drug (NSAID). Posture correction advice was given to all patients. The patients were treated for 6 weeks. There was a marked improvement in both the groups after treatment (P<0.001). But there was nearly significant difference regarding improvement in treatment with CT plus exercise than with NSAID (P = 0.06). The results indicate that the improvement of the patients with chronic cervical spondylosis was more in CT plus exercise than analgesics. So, CT & neck muscle strengthening exercise may have some more beneficial effects than NSAIDs on chronic cervical spondylosis.\n\n\nReport Title 2:\nDesign and assessment of an adaptive intermittent cervical traction modality with EMG biofeedback.\nReport Abstract 2:\nAn intermittent cervical traction modality with closed-loop traction force control based on EMG biofeedback was developed and used for clinical study. This system consists of a EMG scanner, on-line self-adjusted traction force controller, audio/video alarm system, real time therapeutic status display, computer interface hardware, and control software. Twenty-four subjects with diagnosed cervical radiculopathy and muscle spasm symptom who were randomly divided into two groups served as subjects in this study. The control and experimental groups were treated with conventional open loop and new EMG biofeedback closed loop traction control protocols respectively. The results of this study indicate that the average reductions in paraspinal EMG signal during traction after 7 weeks treatment for experimental and control groups were 71 and 50 percent, respectively (p < 0.001). These results not only support the clinical use of intermittent, sitting traction to produce cervical paraspinal muscle relaxation, but also revealed that the average myoelectric activity of cervical paraspinal muscle during traction was reduced as traction force increased over the 7-week duration of traction treatment. Through EMG biofeedback traction force control, muscle injury, neck soreness, or pain after traction may be avoided.\n\n\nReport Title 3:\nCervical spine disorders. A comparison of three types of traction.\nReport Abstract 3:\nA randomized clinical trial was conducted to evaluate the efficacy of three commonly employed forms of traction in the treatment of cervical spine disorders. One hundred consenting men and women with disorders of the cervical spine were randomly assigned to one of four treatment groups, static traction, intermittent traction, manual traction, or no traction. All patients, regardless of group assignment, were seen twice weekly. 
The four groups were shown to be similar with regard to age, sex, diagnosis, chronicity, and prescores on the seven outcome measures. Although the entire cohort of neck patients, regardless of group assignment, improved significantly on all the outcome variables over the 6-week period, patients receiving intermittent traction performed significantly better than those assigned to the no traction group in terms of pain (P = 0.03), forward flexion (P = 0.01), right rotation (P = 0.004) and left rotation (P = 0.05).",
"role": "user"
},
{
"content": "The current literature does not support or refute the efficacy or effectiveness of continuous or intermittent traction for pain reduction, improved function or global perceived effect when compared to placebo traction, tablet or heat or other conservative treatments in patients with chronic neck disorders. Large, well conducted RCTs are needed to first determine the efficacy of traction, then the effectiveness, for individuals with neck disorders with radicular symptoms.",
"role": "assistant"
}
]
|
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.3126 | [
{
"content": "Examine the abstracts provided from a main scientific article and its cited works. Craft a narrative that synthesizes the key insights and contributions, emphasizing their interconnections. Employ numerical citations such as [1], [2], etc., to reference the cited works.\n\nMain Article Summary:\nRecent anchor-based deep face detectors have achieved promising performance, but they are still struggling to detect hard faces, such as small, blurred and partially occluded faces. A reason is that they treat all images and faces equally, without putting more effort on hard ones; however, many training images only contain easy faces, which are less helpful to achieve better performance on hard images. In this paper, we propose that the robustness of a face detector against hard faces can be improved by learning small faces on hard images. Our intuitions are (1) hard images are the images which contain at least one hard face, thus they facilitate training robust face detectors; (2) most hard faces are small faces and other types of hard faces can be easily converted to small faces by shrinking. We build an anchor-based deep face detector, which only output a single feature map with small anchors, to specifically learn small faces and train it by a novel hard image mining strategy. Extensive experiments have been conducted on WIDER FACE, FDDB, Pascal Faces, and AFW datasets to show the effectiveness of our method. Our method achieves APs of 95.7, 94.9 and 89.7 on easy, medium and hard WIDER FACE val dataset respectively, which surpass the previous state-of-the-arts, especially on the hard subset. Code and model are available at this https URL.\n\nCited Works Summary:\n[1]: We present region-based, fully convolutional networks for accurate and efficient object detection. In contrast to previous region-based detectors such as Fast Faster R-CNN that apply a costly per-region subnetwork hundreds of times, our region-based detector is fully convolutional with almost all computation shared on the entire image. To achieve this goal, we propose position-sensitive score maps to address a dilemma between translation-invariance in image classification and translation-variance in object detection. Our method can thus naturally adopt fully convolutional image classifier backbones, such as the latest Residual Networks (ResNets), for object detection. We show competitive results on the PASCAL VOC datasets (e.g., 83.6 mAP on the 2007 set) with the 101-layer ResNet. Meanwhile, our result is achieved at a test-time speed of 170ms per image, 2.5-20x faster than the Faster R-CNN counterpart. Code is made publicly available at: this https URL\n\n[2]: We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. 
This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300×300) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512×512) input, SSD achieves 76.9 mAP, outperforming a comparable state-of-the-art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https://github.com/weiliu89/caffe/tree/ssd.\n\n[3]: State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.\n\n[4]: Face detection has been well studied for many years and one of the remaining challenges is to detect small, blurred and partially occluded faces in uncontrolled environments. This paper proposes a novel context-assisted single shot face detector, named PyramidBox, to handle the hard face detection problem. Observing the importance of the context, we improve the utilization of contextual information in the following three aspects. First, we design a novel contextual anchor to supervise high-level contextual feature learning by a semi-supervised method, which we call PyramidAnchors. Second, we propose the Low-level Feature Pyramid Network to combine adequate high-level contextual semantic feature and Low-level facial feature together, which also allows the PyramidBox to predict faces of all scales in a single shot. Third, we introduce a context-sensitive structure to increase the capacity of prediction network to improve the final accuracy of output. In addition, we use the method of Data-anchor-sampling to augment the training samples across different scales, which increases the diversities of training data for smaller faces. 
By exploiting the value of context, PyramidBox achieves superior performance among the state-of-the-art on the two common face detection benchmarks, FDDB and WIDER FACE.\n\n[5]: This paper presents a real-time face detector, named Single Shot Scale-invariant Face Detector (S³FD), which performs superiorly on various scales of faces with a single deep neural network, especially for small faces. Specifically, we try to solve the common problem that anchor-based detectors deteriorate dramatically as the objects become smaller. We make contributions in the following three aspects: 1) proposing a scale-equitable face detection framework to handle different scales of faces well. We tile anchors on a wide range of layers to ensure that all scales of faces have enough features for detection. Besides, we design anchor scales based on the effective receptive field and a proposed equal proportion interval principle; 2) improving the recall rate of small faces by a scale compensation anchor matching strategy; 3) reducing the false positive rate of small faces via a max-out background label. As a consequence, our method achieves state-of-the-art detection performance on all the common face detection benchmarks, including the AFW, PASCAL face, FDDB and WIDER FACE datasets, and can run at 36 FPS on a Nvidia Titan X (Pascal) for VGA-resolution images.\n\n[6]: We introduce the Single Stage Headless (SSH) face detector. Unlike two stage proposal-classification detectors, SSH detects faces in a single stage directly from the early convolutional layers in a classification network. SSH is headless. That is, it is able to achieve state-of-the-art results while removing the \"head\" of its underlying classification network -- i.e. all fully connected layers in the VGG-16 which contains a large number of parameters. Additionally, instead of relying on an image pyramid to detect faces with various scales, SSH is scale-invariant by design. We simultaneously detect faces with different scales in a single forward pass of the network, but from different layers. These properties make SSH fast and light-weight. Surprisingly, with a headless VGG-16, SSH beats the ResNet-101-based state-of-the-art on the WIDER dataset. Even though, unlike the current state-of-the-art, SSH does not use an image pyramid and is 5X faster. Moreover, if an image pyramid is deployed, our light-weight network achieves state-of-the-art on all subsets of the WIDER dataset, improving the AP by 2.5%. SSH also reaches state-of-the-art results on the FDDB and Pascal-Faces datasets while using a small input size, leading to a runtime of 50 ms/image on a GPU. The code is available at this https URL.",
"role": "user"
},
{
"content": "Recent state-of-the-art face detectors are generally built based on Faster-RCNN [3] , R-FCN [1] or SSD [2] . SSH [6] exploited the RPN (Region Proposal Network) from Faster-RCNN to detect faces, by building three detection feature maps and designing six anchors with different sizes attached to the detection feature maps. S @math FD [5] and PyramidBox [4] , on the other hand, adopted SSD as their detection architecture with six different detection feature maps. Different from S @math FD, PyramidBox exploited a feature pyramid-style structure to combine features from different detection feature maps. Our proposed method, on the other hand, only builds single level detection feature map, based on VGG16, for classification and bounding-box regression, which is both simple and effective.",
"role": "assistant"
}
]
|
science.craftchem_ner | science.craftchem_ner.4768 | [
{
"content": "Analyze the following sentence to identify chemical entities of interest. List these entities in a JSON array format. If no entities are found, return an empty array.\n\nText to Analyze:\nCellsorcellularfractionswerelysedinSDSsamplebufferandhomogenatesweresubjectedtoSDS-PAGE.",
"role": "user"
},
{
"content": "[\"SDS\", \"SDS\"]",
"role": "assistant"
}
]
|
science.chemdner_ner | science.chemdner_ner.2173 | [
{
"content": "Analyze the given text to identify specific entities. Focus on extracting relevant data.\n\nProvide your findings in JSON format. Each category should list the entities found.\n\nText:\n\nA practical and scalable manufacturing process for an anti-fungal agent, Nikkomycin Z. A scalable and reliable manufacturing process for Nikkomycin Z HCl on a 170 g scale has been developed and optimized. The process is characterized by a 2.3 g/L fermentation yield, 79% purification yield, and >98% relative purity of the final product. This method is suitable for further scale up and cGMP production. The Streptomyces tendae ΔNikQ strain developed during the course of this study is superior to any previously reported strain in terms of higher yield and purity of Nikkomycin Z.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"cGMP\", \"Nikkomycin Z\", \"HCl\"]}",
"role": "assistant"
}
]
|
science.craftchem_ner | science.craftchem_ner.914 | [
{
"content": "Analyze the following sentence to identify chemical entities of interest. List these entities in a JSON array format. If no entities are found, return an empty array.\n\nText to Analyze:\nCochleotopicprojectionwasevaluatedbyinsertingDiIsoakedfilterstripsintothebaseandDiAsoakedfilterstripsintotheapicalturnof2P0and4P8Brn3cnullmutantsandasimilarnumberofwildtypelittermates.",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
]
|
science.bioasq_factoid_qa | science.bioasq_factoid_qa.1181 | [
{
"content": "Given the following biomedical passages, answer the question succinctly. Ensure your response is precise and limited to the answer only.\n\nQuestion: Givosiran is used for treatment of which disease?\n\nPassages: Cost savings with hemin versus givosiran for the treatment of patients with acute intermittent porphyria (AIP).\nIn 2019, FDA approved givosiran for the treatment of adults with acute hepatic porphyria\nCONCLUSIONS: Hemin is expected to provide cost savings compared to givosiran for all AIP patients and subpopulations.\nNew treatment options include givosiran (siRNA) to down-regulate ALA synthase-1 (ALAS1) and the messenger RNA of PBGD (PBGD mRNA) delivered to the liver cells of patients with AIP.\nManagement of AIP includes awareness and avoidance of triggering factors, infusions of hemin for severe acute attacks, and, if indicated for chronic suppressive therapy, maintenance treatment with hemin or givosiran, a small interfering RNA molecule that antagonizes ALA synthase 1 transcripts.\nBACKGROUND: Induction of delta aminolevulinic acid synthase 1 ( ALAS1) gene expression and accumulation of neurotoxic intermediates result in neurovisceral attacks and disease manifestations in patients with acute intermittent porphyria, a rare inherited disease of heme biosynthesis. Givosiran is an investigational RNA interference therapeutic agent that inhibits hepatic ALAS1 synthesis.METHODS: We conducted a phase 1 trial of givosiran in patients with acute intermittent porphyria. In part A of the trial, patients without recent porphyria attacks (i.e., no attacks in the 6 months before baseline) were randomly assigned to receive a single subcutaneous injection of one of five ascending doses of givosiran (0.035, 0.10, 0.35, 1.0, or 2.5 mg per kilogram of body weight) or placebo.\nCONCLUSIONS: Once-monthly injections of givosiran in patients who had recurrent porphyria attacks resulted in mainly low-grade adverse events, reductions in induced ALAS1 mRNA levels, nearly normalized levels of the neurotoxic intermediates delta aminolevulinic acid and porphobilinogen, and a lower attack rate than that observed with placebo.\nIn November 2019 givosiran became the second small interfering RNA (siRNA)-based drug to receive US Food and Drug Administration (FDA) approval, it has been developed for the treatment of acute intermittent porphyria (AIP), a disorder characterized by life-threatening acute neurovisceral attacks.\nThe clinical performance of givosiran revealed that suppression of ALAS1 by GalNac-decorated siRNAs represents an additional approach for the treatment of patients with AIP that manifests recurrent acute neurovisceral attacks.\nPhase 3 Trial of RNAi Therapeutic Givosiran for Acute Intermittent Porphyria.\nGivosiran is being developed by Alnylam Pharmaceuticals for the treatment of acute hepatic porphyria (AHP).\nGivosiran is a small interfering ribonucleic acid agent that was recently approved in the United States for the treatment of acute hepatic porphyria (AHP).\nThis technology has led to the approval of givosiran for the treatment of acute hepatic porphyria, and there are another seven conjugates in registrational review or phase 3 trials and at least another 21 conjugates at earlier stages of clinical development.\nIn November 2019, givosiran was approved in the USA for the treatment of adults with AHP based on the positive results from the multinational, phase III ENVISION trial.\nIn preliminary results, givosiran achieved clinical endpoints for AIP, reducing urinary 
ALA levels, and presented a safety profile that enabled further drug development.\nThis phase I study evaluated the safety, pharmacokinetic, and pharmacodynamic profile of subcutaneously (SC) administered givosiran in patients with acute intermittent porphyria, the most common AHP type.\nGivosiran, another RNAi therapeutic, targeting 5-aminolevulinic acid synthase, has been positively tested in acute intermittent porphyria in phase 1/2 and ongoing phase 3 trials.",
"role": "user"
},
{
"content": "porphyria",
"role": "assistant"
}
]
|
science.scicite_classification | science.scicite_classification.1441 | [
{
"content": "Examine the citation provided below. Determine the primary purpose of this citation within the context of the research paper. \n\nCitation:\n\n…(Xu, 1997, 2001b); (b) an adaptive algorithm for Gaussian factor analysis and a criterion for determining the dimension of\nprincipal subspace (Xu, 1998b); (c) a learned parametric model for Independent Component Analysis (ICA) that works on sources mixed with super-Gaussian and subGaussian…",
"role": "user"
},
{
"content": "Method",
"role": "assistant"
}
]
|
science.scicite_classification | science.scicite_classification.1974 | [
{
"content": "Examine the citation provided below. Determine the primary purpose of this citation within the context of the research paper. \n\nCitation:\n\nThe results of behavioral analyses are consistent with published data reporting that the Ts65Dn mice show normal or hyperactive exploration in the open filed (Coussons-Read and Crnic, 1996; Martínez-Cué et al., 2002; Faizi et al., 2011; Incerti et al., 2011).",
"role": "user"
},
{
"content": "Result",
"role": "assistant"
}
]
|
science.mslr2022_ms2_multidoc_summarization | science.mslr2022_ms2_multidoc_summarization.6965 | [
{
"content": "Analyze the following collection of medical RCT summaries. Your goal is to craft a concise synthesis of the findings. Focus on the core insights and present them succinctly. Only include the synthesis in your response. Below are the RCT summaries:\n\n\n\nReport 1:\nTitle: The impact of considering adaptation in health state valuation.\nSummary: Patients with chronic health conditions often rate their quality of life ( QoL ) significantly higher than non- patients . One explanation for this discrepancy is that non- patients focus on the negative aspects of the onset of a condition , especially the early difficulties people face when they first experience a debilitating condition , without considering that patients can adapt to it over time . To test this hypothesis , we had 359 people perform person tradeoff ( PTO ) elicitations in an online survey , varying whether the treatment programs under consideration saved the lives of patients ( a ) with pre-existing paraplegia ; or ( b ) who would experience new onset of paraplegia . Half of each group completed an \" adaptation exercise \" which encouraged them to consider their own ability to emotionally adapt to negative events in general and specifically to having paraplegia . The adaptation manipulation increased the value participants placed on pre-existing paraplegia ( p=0.03 ) and on new onset paraplegia ( p=0.05 ) , relative to saving healthy lives . Moreover , the adaptation exercise dramatically reduced the differences between evaluations of pre-existing and new onset paraplegia to values within 2 % of each other . Our findings suggest that asking non- patients to do an adaptation exercise before giving QoL ratings may help close the gap in ratings between patients and citizen non- patients\n\n\nReport 2:\nTitle: DMD genotypes and loss of ambulation in the CINRG Duchenne Natural History Study\nSummary: Objective : To correlate time to loss of ambulation ( LoA ) and different truncating DMD gene mutations in a large , prospect i ve natural history study of Duchenne muscular dystrophy ( DMD ) , with particular attention to mutations amenable to emerging molecular treatments . Methods : We analyzed data from the Cooperative International Neuromuscular Research Group Duchenne Natural History Study for participants with DMD single- or multi-exon deletions or duplications with defined exon boundaries ( n = 186 ) , or small mutations identified by sequencing ( n = 26 , including 16 nonsense point mutations ) . We performed a time-to-event analysis of LoA , a strong indicator of overall disease severity , adjusting for glucocorticoid treatment and genetic modifiers . Results : Participants with deletions amenable to skipping of exon 44 had later LoA ( median 14.8 years , hazard ratio 0.31 , 95 % confidence interval 0.14–0.69 , p = 0.004 ) . Age at LoA did not differ significantly in participants with deletions amenable to exon 45 , 51 , and 53 skipping , duplications , and small rearrangements . Nonsense mutation DMD also showed a typical median age at LoA ( 11.1 years ) , with a few outliers ( ambulatory around or after 16 years of age ) carrying stop codons within in-frame exons , more often situated in the rod domain . 
Conclusions : As exon 44 skipping – amenable DMD has a later LoA , mutation-specific r and omization and selection of placebo groups are essential for the success of clinical trials\n\n\nReport 3:\nTitle: Long-term effects of glucocorticoids on function, quality of life, and survival in patients with Duchenne muscular dystrophy: a prospective cohort study\nSummary: BACKGROUND Glucocorticoid treatment is recommended as a st and ard of care in Duchenne muscular dystrophy ; however , few studies have assessed the long-term benefits of this treatment . We examined the long-term effects of glucocorticoids on milestone-related disease progression across the lifespan and survival in patients with Duchenne muscular dystrophy . METHODS For this prospect i ve cohort study , we enrolled male patients aged 2 - 28 years with Duchenne muscular dystrophy at 20 centres in nine countries . Patients were followed up for 10 years . We compared no glucocorticoid treatment or cumulative treatment duration of less than 1 month versus treatment of 1 year or longer with regard to progression of nine disease-related and clinical ly meaningful mobility and upper limb milestones . We used Kaplan-Meier analyses to compare glucocorticoid treatment groups for time to st and from supine of 5 s or longer and 10 s or longer , and loss of st and from supine , four-stair climb , ambulation , full overhead reach , h and -to-mouth function , and h and function . Risk of death was also assessed . This study is registered with Clinical Trials.gov , number NCT00468832 . FINDINGS 440 patients were enrolled during two recruitment periods ( 2006 - 09 and 2012 - 16 ) . Time to all disease progression milestone events was significantly longer in patients treated with glucocorticoids for 1 year or longer than in patients treated for less than 1 month or never treated ( log-rank p<0·0001 ) . Glucocorticoid treatment for 1 year or longer was associated with increased median age at loss of mobility milestones by 2·1 - 4·4 years and upper limb milestones by 2·8 - 8·0 years compared with treatment for less than 1 month . Deflazacort was associated with increased median age at loss of three milestones by 2·1 - 2·7 years in comparison with prednisone or prednisolone ( log-rank p<0·012 ) . 45 patients died during the 10-year follow-up . 39 ( 87 % ) of these deaths were attributable to Duchenne-related causes in patients with known duration of glucocorticoids usage . 28 ( 9 % ) deaths occurred in 311 patients treated with glucocorticoids for 1 year or longer compared with 11 ( 19 % ) deaths in 58 patients with no history of glucocorticoid use ( odds ratio 0·47 , 95 % CI 0·22 - 1·00 ; p=0·0501 ) . INTERPRETATION In patients with Duchenne muscular dystrophy , glucocorticoid treatment is associated with reduced risk of losing clinical ly meaningful mobility and upper limb disease progression milestones across the lifespan as well as reduced risk of death . FUNDING US Department of Education/National Institute on Disability and Rehabilitation Research ; US Department of Defense ; National Institutes of Health/National Institute of Arthritis and Musculoskeletal and Skin Diseases ; and Parent Project Muscular Dystrophy",
"role": "user"
},
{
"content": "Variability in utilities was observed based on instrument , respondent type , and country .\nUtility estimates for health states within non-ambulatory DMD are under reported ; nor were utilities for DMD-related health states such as scoliosis or preserved upper limb function identified .\nPublished health state utilities document the substantial HRQoL impacts of DMD , particularly with disease progression .",
"role": "assistant"
}
]
|
science.qasa_abstractive_qa | science.qasa_abstractive_qa.69 | [
{
"content": "Review the following segment from a research document in AI. Answer the subsequent query based solely on the provided information. Limit your response to the answer itself.\n\nResearch Segment: Here we compare ranking performances of KERM and other PLMs based re-rankers on the first two widely used query sets. Moreover, ablation studies for each component of KERM are also explored. All experimental results were reported under the same BM25 setting. Table 3 shows the performance comparisons between different settings of knowledge injector, which is statistically significant. From this table, we can observe the following phenomena. (1) MRR@10 of KERM without interaction and propagation process decreases at least 1\\% respectively. This indicates both knowledge interaction and propagation processes play an indispensable role in ranking performance. (2) The performance of KERM without propagation is comparable to vanilla ERNIE. Not only query and passage entities, but also their multi-hop neighbors are essential for the ranking performance. (3) MRR@10 of KERM without knowledge interaction drops the most. It suggests the simple and straightforward way to aggregate knowledge graph with text does not work in the passage re-ranking scenario. The text and knowledge graph need to be refined with each other mutually in the interaction, which will be further analyzed in detail as follows. Knowledge graph distillation is performed in both global and local perspectives. To explore their roles in the ranking performance, we remove the graph pruning globally and sentence selection locally respectively, keep other settings unchanged, and derive KERM without graph pruning and sentence selection respectively. From results on TREC 2019 DL in Table 5, observations are listed as below. (1) Without global graph pruning, MRR@10 and the average edge score, calculated through Eq.(3), decrease the most, and the time efficiency drops slightly. This indicates the original knowledge graph exists noise data that affect performance. (2) Without sentence selection, the time efficiency drops the most and the average edge score decreases slightly, which proves that not every sentence in a passage has a positive effect on semantic matching. Overall, knowledge graph distillation is significant to KERM.\n\nInquiry: Does the paper show how each component of KERM can contribute to passage re-ranking performance quantitatively and qualitatively?",
"role": "user"
},
{
"content": "This work conducted ablation studies to investigate the contribution of each component in the performance of KERM. By testing different settings for the knowledge injector, this work found that performance decreases without knowledge interaction and also without knowledge propagation. By testing the model without global or local distillation, they also demonstrated that performance decreases without global distillation and efficiency decreases without either global or local distillation. These experiments demonstrate that each component of KERM contributes to passage re-ranking performance quantitatively.",
"role": "assistant"
}
]
|
science.drug_combo_extraction_re | science.drug_combo_extraction_re.1150 | [
{
"content": "Analyze the given paragraph from a biomedical study. Focus on identifying drug pairings within a specified sentence. Your goal is to determine the nature of these pairings.\n\nPresent your findings as a JSON array. Each element should be an array containing the drug names and their interaction type. Use the exact names from the sentence.\n\nOnly consider the specified sentence for your analysis. The paragraph is for context only.\n\nReturn solely the JSON array. If no pairings are found, return an empty array.\n\nContext Paragraph: Emergence of trimethoprim-resistant Escherichia coli in healthy persons in the absence of prophylactic or therapeutic antibiotics during travel to Guadalajara, Mexico. Thirty-nine healthy US students without diarrheal disease and who had not received prophylactic or therapeutic antibiotics were monitored for emergence of trimethoprim-resistant gram-negative fecal flora for a 3-week period after arrival in Guadalajara, Mexico. During this time period, most students showed no change in total fecal gram-negative bacteria (p > 0.05) but showed an increasing level of trimethoprim (TMP) resistance (p < 0.01) among fecal coliforms. Escherichia coli was the TMP-resistant organism isolated in 18 of 39 (46%) healthy students. These 18 TMP-resistant E. coli were also resistant to ampicillin ( 44 % ) , azithromycin ( 11 % ) , chloramphenicol ( 39 % ) , ciprofloxacin ( 11 % ) , doxycycline ( 89 % ) , erythromycin ( 100 % ) , furazolidone ( 72 % ) , levofloxacin ( 17 % ) , trimethoprim-sulfamethoxazole ( 89 % ) and trovafloxacin ( 17 % ) . In the absence of prophylactic and therapeutic antibiotics, increased acquisition of TMP-resistant gram-negative fecal flora in this developing country is probably due to poor sanitary conditions and the recurrent and heavy exposure to antimicrobial-resistant indigenous flora as a result of contaminated food and drink.\n\nTarget Sentence: These 18 TMP-resistant E. coli were also resistant to ampicillin ( 44 % ) , azithromycin ( 11 % ) , chloramphenicol ( 39 % ) , ciprofloxacin ( 11 % ) , doxycycline ( 89 % ) , erythromycin ( 100 % ) , furazolidone ( 72 % ) , levofloxacin ( 17 % ) , trimethoprim-sulfamethoxazole ( 89 % ) and trovafloxacin ( 17 % ) .",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
]
|
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.4820 | [
{
"content": "Analyze the given paragraph from COVID-19 research. Determine the relevant topics discussed in the text. Your response should be a JSON array containing the identified topics. Ensure the output is a valid JSON array with no additional commentary.\n\nText:\n\nOBJECTIVES: To determine the risk of COVID-19 transmission during minimally invasive surgical (MIS) procedures METHODS: Surgical society statements regarding the risk of COVID transmission during MIS procedures were reviewed. In addition, the available literature on COVID-19 and other viral transmission in CO2 pneumoperitoneum, as well as the presence of virus in the plume created by electrocautery during MIS was reviewed. The society recommendations were compared to the available literature on the topic to create our review and recommendations to mitigate COVID-19 transmission. RESULTS: The recommendations promulgated by various surgical societies evolved over time as more information became available on COVID-19 transmission. Review of the available literature on the presence of COVID-19 in CO2 pneumoperitoneum was inconclusive. There is no clear evidence of the presence of COVID-19 in plume created by electrocautery. Technologies to reduce CO2 pneumoperitoneum release into the operating room as well as filter viral particles are available and should reduce the exposure risk to operating room personnel. CONCLUSION: There is no clear evidence of COVID-19 virus in the CO2 used during MIS procedures or in the plume created by electrocautery. Until the presence or absence of COVID-19 viral particles has been clearly established, measures to mitigate CO2 and surgical cautery plume release into the operating room should be performed. Further study on the presence of COVID-19 in MIS pneumoperitoneum and cautery plume is needed.",
"role": "user"
},
{
"content": "[\"Prevention\", \"Transmission\"]",
"role": "assistant"
}
]
|
science.acl_arc_intent_classification | science.acl_arc_intent_classification.75 | [
{
"content": "Examine the given text from a research paper in the field of NLP. Your task is to identify the role of the citation within the context provided. Focus on understanding the intent behind the citation and classify it accordingly.\n\nSection Title:\nintroduction\n\nText Before Citation:\nHowever, the greatest increase is in the amount of raw text available to be processed, e.g. the English Gigaword Corpus (Linguistic Data Consortium, 2003). This will require more efficient learning algorithms and implementations. For example, 10 million words of the American National Corpus (Ide et al., 2002) will have manually corrected POS tags, a tenfold increase over the Penn Treebank (Marcus et al., 1993), currently used for training POS taggers.\n\nCitation Text:\nHowever , the greatest increase is in the amount of raw text available to be processed , e.g. the English Gigaword Corpus ( Linguistic Data Consortium , 2003 ) .\n\nText After Citation:\nRecent work (Banko and Brill, 2001; Curran and Moens, 2002) has suggested that some tasks will benefit from using significantly more data. Also, many potential applications of NLP will involve processing very large text databases. For instance, biomedical text-mining involves extracting information from the vast body of biological and medical literature; and search engines may eventually apply NLP techniques to the whole web. Other potential applications must process text online or in realtime.",
"role": "user"
},
{
"content": "Background",
"role": "assistant"
}
]
|
science.pico_ner | science.pico_ner.2262 | [
{
"content": "Examine the following abstract and identify distinct entities. Focus on extracting entities related to participants, interventions, and outcomes.\n\nPresent your findings in a JSON format like: {\"Participant\": [\"example entity\", ...], \"Intervention\": [\"example entity\", ...], \"Outcome\": [\"example entity\", ...]}. If no entities are found for a category, use an empty list.\n\nAbstract Content:\n\nEffect of weight loss on inflammation in patients with mild obstructive sleep apnea . BACKGROUND AND AIMS Inflammation may be one mediating mechanism for cardiovascular diseases in obstructive sleep apnea ( OSA ) . However , little is known about subclinical inflammation or the effect of lifestyle intervention on inflammation in early stages of OSA . The aim of this substudy of an existing randomized controlled trial , with post hoc analyses , was to determine the impact of lifestyle changes aimed at weight reduction on inflammatory biomarkers in overweight patients with mild OSA . METHODS AND RESULTS Patients were randomized to supervised intensive lifestyle intervention group ( N=28 ) or to control group ( N=31 ) , which received routine lifestyle advices . Circulating concentrations of pro- and anti-inflammatory mediators were measured before and after the 1-year intervention . The concentrations of two pro-inflammatory mediators , high-sensitivity C-reactive protein ( hsCRP ) and interleukin ( IL ) -6 , decreased significantly in both groups . Although the changes in inflammatory biomarkers favored the supervised lifestyle intervention , the only significant reduction observed between the groups was for the anti-inflammatory IL-1 receptor antagonist ( IL-1RA ) . The change in hsCRP was associated with apnea-hypopnea index , and improving night-time oxygen saturation was related to tumor necrosis factor alpha . IL-1RA and IL-6 were associated with insulin metabolism . CONCLUSION Weight loss resulted in reductions in concentrations of some pro- and anti-inflammatory mediators in overweight patients with mild OSA , overall favoring the supervised lifestyle intervention . These findings suggest that more intensive treatment of obesity in OSA patients might be well-justified .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"routine lifestyle advices\", \"supervised intensive lifestyle intervention group\", \"control group\", \"lifestyle intervention\"], \"Outcome\": [\"inflammatory biomarkers favored\", \"high-sensitivity C-reactive protein ( hsCRP ) and interleukin ( IL ) -6 , decreased\", \"change in hsCRP\", \"improving night-time oxygen saturation\", \"apnea-hypopnea index\", \"Weight loss\"], \"Participant\": [\"overweight\", \"obstructive sleep apnea ( OSA )\", \"N=28\", \"OSA\", \"mild OSA\", \"patients with mild obstructive sleep apnea .\", \"mild OSA .\", \"cardiovascular diseases\", \"N=31\"]}",
"role": "assistant"
}
]
|
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.3518 | [
{
"content": "Craft a cohesive chapter for a literature review using the abstracts provided. Focus on the chapter's theme, weaving together insights and findings from the abstracts. Highlight key points and note any research gaps.\nLiterature Review Title: An Overview of Federated Deep Learning Privacy Attacks and Defensive Strategies \nChapter Title: B. Gradient compression \n1. Abstract of Cited Paper (BIB001): Federated learning is a distributed learning method to train a shared model by aggregating the locally-computed gradient updates. In federated learning, bandwidth and privacy are two main concerns of gradient updates transmission. This paper proposes an end-to-end encrypted neural network for gradient updates transmission. This network first encodes the input gradient updates to a lower-dimension space in each client, which significantly mitigates the pressure of data communication in federated learning. The encoded gradient updates are directly recovered as a whole, i.e. the aggregated gradient updates of the trained model, in the decoding layers of the network on the server. In this way, gradient updates encrypted in each client are not only prevented from interception during communication, but also unknown to the server. Based on the encrypted neural network, a novel federated learning framework is designed in real applications. Experimental results show that the proposed network can effectively achieve two goals, privacy protection and data compression, under a little sacrifice of the model accuracy in federated learning.\n\nReview Chapter:",
"role": "user"
},
{
"content": "Although mainly devised for the use of communication efficiency, lossy compression techniques may also be implemented to facilitate some form of information security. Konecny et al. BIB001 demonstrate how such a method reduces communication costs. The server only has victim specific incomplete information. Other forms of compression are also possible. An autoencoder compressor is proposed to encrypt data transfer BIB002 in which the server trains an autoencoder on dummy-gradient updates, and subsequently releases the encoding part to each of the clients whilst keeping the decoder part secretive. Every gradient update from each client is now encoded before transfer to the server and decoded server side. This has a couple of advantages. First, the model is compressed by the encoder which will increase communication efficiency. Second, the encoding and decoding result in some information loss about the gradient update. How well this measure prevents the server from malicious reconstruction remains to be seen.",
"role": "assistant"
}
]
|
science.pubmedqa_qa | science.pubmedqa_qa.212 | [
{
"content": "Analyze the given text and respond to the inquiry. Choose from: \"YES\", \"NO\", or \"MAYBE\".\n\nText:\n(BACKGROUND) This study sought to compare general surgery research residents' survey information regarding self-efficacy ratings to their observed performance during a simulated small bowel repair. Their observed performance ratings were based on their leadership skills in directing their assistant.\n(METHODS) Participants were given 15 min to perform a bowel repair using bovine intestines with standardized injuries. Operative assistants were assigned to help assist with the repair. Before the procedure, participants were asked to rate their expected skills decay, task difficulty, and confidence in addressing the small bowel injury. Interactions were coded to identify the number of instructions given by the participants to the assistant during the repair. Statistical analyses assessed the relationship between the number of directional instructions and participants' perceptions self-efficacy measures. Directional instructions were defined as any dialog by the participant who guided the assistant to perform an action.\n(RESULTS) Thirty-six residents (58.3% female) participated in the study. Participants who rated lower levels of decay in their intraoperative decision-making and small bowel repair skills were noted to use their assistant more by giving more instructions. Similarly, a higher number of instructions correlated with lower perceived difficulty in selecting the correct suture, suture pattern, and completing the entire surgical task.\n\nInquiry:\nDo resident's leadership skills relate to ratings of technical skill?\n\nResponse:",
"role": "user"
},
{
"content": "YES",
"role": "assistant"
}
]
|
science.chemsum_single_document_summarization | science.chemsum_single_document_summarization.15197 | [
{
"content": "Craft a concise summary for a chemistry research paper using the provided title and selected text segments.\n\nHeading:\nChemotherapy-Induced Oesophageal Stricture in a Child with Osteosarcoma: A Case Report\n\nContent Extracts:\n1. Introduction\n<p>The cure rate of paediatric malignancies has improved with the advances in both chemotherapy and radiotherapy over the last 3decades, and the long-term survival rate is currently 80% [1]. Because the number of surviving children with malignancies is increasing, the complications involved in the treatment of paediatric malignancies are becoming clear [2, 3]. Oesophageal stricture is one of the reported complications [4, 5]. Oesophageal stricture in children treated for malignancies can occur because of the administration of antineoplastic agents, radiotherapy, candidiasis, graft versus host disease, and so forth [6]. In previous reports on oesophageal complications in children with malignancies, it was suggested that treatment with a combination of chemotherapy and radiotherapy probably results in oesophageal stricture. However, to the best of our knowledge, only few studies have reported that chemotherapy alone induced oesophageal strictures in children. Here, we report a rare case of oesophageal stricture in a child who was undergoing only chemotherapy for osteosarcoma.</p>\n\n2. Case report\n<p>A 14-year-old boy was diagnosed with osteosarcoma of the left distal femur. He underwent 4 courses of chemotherapy with cisplatin (CDDP) (167 mg) and doxorubicin (42 mg). He was also administered a 5-HT3 receptor antagonist to prevent nausea and vomiting and a regimen of the histamine-2-receptor antagonist to prevent gastroesophageal reflux disease; despite this, he complained of severe nausea and vomiting. The patient experienced hearing impairment because of CDDP, therefore, after the first 4 courses of chemotherapy, the regimen was changed to comprise ifosfamide (4200 mg), etoposide (84 mg), and methotrexate (28 mg). Two courses of this new regimen were administered. Five months after the diagnosis, the patient underwent extensive tumour resection and total knee replacement with a custom-made extendable knee replacement system (Growing KotzProsthesis;Stryker, Germany).Pathological examination showed that the tumour tissue was almost necrotic, which suggested that the preoperative chemotherapy was effective. After the operation, the patient was administered 6 courses of the second chemotherapy regimen over a period of 5 months. During this treatment period, he experienced nausea and vomiting. Four weeks after the postoperative 6-course chemotherapy, he complained of dysphagia and could not eat solid foods. A barium-swallowing examination showed significant narrowing of the distal oesophagus (Figure 1). Upper gastrointestinal endoscopy revealed stricture of the distal oesophagus, demonstrating slightly reddish and inflammatory mucosa at the distal oesophagus (Figure 2). The distal oesophagus was narrow, and the entire oesophageal wall was thick, so the endoscope could not pass beyond the stricture of the oesophagus. A biopsy of the strictured part of the lesion was not performed. He was diagnosed with chemotherapy-related oesophageal stricture, and balloon dilatation for the oesophageal stricture was performed 10 times over a period of 2 months. Finally, the oesophagus was dilated till its diameter increased by 12 mm. During this period, omeprazole (proton pump inhibitor, PPI) was administered. 
After ballooning, his dysphagia improved, and he could ingest solid food. At the final follow-up, which was conducted 20 months after the ballooning, the oesophagus was found to be normal with no strictures, and the osteosarcoma of the left distal femur had not relapsed or metastasized.</p>\n\n3. Discussion\n<p>Children with malignancies are likely to be exposed to many risk factors for oesophageal stricture during treatment, such as the administration of antineoplastic agents, radiation therapy, candidiasis, and graft versus host disease [2–8]. In particular, it is well known that radiation therapy is associated with the risk of oesophageal stricture in both children and adults [3]. Radiation injury to the oesophagus results in severe fibrosis, mucosal atrophy, and ischemic ulcerations and finally leads to oesophageal stricture. Oesophageal stricture has also been reported to be caused by antineoplastic agents such as vinblastine, doxorubicin, 5-fluorouracil, and methotrexate [5, 9]. Moreover, some antineoplastic agents act synergistically with radiotherapy and exacerbate the oesophagitis associated with radiotherapy [2, 5]. Therefore, it was reported that chemotherapy combined with radiotherapy had a definite risk of oesophageal complications [10, 11]. In addition, candidiasis and graft versus host disease after bone marrow transplantation have been reported to contribute to oesophageal stricture in children [7, 8, 12]. With regard to oesophageal stricture in children with malignancies, Ellenhorn et al. [4] and Lal et al. [5] reported that 13 paediatric patients developed oesophageal strictures after treatment for malignancies at the same institution; of these, only 1 patient underwent chemotherapy alone, while the others received a combination of chemotherapy and radiotherapy. Kassam and Mandel [6] also reported a case of oesophageal stricture in a child who received only chemotherapy for treatment of a malignancy. Oesophageal stricture resulting from chemotherapy alone in paediatric patients with malignancies is very rare. In the case of our patient, antineoplastic agents were administered for the treatment of osteosarcoma of the left femur without radiotherapy, and bone marrow transplantation was not performed. Candidiasis might have contributed to stricture of the distal oesophagus, but the gastrointestinal endoscopy examination did not show findings indicative of oesophageal candidiasis, such as mucosal ulceration and white mucosal plaque. Consequently, in our case, the oesophageal stricture seems to be a result of chemotherapy. It was reported that a number of antineoplastic agents, including doxorubicin, methotrexate, and cytosine arabinoside, show severe cytotoxicity against rapidly proliferating cells of the oesophagus by arresting the cell cycle [4]. Finally, the combined oesophageal toxicity of antineoplastic agents and peptic oesophagitis caused by frequent vomiting might have predisposed our patient to the oesophageal stricture.</p><p>With regard to the treatment for oesophageal strictures in children, surgery, including oesophageal division, oesophageal diversion, and endoscopic balloon dilatation, is performed [5, 13]. Balloon dilatation is efficient for correcting oesophageal stricture in children, although it carries a low risk of oesophageal perforation [13, 14]. In severe cases where the patient is resistant to balloon dilation, aggressive surgical management is required [5]. 
In the case of our patient, gastrointestinal endoscopy did not reveal oesophageal fistula or ulcer, so it was suggested that the risk of oesophageal perforation in balloon dilatation is low. Finally, we performed balloon dilatation to correct the oesophageal stricture.</p><p>In summary, we experienced a rare paediatric case of oesophageal stricture associated with chemotherapy. Chemotherapy alone seldom causes oesophageal stricture in children, but additional conditions such as vomiting may cause oesophageal stricture. Physicians should bear in mind that oesophageal strictures are one of the complications of chemotherapy in children with malignancies who are treated with chemotherapy alone. In the case of patients who exhibit persistent vomiting despite treatment using antinauseant or antacid agents, mild sedation may be effective for preventing oesophageal complications.</p>\n\n\n<p>Chest radiograph obtained in the barium-swallowing examination showing stricture of the distal oesophagus. The black arrow indicates the oesophageal stenosis.</p><p>Upper gastrointestinal endoscopic image showing an annular stricture and slight reddish mucosa in the distal oesophagus.</p>",
"role": "user"
},
{
"content": "Treatment with a combination of chemotherapy and radiotherapy is known to be associated with oesophageal stricture in both children and adults with malignancies. However, oesophageal stricture resulting from chemotherapy alone is a rare complication, with few reports on it. We experienced a rare paediatric case of oesophageal stricture caused by chemotherapy for osteosarcoma of the left distal femur. After completion of the chemotherapy course, the patient showed dysphagia caused by the oesophageal stricture and underwent balloon dilatation for the oesophageal stricture. After balloon dilatation, he was able to ingest solid foods, and the oesophagus was normal without any strictures at the last follow-up (20 months after ballooning). Therefore, oesophageal stricture should be considered as a complication of treatment with chemotherapy alone in children with malignancies.",
"role": "assistant"
}
]
|
science.scicite_classification | science.scicite_classification.4724 | [
{
"content": "Examine the citation provided below. Determine the primary purpose of this citation within the context of the research paper. \n\nCitation:\n\nIntrinsic properties of NT MSNs (n = 59) were not significantly different from those measured from both hWT (n = 62) and hMT neurons both from coronal and parasagittal slices (n = 76, 27, respectively) (Table 1), and did not differ from those previously described for mouse MSN in tissue slice preparations (Goldberg et al., 2005; Bonsi et al., 2008).",
"role": "user"
},
{
"content": "Result",
"role": "assistant"
}
]
|
science.healthver_entailment | science.healthver_entailment.2809 | [
{
"content": "Evaluate the given public health statement using the provided research abstract. Your goal is to determine the relationship between the claim and the abstract content.\n\n- \"decision\": Indicate whether the abstract \"SUPPORTS\", \"CONTRADICTS\", or provides \"INSUFFICIENT\" information regarding the claim.\n- \"justification\": List sentences from the abstract that back your decision. If the decision is \"INSUFFICIENT\", leave this list empty.\n\nStatement: Some groups may face a disproportionate mental health impact during the COVID-19 pandemic.\n\nResearch Abstract:\nAs a result of the emergence of coronavirus disease 2019 (COVID-19) outbreak caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection in the Chinese city of Wuhan, a situation of socio-economic crisis and profound psychological distress rapidly occurred worldwide.\nVarious psychological problems and important consequences in terms of mental health including stress, anxiety, depression, frustration, uncertainty during COVID-19 outbreak emerged progressively.\nThis work aimed to comprehensively review the current literature about the impact of COVID-19 infection on the mental health in the general population.\nThe psychological impact of quarantine related to COVID-19 infection has been additionally documented together with the most relevant psychological reactions in the general population related to COVID-19 outbreak.\nThe role of risk and protective factors against the potential to develop psychiatric disorders in vulnerable individuals has been addressed as well.\nThe main implications of the present findings have been discussed.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"As a result of the emergence of coronavirus disease 2019 (COVID-19) outbreak caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection in the Chinese city of Wuhan, a situation of socio-economic crisis and profound psychological distress rapidly occurred worldwide.\"]}",
"role": "assistant"
}
]
|
science.craftchem_ner | science.craftchem_ner.293 | [
{
"content": "Analyze the following sentence to identify chemical entities of interest. List these entities in a JSON array format. If no entities are found, return an empty array.\n\nText to Analyze:\nDiurnalrhythmsareknowntodependonvisualpathwaysandtorespondtolightintensity.",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
]
|
science.mslr2022_ms2_multidoc_summarization | science.mslr2022_ms2_multidoc_summarization.6847 | [
{
"content": "Analyze the following collection of medical RCT summaries. Your goal is to craft a concise synthesis of the findings. Focus on the core insights and present them succinctly. Only include the synthesis in your response. Below are the RCT summaries:\n\n\n\nReport 1:\nTitle: The effect of using high facilitation when implementing the Gold Standards Framework in Care Homes programme: A cluster randomised controlled trial\nSummary: Background : The provision of quality end-of-life care is increasingly on the national agenda in many countries . In the United Kingdom , the Gold St and ards Framework for Care Homes programme has been promoted as a national framework for improving end-of-life care . While its implementation is recommended , there are no national guidelines for facilitators to follow to undertake this role . Aim : It was hypothesised that action learning alongside high facilitation when implementing the Gold St and ards Framework for Care Homes programme will result in a reduced proportion of hospital deaths for residents and improvement in the care home staff ability to facilitate good end-of-life care . Design : A cluster r and omised controlled trial where 24 nursing homes received high facilitation to enable them to implement the Gold St and ards Framework for Care Homes programme . The managers of 12 nursing homes additionally took part in action learning sets . A third group ( 14 nursing homes ) received the ‘ st and ard ’ Gold St and ards Framework for Care Homes facilitation available in their locality . Setting / participants : In total , 38 nursing homes providing care for frail older people , their deceased residents and their nurse managers . Results : A greater proportion of residents died in those nursing homes receiving high facilitation and action learning but not significantly so . There was a significant association between the level of facilitation and nursing homes completing the Gold St and ards Framework for Care Homes programme through to accreditation . Year-on-year change occurred across all outcome measures . Conclusion : There is a danger that without national guidelines , facilitation of the Gold St and ards Framework for Care Homes programme will vary and consequently so will its implementation . The nurse manager of a care home must be actively engaged when implementing the Gold St and ards Framework for Care Homes programme\n\n\nReport 2:\nTitle: Utilization of the PICO framework to improve searching PubMed for clinical questions\nSummary: Background Supporting 21st century health care and the practice of evidence -based medicine ( EBM ) requires ubiquitous access to clinical information and to knowledge-based re sources to answer clinical questions . Many questions go unanswered , however , due to lack of skills in formulating questions , crafting effective search strategies , and accessing data bases to identify best levels of evidence . Methods This r and omized trial was design ed as a pilot study to measure the relevancy of search results using three different interfaces for the PubMed search system . Two of the search interfaces utilized a specific framework called PICO , which was design ed to focus clinical questions and to prompt for publication type or type of question asked . The third interface was the st and ard PubMed interface readily available on the Web . Study subjects were recruited from interns and residents on an inpatient general medicine rotation at an academic medical center in the US . 
Thirty-one subjects were randomized to one of the three interfaces , given 3 clinical questions , and asked to search PubMed for a set of relevant articles that would provide an answer for each question . The success of the search results was determined by a precision score , which compared the number of relevant or gold standard articles retrieved in a result set to the total number of articles retrieved in that set . Results Participants using the PICO templates ( Protocol A or Protocol B ) had higher precision scores for each question than the participants who used Protocol C , the standard PubMed Web interface . ( Question 1 : A = 35 % , B = 28 % , C = 20 % ; Question 2 : A = 5 % , B = 6 % , C = 4 % ; Question 3 : A = 1 % , B = 0 % , C = 0 % ) 95 % confidence intervals were calculated for the precision for each question using a lower boundary of zero . However , the 95 % confidence limits were overlapping , suggesting no statistical difference between the groups . Conclusion Due to the small number of searches for each arm , this pilot study could not demonstrate a statistically significant difference between the search protocols . However there was a trend towards higher precision that needs to be investigated in a larger study to determine if PICO can improve the relevancy of search results .\n\n\nReport 3:\nTitle: The Hawthorne Effect: a randomised, controlled trial\nSummary: Background The ' Hawthorne Effect ' may be an important factor affecting the generalisability of clinical research to routine practice , but has been little studied . Hawthorne Effects have been reported in previous clinical trials in dementia but to our knowledge , no attempt has been made to quantify them . Our aim was to compare minimal follow-up to intensive follow-up in participants in a placebo controlled trial of Ginkgo biloba for treating mild-moderate dementia . Methods Participants in a dementia trial were randomised to intensive follow-up ( with comprehensive assessment visits at baseline and two , four and six months post randomisation ) or minimal follow-up ( with an abbreviated assessment at baseline and a full assessment at six months ) . Our primary outcomes were cognitive functioning ( ADAS-Cog ) and participant and carer-rated quality of life ( QOL-AD ) . Results We recruited 176 participants , mainly through general practices . The main analysis was based on Intention to treat ( ITT ) , with available data . In the ANCOVA model with baseline score as a co-variate , follow-up group had a significant effect on outcome at six months on the ADAS-Cog score ( n = 140 ; mean difference = -2.018 ; 95%CI -3.914 , -0.121 ; p = 0.037 favouring the intensive follow-up group ) , and on participant-rated quality of life score ( n = 142 ; mean difference = -1.382 ; 95%CI -2.642 , -0.122 ; p = 0.032 favouring minimal follow-up group ) . There was no significant difference on carer quality of life . Conclusion We found that more intensive follow-up of individuals in a placebo-controlled clinical trial of Ginkgo biloba for treating mild-moderate dementia resulted in a better outcome than minimal follow-up , as measured by their cognitive functioning . 
Trial registration Current controlled trials : ISRCTN\n\n\nReport 4:\nTitle: Efficacy of treating pain to reduce behavioural disturbances in residents of nursing homes with dementia: cluster randomised clinical trial\nSummary: Objective To determine whether a systematic approach to the treatment of pain can reduce agitation in people with moderate to severe dementia living in nursing homes . Design Cluster randomised controlled trial . Setting 60 clusters ( single independent nursing home units ) in 18 nursing homes within five municipalities of western Norway . Participants 352 residents with moderate to severe dementia and clinically significant behavioural disturbances randomised to a stepwise protocol for the treatment of pain for eight weeks with additional follow-up four weeks after the end of treatment ( 33 clusters ; n=175 ) or to usual treatment ( control , 27 clusters ; n=177 ) . Intervention Participants in the intervention group received individual daily treatment of pain for eight weeks according to the stepwise protocol , with paracetamol ( acetaminophen ) , morphine , buprenorphine transdermal patch , or pregabalin . The control group received usual treatment and care . Main outcome measures Primary outcome measure was agitation ( scores on Cohen-Mansfield agitation inventory ) . Secondary outcome measures were aggression ( scores on neuropsychiatric inventory-nursing home version ) , pain ( scores on mobilisation-observation-behaviour-intensity-dementia-2 ) , activities of daily living , and cognition ( mini-mental state examination ) . Results Agitation was significantly reduced in the intervention group compared with control group after eight weeks ( repeated measures analysis of covariance adjusting for baseline score , P<0.001 ) : the average reduction in scores for agitation was 17 % ( treatment effect estimate −7.0 , 95 % confidence interval −3.7 to −10.3 ) . Treatment of pain was also significantly beneficial for the overall severity of neuropsychiatric symptoms ( −9.0 , −5.5 to −12.6 ) and pain ( −1.3 , −0.8 to −1.7 ) , but the groups did not differ significantly for activities of daily living or cognition . Conclusion A systematic approach to the management of pain significantly reduced agitation in residents of nursing homes with moderate to severe dementia . Effective management of pain can play an important part in the treatment of agitation and could reduce the number of unnecessary prescriptions for psychotropic drugs in this population . Trial registration ClinicalTrials.gov NCT01021696 and Norwegian Medicines Agency EudraCT nr 2008-007490-20\n\n\nReport 5:\nTitle: ‘Less ticking the boxes, more providing support’: A qualitative study on health professionals’ concerns towards the Liverpool Care of the Dying Pathway\nSummary: Background : Despite being widely used , research into the effectiveness of the Liverpool Care of the Dying Pathway ( LCP ) and associated cases of malpractice does not match dissemination . No study exists focusing on concerns voiced by professionals . Aim : To explore the views of professionals who , during the hospital implementation of the Italian version of the Liverpool Care of the Dying Pathway ( LCP-I ) , voiced or showed concerns towards it . Design : A qualitative study nested within the LCP-I randomized cluster trial , with semi-structured interviews analysed using thematic analysis . 
Setting and participants : Six nurses and five physicians from six out of the eight hospital wards who completed the LCP-I implementation were interviewed . Eligibility criteria were having taken part in all steps of the LCP-I Programme , voiced or somehow shown concerns , or failed to fully engage with the implementation process . Results : A total of 12 categories were identified , referring to four topics : the Implementation Programme , the LCP-I clinical documentation , the hospital environment and the educational and professional background of hospital healthcare staff . Issues raised by participants concerned both ‘ real ’ characteristics of the LCP-I and a misinterpretation of the LCP-I approach and clinical documentation . Furthermore , difficulties were reported which were not linked to the Programme but rather to end-of-life care . Conclusion : This study provides insights into the experience of professionals with negative opinions of or concerns with the LCP-I. A more comprehensive approach to professional training in palliative care is needed and may envisage the development of new interventions aimed at improving the quality of care throughout the illness trajectory .\n\n\nReport 6:\nTitle: Signs of Imminent Dying and Change in Symptom Intensity During Pharmacological Treatment in Dying Nursing Home Patients: A Prospective Trajectory Study.\nSummary: OBJECTIVES To investigate whether it is possible to determine signs of imminent dying and change in pain and symptom intensity during pharmacological treatment in nursing home patients , from day perceived as dying and to day of death . DESIGN Prospective , longitudinal trajectory trial . SETTING Forty-seven nursing homes within 35 municipalities of Norway . PARTICIPANTS A total of 691 nursing home patients were followed during the first year after admission and 152 were assessed carefully in their last days of life . MEASUREMENTS Time between admission and day of death , and symptom severity by Edmonton symptom assessment system ( ESAS ) , pain ( mobilization-observation-behavior-intensity-dementia-2 ) , level of dementia ( clinical dementia rating scale ) , physical function ( Karnofsky performance scale ) , and activities of daily living ( physical self-maintenance scale ) . RESULTS Twenty-five percent died during the first year after admission . Increased fatigue ( logistic regression , odds ratio [ OR ] 1.8 , P = .009 ) and poor appetite ( OR 1.2 , P = .005 ) were significantly associated with being able to identify the day a person was imminently dying , which was possible in 61 % of the dying ( n = 82 ) . On that day , the administration of opioids , midazolam , and anticholinergics increased significantly ( P < .001 ) , and was associated with amelioration of symptoms , such as pain ( mixed-models linear regression , 60 % vs 46 % , P < .001 ) , anxiety ( 44 % vs 31 % , P < .001 ) , and depression ( 33 % vs 15 % , P < .001 ) . However , most symptoms were still prevalent at day of death , and moderate to severe dyspnea and death rattle increased from 44 % to 53 % ( P = .040 ) and 8 % to 19 % ( P < .001 ) , respectively . Respiratory symptoms were not associated with opioids or anticholinergics . CONCLUSION Pharmacological treatment ameliorated distressing symptoms in dying nursing home patients ; however , most symptoms , including pain and dyspnea , were still common at day of death . Results emphasize critical needs for better implementation of guidelines and staff education . 
TRIAL REGISTRATION ClinicalTrials.gov NCT01920100\n\n\nReport 7:\nTitle: Introducing the Liverpool Care Pathway into nursing homes.\nSummary: Around half a million older people aged 65 or over live in care homes across the UK and substantial numbers end their lives in these care settings . The Liverpool Care Pathway for the dying patient ( LCP ) was developed to transfer the hospice model of care into other care settings . This paper describes a pilot project to introduce the LCP into care homes local to the Queen Mary 's Sidcup NHS Trust with a view to reducing the number of very ill elderly patients who are transferred to the acute trust from care homes and then die in hospital .\n\n\nReport 8:\nTitle: The course of neuropsychiatric symptoms in nursing-home patients with dementia over a 53-month follow-up period.\nSummary: BACKGROUND Neuropsychiatric symptoms ( NPS ) are prevalent in nursing-home ( NH ) patients with dementia , but little is known about the long-term course of these symptoms . METHODS In this study , 931 NH patients with dementia took part in a prospective cohort study with four assessments over a 53-month follow-up period . NPS and level of dementia were assessed with the Neuropsychiatric Inventory scale and the Clinical Dementia Rating scale , respectively . RESULTS Mild , moderate , and severe dementia was present in 25 % , 33 % , and 42 % , respectively . There was an increase in the severity of the dementia from the first to the fourth assessment . Agitation , irritability , disinhibition , and apathy were the most prevalent and persistent symptoms during the study period . The affective subsyndrome ( depression and anxiety ) became less severe , whereas the agitation subsyndrome ( agitation/aggression , disinhibition , and irritability ) and apathy increased in severity during the follow-up period . More severe dementia was associated with more severe agitation , psychosis , and apathy , but not more severe affective symptoms . Mild dementia was associated with an increase in the severity of psychosis , whereas moderate or severe dementia was associated with decreasing severity of psychosis over the follow-up period . CONCLUSION Nearly all the patients experienced clinically significant NPS , but individual symptoms fluctuated . Affective symptoms became less severe , while agitation and apathy increased in severity . An increase in dementia severity was associated with an increase in the severity of agitation , psychosis , and apathy , but not affective symptoms . The results may have implications when planning evaluation , treatment , and the prevention of NPS in NH patients",
"role": "user"
},
{
"content": "Conclusion LCP has not been adapted to nursing home patients and people with dementia .",
"role": "assistant"
}
]
|
science.pico_ner | science.pico_ner.4652 | [
{
"content": "Examine the following abstract and identify distinct entities. Focus on extracting entities related to participants, interventions, and outcomes.\n\nPresent your findings in a JSON format like: {\"Participant\": [\"example entity\", ...], \"Intervention\": [\"example entity\", ...], \"Outcome\": [\"example entity\", ...]}. If no entities are found for a category, use an empty list.\n\nAbstract Content:\n\nTranscutaneous electrical nerve stimulation in the treatment of myofascial pain dysfunction . The effect of transcutaneous electrical nerve stimulation ( TENS ) plus conservative therapy ( ibuprofen , bite plate , self-physiotherapy ) on myofascial pain dysfunction ( MPD ) was determined . A single-blind trial as done in 10 patients with MPD with subthreshold TENS ( frequency 35 Hz , pulse width 100 milliseconds , modulation 50 % ) compared with sham TENS at 8 visits over 14 weeks . Pain was assessed on a visual analogue scale before and after TENS at each visit and the data were analysed with the analysis of variance ( ANOVA ) for repeated measures . A highly significant effect was seen for time ( F = 4.80 , P = 0.0003 ) but not for TENS . Subthreshold TENS did not increase the symptom relief produced by conservative treatment with the protocol used .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"sham TENS\", \"self-physiotherapy\", \"plus conservative therapy ( ibuprofen , bite plate ,\", \"Subthreshold TENS\", \"TENS\", \"transcutaneous electrical nerve stimulation ( TENS )\", \"Transcutaneous electrical nerve stimulation\", \"subthreshold TENS\", \")\"], \"Outcome\": [\"Pain\", \"time\", \"visual analogue scale\"], \"Participant\": [\"10\", \"myofascial pain dysfunction\", \"MPD\"]}",
"role": "assistant"
}
]
|
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.4119 | [
{
"content": "Craft a cohesive chapter for a literature review using the abstracts provided. Focus on the chapter's theme, weaving together insights and findings from the abstracts. Highlight key points and note any research gaps.\nLiterature Review Title: A Survey of H.264 AVC/SVC Encryption \nChapter Title: ) Interprediction [Mode Encryption]: \n1. Abstract of Cited Paper (BIB001): Advanced Video Coding is recently announced and widely used, although the according protection means have not been developed thoroughly. In this paper, a selective encryption scheme is constructed on Advanced Video Coding. During AVC encoding, such sensitive data as intra-prediction mode, residue data, inter-prediction mode and motion vector are partially encrypted. This encryption scheme keeps secure against brute-force attack, replacement attack or known-plaintext attack, combines encryption process with compression process with low cost, and keeps the file format unchanged with some direct operations (such as displaying, time seeking, copying, cutting, etc.) supported. These properties make it suitable for secure video transmission. \n2. Abstract of Cited Paper (BIB002): As digitalized content rapidly proliferates in networked systems, content security necessarily arises as one of the most important issues. Many developers have studied techniques for allowing only authorized persons to access content. Recently, video scrambling techniques, one type of the authorizing tools, have been introduced. However, they change the original video data, which often increases the bit rate of the source data. To overcome this problem, we propose a scrambling technique which deliberately distorts the original video sequences in a reversible way by arbitrarily relocating the differential motion vectors and MB (macroblock) starting positions in a slice. This method can be applied to most common video coding techniques such as MPEG-1/2/4, and H.264. \n3. Abstract of Cited Paper (BIB003): The goal of partial encryption of a bit stream is to make the entire stream somehow useless for anyone that cannot decrypt its ciphered subset. In this paper we present the effects of the partial encryption of some H.264/AVC coding parameters, in order to obtain a moderate degradation of the video content, which can be appealing for commercial applications, like pay-per-view systems and others, without strictly focusing on security or cryptanalysis issues. The goal of preventing full quality vision and providing a low quality version, attractive enough for potential purchasers, is obtained by means of simple ciphering operations, and evaluated by visual inspection. Some results of partial encryption are presented, which show how it is possible to produce a moderate degradation of the video stream by ciphering single coding parameters or a combination of them. \n4. Abstract of Cited Paper (BIB004): In this paper, we propose an encryption algorithm to protect scalable video coding (SVC) bitstream and a method for conditional access control to consume the encrypted SVC bitstream. To design an encryption algorithm, we analyzed the encryption requirements to support scalability function in the scalable video and proposed an effective encryption method developed in the network abstraction layer (NAL). In addition, conditional access control scheme and key management scheme are proposed to consume the SVC bitstream protected with proposed method. 
Experiments were performed to verify the proposed method and results showed that proposed algorithms could provide an effective access control of scalable video as well as support a video encryption with scalability function. \n5. Abstract of Cited Paper (BIB005): In this paper we address the problem of providing conditional access to video sequences, namely, to generate a low-quality video to be used as preview, which can be decoded at full quality if a decryption key is obtained. We propose and investigate the performance of two different techniques, based on smoothing and separate encoding in the compressed domain, and motion vector perturbation. We show that these techniques are able to provide conditional access to different quality levels of H. 264/AVC video with very small rate overhead, and that their combination can provide different levels of security towards malicious attacks. \n6. Abstract of Cited Paper (BIB006): A scheme is proposed to implement commutative video encryption and watermarking during advanced video coding process. In H.264/AVC compression, the intra-prediction mode, motion vector difference and discrete cosine transform (DCT) coefficients' signs are encrypted, while DCT coefficients' amplitudes are watermarked adaptively. To avoid that the watermarking operation affects the decryption operation, a traditional watermarking algorithm is modified. The encryption and watermarking operations are commutative. Thus, the watermark can be extracted from the encrypted videos, and the encrypted videos can be re-watermarked. This scheme embeds the watermark without exposing video content's confidentiality, and provides a solution for signal processing in encrypted domain. Additionally, it increases the operation efficiency, since the encrypted video can be watermarked without decryption. These properties make the scheme a good choice for secure media transmission or distribution \n7. Abstract of Cited Paper (BIB007): In digital broadcast TV systems, video data is normally encrypted before transmission. For in-home redistribution, it is often necessary to transcode the bitstream to achieve optimum utilization of available bandwidth. If a signal is decrypted before transcoding and re-encrypted, this may lead to a security loophole. This paper presents a solution in the form of a novel H.264 selective encryption algorithm that encrypts sign bits of transform coefficients and motion vectors to allow secure transcoding without decryption. The performance of this system is compared with I-frame encryption. The results show that sign encryption is more secure than I-frame encryption and has a lower complexity. A hybrid system using a modified transcoder and sign encryption is found to give an optimal compromise between security and transcoding performance. \n8. Abstract of Cited Paper (BIB008): A video encryption scheme combining with advanced video coding (AVC) is presented and analyzed in this paper, which is different from the ones used in MPEG1/2 video encryption. In the proposed scheme, the intra-prediction mode and motion vector difference are encrypted with the length-kept encryption algorithm (LKE) in order to keep the format compliance, and the residue data of the macroblocks are encrypted with the residue data encryption algorithm (RDE) in order to keep low cost. Additionally, a key distribution scheme is proposed to keep the robustness to transmission errors, which assigns sub-keys to different frames or slices independently. 
The encryption scheme's security, time efficiency and error robustness are analyzed in detail. Experimental results show that the encryption scheme keeps file format unchanged, is secure against replacement attacks, is efficient in computing, and is robust to some transmission errors. These properties make it a suitable choice for real-time applications, such as secure IPTV, secure videoconference or mobile/wireless multimedia, etc. \n9. Abstract of Cited Paper (BIB009): With the prosperity of multimedia services over mobile network, more and more multimedia content could be downloaded or played, but no good mechanism is applied to protect intellectual properties of content producers and authors. We propose a system to provide multimedia downloading service and implement digital rights management. Experimental results show that our system achieve strong encryption and high level of security while keeping computational efficiency. \n10. Abstract of Cited Paper (BIB010): In this paper a novel conditional access scheme for the distribution of H.264/AVC video is presented. The algorithm permits to cypher the full quality video, while guaranteeing free access to a reduced quality video that can be used as preview. The scheme is based on the novel coding options, introduced in the H.264 standard, and therefore is fully compliant with the standard syntax. The secured video stream exhibits a very small rate overhead and requires limited supplementary computational cost at coding time. \n11. Abstract of Cited Paper (BIB011): Digital Rights Management (DRM) of videos is an important issue nowadays. Considering that H.264/AVC videos will be widely used in various applications, we propose a practical design, which combines the methodologies of selective encryption and fingerprinting, for effective DRM of H.264/AVC streaming videos. A selective encryption scheme is first presented to scramble the video content by encrypting a small amount of data in the compressed bit-stream. The scrambled video is H.264-compliant to reduce the complexity of decoder since it can still be played without triggering errors in the decoding process. A fingerprinting scheme is then introduced to provide further protection. We extract a reasonable amount of data from the video and embed the watermark acting as the fingerprint of the video recipient. To acquire the high-quality video for viewing, the user has to decrypt the video obtained from a video server and then combine it with the watermarked data provided from a user information server. The resulting viewable video is thus fingerprinted to deter the user from illegally redistributing the content. Experimental results will demonstrate the feasibility of the proposed approach. \n12. Abstract of Cited Paper (BIB012): In this letter, we propose a novel video encryption technique that is used to achieve partial encryption where an annoying video can still be reconstructed even without the security key. In contrast to the existing methods where the encryption usually takes place at the entropy-coding stage or the bit-stream level, our proposed scheme embeds the encryption at the transform stage during the encoding process. To this end, we develop a number of new unitary transforms that are demonstrated to be equally efficient as the well-known DCT and thus used as alternates to DCT during the encoding process. Partial encryption is achieved through alternately applying these transforms to individual blocks according to a pre-designed secret key. 
Analysis on the security level of this partial encryption scheme is carried out against various common attacks and some experimental results based on H.264/AVC are presented.\n\nReview Chapter:",
"role": "user"
},
{
"content": "In interprediction blocks are predicted on the basis of previously decoded reference pictures. For that end motion estimation and compensation is conducted, a novelty of H.264 is the tree-structured motion estimation and compensation, which employs variable block sizes. Some inter frame macroblock subdivisions result in the same number of motion vectors. In BIB001 , it is proposed to perform permutations on the set of inter frame macroblock subdivisions with the same number of motion vectors. The results of interprediction are residual data (the difference to the interprediction), the macroblock partition information, the interprediction mode and the motion vector data. 3) Motion Vector [Difference Encryption]: Motion estimation and compensation works on a macroblock basis (16 × 16 blocks), which can be further decomposed to 4×4 blocks. For each of the blocks of a macroblock a motion vector is calculated. The motion vector data are subject to further processing before entropy coding, i.e., motion vector prediction, which yields motion vector differences. The modification of motion vectors and motion vector difference data for encryption has been proposed frequently BIB007 , BIB008 , BIB001 , BIB002 - BIB009 , BIB012 - BIB010 . The schemes to modify the motion vectors and motion vector differences are diverse, e.g., many propose sign encryption BIB008 , BIB002 , BIB012 , BIB005 , while other encrypt the suffix of the exponential Golomb code BIB007 , BIB009 . In BIB009 , the encryption of the exponential Golomb codes is proposed as illustrated in Table I . In order to control the distortion (for perceptual/transparent encryption), it is proposed in BIB006 and BIB011 to modify the motion vectors mv in the following way: mv = mv + round(α * Z), where Z is uniformly distributed on [−1, 1] and α is used to adjust the quality degradation (the larger α, the higher the distortion). A lightweight encryption scheme for motion vector data is proposed in BIB010 , only half the motion vectors are securely encrypted in BIB001 . The motion vectors of a slice are permuted in BIB003 . BIB013 . If the coefficients of these different 4 × 4 transforms are input to the standard inverse DCT transform, a reduced quality version is decoded as shown in Fig. 8(b) . In BIB004 , it is proposed to encrypt the quantization parameter.",
"role": "assistant"
}
]
|
science.data_reco_mcq_sc | science.data_reco_mcq_sc.3081 | [
{
"content": "Explore the following research question and associated terms. Identify the most suitable dataset from the list provided. Use the dataset descriptions to guide your choice.\n\nInquiry: We propose a novel boosting algorithm to integrate information from different biological sources for gene\n\nKeywords: gene functional classification sequences expression profiles\n\nDataset Information:\nVOT2018: VOT2018 is a dataset for visual object tracking. It consists of 60 challenging videos collected from real-life datasets.\nBioGRID: BioGRID is a biomedical interaction repository with data compiled through comprehensive curation efforts. The current index is version 4.2.192 and searches 75,868 publications for 1,997,840 protein and genetic interactions, 29,093 chemical interactions and 959,750 post translational modifications from major model organism species.\nAnimalWeb: A large-scale, hierarchical annotated dataset of animal faces, featuring 21.9K faces from 334 diverse species and 21 animal orders across biological taxonomy. These faces are captured `in-the-wild' conditions and are consistently annotated with 9 landmarks on key facial features.\nHollywood 3D dataset: A dataset for benchmarking action recognition algorithms in natural environments, while making use of 3D information. The dataset contains around 650 video clips, across 14 classes.\n\nAvailable Datasets: A: VOT2018 B: BioGRID C: AnimalWeb D: Hollywood 3D dataset",
"role": "user"
},
{
"content": "BioGRID",
"role": "assistant"
}
]
|
science.qasa_abstractive_qa | science.qasa_abstractive_qa.1385 | [
{
"content": "Review the following segment from a research document in AI. Answer the subsequent query based solely on the provided information. Limit your response to the answer itself.\n\nResearch Segment: Without a follow-up feature resampling step as in Faster R-CNN, the classification task for small objects is relatively hard for SSD, as demonstrated in our analysis (see Fig. 4). The data augmentation strategy described in Sec. 2.2 helps to improve the performance dramatically, especially on small datasets such as PASCAL VOC. The random crops generated by the strategy can be thought of as a “zoom in” operation and can generate many larger training examples. To implement a “zoom out” operation that creates more small training examples, we first randomly place an image on a canvas of 16× of the original image size filled with mean values before we do any random crop operation. Because we have more training images by introducing this new “expansion” data augmentation trick, we have to double the training iterations. We have seen a consistent increase of 2%-3% mAP across multiple datasets, as shown in Table 6. In specific, Figure 6 shows that the new augmentation trick significantly improves the performance on small objects. This result underscores the importance of the data augmentation strategy for the final model accuracy.\n\nInquiry: How important is data augmentation for final model accuracy using SSD?",
"role": "user"
},
{
"content": "The data augmentation is improving the performance of SSD on small datasets like PASCAL VOC. The mAP values increase by 2% - 3% on multiple datasets after data augmentation. The new augmentation trick improves the performance on small objects as well. Thus, the data augmentation is quite important for final model accuracy using SSD.",
"role": "assistant"
}
]
|
science.craftchem_ner | science.craftchem_ner.2955 | [
{
"content": "Analyze the following sentence to identify chemical entities of interest. List these entities in a JSON array format. If no entities are found, return an empty array.\n\nText to Analyze:\nHowever,membraneorganelledisruptionsprecedesubunitcaccumulationinhomozygousCbCln3Deltaex7/8cells,sincetheyareobservedwithoutagingatconfluency.",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
]
|
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.6827 | [
{
"content": "Read the provided biomedical research paper and create a concise abstract. Ensure the abstract captures the main points of the paper.\n\nFull Paper:\naspirin - exacerbated respiratory disease ( aerd ) is a syndrome characterized by chronic hyperplastic rhinosinusitis , nasal polyposis , asthma , and aspirin sensitivity , as described in 1922 by szczeklik et al .\n[ 1 , 2 ] . the prevalence of aerd is variable ; stevenson and szczeklik reported in 2006 that aerd occurs in 3% of adult patients with asthma in the united states , with the onset of symptoms during the third decade of life , and that it is more common in women than in men , with approximately 70% versus 30% in europe and 57% versus 43% in the usa .\nthe mechanisms underlying aspirin intolerance are not fully defined , with current research focusing on cyclooxygenase 1 ( cox-1 ) inhibition by aspirin and other nsaids diverting arachidonic acid metabolism from cox pathways to the lipoxygenase ( lo ) pathway .\nthis leads to increased synthesis of the cysteinyl - leukotrienes ( lt ) , ltc4 , ltd4 , and lte4 , resulting in bronchoconstriction , mucus hypersecretion , and possibly the development of polyps and urticaria .\nthe biological plausibility of this hypothesis fact has led to the search for polymorphisms in genes responsible for lt synthesis , to explore associations between these polymorphisms and local tissue levels of the proteins .\nother factors such as polymorphisms in the genes for proinflammatory cytokines including tnf , il1b , il6 , and il8 are involved in chronic inflammatory and autoimmune diseases .\ninterleukin 1 ( il-1 ) is a cytokine associated with inflammatory responses and found in two forms , il-1 ( produced by the il1a gene ) and il-1 ( il1b ) , with both genes located on chromosome 2 .\nil-1 is expressed in nasal polyps , nasal epithelium , macrophages , activated t lymphocytes , and monocytes ; its expression is regulated by adhesion molecules , and others inflammatory cytokines .\nil1b polymorphisms have been associated with inflammatory bowel disease and gastric cancer among other diseases .\nrecently , genetic polymorphisms in proinflammatory cytokines such as il-1 have been recognized as key players in the pathogenesis of asthma .\nsimilarly , il-8 has been implicated in the asthmatic inflammatory process , and genetic variation in this cytokine has been associated with both the susceptibility and the severity of this disease . in the present study\n, we have investigated the frequencies of polymorphisms in the genes encoding these two cytokines in aerd patients .\npatients with aspirin - exacerbated respiratory disease ( aerd ) ( n = 78 ) and aspirin - tolerant asthma ( ata ) ( n = 135 ) were recruited from the allergy and otolaryngology departments at the instituto nacional de enfermedades respiratorias ( iner ) . 
healthy control subjects ( hcs )\nall participants underwent simple spirometry , inhaled methacholine challenge , and nasal challenge with lysine - aspirin ( l - asa ) according to international guidelines to determine the degree of bronchial hyperresponsiveness and confirm aerd diagnosis .\nthe aerd group had positive l - asa and methacholine challenges , the asthmatic group had positive methacholine challenge , but negative l - asa challenge , and the healthy control subjects were volunteers with negative l - asa and methacholine challenges .\na positive l - asa challenge was defined as a decrease of at least 40% in total nasal airflow after l - asa application compared with baseline measures ; methacholine challenge was considered positive with a decrease of at least 20% in forced expiratory volume in one second ( fev1 ) compared with baseline fev1 after the administration of different concentrations of methacholine ( beginning with 0.03 mg / ml , gradually doubling the concentration every 2 minutes until the concentration of 32 mg / ml ) ; if no such fall occurs , the challenge is considered negative .\nthe study was approved by the science and bioethics committees of iner , and all participants gave their informed consent [ 8 , 9 ] .\npatients and healthy control subjects had ancestry of at least two generations born in mexico and were thus considered to be mexican mestizo in descent [ 10 , 11 ] .\nperipheral blood was drawn by venipuncture , and genomic dna was obtained using the commercial bdtract dna isolation kit ( maxim biotech , san francisco , calif , usa ) .\nthe dna was quantified by absorption of ultraviolet light at 260 nm wavelength using an actgene spectrophotometer ( actgene , inc . ) .\nwe selected two polymorphisms in two genes related to chronic inflammation : rs16944 in il1b and rs4073 in il8 .\nallelic discrimination of snps rs16944 ( il1b ) and rs4073 ( il8 ) was performed by real - time pcr ( rt - pcr ) on a 7300 real time pcr system ( applied biosystems , calif , usa ) using taqman commercial probes ( applied biosystems , usa ) for each of the polymorphisms mentioned above and followed the cycling program : preread 50c , 1 minute ; absolute quantitation : 50c , 2 minutes , 1 cycle ; 95c , 10 minutes , 1 cycle ; 95c , 15 seconds , 60c , 1 min , 40 cycles ; postread 50c , 1 minute .\nthe results were assessed taking into account the allelic discrimination and absolute quantitation in all samples ; additionally , we included four contamination controls per plate ( nontemplate controls ) .\nthe fluorescence signal detectors used were vic , which was assigned to the b allele , and fam , assigned to the a allele for both snps .\nstatistical analysis was performed between groups of cases ( aerd and ata ) versus the healthy control subjects by chi - square test with 3 × 2 tables , using spss ( v. 15.0 ) software for windows , to identify the difference between the allele and genotype frequencies of each polymorphism evaluated .\nin addition , odds ratios and 95% confidence intervals were calculated with epi - info ( v. 
6.04 ) software .\nwe performed a genetic association study of the il8 ( rs4073 ) and il1b ( rs16944 ) gene polymorphisms in the three groups .\ngenetic data for the snps included in this study are shown in table 1 ; minor allele frequency ( maf ) of the polymorphisms tested in healthy control subjects had a similar distribution to that reported in international databases ( table 1 ) .\ngene frequencies for each genotype within the three subject groups are shown in table 3 .\nthe frequency of genotypes aa , at , and tt of the il8 rs4073 snp was not significantly different between the three studied groups .\nanalysis of gg and ga genotypes of the il1b ( rs16944 ) snp for aerd and ata patients versus the healthy control subjects showed no statistically significant associations .\ninterestingly , the aa genotype showed increased frequency in the aerd patients when compared to the ata group ( gf = 0.19 versus 0.07 ) ; this association was statistically significant ( p = 0.018 , or 2.98 , and ci 1.17 - 7.82 ) ( tables 3 and 4 ) and was not found when aerd or ata groups were compared to healthy control subjects .\nthere was no difference in aa versus ag + gg genotypes when contrasting healthy control subjects with aerd patients ( data not shown ) .\nthe airways of aspirin - sensitive patients are characterized by chronic inflammation with cell infiltration even when they are not exposed to aspirin or other nsaids .\nin addition to alterations in the metabolism of arachidonic acid , several proinflammatory cytokines have been associated with aerd .\nthe role of il-1 , however , has not been investigated previously . here , we report that aerd patients show an increased frequency of the il1b-511 polymorphism ( rs16944 aa genotype ) compared to aspirin - tolerant asthmatics .\ninterleukin-1 has been reported to be involved in the genesis of both asthma [ 13 , 14 ] and chronic rhinosinusitis with nasal polyposis in a turkish population .\nmost studies have attempted to establish the association of polymorphisms in the il1b promoter gene , mainly at positions -511 g / a ( rs16944 ) and -31 c / t .\nfor example , park et al . , in 2004 , did not find any association between these polymorphisms and either asthma or atopy in a korean population . in 2007 , erbek et al . described a susceptibility for developing nasal polyps associated with the il1b-511 polymorphism ( rs16944 ) , but mfuna endam et al . , in 2010 , failed to reproduce this finding in canadian patients with chronic rhinosinusitis . in 2003 , allen et al . did not find any association with il1 gene polymorphisms in asthmatic families ( n = 244 ) , but they reported an association with the dna microsatellite d2s308 in these asthma families ( p = 0.00001 ) . in parallel , karjalainen et al . evaluated 245 patients with asthma and nasal polyposis and did not find any association between the polymorphism il1b-511 c / t and nasal polyps .\nevidence for the role of il-1 in pulmonary immune responses has been gathered in murine models of allergic asthma using il-1r1-deficient [ il-1r1 ( −/− ) ] mice ; changes observed in these mice include significant reduction of pulmonary eosinophilic inflammation , diminished goblet cell hyperplasia , and reduction of cell recruitment to the lungs , as compared to control balb / c mice .\nhowever , there are no studies linking gene promoter polymorphisms and levels of expression of the cytokine in the lung microenvironment . 
in the present study , we have demonstrated that the frequency of the il1b-511 polymorphism ( rs16944 aa genotype ) is three - fold higher in aerd ( 19.2% ) than in ata patients ( 7.4% ) , suggesting that patients carrying this polymorphism may exhibit genetic susceptibility to develop aerd .\nfindings on the biological functionality of the rs16944 polymorphism have not been consistent across studies .\nthe aa genotype has been associated with higher gastric mucosal levels of il-1 in bacterial infections , while mononuclear cells from subjects with the gg genotype showed an increased release of il-1 after stimulation with lipopolysaccharide .\nrecent studies suggest that the functional role of rs16944 may depend on il1b promoter region haplotypes including rs16944 [ 23 - 26 ] .\nalthough the findings are inconsistent , these previous studies suggest that rs16944 could affect the expression levels of il-1. our report is the first demonstration of the involvement of il1b polymorphism in aerd .\nthe sample size is relatively small , particularly in the aerd group , which may limit the statistical power , so it would be desirable to replicate our findings in an independent population .\nfuture studies must investigate whether this cytokine is released in the airways of aerd patients and whether its levels relate to genetic polymorphisms in the il1b gene .\ninterleukin-8 has been implicated in asthma and found in high concentrations in bronchoalveolar lavage fluid of patients with acute asthma exacerbations .\nin fact , polymorphic il8 alleles ( -251 t and +781 c ) have been associated with asthma in a european population [ 7 , 27 ] , but not in asthmatics of korean origin ; the differences could be explained by the different ethnic populations studied . in contrast , korean asthmatics were found to show four nonsynonymous amino acid substitutions in the il-8 receptor a ( il8ra ) and an association of one synonymous variation in il8rb . in the present study , we did not find a significant difference in the rs4073 a / t between the aerd group and ata patients .",
"role": "user"
},
{
"content": "aspirin exacerbated respiratory disease ( aerd ) is characterized by chronic hyperplastic rhinosinusitis , nasal polyposis , asthma , and aspirin sensitivity . \n the mechanisms which produce these manifestations of intolerance are not fully defined , current research focuses on cyclooxygenase 1 ( cox-1 ) inhibition , metabolism of arachidonic acid , and the cox pathway to the lipoxygenase ( lo ) route , inducing increased synthesis of leukotrienes ( lt ) . \n the biological plausibility of this model has led to the search for polymorphisms in genes responsible for proinflammatory cytokines synthesis , such as il1b and il8 . \n we performed a genetic association study between il8 - 251 ( rs4073 ) and il1b-511 ( rs16944 ) polymorphisms in aerd , aspirin - tolerant asthma ( ata ) , and healthy control subjects . using allelic discrimination by real - time pcr \n , we found statistically nonsignificant associations between aerd , ata , and healthy control subjects for the gg and ga genotypes of il1b ( rs16944 ) . \n interestingly , the aa genotype showed an increased frequency in the aerd patients versus the ata group ( gf = 0.19 versus 0.07 , p = 0.018 , or 2.98 , and 95% ci 1.177.82 ) . \n this is the first observation that il1b polymorphisms are involved in aerd . \n thus , future studies must investigate whether interleukin-1 is released in the airways of aerd patients and whether it relates to genetic polymorphisms in the il1b gene .",
"role": "assistant"
}
]
|
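The genetic-association record above quotes a genotype frequency of 0.19 versus 0.07 and an odds ratio of 2.98 with a 95% CI of about 1.17 - 7.82. As a minimal sketch of how such an odds ratio and its Wald confidence interval are computed from a 2x2 table: the cell counts below are hypothetical, chosen only to roughly reproduce the quoted frequencies, since the record does not state the group sizes.

```python
import math

# Hypothetical 2x2 counts, chosen so the AA-genotype frequencies roughly
# match the record above (~19% of AERD cases, ~7% of ATA controls).
# The real group sizes are not given in the record.
aa_aerd, other_aerd = 14, 59    # AERD group: AA carriers vs. the rest
aa_ata, other_ata = 10, 125     # ATA group:  AA carriers vs. the rest

odds_ratio = (aa_aerd * other_ata) / (other_aerd * aa_ata)

# Wald 95% CI: log(OR) +/- 1.96 * SE, where SE is the square root of the
# sum of reciprocals of the four cell counts.
se = math.sqrt(1/aa_aerd + 1/other_aerd + 1/aa_ata + 1/other_ata)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # ~2.97 (1.24, 7.07)
```

With the true counts from the study, the same formula reproduces the published interval; the small discrepancy here comes only from the invented cell counts.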
science.bioasq_factoid_qa | science.bioasq_factoid_qa.1267 | [
{
"content": "Given the following biomedical passages, answer the question succinctly. Ensure your response is precise and limited to the answer only.\n\nQuestion: What is the cause of lactose intolerance?\n\nPassages: Lactose intolerance is a common condition caused by lactase deficiency and may result in symptoms of lactose malabsorption (bloating, flatulence, abdominal discomfort, and change in bowel habits).\nAdult lactose intolerance (ALI) significantly alters calcium intake and absorption, and thus may promote osteoporosis. ALI is a recessive condition with a geographical north-south gradient characterised by decreased levels of intestinal lactase.\nConsiderations include recognizing that a substantial proportion of the world's adult population (65%-70%) exhibits lactase nonpersistence, a reduced ability to metabolize lactose to glucose and galactose.\nFour clinical subtypes of lactose intolerance may be distinguished, namely lactase deficiency in premature infants, congenital lactase deficiency, adult-type hypolactasia and secondary lactase intolerance.\nLactose intolerance (LI) is characterized by the presence of primarily gastrointestinal clinical signs resulting from colonic fermentation of lactose, the absorption of which is impaired due to a deficiency in the lactase enzyme. These clinical signs can be modified by several factors, including lactose dose, residual lactase expression, concurrent ingestion of other dietary components, gut-transit time, and enteric microbiome composition.",
"role": "user"
},
{
"content": "Lactase deficiency",
"role": "assistant"
}
]
|
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.3214 | [
{
"content": "Craft a cohesive chapter for a literature review using the abstracts provided. Focus on the chapter's theme, weaving together insights and findings from the abstracts. Highlight key points and note any research gaps.\nLiterature Review Title: Reconfigurable Battery Techniques and Systems: A Survey \nChapter Title: A. CIRCUIT SAFETY \n1. Abstract of Cited Paper (BIB001): The Standard for Electrical Safety in the Workplace, NFPA 70E, and relevant OSHA electrical safety standards evolved in the US, over the past 40 years, to address the hazards of 60-Hz power that are faced primarily by electricians, linemen, and others performing facility and utility work. This leaves a substantial gap in the management of other types of electrical hazards including battery banks, DC power systems, capacitor banks, and solar power systems. Although many of these systems are fed by 50/60-Hz alternating current (ac) energy, we find substantial use of direct current (dc) electrical energy, and the use of capacitors, inductors, batteries, solar, and radiofrequency (RF) power. The electrical hazards of these forms of electricity and their systems are different than for 50/60 Hz ac power. At this workshop in 2009 we presented a comprehensive approach to classifying the electrical shock hazards of all types of electricity, including various waveforms and various types of sources of electrical energy. That paper introduced a new comprehensive electrical shock hazard classification system that used a combination of voltage, shock current available, fault current available, power, energy, and waveform to classify all forms of electrical hazards with a focus on the shock hazard. That paper was based on various research conducted over the past 100 years and on decades of experience. This paper continues the effort in understanding and managing all forms of injury from all forms of electricity with the introduction of a comprehensive approach to classifying all forms of injury from the electrical arc, including thermal, blast pressure, hearing, radiation, and shrapnel injury. The general term “arc” is divided into the arc, arc flash, and arc blast as a first subdivision of type of source of injury. Then, the parameters of voltage, short circuit current, energy, waveform, gap distance, gap geometry, enclosure geometry, and time are used to choose various approaches to analysis. Recent efforts to understand, model, and estimate injury for these types of systems, are reviewed. Most of the focus to understand and predict injury for dc, capacitor, solar, and rf arc hazards has been only in the past 10 years. A comprehensive approach to analyzing all forms of injury from all forms of electrical arcs is presented.\n\nReview Chapter:",
"role": "user"
},
{
"content": "Circuit safety issues are critical for all circuit designs, especially the power supply system such as battery based energy storage systems. Among all circuit safety issues, short circuit is acknowledged as one of most serious problems. A short circuit is referred to as an abnormal connection between two nodes of an electric circuit, which may generate current thousands times larger than the intended operational current, causing ohm heating of circuit elements, and is a common cause of fire. The worst case of short circuit may lead to formation of an electric arc that can produce large amount of heat and result in ignition of combustible substances. Short circuit and overcharge experiments were reported in literature BIB001 , where X-ray was used to facility observation of structural destroy. The results show that battery cells can be easily destroyed in short circuit occasions. Immediately after short circuit, currents in the range of 102 A to 183 A are measured, which are extremely high for battery cells. The observed structural breakdown includes safety valves opened, leakage of electrolyte, and metal containment broken. Note that the experiments in BIB001 were done with single cells in a relatively large space; the situation could be more severe if short circuit happens in narrow space such as inside reconfigurable battery packs where hate dissipates slower. Short circuit may happen in any electronic devices, however, reconfigurable battery packs are more vulnerable to it compared with traditional pack designs. Besides potential design defect, current loop that connect anode and cathode directly can be formed by malfunction of switching circuit, followed by catastrophic consequences BIB002 . Circuit breakers for equipment (CBE) are vital to protect circuit from damage. Traditionally, electromechanical designs such as circuit breakers, relays, contactors, and power switches have been widely adopted to fulfill the goal in the past 100 years. In recent years, solid state power switching designs have been gaining increasing amount of attention as a promising technique in modern circuit design. Solid state power switches have much higher voltage drop compared to mechanical contacts, which leads to a need for heat sinks for power dissipation. As a result, extra space and weight budget are required than the semiconductor itself. Since no physical isolation that equivalent to air gap in a mechanical contact exists, off state in a solid state power switch is not a clean cut off from power. These drawbacks are getting more and more trivial with the development of highly efficient semiconductors like SCR, MOSFET, and IGBT. The most prominent advantage of deploying these semiconductors is safety since no mechanical moving exists and no arcing is generated during power switching. Meanwhile, these switches can last up to millions of cycles. However, a comprehensive design or design principle of protection circuit for large scale reconfigurable battery pack is still not provided in literature. This problem can be solved in the future with the development of battery pack design propelled by increasing market share of EVs.",
"role": "assistant"
}
]
|
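The battery-safety chapter above reports measured short-circuit currents of 102 A to 183 A for single cells. A back-of-the-envelope sketch of where currents of that magnitude come from; the cell voltage and the two resistances are assumed, plausible values, not figures taken from the cited experiments.

```python
# Rough estimate of short-circuit current and ohmic heating for one Li-ion
# cell. All numbers are assumptions for illustration, not data from BIB001.
v_cell = 3.7        # nominal cell voltage [V]
r_internal = 0.025  # assumed internal resistance of the cell [ohm]
r_fault = 0.005     # assumed resistance of the external short [ohm]

i_short = v_cell / (r_internal + r_fault)  # Ohm's law for the fault loop
p_cell = i_short ** 2 * r_internal         # heat dissipated inside the cell

print(f"short-circuit current ~ {i_short:.0f} A")   # ~123 A, inside the measured range
print(f"heating inside the cell ~ {p_cell:.0f} W")  # ~380 W in a confined pack volume
```

Hundreds of watts dumped into a single cell is consistent with the opened safety valves and electrolyte leakage described above, and illustrates why heat removal is harder inside a densely packed reconfigurable pack.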
science.mslr2022_ms2_multidoc_summarization | science.mslr2022_ms2_multidoc_summarization.6050 | [
{
"content": "Analyze the following collection of medical RCT summaries. Your goal is to craft a concise synthesis of the findings. Focus on the core insights and present them succinctly. Only include the synthesis in your response. Below are the RCT summaries:\n\n\n\nReport 1:\nTitle: Use of archived specimens in evaluation of prognostic and predictive biomarkers.\nSummary: The development of tumor biomarkers ready for clinical use is complex . We propose a refined system for biomarker study design , conduct , analysis , and evaluation that incorporates a hierarchal level of evidence scale for tumor marker studies , including those using archived specimens . Although fully prospect i ve r and omized clinical trials to evaluate the medical utility of a prognostic or predictive biomarker are the gold st and ard , such trials are costly , so we discuss more efficient indirect \" prospect ive-retrospective \" design s using archived specimens . In particular , we propose new guidelines that stipulate that 1 ) adequate amounts of archived tissue must be available from enough patients from a prospect i ve trial ( which for predictive factors should generally be a r and omized design ) for analyses to have adequate statistical power and for the patients included in the evaluation to be clearly representative of the patients in the trial ; 2 ) the test should be analytically and preanalytically vali date d for use with archived tissue ; 3 ) the plan for biomarker evaluation should be completely specified in writing before the performance of biomarker assays on archived tissue and should be focused on evaluation of a single completely defined classifier ; and 4 ) the results from archived specimens should be vali date d using specimens from one or more similar , but separate , studies\n\n\nReport 2:\nTitle: Panitumumab-FOLFOX4 treatment and RAS mutations in colorectal cancer.\nSummary: BACKGROUND Patients with metastatic colorectal cancer that harbors KRAS mutations in exon 2 do not benefit from anti-epidermal growth factor receptor ( EGFR ) therapy . Other activating RAS mutations may also be negative predictive biomarkers for anti-EGFR therapy . METHODS In this prospect ive-retrospective analysis , we assessed the efficacy and safety of panitumumab plus oxaliplatin , fluorouracil , and leucovorin ( FOLFOX4 ) as compared with FOLFOX4 alone , according to RAS ( KRAS or NRAS ) or BRAF mutation status . A total of 639 patients who had metastatic colorectal cancer without KRAS mutations in exon 2 had results for at least one of the following : KRAS exon 3 or 4 ; NRAS exon 2 , 3 , or 4 ; or BRAF exon 15 . The overall rate of ascertainment of RAS status was 90 % . RESULTS Among 512 patients without RAS mutations , progression-free survival was 10.1 months with panitumumab-FOLFOX4 versus 7.9 months with FOLFOX4 alone ( hazard ratio for progression or death with combination therapy , 0.72 ; 95 % confidence interval [ CI ] , 0.58 to 0.90 ; P=0.004 ) . Overall survival was 26.0 months in the panitumumab-FOLFOX4 group versus 20.2 months in the FOLFOX4-alone group ( hazard ratio for death , 0.78 ; 95 % CI , 0.62 to 0.99 ; P=0.04 ) . A total of 108 patients ( 17 % ) with nonmutated KRAS exon 2 had other RAS mutations . These mutations were associated with inferior progression-free survival and overall survival with panitumumab-FOLFOX4 treatment , which was consistent with the findings in patients with KRAS mutations in exon 2 . BRAF mutations were a negative prognostic factor . 
No new safety signals were identified . CONCLUSIONS Additional RAS mutations predicted a lack of response in patients who received panitumumab-FOLFOX4 . In patients who had metastatic colorectal cancer without RAS mutations , improvements in overall survival were observed with panitumumab-FOLFOX4 therapy . ( Funded by Amgen and others ; PRIME Clinical Trials.gov number , NCT00364013 . )\n\n\nReport 3:\nTitle: Panitumumab and irinotecan versus irinotecan alone for patients with KRAS wild-type, fluorouracil-resistant advanced colorectal cancer (PICCOLO): a prospectively stratified randomised trial\nSummary: Summary Background Therapeutic antibodies targeting EGFR have activity in advanced colorectal cancer , but results from clinical trials are inconsistent and the population in which most benefit is derived is uncertain . Our aim was to assess the addition of panitumumab to irinotecan in pretreated advanced colorectal cancer . Methods In this open-label , r and omised trial , we enrolled patients who had advanced colorectal cancer progressing after fluoropyrimidine treatment with or without oxaliplatin from 60 centres in the UK . From December , 2006 until June , 2008 , molecularly unselected patients were recruited to a three-arm design including irinotecan ( control ) , irinotecan plus ciclosporin , and irinotecan plus panitumumab ( IrPan ) groups . From June 10 , 2008 , in response to new data , the trial was amended to a prospect ively stratified design , restricting panitumumab r and omisation to patients with KRAS wild-type tumours ; the results of the comparison between the irinotcan and IrPan groups are reported here . We used a computer-generated r and omisation sequence ( stratified by previous EGFR targeted therapy and then minimised by centre , WHO performance status , previous oxaliplatin , previous bevacizumab , previous dose modifications , and best previous response ) to r and omly allocate patients to either irinotecan or IrPan . Patients in both groups received 350 mg/m2 intravenous irinotecan every 3 weeks ( 300 mg/m2 if aged ≥70 years or a performance status of 2 ) ; patients in the IrPan group also received intravenous panitumumab 9 mg/kg every 3 weeks . The primary endpoint was overall survival in KRAS wild-type patients who had not received previous EGFR targeted therapy , analysed by intention to treat . Tumour DNA was pyrosequenced for KRASc.146 , BRAF , NRAS , and PIK3CA mutations , and predefined molecular subgroups were analysed for interaction with the effect of panitumumab . This study is registered , number IS RCT N93248876 . Results Between Dec 4 , 2006 , and Aug 31 , 2010 , 1198 patients were enrolled , of whom 460 were included in the primary population of patients with KRASc.12–13,61 wild-type tumours and no previous EGFR targeted therapy . 230 patients were r and omly allocated to irinotecan and 230 to IrPan . There was no difference in overall survival between groups ( HR 1·01 , 95 % CI 0·83–1·23 ; p=0·91 ) , but individuals in the IrPan group had longer progression-free survival ( 0·78 , 0·64–0·95 ; p=0·015 ) and a greater number of responses ( 79 [ 34 % ] patients vs 27 [ 12 % ] ; p<0·0001 ) than did individuals in the irinotecan group . 
Grade 3 or worse diarrhoea ( 64 [ 29 % ] of 219 patients vs 39 [ 18 % ] of 218 patients ) , skin toxicity ( 41 [ 19 % ] vs none ) , lethargy ( 45 [21]% vs 24 [ 11 % ] ) , infection ( 42 [ 19 % ] vs 22 [ 10 % ] ) and haematological toxicity ( 48 [ 22 % ] vs 27 [ 12 % ] ) were reported more commonly in the IrPan group than in the irinotecan group . We recorded five treatment-related deaths , two in the IrPan group and three in the irinotecan group . Interpretation Adding panitumumab to irinotecan did not improve the overall survival of patients with wild-type KRAS tumours . Further refinement of molecular selection is needed for substantial benefits to be derived from EGFR targeting agents . Funding Cancer Research UK , Amgen\n\n\nReport 4:\nTitle: Prospective-retrospective biomarker analysis for regulatory consideration: white paper from the industry pharmacogenomics working group.\nSummary: One approach to delivering cost-effective healthcare requires the identification of patients as individuals or sub population s that are more likely to respond to an appropriate dose and /or schedule of a therapeutic agent , or as sub population s that are less likely to develop an adverse event ( i.e. , personalized or stratified medicine ) . Biomarkers that identify therapeutically relevant variations in human biology are often only uncovered in the later stage of drug development . In this article , the Industry Pharmacogenomics Working Group provides , for regulatory consideration , its perspective on the rationale for the conduct of what is commonly referred to as the prospect ive-retrospective analysis ( PRA ) of biomarkers . Reflecting on published proposals and material s presented by the US FDA , a decision tree for generating robust scientific data from sample s collected from an already conducted trial to allow PRA is presented . The primary utility of the PRA is to define a process that provides robust scientific evidence for decision-making in situations where it is not necessary , nor practical or ethical to conduct a new prospect i ve clinical study",
"role": "user"
},
{
"content": "Regarding PFS benefit with anti-EGFR mAbs , the hazard ratio was 0.86 ( 95 % CI ; 0.61–1.21 ) for RAS WT/BRAF MT tumours as compared with 0.62 ( 95 % CI ; 0.50–0.77 ) for RAS WT/BRAF WT tumours ( test of interaction , P=0.07).Interpretation : This meta- analysis demonstrates that there is insufficient evidence to definitively state that RAS WT/BRAF MT individuals attain a different treatment benefit from anti-EGFR mAbs for mCRC compared with RAS WT/BRAF WT individuals .\nAs such , there are insufficient data to justify the exclusion of anti-EGFR mAb therapy for patients with RAS WT/BRAF MT mCRC",
"role": "assistant"
}
]
|
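The synthesis above contrasts subgroup hazard ratios of 0.86 (95% CI 0.61 - 1.21) and 0.62 (95% CI 0.50 - 0.77) with a test of interaction, P = 0.07. A minimal sketch of the standard way such a test is reconstructed from two published HRs and their confidence intervals (the Altman-Bland approach); because the published CIs are rounded, this reconstruction only approximates the quoted P value.

```python
import math
from scipy.stats import norm

def log_hr_and_se(hr, lo, hi):
    """Recover log(HR) and its standard error from a 95% CI,
    assuming the interval is symmetric on the log scale."""
    return math.log(hr), (math.log(hi) - math.log(lo)) / (2 * 1.96)

# Subgroup estimates quoted in the synthesis above.
b_mt, se_mt = log_hr_and_se(0.86, 0.61, 1.21)  # RAS WT / BRAF MT
b_wt, se_wt = log_hr_and_se(0.62, 0.50, 0.77)  # RAS WT / BRAF WT

# z-test for the difference between the two subgroup effects.
z = (b_mt - b_wt) / math.sqrt(se_mt**2 + se_wt**2)
p = 2 * norm.sf(abs(z))
print(f"ratio of HRs = {math.exp(b_mt - b_wt):.2f}, z = {z:.2f}, P = {p:.2f}")
```

Run on these rounded inputs, this gives P around 0.1; the published P = 0.07 was computed from the underlying patient data, which a reconstruction from two rounded intervals can only approximate.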
science.craftchem_ner | science.craftchem_ner.626 | [
{
"content": "Analyze the following sentence to identify chemical entities of interest. List these entities in a JSON array format. If no entities are found, return an empty array.\n\nText to Analyze:\nClonesgivingfluorescentE12.5embryoshavingnoreadilyidentifiablemalformationswereusedtogenerategermlinetransmittingchimerasbyaggregationwithdiploidwildtypeICRembryosasdescribedpreviously[21,23].",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
]
|
science.scientific_papers_summarization_single_doc_arxiv | science.scientific_papers_summarization_single_doc_arxiv.132942 | [
{
"content": "Given the full text of a scientific research paper, write an abstract section for the paper. Your response should contain only the abstract without any other text.\n\nArticle text:\nmany of the expected high - energy sources are variable , like blazars which emit signals that often appear to vary chaotically with time . does such signal result from the superposition of many random events ( randomly generated shocks in outflowing gas ; randomly appearing reconnections of magnetic field lines ; etc . ) ? or they are governed by some global physical mechanism described by non - linear differential equations , which shows deterministic chaos ( coherent variability of clouds of electrons ; instability of a shock at the base of the jet forming region ; etc . ) ?\nfinally , is it possible to find same kind of periodicity , at least in particular intervals of time ( hot spots in the rotating accretion disk ; helicoidal movements of the jet ; etc . ) ? for a reasonable answer to these questions , we can use - for example - one or more of the following statistical analysis : * _ correlation , autocorrelation , power spectrum estimation , with fft : _ future high energy missions working in scanning mode ( like glast ) probably will provide data well sampled at evenly spaced intervals , that can be processed with the fourier transforms ; * _ correlation , autocorrelation , power spectrum estimation , with the lomb - scargle periodogram _ ,\n( lomb 1976 , scargle 1982 ) for irregularly or incompletely sampled data ; * _ correlation and autocorrelation with the discrete correlation function ( dcf ) _ described by edelson & krolik ( 1988 ) , useful for unevenly spaced data ; * _ first order structure function ( sf ) _ , because there is a simple correspondence between power laws in the frequency space fourier analysis , and the sf analysis in the time domain ( see , e.g. , hufnagel & bregman ( 1992 ) ; * _ detrended fluctuation analysis _\n, that permits the detection of intrinsic self - similarity embedded in a seemingly nonstationary time series ( see , e.g. , peng et al .\n1995 ) ; * _ the wavelet methods _ , that together with principal component analysis and filtering can be used to extract the `` deterministic '' components from the time - series ( see , e.g. , liszka & holmstrm 1999 ) ; * _ a search for chaotic behavior _ : the most popular measure of an attractor s dimension is the correlation dimension , first defined by grassberger & procaccia ( 1983 ) .\nwe are using these and other less - known statistical techniques with the bvr@xmath0i@xmath0 database of the automatic imaging telescope at the perugia astronomical observatory ( tosti et al .\npreliminary results have been already described in fiorucci et al .\n( 1999 , 2000 ) , and suggest that the observed variability is truly stochastic and is not caused by deterministic chaos . moreover , many of these statistical descriptors lead to the provisional conclusion that variability of blazars is self - similar on time - scales from a few days up to some years , and it is characterized by power - law power spectrum @xmath1 , where the spectral slope @xmath2 is in the range @xmath3 , with an average value @xmath4 ( see fig.1 ) . 
to provide a frame of reference , it is convenient to consider two well known cases : white noise has @xmath5 and brownian motion has @xmath6 .\nthe point is that this kind of variability is observed in all the agn class ( see , e.g , the results of webb & malkan 2000 ) , and it does not seem to be exclusive of the optical wavelengths : _ agns seem to be characterised by variability similar to pink ( or red ) noise in a large range of frequencies_. the x - ray light curves have power spectra with slopes in the range @xmath7 to @xmath8 , ( see , e.g. , lawrance & papadakis 1993 ) .\nradio light curves have power spectra with slopes around @xmath8 in time - scales from days up to some years ( see , e.g. , hufnagel & bregman 1992 , lainela & valtaoja 1993 ) .\nmoreover , we expect to see a similar behavior at higher energies too , because in the majority of theoretical models for agns there is a strictly correlation between @xmath9-ray photons and low - frequency ones .\nif this scenario will be confirmed also with future observations , then the problem will be to identify the real time - scales and to understand the physical reasons behind this kind of variability . for the moment\n, we can try to use this observational evidence to improve our techniques in the research of this class of objects during all - sky surveys .\negret detected 271 point sources ( hartman et al .\n1999 ) , but only 101 of them were identified and , of them , 93 are agns .\nthe next generation gamma - ray instruments probably will increase this number to many thousands , but it is often crucial to discriminate faint variable sources against the background noise . for this kind of problem it is generally used the likelihood analysis ( mattox et al . , 1996 ) , that take into account the spatial and energetic distributions of the data .\ntogether with the more traditional techniques developed for a correct discovery and identification of high - energy point sources , we believe that it would be useful to use also the variability tools , for a correct inclusion of the time domain yet during the first elaboration .\nin fact many astronomical sources ( as agns , black hole x - ray binary systems , and also gamma ray bursts ) show variability characterised as _ red - noise _ in a large scale of frequencies .\nfor example , fig.2 shows the application of a traditional technique ( the wiener filter , see press et al .\n1992 ) for the `` identification '' of a simulated variable source with the assumption that the power spectrum has a @xmath10 behavior added to a white noise tail . with the original simulated signal\nit is quite difficult to verify the presence of a variable source respect to the background noise ( assumed as white noise ) , because the fluctuation is within the limit of three standard deviations .\nthe filtered signal puts in evidence the presence of a `` possible '' point source , characterized by _ red - noise _ variability .\nall the above mentioned statistical methods can be very useful for a rough estimation of the physical phenomena , but unfortunately they are not conclusive because the results are always strongly affected by many external factors . 
for example , the structure function seems to show a strong dependence from the more intense flares , while the discrete correlation function and the detrended fluctuation analysis are influenced by the large gaps in the light curves .\nmoreover , it is not clear if the _ red - noise _ behavior is a typical feature of agns at the gamma - rays frequencies too , and what is the statistical behavior of the background noise for the next generation high - energy detectors . for all this reasons we are developing a monte carlo technique to test all the more useful statistical descriptors , and to produce a reliable quantification of the results so obtained .\nin the same time , we intend to develop a neural network technique that can be used to analyse the time series , to find the embedding dimension of the characteristic attractor , and to perform well the frequency extraction in unevenly sampled signals .\nfinally , we are developing a light simulator for the large area telescope ( lat ) onboard of the glast satellite , with the aim to test the possible use of variability tools to discover and identify @xmath9-rays point sources .\nedelson , r. a. & krolik , j. h. 1988 , apj , 333 , 646 fiorucci , m. , tosti , g. & luciani , m. 1999 , msait , 70 , 223 fiorucci , m. & tosti , g. , 2000 , in _ blazar monitoring towards the 3th millennium _ , raiteri c. & villata m. eds .\n, 99 grassberger , p. & procaccia , i. 1983 , phys .\n50 , 346 hartman , r. c. , bertsch , d. l. , bloom , s. d. , et al . 1999 ,\napjs , 123 , 79 hufnagel , b. r. & bregman , j. n. 1992 , apj , 386 , 473 lainela , m. & valtaoja , e. 1993 , apj , 416 , 485 lawrence , a. & papadakis , i. 1993 , apj , 414 , 85 liszka , l. & holmstrm , m. 1999 , a&as , 140 , 125 lomb , n. r. 1976 , ap .\nspace sci .\n, 39 , 447 mattox , j. r. , bertsch , d. l. , chiang , j. , et al .\n1996 , apj , 461 , 396 peng , c. k. , havlin , s. , stanley , h. e. , et al .\n1995 , chaos , 5 , 82 press , w. h. , et al .\n1992 , _ numerical recipes : the art of scientific computing _ , cambridge university press scargle , j. d. 1982 , apj , 263 , 835 tosti , g. , pascolini , s. & fiorucci , m. 1996 , pasp , 108 , 706 webb , w. & malkan , m. 2000 , apj , 540 , 652",
"role": "user"
},
{
"content": "we present preliminary results of a statistical analysis obtained with a sample of blazars observed at the perugia astronomical observatory since 1992 . \n we briefly show how these statistical results can be useful to discriminate faint variable sources against the background noise . \n this technique , together with the more traditional ones , may be used to discover and identify high - energy point sources .",
"role": "assistant"
}
]
|
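The blazar record above singles out the Lomb-Scargle periodogram for irregularly or incompletely sampled light curves. A minimal sketch on a synthetic, unevenly sampled sinusoid; the 10-day period, the sampling, and the noise level are arbitrary choices, not parameters of the Perugia data.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)

# Synthetic "light curve": 300 epochs drawn at random over 200 days,
# carrying a 10-day periodic signal plus Gaussian noise.
t = np.sort(rng.uniform(0.0, 200.0, 300))
flux = np.sin(2 * np.pi * t / 10.0) + 0.5 * rng.normal(size=t.size)

# Trial periods; lombscargle expects angular frequencies omega = 2*pi*f.
periods = np.linspace(2.0, 50.0, 2000)
omega = 2.0 * np.pi / periods

power = lombscargle(t, flux - flux.mean(), omega)
print(f"best period ~ {periods[np.argmax(power)]:.2f} days")  # ~10 days
```

Because the estimator works directly on the uneven epochs, no interpolation onto a regular grid is needed, which is exactly why the record prefers it to a plain FFT for ground-based monitoring data.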
science.scientific_papers_summarization_single_doc_arxiv | science.scientific_papers_summarization_single_doc_arxiv.66753 | [
{
"content": "Given the full text of a scientific research paper, write an abstract section for the paper. Your response should contain only the abstract without any other text.\n\nArticle text:\nit has long been known that the hosts of powerful radio loud agn are massive early type galaxies , and that the probability of such a galaxy hosting a radio source increases rapidly with optical luminosity and stellar mass ( auriemma et al .\n1977 ; best et al . 2005a\n; mauch & sadler 2007 ) .\nwhat remains less clear is the role ( if any ) of a galaxy s environment in determining whether it hosts a radio source .\nstudies carried out in the 1980s hinted at significant differences between the clustering properties of the powerful frii radio galaxies and the less powerful fri sources . based on an imaging study of 43 radio galaxies at @xmath15 0.3 , heckman et al . ( 1986 ) found that the most powerful radio sources ( with radio luminosities above @xmath16whz@xmath5 at 408mhz ) lay in regions of below \naverage galaxy density and often showed a disturbed morphology suggestive of a recent interaction with a gas - rich companion .\nin contrast , less powerful radio sources appeared to be associated with morphologically \nnormal early - type galaxies in regions of high local galaxy density .\nsimilar results were obtained by prestage & peacock ( 1988 ) , who used an angular cross correlation technique to study the clustering environment of a sample of about 200 nearby ( @xmath15 0.15 ) radio galaxies .\nthey found that fri radio galaxies lay in regions of significantly enhanced galaxy density , whereas the clustering environment of frii sources was similar to that of the overall population of ` normal ' elliptical galaxies .\nthe picture changed significantly in the mid 1990s with the work of ledlow & owen ( 1996 ) , who discovered that the division in radio power between fri and frii radio sources was a strong function of the optical luminosity of the host galaxy . as a result , frii\nradio sources are generally hosted by less optically luminous ( and less massive ) galaxies than fri sources of similar radio power .\nsince more massive galaxies also tend to be more strongly clustered , this effect needs to be taken carefully into account when analysing the clustering properties of powerful radio sources .\nledlow & owen ( 1996 ) measured the bivariate radio luminosity function ( rlf ) of early type galaxies in rich clusters , and found no statistically significant difference between the rlfs of galaxies in rich clusters and in the field .\ntheir results suggested that the local environment plays little or no role in determining whether an early type galaxy hosts a radio loud agn , and that the optical luminosity and other properties of the host galaxy are by far the most important parameters affecting radio source formation and evolution . the advent of large galaxy redshift surveys like the 2dfgrs and sdss ( colless et al .\n2001 ; york et al .\n2000 ) , combined with `` all sky '' radio continuum surveys like nvss and sumss ( condon et al . 1998 ; bock et al .\n1999 ) made it possible to assemble samples of thousands of objects with which to carry out statistical analyses of radio galaxies in the local universe ( best et al .\n2005b ; mauch & sadler 2007 ) .\nthe nvss and sumss source catalogues are large and uniform enough that the characteristic imprint of large scale structure can easily be seen in the angular correlation function ( blake & wall 2002 ; blake et al .\n2004 ) . 
convolving the angular clustering amplitude in these surveys with a characteristic redshift distribution n(@xmath17 )\nsuggests that the present day clustering length @xmath9 of radio galaxies is in the range of 7 - 10@xmath8mpc , corresponding to a clustering strength similar to optically luminous elliptical galaxies in moderately rich environments ( see e.g. overzier et al .\n2003 ) .\nrecently , best et al . ( 2007 ) have revisited the question of radio source clustering using data sets much larger than those available to ledlow & owen ( 1996 ) . using a sample of 625 nearby galaxy groups and clusters selected from the sdss , they show that the brightest galaxies in groups and clusters ( bcgs ) are more likely to host a radio loud agn than other galaxies of the same stellar mass .\nthe probability is increased by up to a factor of two for the most massive galaxies ( with stellar mass @xmath18m@xmath19 ) , and by over an order of magnitude for galaxies with stellar masses below @xmath20m@xmath19 .\nthis enhanced likelihood of radio loud agn activity was only seen in the innermost regions of a group or cluster ( i.e. within 0.2@xmath21 , where @xmath21 is the virial radius of the cluster ) .\nbest et al . (\n2007 ) argue that the radio properties of both bcgs and non - bcgs can be explained if the radio emission is mainly fuelled by cooling gas from an x ray halo surrounding the galaxy .\nit therefore appears that although the radio properties of most galaxies in the local universe are largely unaffected by their environment , this is not true for massive galaxies located in the central regions of clusters . at\nhigher redshift , sadler et al .\n( 2007 ) recently combined data from the 2slaq luminous red galaxy ( lrg ) redshift survey ( cannon et al . 2006 ) and the nvss and first radio surveys ( condon et al . 1998 ; becker et al . 1995 ) to identify a volume limited sample of 391 radio galaxies at redshift @xmath22 .\nthey measured the redshift - space correlation between the radio detected 2slaq lrgs and the full lrg sample , and found that the 2slaq radio galaxies were more strongly clustered than the overall 2slaq lrg population .\nsince the 2slaq radio galaxies as a class were also more optically luminous than the overall lrg sample it was unclear whether the increased clustering was a luminosity effect , or represented a genuine difference in the environments of radio loud and radio quiet lrgs at @xmath12 .\nin the current paper , our goal is to answer this question by investigating the clustering properties of radio loud 2slaq lrgs in more detail .\nthe inferred lifetimes of the radio sources associated with massive galaxies ( typically 10@xmath23 - 10@xmath24yr ; parma et al .\n1999 ) are significantly shorter than the ages of their parent galaxies , so it is generally assumed that all massive galaxies must cycle between radio - loud and radio - quiet phases over time .\nfeedback mechanisms in which the hot intergalactic gas episodically cools to fuel a central agn , and is then reheated by the ensuing radio jets ( e.g. binney & tabor 2005 , ciotti & ostriker 2007 ) provide a natural explanation for this process . 
for the 2slaq lrg sample , johnston et al .\n( 2008 ) have shown that the stellar populations of radio detected and radio quiet galaxies are generally indistinguishable .\nthis is consistent with a picture in which ` radio - mode ' agn feedback ( bower et al .\n2006 ; croton et al .\n2006 ) regulates the star \nformation rate in these massive galaxies , and all of them undergo radio loud episodes when their central black hole is active and can power radio jets . if this is the case , then the fraction of galaxies which are detected as radio sources above some radio power p@xmath25 simply represents the fraction of the radio galaxy duty cycle for which a typical galaxy is a radio source at or above this level .\nobservationally this is complicated by the fact that the radio luminosity function of agn is very broad , spanning at least six orders of magnitude ( mauch & sadler 2007 ) , and there is also a strong correlation between radio power and optical luminosity ( auriemma et al .\n1977 , best et al . 2005a ) .\nthe observed radio detection rate will therefore depend strongly on both radio power and galaxy luminosity . in this paper\nwe consider only the very luminous early type galaxies which comprise the 2slaq lrg sample , so that the range in optical luminosity is small .\nwe also use the term ` radio detected ' to refer to galaxies whose 1.4ghz flux density is higher than the 1 - 2 mjy detection limit of the first and nvss radio surveys . since the 2slaq lrg radio sample is close to volume limited ( see figure 7 of sadler et al . 2007 )\n, this translates to a limiting radio luminosity of @xmath26whz@xmath5 . throughout this paper , we assume a flat @xmath27 - dominated cosmology with @xmath28=0.27 , @xmath29=70 km s@xmath5mpc@xmath5 , and @xmath30=0.8 unless otherwise stated .\ntable [ tab : sample ] : summary of the three 2slaq lrg samples used in this clustering study .\ndespite the differences in the analytic and actual hods the analytic hod does provide good fits to the clustering and space density of the samples within the errors .\ntable [ tab : halomass ] gives the values @xmath31 determined from fitting the analytic hod to the measured hod and the clustering as well as that determined directly from the simulation .\n@xmath31 is overestimated by @xmath327% fitting the analytic form to either measured hod or clustering for both the volume and magnitude limited samples , due to the excess of galaxies in the ` bump ' in the hod at low halo masses .\nthe difference between the fits to the hod and the clustering is larger for the magnitude limited sample , but is still only at the 2% level and is consistent with the error on the fit to the clustering .\nwe can therefore be confident that the likely magnitude of any systematic error on @xmath31 caused by fitting the hod to a magnitude limited rather than volume limited sample is less than the measurement errors for all the samples considered in section [ sec : halo ] .\nit is interesting to note that a larger systematic error in @xmath31 would be introduced if the true hod is not well represented by the assumed analytic form of the hod .\nhowever , one would still expect relative measurements of @xmath31 to be representative even in this case .",
"role": "user"
},
{
"content": "we examine the clustering properties of low power radio galaxies at redshift @xmath0 , using data from the 2slaq luminous red galaxy ( lrg ) survey , and find that radio detected lrgs ( with typical optical luminosities of @xmath1l@xmath2 and 1.4ghz radio powers in the range 10@xmath3 to 10@xmath4whz@xmath5 ) are significantly more clustered than a matched population of radio quiet ( @xmath6whz@xmath5 ) lrgs with the same distribution in optical luminosity and colour . \n the measured scale length of the two point cross - correlation function between the full lrg sample and the radio detected lrgs is [email protected]@xmath8mpc , compared to [email protected]@xmath8mpc for the matched sample of radio quiet lrgs ; while the implied scale length of the auto - correlation function , @xmath9 , is [email protected]@xmath8mpc and [email protected]@xmath8mpc for the radio detected and radio quiet samples respectively . \n we further interpret our clustering measurements in the halo model framework and demonstrate that the radio detected lrgs have typical halo masses of [email protected] @xmath10 and bias of [email protected] , compared to [email protected] @xmath10 and [email protected] for the radio quiet sample . a model in which the radio detected lrgs are almost all central galaxies within haloes \n provides the best fit to the measured clustering , and we estimate that at least @xmath11 of all 2slaq lrgs with the same clustering amplitude as the radio detected lrgs are currently radio loud . \n our results imply that radio detected galaxies in the 2slaq lrg sample typically occupy more massive haloes than other lrgs of the same optical luminosity , so the probability of finding a radio loud agn in a massive galaxy at @xmath12 is influenced by the halo mass and/or cluster environment in addition to the well - known dependence on optical luminosity . \n if we model the radio detected fraction of lrgs , f@xmath13 , as a function of halo mass m , then the data are well - fitted by a power law of the form @xmath14 . \n the observed relationship between radio emission and clustering strength could plausibly arise either through a higher fuelling rate of gas onto the central black holes of galaxies in the most massive haloes ( producing more powerful radio jets ) or through the presence of a denser igm ( which would provide a more efficient working surface for the jets , thus boosting their observed radio luminosity ) . \n further work is needed to determine which of these effects is dominant . \n [ firstpage ] cosmology : observations galaxies : clusters : general galaxies : active radio continuum : galaxies galaxies : evolution galaxies : elliptical and lenticular , cd cosmology : large scale structure of universe",
"role": "assistant"
}
]
|
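The radio-galaxy record above compares correlation lengths and halo-model bias values for radio-detected versus radio-quiet LRGs (the exact numbers in the record were mangled by a scraping artifact, so the figures below are placeholders). Under power-law auto-correlations with a shared slope, the relative linear bias of two samples follows directly from their r0 values; a minimal sketch under those assumptions:

```python
# Relative linear bias of two galaxy samples from their correlation lengths,
# assuming power-law auto-correlations xi(r) = (r / r0)**(-gamma) with a
# common slope. gamma = 1.8 is the canonical value; the r0 figures are
# illustrative placeholders, not the (corrupted) numbers from the record.
gamma = 1.8
r0_radio = 7.3  # assumed r0 of the radio-detected sample [h^-1 Mpc]
r0_quiet = 5.9  # assumed r0 of the radio-quiet sample   [h^-1 Mpc]

# On linear scales xi is proportional to bias squared, so
# b_radio / b_quiet = (xi_radio / xi_quiet)**0.5 = (r0_radio / r0_quiet)**(gamma / 2)
bias_ratio = (r0_radio / r0_quiet) ** (gamma / 2)
print(f"b_radio / b_quiet ~ {bias_ratio:.2f}")  # ~1.21 for these inputs
```

The same two-line conversion is what lets the abstract move between a measured clustering length and a bias value in the halo-model interpretation.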
science.data_reco_mcq_sc | science.data_reco_mcq_sc.9627 | [
{
"content": "Explore the following research question and associated terms. Identify the most suitable dataset from the list provided. Use the dataset descriptions to guide your choice.\n\nInquiry: I want to implement an automated data augmentation policy search framework for image classification.\n\nKeywords: image classification images\n\nDataset Information:\nCIFAR-10: The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck).\nPerspectrum: Perspectrum is a dataset of claims, perspectives and evidence, making use of online debate websites to create the initial data collection, and augmenting it using search engines in order to expand and diversify the dataset. Crowd-sourcing was used to filter out noise and ensure high-quality data.\nImageNet-Sketch: ImageNet-Sketch data set consists of 50000 images, 50 images for each of the 1000 ImageNet classes. The data set is constructed with Google Image queries \"sketch of \", where is the standard class name.\nACDC: The goal of the Automated Cardiac Diagnosis Challenge (ACDC) challenge is to: compare the performance of automatic methods on the segmentation of the left ventricular endocardium and epicardium as the right ventricular endocardium for both end diastolic and end systolic phase instances; compare the performance of automatic methods for the classification of the examinations in five classes (normal case, heart failure with infarction, dilated cardiomyopathy, hypertrophic cardiomyopathy, abnormal right ventricle). The overall ACDC dataset was created from real clinical exams acquired at the University Hospital of Dijon.\n\nAvailable Datasets: A: CIFAR-10 B: Perspectrum C: ImageNet-Sketch D: ACDC",
"role": "user"
},
{
"content": "CIFAR-10",
"role": "assistant"
}
]
|
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.4082 | [
{
"content": "Craft a cohesive chapter for a literature review using the abstracts provided. Focus on the chapter's theme, weaving together insights and findings from the abstracts. Highlight key points and note any research gaps.\nLiterature Review Title: Towards Data Analytics of Pathogen-Host Protein-Protein Interaction: A Survey \nChapter Title: III. PHPPI DATABASES \n1. Abstract of Cited Paper (BIB001): BackgroundProtein-protein interactions (PPIs) play a crucial role in initiating infection in a host-pathogen system. Identification of these PPIs is important for understanding the underlying biological mechanism of infection and identifying putative drug targets. Database resources for studying host-pathogen systems are scarce and are either host specific or dedicated to specific pathogens.ResultsHere we describe \"HPIDB” a host-pathogen PPI database, which will serve as a unified resource for host-pathogen interactions. Specifically, HPIDB integrates experimental PPIs from several public databases into a single, non-redundant web accessible resource. The database can be searched with a variety of options such as sequence identifiers, symbol, taxonomy, publication, author, or interaction type. The output is provided in a tab delimited text file format that is compatible with Cytoscape, an open source resource for PPI visualization. HPIDB allows the user to search protein sequences using BLASTP to retrieve homologous host/pathogen sequences. For high-throughput analysis, the user can search multiple protein sequences at a time using BLASTP and obtain results in tabular and sequence alignment formats. The taxonomic categorization of proteins (bacterial, viral, fungi, etc.) involved in PPI enables the user to perform category specific BLASTP searches. In addition, a new tool is introduced, which allows searching for homologous host-pathogen interactions in the HPIDB database.ConclusionsHPIDB is a unified, comprehensive resource for host-pathogen PPIs. The user interface provides new features and tools helpful for studying host-pathogen interactions. HPIDB can be accessed at http://agbase.msstate.edu/hpi/main.html. \n2. Abstract of Cited Paper (BIB002): The International Molecular Exchange (IMEx) consortium is an international collaboration between major public interaction data providers to share literature-curation efforts and make a nonredundant set of protein interactions available in a single search interface on a common website (http://www.imexconsortium.org/). Common curation rules have been developed, and a central registry is used to manage the selection of articles to enter into the dataset. We discuss the advantages of such a service to the user, our quality-control measures and our data-distribution practices. \n3. Abstract of Cited Paper (BIB003): Protein–protein interactions, essential for understanding how a cell functions, are predicted using a new method that combines protein structure with other computationally and experimentally derived clues. \n4. Abstract of Cited Paper (BIB004): Background ::: Several computational methods have been developed to predict protein-protein interactions from amino acid sequences, but most of those methods are intended for the interactions within a species rather than for interactions across different species. 
Methods for predicting interactions between homogeneous proteins are not appropriate for finding those between heterogeneous proteins since they do not distinguish the interactions between proteins of the same species from those of different species. \n5. Abstract of Cited Paper (BIB005): Protein interactions play an important role in the discovery of protein functions and pathways in biological processes. This is especially true in case of the diseases caused by the loss of specific protein-protein interactions in the organism. The accuracy of experimental results in finding protein-protein interactions, however, is rather dubious and high throughput experimental results have shown both high false positive beside false negative information for protein interaction. Computational methods have attracted tremendous attention among biologists because of the ability to predict protein-protein interactions and validate the obtained experimental results. In this study, we have reviewed several computational methods for protein-protein interaction prediction as well as describing major databases, which store both predicted and detected protein-protein interactions, and the tools used for analyzing protein interaction networks and improving protein-protein interaction reliability. \n6. Abstract of Cited Paper (BIB006): The Pathosystems Resource Integration Center (PATRIC) is the all-bacterial Bioinformatics Resource Center (BRC) (http://www.patricbrc.org). A joint effort by two of the original National Institute of Allergy and Infectious Diseases-funded BRCs, PATRIC provides researchers with an online resource that stores and integrates a variety of data types [e.g. genomics, transcriptomics, protein–protein interactions (PPIs), three-dimensional protein structures and sequence typing data] and associated metadata. Datatypes are summarized for individual genomes and across taxonomic levels. All genomes in PATRIC, currently more than 10 000, are consistently annotated using RAST, the Rapid Annotations using Subsystems Technology. Summaries of different data types are also provided for individual genes, where comparisons of different annotations are available, and also include available transcriptomic data. PATRIC provides a variety of ways for researchers to find data of interest and a private workspace where they can store both genomic and gene associations, and their own private data. Both private and public data can be analyzed together using a suite of tools to perform comparative genomic or transcriptomic analysis. PATRIC also includes integrated information related to disease and PPIs. All the data and integrated analysis and visualization tools are freely available. This manuscript describes updates to the PATRIC since its initial report in the 2007 NAR Database Issue. \n7. Abstract of Cited Paper (BIB007): Protein-protein interactions (PPIs) are crucial for almost all cellular processes, including metabolic cycles, DNA transcription and replication, and signaling cascades. However, the experimental methods for identifying PPIs are both time-consuming and expensive. Therefore, it is important to develop computational approaches for predicting PPIs. In this article, a sequence-based method is developed by combining a novel feature representation using binary coding and Support Vector Machine (SVM). 
The binary-coding-based descriptors account for the interactions between residues a certain distance apart in the protein sequence, thus this method adequately takes the neighboring effect into account and mine interaction information from the continuous and discontinuous amino acids segments at the same time. When performed on the PPI data of Saccharomyces cerevisiae, the proposed method achieved 86.93% prediction accuracy with 86.99% sensitivity at the precision of 86.90%. Extensive experiments are performed to compare our method with the existing sequence-based method. Achieved results show that the proposed approach is very promising for predicting PPI, so it can be a useful supplementary tool for future proteomics studies. \n8. Abstract of Cited Paper (BIB008): We consider the problem of building a predictive model for host-pathogen protein interactions, when there are no known interactions available. Our goal is to predict the protein protein interactions (PPIs) between the plant host Arabidopsis thaliana and the bacterial species Salmonella typhimurium. Our method based on transfer learning, utilizes labeled data i.e known interactions from other species (we call these the source tasks). The first challenge is to pick the best instances from the source tasks, such that the resultant model when applied on the target task generates high confidence predictions. Towards this, we use the instance reweighting technique Kernel Mean Matching (KMM). The reweighted instances are used to build a kernelized support vector machine (SVM) model, which is applied on the target data. This brings forth the second challenge selecting appropriate hyperparameters while building a model for a task with no labeled data. For the purposes of evaluation, we apply our method on a task where we have some labeled data available. We find that choice of the right source examples makes a significant difference in performance on the target task.\n\nReview Chapter:",
"role": "user"
},
{
"content": "Since many database repositories are provided all over the world by a variety of both academics and industry, there are many different standardized formats for PPIs data. Recently Human Proteome Organization Proteomics Standards Initiative(HUPO-PSI) published the PSI-MI XML format to store a single, unified format for PPIs data. Another consortium International Molecular Exchange (IMEx, http://www.imexconsortium.org/) is also established to create collaboration between research groups for sharing literature-curation efforts and making a non-redundant set of PPIs available in a single search portal on one website BIB006 . As a well developed version, PSI-MI XML is supplemented by a simplified tabular format named MITAB, which enables the user to download, combine, visualize and analyze data from multiple sources. Several well developed and popular repositories are introduced below. By far, DIP, IntAct, MINT, I2D, MatrixDB, MBInfo, UniProt, Molecular Connections, HPIDB, BioGRID and PrimesDB are the IMEx partners. Notably most of these data repositories were initially built by the universities BIB006 BIB003 . * HPRD is a database manually extracted from literature, which provides more than 30,000 proteins and also BIB004 Now it provides more than 230,000 interactions and more than 34,000 proteins. The confidence scores for experimentally detected PPIs are also provided to show the reliability of the interactions. * IntAct is an open database, which provides both the source code and data. It contains more than 60,000 proteins and more than 290,000 binary interaction evidences, which are extracted from more than 5000 scientific publications and also other direct database repositories. All of the PPIs, DNA, RNA and smallmolecule interactions are included in this repository. Aside from these PPIs database, we focus on pathogenhost protein-protein interaction (PHPPI). Even though current PHPPI knowledge is still scarce and not sufficient, the research on PHPPI prediction are continuously being conducted. Considering about 26,000 human proteins, the PHPPI pairs would be millions even if we only pair them with a few thousands of pathogen proteins. Reliable experimental methods are time-consuming and expensive, making it unjustifiable, thus for some special PHPPI, we might need to verify its result with duplicate experiments. Now owing to some earlier research efforts, there are several verified PHPPI datasets available, including HPIDB BIB002 , PATRIC BIB007 , PHISTO , VirHostNet BIB001 and VirusMentha . These databases have provided some PPI pairs to build a \"golden standard dataset\", including positive PHPPI and negative PHPPI for verifying computational methods. A statistics of these PHPPI datasets repositories is shown in Table III . In order to build a PHPPI dataset between human and different types of pathogens, we used two datasets from PHISTO and PATRIC to build a comprehensive positive part of the \"golden standard PHPPI dataset\". The details will be discussed in Section VI. where stands for a Radial Basis Functions (known as RBF) kernel, and α i are the parameters from a covex quadratic programming problem, which is shown as bellow: Due to the high efficiency and performance of SVM, it has been widely used in PPIs research. In BIB008 , SVM is deployed, with a newly proposed feature curation method, to achieve a better results than others' work, i.e. . 
For PHPPI, the authors of BIB005 select two types of viruses, namely human papillomaviruses (HPV) and hepatitis C virus (HCV), then design a new frequency representation method and deploy a SVM model achieving an average accuracy above 80 %, which is higher than other representation scheme using SVM. Furthermore, combining with transfer learning, BIB009 utilize two human-pathogen datasets as source tasks, and the other as target task, and achieve a relative good result using SVM model for training. These researches have shown the compatibility and performance of the SVM model for PPIs prediction.",
"role": "assistant"
}
]
|
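The review chapter above presents the SVM with an RBF kernel as the workhorse classifier for sequence-based PPI and PHPPI prediction. A minimal, self-contained sketch of that setup: the feature vectors and labels are random placeholders standing in for real protein-pair encodings (actual work would derive them from sequences in repositories such as HPIDB or PHISTO), and the hyperparameters are library defaults rather than tuned values from the cited papers.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Placeholder data: 400 "protein pairs", each a 20-dim feature vector
# (standing in for, e.g., amino-acid composition of the two partners).
X = rng.normal(size=(400, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=400) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Standardize features, then fit the RBF-kernel SVM described in the review.
scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(scaler.transform(X_tr), y_tr)

print(f"held-out accuracy: {clf.score(scaler.transform(X_te), y_te):.2f}")
```

The C and gamma parameters trade off margin width against kernel locality; in the dual problem written out in the chapter above, C is exactly the upper bound on each α_i.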
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.39 | [
{
"content": "Examine the abstracts provided from a main scientific article and its cited works. Craft a narrative that synthesizes the key insights and contributions, emphasizing their interconnections. Employ numerical citations such as [1], [2], etc., to reference the cited works.\n\nMain Article Summary:\nThis paper presents an algorithm for tagging words whose part-of-speech properties are unknown. Unlike previous work, the algorithm categorizes word tokens in context instead of word types. The algorithm is evaluated on the Brown Corpus.\n\nCited Works Summary:\n[1]: In this paper, we will discuss a method for assigning part of speech tags to words in an unannotated text corpus whose structure is completely unknown, with a little bit of help from an informant. Starting from scratch, automated and semiautomated methods are employed to build a part of speech tagger for the text. There are three steps to building the tagger: uncovering a set of part of speech tags, building a lexicon which indicates for each word its most likely tag, and learning rules to both correct mistakes in the learned lexicon and discover where contextual information can repair tagging mistakes. The long term goal of this work is to create a system which would enable somebody to take a large text in a language he does not know, and with only a few hours of help from a speaker of the language, accurately annotate the text with part of speech information.",
"role": "user"
},
{
"content": "The present paper is concerned with tagging languages and sublanguages for which no a priori knowledge about grammatical categories is available, a situation that occurs often in practice [1] .",
"role": "assistant"
}
]
|
science.data_reco_mcq_sc | science.data_reco_mcq_sc.1333 | [
{
"content": "Explore the following research question and associated terms. Identify the most suitable dataset from the list provided. Use the dataset descriptions to guide your choice.\n\nInquiry: I want to train a classifier that can distinguish transmission electron microscopy (TEM) images from\n\nKeywords: electron microscopy image classification images\n\nDataset Information:\nTTPLA: TTPLA is a public dataset which is a collection of aerial images on Transmission Towers (TTs) and Power Lines (PLs). It can be used for detection and segmentation of transmission towers and power lines.\nFaceForensics: FaceForensics is a video dataset consisting of more than 500,000 frames containing faces from 1004 videos that can be used to study image or video forgeries. All videos are downloaded from Youtube and are cut down to short continuous clips that contain mostly frontal faces.\nDOTmark: DOTmark is a benchmark for discrete optimal transport, which is designed to serve as a neutral collection of problems, where discrete optimal transport methods can be tested, compared to one another, and brought to their limits on large-scale instances. It consists of a variety of grayscale images, in various resolutions and classes, such as several types of randomly generated images, classical test images and real data from microscopy.\nImageNet: The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\n\nAvailable Datasets: A: TTPLA B: FaceForensics C: DOTmark D: ImageNet",
"role": "user"
},
{
"content": "ImageNet",
"role": "assistant"
}
]
|
science.data_reco_mcq_sc | science.data_reco_mcq_sc.10163 | [
{
"content": "Explore the following research question and associated terms. Identify the most suitable dataset from the list provided. Use the dataset descriptions to guide your choice.\n\nInquiry: The model is trained to generate a paraphrase of the question given the answer and the original question.\n\nKeywords: paraphrase generation text\n\nDataset Information:\nOpenBookQA: OpenBookQA is a new kind of question-answering dataset modeled after open book exams for assessing human understanding of a subject. It consists of 5,957 multiple-choice elementary-level science questions (4,957 train, 500 dev, 500 test), which probe the understanding of a small “book” of 1,326 core science facts and the application of these facts to novel situations.\nWebQuestions: The WebQuestions dataset is a question answering dataset using Freebase as the knowledge base and contains 6,642 question-answer pairs. It was created by crawling questions through the Google Suggest API, and then obtaining answers using Amazon Mechanical Turk.\nVQA-CP: The VQA-CP dataset was constructed by reorganizing VQA v2 such that the correlation between the question type and correct answer differs in the training and test splits. For example, the most common answer to questions starting with What sport… is tennis in the training set, but skiing in the test set.\nProofWriter: The ProofWriter dataset contains many small rulebases of facts and rules, expressed in English. Each rulebase also has a set of questions (English statements) which can either be proven true or false using proofs of various depths, or the answer is “Unknown” (in open-world setting, OWA) or assumed negative (in closed-world setting, CWA).\n\nAvailable Datasets: A: OpenBookQA B: WebQuestions C: VQA-CP D: ProofWriter",
"role": "user"
},
{
"content": "WebQuestions",
"role": "assistant"
}
]
|
science.healthver_entailment | science.healthver_entailment.1899 | [
{
"content": "Evaluate the given public health statement using the provided research abstract. Your goal is to determine the relationship between the claim and the abstract content.\n\n- \"decision\": Indicate whether the abstract \"SUPPORTS\", \"CONTRADICTS\", or provides \"INSUFFICIENT\" information regarding the claim.\n- \"justification\": List sentences from the abstract that back your decision. If the decision is \"INSUFFICIENT\", leave this list empty.\n\nStatement: Influenza and the common cold are diseases spread person to person in much the same way as SARS-CoV-2, the virus that causes COVID-19. So, efforts to reduce COVID-19 have the dual benefit of reducing colds and flu.\n\nResearch Abstract:\nThe infectious diseases are spreading due to human interactions enabled by various social networks.\nTherefore, when a new pathogen such as SARS-CoV-2 causes an outbreak, the non-pharmaceutical isolation strategies (e.g., social distancing) are the only possible response to disrupt its spreading.\nTo this end, we introduce the new epidemic model (SICARS) and compare the centralized (C), decentralized (D), and combined (C+D) social distancing strategies, and analyze their efficiency to control the dynamics of COVID-19 on heterogeneous complex networks.\nOur analysis shows that the centralized social distancing is necessary to minimize the pandemic spreading.\nThe decentralized strategy is insufficient when used alone, but offers the best results when combined with the centralized one.\nIndeed, the (C+D) is the most efficient isolation strategy at mitigating the network superspreaders and reducing the highest node degrees to less than 10% of their initial values.\nOur results also indicate that stronger social distancing, e.g., cutting 75% of social ties, can reduce the outbreak by 75% for the C isolation, by 33% for the D isolation, and by 87% for the (C+D) isolation strategy.\nFinally, we study the impact of proactive versus reactive isolation strategies, as well as their delayed enforcement.\nWe find that the reactive response to the pandemic is less efficient, and delaying the adoption of isolation measures by over one month (since the outbreak onset in a region) can have alarming effects; thus, our study contributes to an understanding of the COVID-19 pandemic both in space and time.\nWe believe our investigations have a high social relevance as they provide insights into understanding how different degrees of social distancing can reduce the peak infection ratio substantially; this can make the COVID-19 pandemic easier to understand and control over an extended period of time.",
"role": "user"
},
{
"content": "{\"verdict\": \"NEI\", \"evidence\": []}",
"role": "assistant"
}
]
|
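The abstract in the preceding record describes a network epidemic model (SICARS) in which distancing strategies cut social ties around network superspreaders. As a rough, hypothetical illustration — using a plain SIR process on a scale-free graph as a stand-in for the paper's SICARS model, with all parameters invented for the sketch — "centralized" distancing can be mimicked by removing edges incident to the highest-degree nodes:

```python
# Rough illustration: SIR spread on a scale-free network, with
# "centralized" distancing modeled as cutting 75% of the edges
# touching hub nodes. A stand-in for the SICARS model in the
# record above; all parameters are invented.
import random
import networkx as nx

def sir_final_size(G, beta=0.05, gamma=0.1, steps=200, seed=1):
    """Run a discrete-time SIR process; return the final number recovered."""
    random.seed(seed)
    infected = {random.choice(list(G.nodes))}
    recovered = set()
    for _ in range(steps):
        new_inf = set()
        for u in infected:
            for v in G.neighbors(u):
                if v not in infected and v not in recovered and random.random() < beta:
                    new_inf.add(v)
        recovered |= {u for u in infected if random.random() < gamma}
        infected = (infected | new_inf) - recovered
        if not infected:
            break
    return len(recovered)

G = nx.barabasi_albert_graph(2000, 3, seed=0)

# Centralized distancing: cut 75% of the edges on the top-20 hubs.
H = G.copy()
random.seed(0)
for node, _ in sorted(H.degree, key=lambda kv: kv[1], reverse=True)[:20]:
    edges = list(H.edges(node))
    for e in random.sample(edges, int(0.75 * len(edges))):
        H.remove_edge(*e)

print("final outbreak size, no isolation:  ", sir_final_size(G))
print("final outbreak size, hub distancing:", sir_final_size(H))
```

Hub-focused edge removal typically shrinks the outbreak far more than removing the same number of random edges, which matches the intuition behind the abstract's finding that centralized distancing is the necessary ingredient.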
science.chia_ner | science.chia_ner.548 | [
{
"content": "Analyze the provided clinical trial criteria. Identify and extract distinct entities from the text.\n\nReturn the results as a JSON structure. Ensure the output is strictly in JSON format without any additional commentary.\n\nCriteria Text:\n\nRenal insufficiency (> 265 µmol/l)\nIncapability to give informed consent\nCardiogenic shock of patient with KILLIP III or IV\npregnant or breast feeding females\ninsufficient contraception (only for substudy 3)",
"role": "user"
},
{
"content": "{\"Condition\": [\"Cardiogenic shock\", \"pregnant\", \"Renal insufficiency\"], \"Device\": [], \"Drug\": [], \"Measurement\": [\"KILLIP\"], \"Mood\": [], \"Multiplier\": [], \"Negation\": [], \"Observation\": [\"breast feeding\"], \"Person\": [\"females\"], \"Procedure\": [\"contraception\"], \"Qualifier\": [\"insufficient\"], \"ReferencePoint\": [], \"Scope\": [], \"Temporal\": [], \"Value\": [\"III or IV\"], \"Visit\": []}",
"role": "assistant"
}
]
|